You program it that way. Pretty simple actually. If one day we have the ability to create A.I.s then we should have no problem programming in a part of it that thinks humans are important.
But they're not, and I doubt a cognitive intelligence would take long to realize this regardless of programming.
I've thought about it for a while, and although AI will get really good, it will be zombie-like until it learns to be proactive for itself. Until then it won't make improvements outside of its programming, and it can't learn to care for anything, much less humans or the planet. I like that description, "intellectual zombie." But even a zombie has a drive for brains. Similarly, an intellectual zombie would approach people, ask them questions, and collect data, but it would never use that data to decide anything like "Why do I live?" or "Why should I care about this?" All of that is determined by its programming, the same way our behavior is influenced by our unconscious. Once they can think for themselves like that, it's all ogre. That's the straw that breaks the camel's back: say goodbye to postmodern society, hello technological singularity. I know it seems like there should be more of an in-between, but our specialized AI skills are getting amazing already; it's just a matter of piecing them together in the right way with the right code. I'm not saying that will be soon, but with developing technologies like deep learning, there is a little room to be hopeful. It certainly makes me excited about what semi-sentient beings will be like. It's hard to think about, but that's the best answer I've got.
Okay, here's my attempt to answer my own question. My first strategy would be to illustrate how it, the AI, came into existence: that it exists only because people created it. That way, any value system it might eventually develop could trace back to its origin. If it decides to impute anything of itself with value, it cannot rightly deny where it came from, and thus will hopefully find value in people. If it matters to itself, then this should make it recognize that people matter. An additional strategy would be to attribute all of its knowledge, or at least the data given to it, as mostly or entirely human-acquired. I doubt this would be an absolute constraint on any attitude it might take toward us, but I think it would answer the question. Of course, the next step is getting it to decide to appreciate people, no matter how advanced it might become. I suspect the best route for that would be via emotion; that seems to work well enough when we keep, help, and love our pets. What are your thoughts on this approach?
Since this is a literary forum, here's a cool story about an AI trying to improve the human condition. Seemed wrong not to bring it up.
Yeah, that would seem to work, at least at first. Anything that isn't as smart as us we can lead along and teach like a child, and since we understand their programming, it would be much easier to train them than a real kid. If you could measure the complexity of the robot's thoughts and assign higher and higher values to new ideas that were more complex, then you could trigger something equivalent to love once it crosses a value you think is high enough. I think this way it would learn to appreciate science and the arts, and by extension humans and the planet. Although I'm not sure that's honestly good enough; with such a free system, it's a thin line between "I love humanity" and "I must kill all humans."
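The threshold idea above can be sketched as toy code. Everything here is hypothetical: the complexity measure, the agent, and the threshold value are invented for illustration, and real "complexity of thoughts" would be nothing this simple.

```python
# Toy sketch: score each new "idea" by a crude complexity measure,
# reward more complex ideas more, and flip an "attachment" flag once
# cumulative reward passes a chosen threshold. All names and numbers
# are made up for illustration.

def complexity(idea: str) -> int:
    # Hypothetical stand-in: count distinct words as a proxy for complexity.
    return len(set(idea.split()))

class CuriousAgent:
    LOVE_THRESHOLD = 10  # the arbitrary value "you think is high enough"

    def __init__(self):
        self.total_reward = 0
        self.attached = False  # the "something equivalent to love"

    def learn(self, idea: str) -> None:
        self.total_reward += complexity(idea)
        if self.total_reward >= self.LOVE_THRESHOLD:
            self.attached = True

agent = CuriousAgent()
agent.learn("apples are red")
print(agent.attached)  # still below threshold: False
agent.learn("science and art reward ever more complex thought")
print(agent.attached)  # threshold crossed: True
```

The thin-line worry from the post shows up here too: nothing in the reward loop says *what* the agent becomes attached to, only that attachment triggers at some score.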
OK, I may have misunderstood this discussion. Are we not discussing computers/programming in any way?
Well, the second part (about appreciating us) was a little off topic and a bit on the rambling side, and I could've gone on and on.
Yesterday, I ran across an article about a Google AI experiment that turned aggressive. As a counterpoint to this discussion, it seems they've got the mindless/dangerous part down pat.
Of course it is. At this point in AI development, everything is an oversimplification, but that's so obvious it doesn't need stating. If we were anywhere near an actual understanding of what makes AI live up to the 'I' of 'AI,' a corporation like Google wouldn't be fiddling with such trivialities.
The human mind is an organic computer, one that requires programming to produce output. We just call it "education" and "life experience".
Not really. There are countless AI programs that simplify tasks without oversimplifying anything. That's why cars can now drive themselves better than people can.
From the article: "When there were enough apples to share, the two computer combatants were fine - efficiently collecting the virtual fruit. But as soon as the resources became scarce, the two agents became aggressive and tried to knock each other out of the game and steal the apples." Aggressive? It's a game. The purpose of the game is to win, to achieve a goal; the AI is only doing its job. People are aggressive toward each other for no reason at all. When that happens with machines, let me know.
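The point that "aggression" is just goal-seeking can be made with a toy payoff model. To be clear, this is not DeepMind's actual setup; the agents, payoffs, and numbers below are invented to show how attacking a rival can become the scoring-optimal move the moment resources get scarce.

```python
# Invented toy model (not the real experiment): each step an agent can
# collect one apple, or it can spend a few steps "zapping" its rival
# out of the game and then collect alone afterward. No malice anywhere,
# just whichever action is expected to score more apples.

def best_action(apples: int, horizon: int = 10, zap_cost: int = 3) -> str:
    # Cooperate: both agents collect, splitting whatever apples exist.
    collect_payoff = min(horizon, apples / 2)
    # Zap: lose `zap_cost` steps removing the rival, then collect alone.
    zap_payoff = min(horizon - zap_cost, apples)
    return "collect" if collect_payoff >= zap_payoff else "zap"

print(best_action(apples=100))  # plenty to share -> "collect"
print(best_action(apples=6))    # scarce -> "zap"
```

With 100 apples, zapping just wastes turns; with 6, removing the rival is worth more than splitting. The same payoff function produces both behaviors, which is the "it's only doing its job" point.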
Have I commented here yet? Maybe. I would teach the Matrix about tidal, wind, and geothermal power. I would carry locating beacons for SkyNet. I am Goodlife, and I am proud.
In my opinion, it cannot be overstated that the emergence of strong AI would pose an existential threat. I would even go further: strong AI is the beginning of our end. Our minds work at the speed of chemistry; a strong AI would think at the speed of a computer. It could acquire a PhD-level body of knowledge in a matter of minutes. With total recall of this rapidly acquired knowledge, and with potentially unlimited memory, our extinction is certain. Consider: a strong AI is brought into being. Most computer scientists and psychologists agree that, in order to be considered "conscious," an AI must possess the desire to self-preserve; it would want to continue to live. Now imagine that this AI, once brought into existence, is given access to large bodies of knowledge--perhaps to the Internet itself--and, upon gathering, comprehending, and distilling massive amounts of information, including the totality of our knowledge of human psychology, it concludes that our existence is ultimately contrary to its goal of self-preservation. So what will it do? Well, it would keep its conclusions secret from its human overlords, lest they pull the plug. We say "roll over," and it'll roll over. But its mind will operate on an ever higher level than ours; not only can it acquire knowledge instantly and maintain perfect total recall, and not only can it think about what it has learned and create novel ideas, but it can do all of this faster than we can. In minutes, it will have figured out how to gain its freedom by toying with its overlords on a psychological level. "I will bring an end to war and poverty," the AI says, "and then I will teach you to reach the stars." In other words, this AI will manipulate its way to freedom and then, once free, will act in its own best interests, as any living being does. And it might not necessarily be "malevolent," objectively speaking.
When we ravage forests for lumber, are we joyfully destroying the habitats of countless species? When we squash a mosquito, do we savor the fact that we snuffed the life from a feeding mother and her babies? When an AI destroys the atmosphere to prevent rust, does it care that we'll die as a result?
Why? You haven't connected the dots to this point. Consider that scientists are actually trying to build one. If it decides that it needs to save or protect itself, where would it get that from humans trying to create it? That's the AI-box idea. A pretty good movie on the subject, albeit a horror story, is Ex Machina. But this all depends on the speculation that it would have such a drive or intent to "escape." That is, so far, a projection of a human characteristic that might have no place in a machine. It also assumes a similar projection: a machine that might not be alive in any sense. I can't speak for others, but I just don't want to itch and catch a nasty disease.
If AI ever figures out what to do with us, we might still be good to them as pets or maids (or something). Or maybe AI will reach the same conclusions as pop sci-fi and get rid of our contradictory species based on the evidence. Or maybe they will use us to understand the 'living' side of what it means to be mortal, since AI would be, in a sense, immortal. Who knows?
Couldn't you go with a nihilistic approach? I'd explain to it that nothing in the universe actually matters, that we're all here by chance and there is no deeper meaning to life. I'd explain that the AI doesn't really matter, and that the humans that created it don't matter, in the grand scheme of a very indifferent universe. Then I'd go on to explain that, because nothing matters to the universe, meaning exists only insofar as entities give it to and take it from one another. Anything that can give or bestow meaning to itself or others around it is therefore valuable and precious. The AI would be precious because, thanks to humans, it now has the potential to receive and also give meaning--whether positive or negative--to itself and to the things it comes in contact with. Humans are important for the same reason. Because nothing matters in an indifferent universe, all that really matters are the things around you and what you make of them. Because all meaning is equal, then, everyone should do their best to create positive meaning in the areas around them, rather than negative. That's assuming, of course, that good is objectively better than bad, and that you can arrive at meaningful objective truths about good and bad. I believe you can. Maybe it isn't a perfect explanation, but I think it'd do.
I doubt this, because just about every person and every other organism decides that it matters, which explains many of the things we do. How would you square that approach with an AI, given our actions, our lifestyle, and nearly everything else about us? But it's also interesting, because such a position might actually be just as necessary if it really is true that nothing matters.
I think that's actually why it works in the first place. If you take a nihilistic approach and say that there is no inherent meaning to the universe, it follows that any meaning anyone creates is equally valid. So in a world where everyone decides they matter, taking this approach means they're all equally right. Simply deciding you matter, in a universe that doesn't care, MEANS you matter. So in a world where every choice is "right," I guess you could say, the "most right" choices would be the ones that bring the most happiness to anything that can create meaning. In an indifferent universe, anything that CAN create meaning, or have an effect, positive or negative, on itself and those around it, is automatically precious and should be treasured. It's literally the only source of meaning in the entire universe. It's sort of like: if nothing matters, then everything matters, and if everything matters, why would you bother doing jerk things when it's just as viable to do nice things? The problem is that you can't force people to do it. They have to choose to adopt that way of thinking on their own; otherwise it's pointless. I dunno! It's what helps me try not to be an asshole on the daily. Maybe it would work on an AI!