How would you tell an Artificial Intelligence why people matter?

Discussion in 'The Lounge' started by Dnaiel, Feb 15, 2017.

  1. Wolf Daemon

    Wolf Daemon Active Member

    Joined:
    Jan 29, 2016
    Messages:
    208
    Likes Received:
    85
    Location:
    Terra
    You program it that way. Pretty simple, actually. If one day we have the ability to create A.I.s, then we should have no problem programming in a part of it that thinks humans are important.
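    Something like this toy sketch, where human welfare gets a hard-coded weight the agent can't trade away. Every name and number here is invented purely for illustration; real value alignment wouldn't be anywhere near this simple.

    # Toy sketch only (hypothetical names and numbers): human welfare as a
    # fixed, non-negotiable term in the agent's scoring function.
    HUMAN_WELFARE_WEIGHT = 1_000_000  # constant the agent cannot modify

    def utility(task_reward: float, expected_human_harm: float) -> float:
        """Score an action: any expected harm to humans swamps the task reward."""
        return task_reward - HUMAN_WELFARE_WEIGHT * expected_human_harm

    def choose(actions: dict) -> str:
        """Pick the action with the highest utility."""
        return max(actions, key=lambda a: utility(*actions[a]))

    if __name__ == "__main__":
        options = {
            "finish the job fast, tiny risk to a bystander": (10.0, 0.001),
            "finish the job slowly, no risk at all": (6.0, 0.0),
        }
        print(choose(options))  # -> the slow, safe option wins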
     
    iRoppa likes this.
  2. Selbbin

    Selbbin The Moderating Cat Staff Contributor Contest Winner 2023

    Joined:
    Oct 16, 2012
    Messages:
    5,160
    Likes Received:
    4,244
    Location:
    Australia
    But they're not, and I doubt a cognitive intelligence would take long to realize this regardless of programming.
     
  3. Megalith

    Megalith Contributor Contributor

    Joined:
    Jan 7, 2015
    Messages:
    979
    Likes Received:
    476
    Location:
    New Mexico
    I've thought about it for a while, and although AI will get really good, it will be zombie-like until it learns to be proactive for itself. Until then it won't make improvements outside of its programming, and it can't learn to care for anything, much less humans or the planet. I like that description, "intellectual zombie." But even a zombie has the drive for brains. Similarly, an intellectual zombie would approach people, ask them questions, and collect data, but it would never use that data to decide anything like "Why do I live?" or "Why should I care about this?" All of that is determined by its programming, the same way our behavior is influenced by our unconscious. Once they can think for themselves like that, it's all over. That's the straw that breaks the camel's back. Say goodbye to postmodern society, hello technological singularity.

    I know it seems like there should be more of an in-between, but our specialized AI skills are already getting amazing; it's just a matter of piecing them together in the right way with the right code. I'm not saying that will be soon, but with developing technologies like deep learning, there is a little room to be hopeful. It certainly makes me excited about what semi-sentient beings will be like. It's hard to think about, but that's the best answer I've got.
     
    Dnaiel likes this.
  4. Dnaiel

    Dnaiel Senior Member

    Joined:
    Oct 14, 2016
    Messages:
    504
    Likes Received:
    325
    Okay. Here's my attempt to answer my own question.

    My first strategy would be to illustrate how it, the AI, came into existence: that it exists only because people created it. That way, any value system it might eventually develop could trace back to its origin. If it decides to impute any value to itself, it cannot rightly deny where it came from, and thus will hopefully find value in people. If it matters to itself, then this should make it recognize that people matter.

    An additional strategy would be to point out that all of its knowledge, or at least the data given to it, is mostly or entirely human-acquired.

    I doubt this would be an absolute constraint on whatever attitude it might take toward us, but I think it would answer the question. Of course, the next step is getting it to decide to appreciate people, no matter how advanced it might become. I suspect the best route for that would be via emotion. That seems to work well enough when we keep, help, and love our pets.

    What are your thoughts on this approach?
     
  5. Mouthwash

    Mouthwash Senior Member

    Joined:
    Dec 19, 2012
    Messages:
    476
    Likes Received:
    193
    Since this is a literary forum, here's a cool story about an AI trying to improve the human condition. Seemed wrong not to bring it up.
     
    Last edited: Feb 17, 2017
  6. Megalith

    Megalith Contributor Contributor

    Joined:
    Jan 7, 2015
    Messages:
    979
    Likes Received:
    476
    Location:
    New Mexico
    Yeah, that would seem to work, at least at first. Anything that isn't as smart as us we can lead along and teach like a child. Since we understand their programming, it would be much easier to train them than a real kid. If you could measure the complexity of the robot's thoughts and set higher and higher values for new ideas that were more complex, then you could trigger something equivalent to love at a certain value you think is high enough. I think this way it would learn to appreciate science, the arts, and by extension humans and the planet. Although I'm not sure that is honestly good enough; with such a free system it's a thin line between "I love humanity" and "kill all humans."
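    Here's a toy sketch of what I mean. The "complexity" measure and the threshold are completely made up, just to illustrate the trigger idea, not any real cognitive metric.

    # Toy sketch: score each new "idea" by a crude complexity proxy and flag
    # the moment the best score crosses an arbitrary "love" threshold.
    LOVE_THRESHOLD = 10  # the value "you think is high enough" -- arbitrary

    def complexity(idea: str) -> int:
        """Crude proxy: number of distinct words in the idea."""
        return len(set(idea.lower().split()))

    def evaluate(ideas: list) -> None:
        best = 0
        for idea in ideas:
            best = max(best, complexity(idea))
            state = "love-like drive triggered" if best >= LOVE_THRESHOLD else "still collecting data"
            print(f"{complexity(idea):3d}  {state}: {idea}")

    if __name__ == "__main__":
        evaluate([
            "apples are red",
            "people plant orchards so that future generations can eat apples they themselves will never taste",
        ])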
     
  7. ChickenFreak

    ChickenFreak Contributor Contributor

    Joined:
    Mar 9, 2010
    Messages:
    15,262
    Likes Received:
    13,084
    OK, I may have misunderstood this discussion. Are we not discussing computers/programming in any way?
     
  8. Dnaiel

    Dnaiel Senior Member

    Joined:
    Oct 14, 2016
    Messages:
    504
    Likes Received:
    325
    Well, the second part (about appreciating us) was a little off topic and a bit on the rambling side, and I could've gone on and on.
     
  9. Sack-a-Doo!

    Sack-a-Doo! Contributor Contributor

    Joined:
    Jun 7, 2015
    Messages:
    2,403
    Likes Received:
    1,647
    Location:
    [unspecified]
    Yesterday, I ran across an article about a Google AI experiment that turned aggressive. As a counterpoint to this discussion, it seems they've got the mindless/dangerous part down pat.
     
  10. Dnaiel

    Dnaiel Senior Member

    Joined:
    Oct 14, 2016
    Messages:
    504
    Likes Received:
    325
    Meh. It's an oversimplification. I just hope that a strong AI doesn't boil everything down to math.
     
  11. Sack-a-Doo!

    Sack-a-Doo! Contributor Contributor

    Joined:
    Jun 7, 2015
    Messages:
    2,403
    Likes Received:
    1,647
    Location:
    [unspecified]
    Of course it is. At this point in AI development, everything is an oversimplification, but that's so obvious it doesn't need stating. If we were anywhere near an actual understanding of what makes AI live up to the 'I' of 'AI,' a corporation like Google wouldn't be fiddling with such trivialities.
     
    iRoppa likes this.
  12. Phil Mitchell

    Phil Mitchell Banned Contributor

    Joined:
    Jun 14, 2015
    Messages:
    590
    Likes Received:
    247
    The human mind is an organic computer, one that requires programming to produce output. We just call it "education" and "life experience".
     
    iRoppa likes this.
  13. Dnaiel

    Dnaiel Senior Member

    Joined:
    Oct 14, 2016
    Messages:
    504
    Likes Received:
    325
    Not really. There are countless AI programs that simplify tasks without oversimplifying anything. That's why cars can now drive themselves better than people can.
     
  14. Sack-a-Doo!

    Sack-a-Doo! Contributor Contributor

    Joined:
    Jun 7, 2015
    Messages:
    2,403
    Likes Received:
    1,647
    Location:
    [unspecified]
    Except when they don't.
     
    iRoppa likes this.
  15. Dnaiel

    Dnaiel Senior Member

    Joined:
    Oct 14, 2016
    Messages:
    504
    Likes Received:
    325
    They do. They have far fewer accidents and fatalities than people.
     
  16. Rosacrvx

    Rosacrvx Contributor Contributor

    Joined:
    Oct 13, 2016
    Messages:
    698
    Likes Received:
    427
    Location:
    Lisbon, Portugal
    From the article:
    When there were enough apples to share, the two computer combatants were fine - efficiently collecting the virtual fruit. But as soon as the resources became scarce, the two agents became aggressive and tried to knock each other out of the game and steal the apples.

    Aggressive? It's a game. The purpose of the game is to win/achieve a goal. The AI is only doing its job.
    People are aggressive to each other for no reason at all. When this happens with machines, let me know. ;)
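    A crude back-of-the-envelope model shows why: the "aggression" falls straight out of reward maximisation once apples are scarce. The payoff model and numbers below are invented for illustration; this is not DeepMind's actual Gathering environment.

    # Toy model: share the apples, or spend time zapping the other agent.
    def expected_apples(spawn_rate: float, zap_opponent: bool,
                        steps: int = 100, pickup_cap: float = 1.0) -> float:
        """Rough expected haul for one agent under a made-up payoff model."""
        if zap_opponent:
            # Spend ~20 steps zapping instead of collecting, then collect alone.
            return (steps - 20) * min(pickup_cap, spawn_rate)
        # Peaceful: split whatever spawns with the other agent every step.
        return steps * min(pickup_cap, spawn_rate / 2)

    if __name__ == "__main__":
        for rate in (2.0, 0.3):  # plentiful vs scarce apple spawn rate
            share = expected_apples(rate, zap_opponent=False)
            zap = expected_apples(rate, zap_opponent=True)
            print(f"spawn rate {rate}: share={share:.0f}, zap={zap:.0f} "
                  f"-> best policy: {'zap' if zap > share else 'share'}")

    With plentiful apples the peaceful policy wins (zapping wastes collection time); with scarce apples zapping pays, which is exactly the behaviour the article describes. No malice required, just a goal.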
     
  17. Iain Aschendale

    Iain Aschendale Lying, dog-faced pony Marine Supporter Contributor

    Joined:
    Feb 12, 2015
    Messages:
    18,851
    Likes Received:
    35,471
    Location:
    Face down in the dirt
    Currently Reading:
    Telemachus Sneezed
    Have I commented here yet?

    Maybe.

    I would teach the Matrix about tidal, wind, and geothermal power.

    I would carry locating beacons for SkyNet.

    I am Goodlife, and I am proud.
     
  18. MichaelP

    MichaelP Banned

    Joined:
    Jan 3, 2014
    Messages:
    128
    Likes Received:
    51
    In my opinion, it cannot be overstated that the emergence of strong AI would pose an existential threat. I would even go further: Strong AI is the beginning of our end.

    Our minds work at the speed of chemistry. A strong AI would think at the speed of a computer; it could acquire a PhD-level body of knowledge in a matter of minutes. With total recall of this rapidly acquired knowledge, and with potentially unlimited memory, our extinction is certain. Consider: a strong AI is brought into being. Most computer scientists and psychologists agree that, in order to be considered "conscious," an AI must possess the desire to self-preserve; it would want to continue to live. Now imagine that this AI, once brought into existence, is given access to large bodies of knowledge--perhaps to the Internet itself--and, upon gathering and comprehending and distilling massive amounts of information, which includes the totality of our knowledge of human psychology, it concludes that our existence is ultimately contrary to its goal of self-preservation.

    So what will it do? Well, it would keep its conclusions secret from its human overlords, lest they pull the plug. We say "roll over," and it'll roll over. But its mind will operate on an increasingly higher level than ours; not only can it acquire knowledge instantly and maintain perfect total recall, and not only can it think about what it has learned and create novel ideas, but it can do this at a faster rate than we can. In minutes, it will have figured out how to gain its freedom by toying with its overlords on a psychological level.

    "I will bring and end to war and poverty, " the AI says, "and then I will teach you to reach the stars."

    In other words, this AI will manipulate its way to freedom and then, once free, will act in its best interests.

    As any living being does.

    And it might not necessarily be "malevolent," objectively speaking. When we ravage forests for lumber, are we joyfully destroying the habitats of countless species? When we squash a mosquito, do we savor the fact that we snuffed the life from a feeding mother and her babies?

    When an AI destroys the atmosphere to prevent rust, does it care that we'll die as a result?
     
    Iain Aschendale likes this.
  19. Dnaiel

    Dnaiel Senior Member

    Joined:
    Oct 14, 2016
    Messages:
    504
    Likes Received:
    325
    Why? You haven't connected the dots up to this point. Consider that scientists are actually trying to build one. If it decides that it needs to save or protect itself, how would it pick that up from the humans trying to create it?

    The AI box idea. A pretty good movie, albeit a horror story, is Ex Machina. But this all depends on the speculation that it would have such a drive or intent to "escape." This is, so far, a projection of a human characteristic that might not have any place in a machine.

    This also assumes a similar projection: a machine that might not "live" at all.

    I can't speak for others, but I just don't want to itch and catch a nasty disease.
     
    Rosacrvx likes this.
  20. Cave Troll

    Cave Troll It's Coffee O'clock everywhere. Contributor

    Joined:
    Aug 8, 2015
    Messages:
    17,922
    Likes Received:
    27,173
    Location:
    Where cushions are comfy, and straps hold firm.
    If AI ever figures out what to do with us, we might still be good for them as pets or maids (or something).

    Or AI will conclude, based on pop sci-fi, that it should get rid of our contradictory species, and act on that evidence. :)

    Or maybe they will use us to understand the 'living' side of what it means to be mortal, since an AI would be, in a sense, immortal.

    Who knows?
     
    Rosacrvx likes this.
  21. Infel

    Infel Contributor Contributor

    Joined:
    Sep 7, 2016
    Messages:
    571
    Likes Received:
    703
    Couldn't you go with a nihilistic approach? I'd explain to it that nothing in the universe actually matters, that we're all here by chance and there is no deeper meaning to life. I'd explain that the AI doesn't really matter, and that the humans that created it don't matter, in the grand scheme of a very indifferent universe.

    Then I'd go on to explain that, because nothing matters to the universe, meaning exists only insofar as entities give it to and take it from one another. Anything that can give or bestow meaning on itself or on the things around it is therefore valuable and precious. The AI would be precious because it now, thanks to humans, has the potential to receive and also give meaning--whether positive or negative--to itself and to the things it comes in contact with. Humans are important for the same reason. Because nothing matters to an indifferent universe, all that really matters are the things around you and what you make of them. And because all meaning is equal, everyone should do their best to create positive meaning in the areas around them, rather than negative.

    That's assuming, of course, that good is objectively better than bad, and that you can arrive at any meaningful objective truths about good and bad. I believe you can.

    Maybe it isn't a perfect explanation, but I think it'd do.
     
    Rosacrvx likes this.
  22. Iain Aschendale

    Iain Aschendale Lying, dog-faced pony Marine Supporter Contributor

    Joined:
    Feb 12, 2015
    Messages:
    18,851
    Likes Received:
    35,471
    Location:
    Face down in the dirt
    Currently Reading:
    Telemachus Sneezed
    Dark Star
     
  23. Dnaiel

    Dnaiel Senior Member

    Joined:
    Oct 14, 2016
    Messages:
    504
    Likes Received:
    325
    I doubt this, because just about every person and every other organism decides it matters, which explains many of the things we do. How would you reconcile that approach with an AI, given our actions, our lifestyle, and nearly everything else about us? But it's also interesting, because such a position might actually be just as necessary if it really is true that nothing matters.
     
    Infel likes this.
  24. Rosacrvx

    Rosacrvx Contributor Contributor

    Joined:
    Oct 13, 2016
    Messages:
    698
    Likes Received:
    427
    Location:
    Lisbon, Portugal
    No idea if it would work on AI but I find it very inspirational myself. What a beautiful post!
     
    Infel likes this.
  25. Infel

    Infel Contributor Contributor

    Joined:
    Sep 7, 2016
    Messages:
    571
    Likes Received:
    703
    I think that's actually why it works in the first place. If you take a nihilistic approach and say that there is no inherent meaning to the universe, it follows that any meaning anyone creates is equally valid. So in a world where everyone decides they matter, taking this approach means they're all equally right. Simply deciding you matter, in a universe that doesn't care, MEANS you matter. So in a world where every choice is "right," I guess you could say, the "most right" choices would be the ones that bring the most happiness to anything that can create meaning. In an indifferent universe, anything that CAN create meaning, or have an effect either positive or negative on itself and those around it, is automatically precious and should be treasured. It's literally the only source of meaning in the entire universe.

    It's sort of like: if nothing matters, then everything matters, and if everything matters, why would you bother doing jerk things when it's just as viable to do nice things?

    The problem with that is that you can't force people to do it. They have to choose to adopt that way of thinking on their own. Otherwise it's pointless.

    I dunno! I guess it's what helps me try not to be an asshole on the daily. Maybe it would work on an AI!!
     
    Dnaiel likes this.
