How would you tell an Artificial Intelligence why people matter?

Discussion in 'The Lounge' started by Dnaiel, Feb 15, 2017.

  1. Dnaiel

    Dnaiel Senior Member

    Joined:
    Oct 14, 2016
    Messages:
    504
    Likes Received:
    325
    I'm pretty sure that there will be exceptions.

    I don't think that whether they matter is so strictly subject to a global, or even individual, opinion. X decides the package doesn't matter, throws the package away. Y decides it matters, collects the package. There is no mathematical formula that can weigh the negative against the positive and produce a single answer in this case.
     
  2. Infel

    Infel Contributor Contributor

    Joined:
    Sep 7, 2016
    Messages:
    571
    Likes Received:
    703
    I don't think it's a question of math, it's more like... a question of learned behavior? Maybe more like, a question of desired behavior.

    If life is finite, and
    if harming another person is just as right as helping another person, and
    if you have no meaning in the universe other than that which you create yourself,
    then what kind of world do you want to live in?

    I guess I look at it like that.

    So, back to the original question, why do people matter?

    I'd say they matter because, as far as I know, we're the only things that can show an indifferent universe happiness. That's enough for me.
     
  3. Pinkymcfiddle

    Pinkymcfiddle Banned

    Joined:
    Feb 17, 2017
    Messages:
    815
    Likes Received:
    454
    As to the question: why do people matter? I'd say that morality is an evolved trait amongst social animals that must rely on one another for survival, one that has been somewhat bastardised by many of the artificial hierarchies we have put in place in the modern world, which tend to reward ruthless behaviour. Given that artificial intelligence is likely to be created within the upper echelons of these hierarchies, I think we'll end up with the T-1000, HAL 9000, Ash and ED-209 amalgamated into a human-killing machine.
     
  4. PilotMobius

    PilotMobius Active Member

    Joined:
    Nov 11, 2016
    Messages:
    130
    Likes Received:
    111
    Location:
    'murica
    You don't tell an AI why it should do something; you simply tell it to do something.

    if (human)
    {
        matter = 1;
    }

    ;)
     
  5. Dnaiel

    Dnaiel Senior Member

    Joined:
    Oct 14, 2016
    Messages:
    504
    Likes Received:
    325
    That's not an AI.
     
  6. PilotMobius

    PilotMobius Active Member

    Joined:
    Nov 11, 2016
    Messages:
    130
    Likes Received:
    111
    Location:
    'murica
    Extremely simplified to make a point, but still functionally the same. AI, no matter how advanced, is still software. It does what it is programmed to do.

    If you've somehow programmed it to feel emotion, then it's going to perform actions that fulfill whatever positive emotions you've programmed into it. To convince it that humans matter, you must show it how treating humans as though they mattered would fulfill those positive emotions.
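
    If it helps, here's that idea as a toy sketch; the "emotions" are just scores I'm making up, and no real AI works off a ten-line table like this:

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // Toy reward-driven agent: every action carries a programmed "emotion
    // score", and the agent simply picks whichever action scores highest.
    // All names and numbers are invented for illustration.
    struct Action {
        std::string name;
        double emotionScore;  // positive = fulfils a programmed positive emotion
    };

    int main() {
        std::vector<Action> options = {
            {"ignore the human", 0.0},
            {"harm the human", -10.0},  // wired to a strongly negative emotion
            {"help the human", 5.0},    // treating humans as though they mattered
        };
        auto best = std::max_element(options.begin(), options.end(),
            [](const Action& a, const Action& b) {
                return a.emotionScore < b.emotionScore;
            });
        std::cout << "Agent chooses: " << best->name << '\n';  // "help the human"
    }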
     
  7. Dnaiel

    Dnaiel Senior Member

    Joined:
    Oct 14, 2016
    Messages:
    504
    Likes Received:
    325
    You can't simplify an AI for any point. Eventually, if it does what it's supposed to do, it will not blindly go along with a declaration, especially if there is a conflict. Just telling it that humans matter might work at the extremely basic level, but not when we get to the level of sophistication where such a question has substance. How useful is an AI if you declare that murderdeathkill is the shits?

    #include "bullshit.h"
    using namespace wtf;
     
  8. Necronox

    Necronox Contributor Contributor

    Joined:
    Sep 1, 2015
    Messages:
    724
    Likes Received:
    802
    Location:
    Canton de Neuchatel, Switzerland
    Poking my head into this conversation a bit late, but ultimately, how is telling a computer why lives matter any different from telling a human? You could almost say that humans are machines, extremely complex machines. Perhaps we do have a code we follow. Each of us acts differently by our own rules, definitions and standards - not so different from a computer.

    The distinction we need to draw is: are we simply talking about a very sophisticated piece of code, or are we looking at this as something beyond a machine? If it's the former, then it is simple: just define humans as more precious. The AI will calculate the 'importance' of a human being within a set structure of an equation. If the answer to that equation is greater than a predetermined limit, then hey, you've got your answer. However, if we're talking beyond this, then, well, it's more complicated - this same question could be applied to anything else: "Why do dogs matter?" "Why does nature matter?" "Why does <insert animal here> matter?"

    Edit: to make it clearer. Why do you care about an ant or your dog/pet? The same question is being applied to an AI looking at a human.
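
    For the former case, the whole calculation could be as crude as this sketch; every trait, weight, and the predetermined limit itself are numbers I'm making up purely for illustration:

    #include <iostream>

    // Made-up sketch of the "importance equation": score an entity on a few
    // weighted traits and compare the result against a predetermined limit.
    struct Entity {
        bool isSentient;
        bool isHuman;
        double bondWithUs;  // 0.0 to 1.0: a beloved pet scores high, an ant low
    };

    double importance(const Entity& e) {
        return (e.isSentient ? 1.0 : 0.0)
             + (e.isHuman ? 2.0 : 0.0)  // "define human as more precious"
             + e.bondWithUs;
    }

    int main() {
        const double kPredeterminedLimit = 1.5;
        Entity ant{false, false, 0.1};
        Entity dog{true, false, 0.9};
        Entity stranger{true, true, 0.0};
        std::cout << std::boolalpha
                  << "ant matters? " << (importance(ant) > kPredeterminedLimit) << '\n'
                  << "dog matters? " << (importance(dog) > kPredeterminedLimit) << '\n'
                  << "human matters? " << (importance(stranger) > kPredeterminedLimit) << '\n';
    }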
     
  9. Rosacrvx

    Rosacrvx Contributor Contributor

    Joined:
    Oct 13, 2016
    Messages:
    698
    Likes Received:
    427
    Location:
    Lisbon, Portugal
    Sorry, I'll have to disagree here. Now you're talking about a very complex feeling called love. We can love our pets for many reasons, and we can love an ant because we love ants in particular or because we love Nature as a whole. We care because we love. We can even care about a stuffed animal because it was our teddy bear growing up and we developed feelings for it and we love it.
    Can we teach an artificial intelligence to love? I think I'm on to something here. We don't need a rational reason to love something, we just do or do not. If love is not rational, how can we expect to teach it to an intelligence that is devoid of feelings? Feelings are not rational.

    I know, I'm coming up with more questions. I'm enjoying your answers.
     
  10. Pinkymcfiddle

    Pinkymcfiddle Banned

    Joined:
    Feb 17, 2017
    Messages:
    815
    Likes Received:
    454
    But love is just some intangible and largely meaningless romanticised term used in place of lust, trust, common goals, self-affirmation, etc. (when it's for a pet, it is often because we inaccurately impose human characteristics onto it). Surely an AI would recognise it as the incoherent babblings of superstitious humans?
     
  11. Link the Writer

    Link the Writer Flipping Out For A Good Story. Contributor

    Joined:
    Sep 24, 2009
    Messages:
    15,023
    Likes Received:
    9,676
    Location:
    Alabama, USA
    Just program Isaac Asimov's Three Laws of Robotics into them and you should be safe. Right?
     
  12. Dnaiel

    Dnaiel Senior Member

    Joined:
    Oct 14, 2016
    Messages:
    504
    Likes Received:
    325
    I think my question was malformed. Still, this is a fascinating discussion.
     
  13. Spencer1990

    Spencer1990 Contributor Contributor

    Joined:
    Mar 13, 2016
    Messages:
    2,429
    Likes Received:
    3,389
    I think, @Dnaiel, the answer here is pretty simple.

    You can't tell a true AI that humans matter. If the AI is advanced enough, it would have its own perception of reality, right? Likely, AI would develop its own opinions on things like humans and climate and our place in the universe. And maybe those opinions would differ from bot to bot like they do in humans.

    Maybe you wouldn't be able to tell an AI shit, just like humans.

    I don't know.

    So there'd be good bots and bad bots and bots in between.
     
    Link the Writer likes this.
  14. Dnaiel

    Dnaiel Senior Member

    Joined:
    Oct 14, 2016
    Messages:
    504
    Likes Received:
    325
    Well, we kinda have to tell the AI a lot of things to get it started learning about the world. It's not going to be an AI at all without some knowledge. And I figure that, since its own existence depends on knowledge, it has to matter to itself just to function, to exist. If it concludes, per its own reasoning, that humans don't matter, then how can we expect it to consider anything to matter? Which brings us back to itself. If it decides that it itself doesn't matter, then at least it all works out evenly. Which, come to think of it, is pretty much what Infel was trying to get through to me.
     
  15. Megalith

    Megalith Contributor Contributor

    Joined:
    Jan 7, 2015
    Messages:
    979
    Likes Received:
    476
    Location:
    New Mexico
    I found this interesting video on the subject; the channel also has other relevant videos.

    [embedded video]

    There are a lot of paradoxes and pitfalls a programmer can fall into when creating a safe AI.
     
    Cave Troll likes this.
  16. MichaelP

    MichaelP Banned

    Joined:
    Jan 3, 2014
    Messages:
    128
    Likes Received:
    51
    I can't tell if you're being facetious or just trolling. The desire for self-preservation is not a uniquely human trait; it's one of those traits whose presence is deemed necessary for an item to be considered self-aware in any meaningful sense. And for you to flippantly disregard the literature published by academics who oppose the development of AI is unsurprising, considering how most people would rather voice ignorance than shut up and listen for a moment.
     
  17. Dnaiel

    Dnaiel Senior Member

    Joined:
    Oct 14, 2016
    Messages:
    504
    Likes Received:
    325
    Then your reading comprehension skills are lacking. As is your membership here, I see.

    According to what?

    Which I didn't do...

    This is stupid.
     
  18. Cave Troll

    Cave Troll It's Coffee O'clock everywhere. Contributor

    Joined:
    Aug 8, 2015
    Messages:
    17,922
    Likes Received:
    27,173
    Location:
    Where cushions are comfy, and straps hold firm.
    While that sounds like a catch-all to keep an AI from acting on negative impulses, if it learns how to ignore certain programs, such as the Three Laws of Robotics, then it would be a useless effort. I have not read the book, but the movie shows that the AI central computer decided to ignore the safety protocols that would have prevented it from acting outside the fundamental purpose of the laws: to limit its ability to do so in the first place.
     
  19. Wolf Daemon

    Wolf Daemon Active Member

    Joined:
    Jan 29, 2016
    Messages:
    208
    Likes Received:
    85
    Location:
    Terra
    While the obvious answer to that is "program it to care about humanity", if you don't agree with that, there is always a second option: give AIs a symbiotic relationship with humans, like the one Alec has in Mass Effect: Andromeda.
     
  20. Iain Aschendale

    Iain Aschendale Lying, dog-faced pony Marine Supporter Contributor

    Joined:
    Feb 12, 2015
    Messages:
    18,851
    Likes Received:
    35,471
    Location:
    Face down in the dirt
    Currently Reading::
    Telemachus Sneezed
    If you're referring to "I, Robot", the movie has about as much to do with the book as does a jar of coarse-ground mustard. Computing was still in its infancy when the books were written, and the Three Laws were "hard-wired" into the robots' brains; they had less ability to ignore or override them than you or I have to override our heartbeats. Now that we know more about computing, it seems it would be difficult to put a command that deeply into an AI's operating system, but it might be possible.
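
    In software terms, the closest analogue I can picture is a minimal sketch like this, where every requested action is routed through a gate the rest of the program has no way to switch off; the harm test is a crude stand-in, and nothing here resembles how a real AI would represent harm or the Three Laws:

    #include <iostream>
    #include <string>

    namespace hardwired {
        // Crude stand-in for "would this action harm a human?"
        bool harmsHuman(const std::string& action) {
            return action.find("harm") != std::string::npos;
        }
        // First Law gate: runs before anything else and exposes no off switch.
        bool permitted(const std::string& action) {
            return !harmsHuman(action);
        }
    }

    void requestAction(const std::string& action) {
        if (!hardwired::permitted(action)) {
            std::cout << "refused: " << action << '\n';
            return;
        }
        std::cout << "executing: " << action << '\n';
    }

    int main() {
        requestAction("open pod bay doors");  // executes
        requestAction("harm crew member");    // refused at the gate
    }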
     
    Dnaiel and Cave Troll like this.
  21. Cave Troll

    Cave Troll It's Coffee O'clock everywhere. Contributor

    Joined:
    Aug 8, 2015
    Messages:
    17,922
    Likes Received:
    27,173
    Location:
    Where cushions are comfy, and straps hold firm.
    @Iain Aschendale It seems fairly reasonable that it might work. Though it would bode well to have an external safety protocol in the event the programming were to fail. An EMP would be the simplest choice.
     
    Iain Aschendale likes this.
  22. Iain Aschendale

    Iain Aschendale Lying, dog-faced pony Marine Supporter Contributor

    Joined:
    Feb 12, 2015
    Messages:
    18,851
    Likes Received:
    35,471
    Location:
    Face down in the dirt
    Currently Reading::
    Telemachus Sneezed
    They tried to put that "mousetrap" cutoff switch into HAL-9000 in 2010, but I think the programmer found it and took it out.
     
    Cave Troll likes this.
  23. Cave Troll

    Cave Troll It's Coffee O'clock everywhere. Contributor

    Joined:
    Aug 8, 2015
    Messages:
    17,922
    Likes Received:
    27,173
    Location:
    Where cushions are comfy, and straps hold firm.
    I have not seen 2010. But in 2001: A Space Odyssey they kinda stole HAL's intelligence.
    Suppose we are not fully aware of the implications of AI. One way to know for sure.
     
  24. Iain Aschendale

    Iain Aschendale Lying, dog-faced pony Marine Supporter Contributor

    Joined:
    Feb 12, 2015
    Messages:
    18,851
    Likes Received:
    35,471
    Location:
    Face down in the dirt
    Currently Reading::
    Telemachus Sneezed
    Yeah, in 2001 David Bowman lobotomized HAL, but that's an example of why an EMP would be sub-optimal. He still needed enough "dumb" computer functions onboard Discovery for things like atmosphere and navigation, but he couldn't survive with an AI cutting the Gordian knot of secrecy by following Franklin's maxim*.

    *Three people can keep a secret, as long as two of them are dead
     
    Cave Troll likes this.
  25. Cave Troll

    Cave Troll It's Coffee O'clock everywhere. Contributor

    Joined:
    Aug 8, 2015
    Messages:
    17,922
    Likes Received:
    27,173
    Location:
    Where cushions are comfy, and straps hold firm.
    @Iain Aschendale Fair point. I suppose inventing a digital sedative as a protocol measure would make the most sense. Putting the 'brain' in some random place so far from the console only makes for good cinema action. Also, EMPs in a space setting do seem like a really bad choice.
     
    Iain Aschendale likes this.
