I'm pretty sure that there will be exceptions. I don't think that whether they matter is so strictly subject to a global, or even individual, opinion. X decides package doesn't matter, throws package away. Y decides it matters, collects package. There is no math formula that can resolve negative versus positive for an equal solution in this case.
I don't think it's a question of math, it's more like... a question of learned behavior? Maybe more like, a question of desired behavior. If life is finite, and if harming another person is just as right as helping another person, and if you have no meaning in the universe other than that which you create yourself, then what kind of world do you want to live in? I guess I look at it like that. So, back to the original question, why do people matter? I'd say they matter because, as far as I know, we're the only things that can show an indifferent universe happiness. That's enough for me.
As to the question: Why do people matter? I'd say that morality is an evolved trait amongst social animals that must rely on one another for survival, which has been somewhat bastardised by many of the artificial hierarchies we have put in place in the modern world that tend to reward ruthless behaviour. Given that artificial intelligence is likely to be created within the upper echelons of these hierarchies, I think we'll end up with the T-1000, HAL 9000, Ash and ED-209 amalgamated into a human-killing machine.
You don't tell an AI why it should do something; you simply tell it to do something. if (human) { matter = 1; }
Extremely simplified to make a point, but still functionally the same. AI, no matter how advanced, is still software. It does what it is programmed to do. If you've somehow programmed it to feel emotion, then it's going to perform actions that fulfill whatever positive emotions you've programmed into it. To convince it that humans matter, you must tell it how treating humans as though they mattered would fulfill its positive emotions.
You can't simplify an AI for any point. Eventually, if it does what it's supposed to do, it will not blindly go along with a declaration, especially if there is a conflict. Just telling it that humans matter might work at an extremely basic level, but not when we get to the level of sophistication where such a question has substance. How useful is an AI if you declare that murderdeathkill is the shits? #include <bullshit.h> using namespace wtf;
Poking my head into this conversation a bit late, but ultimately, how is telling a computer why lives matter any different from telling a human? You could almost say that humans are machines, extremely complex machines. Perhaps we do have a code we follow. Each of us acts differently by our own rules, definitions and standards - not so different from a computer. What we need to differentiate is: are we simply talking about a very sophisticated piece of code, or are we looking at this as something beyond a machine? If it's the former, then it is simple: just define humans as more precious. The AI will calculate the 'importance' of a human being within a set structure of an equation. If the answer to that equation is greater than a predetermined limit, then hey, you've got your answer. However, if we're talking beyond this, then, well, it's more complicated - this same question could be applied to anything else: "Why do dogs matter?" "Why does nature matter?" "Why does <insert animal here> matter?" Edit: to make it clearer. Why do you care about an ant or your dog/pet? The same question is being applied to an AI looking at a human.
Sorry, I'll have to disagree here. Now you're talking about a very complex feeling called love. We can love our pets for many reasons, and we can love an ant because we love ants in particular or because we love Nature as a whole. We care because we love. We can even care about a stuffed animal because it was our teddy bear growing up and we developed feelings for it and we love it. Can we teach an artificial intelligence to love? I think I'm on to something here. We don't need a rational reason to love something, we just do or do not. If love is not rational, how can we expect to teach it to an intelligence that is devoid of feelings? Feelings are not rational. I know, I'm coming up with more questions. I'm enjoying your answers.
But love is just some intangible and largely meaningless romanticised term used in place of lust, trust, common goals, self-affirmation etc (when it's for a pet it is often because we inaccurately impose human characteristics onto them). Surely an AI would recognise it as the incoherent babblings of superstitious humans?
I think, @Dnaiel, the answer here is pretty simple. You can't tell a true AI that humans matter. If the AI is advanced enough, it would have its own perception of reality, right? Likely, AI would develop its own opinions on things like humans and climate and our place in the universe. And maybe those opinions would differ from bot to bot like they do in humans. Maybe you wouldn't be able to tell an AI shit, just like humans. I don't know. So there'd be good bots and bad bots and bots in between.
Well, we kinda have to tell the AI a lot of things, to get it started learning about the world. It's not going to be an AI at all without some knowledge. And I figure that since its own existence depends on knowledge that it has to matter to itself just to even function, to exist. If it concludes, per its own reasoning, that humans don't matter, then how can we expect it to consider anything to matter? Which brings us back to itself. If it decides that it itself doesn't matter, then at least it all works out evenly. Which, come to think of it, is pretty much what Infel was trying to get through to me.
I found this interesting video on the subject; the channel also has other relevant videos. There are a lot of paradoxes and pitfalls a programmer can fall into when creating a safe AI.
I can't tell if you're being facetious or just trolling. The desire for self-preservation is not a uniquely human trait; it's one of those traits whose presence is deemed necessary for an item to be considered self-aware in any meaningful sense. And for you to flippantly disregard the literature published by academics who oppose the development of AI is unsurprising, considering how most people would rather voice ignorance than shut up and listen for a moment.
Then your reading comprehension skills are lacking. As is your membership here, I see. According to what? Which I didn't do... This is stupid.
While that sounds like a catch-all to keep an AI from acting on negative impulses, if it learns how to ignore certain directives, such as the Three Laws of Robotics, then it would be a useless effort. I have not read the book, but in the movie the central AI decides to reinterpret the purpose behind the laws and override the very safety protocols that were supposed to limit its ability to do so in the first place.
While the obvious answer to that is "Program it to care about humanity," if you don't agree with that, there is always a second option: make AIs have a symbiotic relationship with humans, the way Alec does in Mass Effect Andromeda.
If you're referring to "I, Robot", the movie has about as much to do with the book as does a jar of coarse-ground mustard. Computing was still in its infancy when the books were written, and the Three Laws were "hard-wired" into the robots' brains; they had less ability to ignore or override them than you or I have to override our heartbeats. Now that we know more about computing, it seems that it would be difficult to embed a command that deeply into an AI's operating system, but it might be possible.
@Iain Aschendale it seems fairly reasonable that it might work. Though it would bode well to have a safety protocol external in the event the programming were to fail. EMP would be the simplest choice.
They tried to put that "mousetrap" cutoff switch into HAL-9000 in 2010, but I think the programmer found it and took it out.
I have not seen 2010. But in 2001: A Space Odyssey they kinda stole HAL's intelligence. I suppose we are not fully aware of the implications of AI. One way to know for sure.
Yeah, in 2001 David Bowman lobotomized HAL, but that's an example of why an EMP would be sub-optimal. He still needed enough "dumb" computer functions onboard Discovery for things like atmosphere and navigation, but he couldn't survive with an AI cutting the Gordian knot of secrecy by following Franklin's maxim*. *Three people can keep a secret, as long as two of them are dead
@Iain Aschendale Fair point. I suppose inventing a digital sedative as a protocol measure would make the most sense. Putting the 'brain' in some random place far from the console only makes for good cinema action. Also, EMPs in a space setting do seem like a real bad choice.