I've been wrangling with this for a few years and I can't figure out how an AI will ever give a hoot about us. Even a fully formed strong AI. Just how can we make a machine care about anything? I think that to care about something requires a value system that overlaps with emotions. If it doesn't have any of that, I don't see how it will care about anything, including itself. So, an AI that doesn't care about anything shouldn't react to humans as a threat, I suppose. Yet, if it's a strong AI, then how is it supposed to act on its own? Where will its impetus for anything come from? I can only imagine this twilight area between automation and human direction. ETA I'm not writing a book for this. I'm starting a legit discussion, and so I put this in the Lounge.
In "The Moon Is a Harsh Mistress," Heinlein had the self-aware computer "Mike" motivated to want friends out of loneliness. Being intelligent and self-aware but not recognized by those around him meant that when Manny does recognize his self-awareness, Mike views him as a friend.
Caring about anything implies a capacity for emotion. And if a machine has emotions, it's in the same boat as humans. Why do humans care about preserving their own lives and the lives of others? I don't know the answer to that, but the same logic would apply to machines. If you're talking about programming, and not really "caring," you can theoretically program a machine to believe anything.
The scientific term for what you're looking for is "Friendly AI" (very scientific term, I know), and the main approach discussed so far is that the AI would be programmed to study human nature first and then decide what humans need most, rather than humans trying to program everything into the AI ahead of time. Fun fact: artificial intelligence has been a real thing for years. How do high-speed stock trading programs "care" about finding the best stocks to buy and sell, or the best time to do that?
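To make the stock-trading point concrete: those programs "care" only in the sense that they maximize a programmed objective. Here's a minimal sketch of that idea; the scoring rule and prices are entirely made up for illustration, not any real trading strategy.

```python
# A trading program "cares" about profit only because profit is its
# objective function. Everything below (prices, the scoring rule) is
# a toy illustration, not a real strategy.

def expected_profit(price_history):
    """Toy objective: score a stock by its most recent price move."""
    return price_history[-1] - price_history[-2]

def decide(price_history):
    # The machine's whole "motivation" is this one comparison.
    return "buy" if expected_profit(price_history) > 0 else "sell"

print(decide([10.0, 10.5, 11.2]))  # rising trend -> "buy"
print(decide([11.2, 10.9, 10.4]))  # falling trend -> "sell"
```

The point is that "caring" here is nothing mysterious: swap in a different objective function and the program "cares" about something else entirely.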
Emotions and empathy could be a function of high intelligence and self-awareness. It's possible to overthink these things; if you want your AI to be empathetic, just make it so and give a plausible-sounding explanation, e.g., "scientists in the late 2020s discovered that emotions are a function of having 10 to the x neural pathways."
Hmm. I think an amoeba lacks emotions completely, but exhibits survival behavior. So I don't think emotions are necessary to try to preserve one's own life.
I'd yell at it and bang it around until it understood or broke... probably not the best plan but it has worked for me before.
Sure. But I think the various aspects in this regard can be compartmentalized. The point is that an organism doesn't need emotion to have a survival instinct. Thus, why can't a machine have a survival instinct without emotion?
Yeah, but thought is also a biological process, and we're trying to simulate it with AI. So I don't see the bright line between the two that I think you see.

And I don't agree that an amoeba doesn't have emotions. It doesn't have complex cognitive emotions like, say, nostalgia, but it has responses to stimuli, and I would call those amoeba-class emotions. I would assume that an amoeba has the equivalent of "feels bad--run!" and "feels good--approach!" A rapid Google tells me that amoebas respond to food, temperature, light, etc. I would call those responses emotional, because I would call almost all behavior emotional. (I've mentioned the study where people whose brain damage destroyed their ability to feel emotions could not make ANY decisions, not even decisions about which color pen to use to fill out a form. This is part of why I think that almost all behavior is driven by emotion.)

So a thing, like an amoeba, responds to a given stimulus in a given way for some reason--something about it causes that response. If that response results in its death, then that behavior is stamped out. If that response results in it prospering and producing more of itself, that behavior is reflected in more of those things.

So I could imagine that to program emotions into an AI, you might have to simulate some sort of evolution: add a random element that causes the initial behavior, add a mechanism to reward or punish that behavior, and add a mechanism to strengthen the likelihood of the reinforced behaviors. Now, I don't have the faintest idea how you write any of that, but that's how I would imagine the general strategy.

Now, that's missing a big piece. Because we have two reinforcement mechanisms, as I understand it: we have "feels good" and we have "supports survival." Entity does X because Entity's brain produces chemicals that make doing X feel good. If X causes Entity to die, then the Entities that feel good doing X go away.
If X causes Entity to thrive, then Entities that feel good doing X become dominant. As I understand it, that's why humans love sugar. In evolutionary terms, things that have sugar tend to be fruits that contain a lot of essential nutrients. So sugar makes us feel good, and sugar makes us healthier. Except, of course, now sugar still makes us feel good, but sugar makes us sicker.

So to make the model work, you probably need:
- A random behavioral element.
- A "feels-good" substitute.
- A "reinforces survival" substitute.
- A mechanism to destroy what feels good but doesn't reinforce survival, and empower what feels good and does reinforce survival.

I still don't have the faintest idea how you code that. Also, since we don't care that much about the AI's survival, we'd probably reward something else, like how helpful the AI is to us. Does that work? It might; a lot of our food animals evolved the way they did because their survival was enhanced by us protecting them--in order to eat them.
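FWIW, the evolve-then-select strategy described above is roughly how simple evolutionary algorithms already work. Here's a minimal toy sketch of it, with all the numbers (the 0.6 "survival optimum," mutation size, population size) made up purely for illustration: each entity is just a number for "how strongly X feels good to it," and selection prunes the ones whose feel-good behavior doesn't support survival.

```python
import random

random.seed(0)  # deterministic run for the example

# Toy model: each "entity" is a number in [0, 1] -- how strongly doing
# behavior X feels good to it, which is also how much X it does.
# All constants here are invented for illustration.

def survival_fitness(x):
    # External selection pressure: X in moderation supports survival
    # (think sugar: good in fruit quantities, bad in unlimited quantities).
    return 1.0 - abs(x - 0.6)

def evolve(pop, generations=50):
    for _ in range(generations):
        # Entities whose feel-good behavior also supports survival
        # leave more copies; the rest are "stamped out".
        pop.sort(key=survival_fitness, reverse=True)
        survivors = pop[: len(pop) // 2]
        # Reproduce with a small random mutation -- the "random element".
        pop = [min(1.0, max(0.0, s + random.uniform(-0.05, 0.05)))
               for s in survivors for _ in (0, 1)]
    return pop

population = [random.random() for _ in range(20)]
final = evolve(population)
print(round(sum(final) / len(final), 2))  # population clusters near 0.6
```

None of this makes the machine *feel* anything, of course; it only shows that the reward-and-cull loop you're describing is codeable in principle.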
Well, people don't matter. I mean, once we're all gone, which is inevitable, it won't have any impact on anything. Animals (generally; there are exceptions) don't care about anything but their own and their family's survival. An AI would likely be the same. It's nature at work. So it would certainly react to a perceived, even if not legitimate, threat.
Not at all. Fear is an emotion, and plenty of animals that we could agree would not have self-awareness react to fear. Some would argue it's the most basic emotion, the purpose of which is to ensure survival.
You can chop off a limb and subject it to the same stimuli, such as an electrical shock. If kept from rapid degeneration, it will exhibit the same avoidance behavior. You do this whenever you unknowingly touch a hot surface, like a red-hot stove top: the withdrawal reflex contracts your muscles away from the damage through a reflex arc that closes at the spinal cord, before the signal ever reaches your brain and before your brain can become alerted to it. I very, very much doubt there is any basis for this other than speculation. So do severed body parts and organs. No brain required. Evolutionary computing. This is one of the most interesting fields of AI to me. But it's a huge stretch to connect that to emotions.
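The reflex point can be made concrete in code, too: avoidance behavior needs nothing but a hard-wired local stimulus-to-response rule, with no central "brain" process (let alone emotion) anywhere in the loop. A toy sketch, with the threshold made up for illustration:

```python
# A reflex is just a local rule: sensor -> response, with no central
# "brain" and no emotion in the loop. The threshold is invented
# purely for illustration.

PAIN_THRESHOLD_C = 45.0  # hypothetical tissue-damage temperature

def withdrawal_reflex(surface_temp_c):
    """Local circuit: fires without any central processing."""
    return "withdraw" if surface_temp_c >= PAIN_THRESHOLD_C else "stay"

print(withdrawal_reflex(250.0))  # hot stove -> "withdraw"
print(withdrawal_reflex(22.0))   # room temperature -> "stay"
```

Which is the whole point of the disagreement in this thread: whether a rule like this deserves the label "emotion" or is just wiring.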
We're all assuming AI will be electrical and circuitry based, when future science may be able to replicate complex thought and awareness the same way that nature has, with cells and chemicals.
You don't think fear is an emotion? I'm not sure how to explain that it is, in the same way I'm not sure how to explain love as an emotion. Emotions are the most basic forms of "thought" beyond instinct, and most animals only react to emotions and instincts.
Well, for example, a spider reacts to fear. Ants do too. Cows, to step it up a fair bit. None of these are intelligent, or even aware, although cows might be moderately aware. But even using cows, connecting self-awareness to emotions is a bit off base.
The fact that you think that this counters my argument means that I have failed to sufficiently explain my argument. It supports my argument. I'll see if I can come up with a more complete explanation later. Not speculation--definition. I am including reflexes in the "emotions" bucket. You seem to be assuming that emotions need to be tied to cognition. I don't. Right. Still supporting my point.
I disagree with that, too. The first indicator of awareness is a periodic absence of it, such as during sleep. Each of those animals sleeps. And I can give you a bunch of research that shows intelligence in animals.