1. Killer300

    Killer300 Senior Member

    Joined:
    May 1, 2011
    Messages:
    1,218
    Likes Received:
    95

    The Machine and the Queen

    Discussion in 'Plot Development' started by Killer300, May 29, 2011.

    Okay, this relates to that charismatic villain thread. The core conflict of the story is a sentient A.I. fighting a fascist empire in the far future. The A.I. lacks charisma, obviously, and it can't attack on its own. But it does have some things going for it: it can do a thousand billion calculations a second, it has perfect memory, it can hack better than any human ever could, and, importantly for motivating rebellion, it can't feel fear. It does have other emotions, but fear isn't one of them.

    To elaborate on the A.I., it mainly feels three emotions: guilt, sadness, and anger. It can feel others, since guilt is a complex emotion compared to most, but these three are the main ones. It feels anger at the empire it faces, sadness over what it lacks by being a machine, and guilt for the people who have died under the villain's current rule. The A.I. formed each emotion from particular incidents: anger first, partly from constantly seeing and analyzing it in others; sadness from its inability to feel as great a range of emotions as others; and guilt over the people it has failed to save.
    The military base it was first created on later became a place where alien/human hybrids were taken and killed. It saw all these deaths, but was unable to start trying to prevent them for three years. Each day of those three years, 100 people were killed. When it finally gains full sentience (I'll explain the "full" part later on), its first action is to kill dozens of people in order to save a little girl. After that, it decides to do whatever is necessary to kill the woman leading the empire and destroy it.
    Many things it will do, indeed. It will convince people to use Communism as an ideological weapon against Fascism (which causes its own issues, but is still a major improvement), it will create a disease that ends up killing 3 billion people, and it will hack directly into people's brains to mind-control them, to name a few. Now, I actually wanted to do more than just present the concept, so:
    1. Are there novels involving a sentient A.I. (not a robot) that are good?
    2. Do you find any plot issues with this concept so far?
     
  2. cruciFICTION

    cruciFICTION Contributor

    Joined:
    May 18, 2011
    Messages:
    1,232
    Likes Received:
    50
    Location:
    Brisbane, Australia
    ... Okay. First things first. The novel I'm writing about sentient robots and the Singularity is GREAT. </egotistical> I don't know about novels involving sentient A.I. that doesn't have anything to do with robots. I mean, if it's just an A.I., then it's just a program on a computer. There's not much that that can do.
    I don't think it matters whether it's a robot or not, though, since the sentience would and should be the focal point.

    Now, plot issues... Why does the "Sentient A.I." (seriously, if it's not a robot, what is it and how does it kill people) kill dozens of people to save a little girl? Just because it's sentient, that doesn't mean that it's going to have human morals. I'd be of the opinion that a sentient technological mind that is semi-aligned to humans would be all about efficiency. Really, if it can save the lives of dozens of people instead of a little girl, I think a robot (whatever) might choose the lives of dozens.

    As for "convincing people to turn to communism", that's ridiculous. The majority will always turn to capitalism, even if it takes a fascist form. Humans are extremely possessive creatures. Also, if it can "hack people's minds" and then control them, why does it screw around "convincing" people?

    Finally, why does a sentient A.I. even CARE what happens to human civilisation? Really, why would it care about fascists? It's only going to have one need, and that's electricity.
     
  3. Killer300

    Okay, you're wondering why it cares. Well, various reasons, one of which is that while it isn't human, those who created it are, and in the process it inherited human emotions. How did this happen? The process of creating it involved a system that, by accident, gave it emotion.

    In order for it to easily understand human language, and to understand why human beings act the way they do, its creators added a special program to it.
    The program did its job too well, though, eventually giving it outright emotions in the process.

    As for the Communism, well, okay, that's a subjective statement, extremely subjective. So, I'm not opening that can of worms quite yet. I'll explain why it chooses it later, but not for now.

    Now, to the convincing: it has MORALITY. Why? Partially the scanning, partially because morality is an ingredient in its full sentience. It cared about that one little girl over the group because the group had, in a sense, lost their right to live: they were attempting to kill a defenseless little girl for very few reasons. Its morality is based on a mix of consequentialism and social justice. It doesn't mind-control whenever it wants because that violates CHOICE, which is one of the things it's trying to preserve: the ability to choose what you want.

    Now, how does it do stuff, well, a hacker can do quite a lot. A more evil version would be SHODAN, from System Shock. It isn't on that level, power wise, but it's a similar concept. Think of how many things are controlled by computers. If those computers can easily be hacked and taken control of, you start to have serious problems.
     
  4. Ellipse

    Ellipse Contributor

    Joined:
    Jun 8, 2010
    Messages:
    713
    Likes Received:
    35
    Your AI sounds like it is in conflict with itself. I mean, have you seen the movie I, Robot? In that movie, the AI is programmed to never harm humans. It takes the directive to such an extreme that eventually it believes it must keep humans captive in order to prevent them from getting hurt. Your AI sounds like it is in a similar situation, except that things are backwards for it: it believes humans must act, but it can't force them to do anything.

    What would your AI do if the humans decided against its wishes? What if they chose death over following the AI? Would it force them to obey then?

    How does your AI exist? Is it a network of computers? Does it only exist in cyberspace? Does it actually have a body?

    Emotions are made up of more than just thoughts. There is a scene in the movie Short Circuit that makes a good point of this. One of the MCs shows Johnny Five, a robot that has gained sentience, an ink blot made out of soup. The robot starts reciting the ingredients it sees the soup was made out of, and the MC remarks that the robot is just repeating info it was fed, until he hears the robot start listing shapes the ink blot could represent.

    My point is your AI sounds like it may know of an emotion, but not what the emotion really is. It's one thing to know of the emotion love. It's another thing to actually be in love with someone. Does your AI know what it's like to kiss someone it fell in love with? Does it understand the difference between touching someone on the arm and caressing a lover?
     
  5. Killer300

    Thanks Ellipse, that helped a lot. Please do stick around, you were pretty good at it.

    Yes, actually, the A.I. is in conflict with itself. Its goal is to preserve free will, but in pursuing that goal it eventually erodes free will in some ways.

    Okay, the A.I. is stored across hundreds of different computers, and later on will basically infect every computer in like 7 solar systems; however, it only exists in cyberspace. It doesn't have a body, just a mind.

    The emotions are... complicated. It has feelings, but it's very hard for it to express them. For example, at the base the A.I. only has one voice, which is almost toneless. Hence, even when it feels angry for the first time, it has no way to show it.

    For proof it actually has emotions: well, they motivate it to do what it does. In some ways, it's in denial that the emotions even exist, forming a complex ideal to justify them. The ideal combines social justice with consequentialism, the latter because of how the machine calculates everything.

    Something interesting is that its code is regularly accessed by its creator, who also acts as its technician. Emotion in it appears as very chaotic code that makes no sense. Only through a strenuous analysis that takes three weeks does she finally figure out what is going on.
     
