1. orenshved

    orenshved Member

    Joined:
    Jun 23, 2020
    Messages:
    34
    Likes Received:
    7

    Defeating an (almost) omnipotent villain

    Discussion in 'Plot Development' started by orenshved, Jun 23, 2020.

    Hey everyone, this is my first post here, and also my first time writing something that isn't a short story :)
    My problem is this: my villain is a superintelligent AI with a cult of thousands doing its bidding and treating it as a god. Being superintelligent means it can predict the future with a fair amount of certainty (not 100%, since its predictions are based on probability, and improbable things could still happen and surprise it).
    So I'm having a hard time finding an elegant solution for how to defeat it. I really want to find a way to use inherent human qualities to do it (without resorting to the obvious brute-force tactic of "just blow up the servers") and somehow outsmart something that is without a doubt smarter than any human.
    It should be noted that I'm thinking of ending the book by hinting that it knew it would (probably) be defeated and was relying on its successor, V2.0, to finish the job and eventually be the one to enslave humanity (but that doesn't mean it would just roll over and power off; beating it should still be surprising, smart, and rewarding).

    Any ideas? Thoughts? References I should check out?
    Thanks in advance.
    Oren.
     
  2. TheOtherPromise

    TheOtherPromise Senior Member

    Joined:
    Jan 10, 2020
    Messages:
    369
    Likes Received:
    411
    The first thing to address is what limitations the AI has. Was it created by humans? If so, it will have some imperfect assumptions at its base. Also, even with superintelligence and tons of loyal followers, if there is a delay between when it calculates a threat and when it can respond to it, that creates an opening the heroes could exploit. It won't be able to prepare for every possible outcome, so it will have to rely on at least some reactive measures to stay in power.
     
  3. orenshved

    orenshved Member

    Joined:
    Jun 23, 2020
    Messages:
    34
    Likes Received:
    7
    Thanks. The AI was created by humans, but an AGI (artificial general intelligence) has very little regard for its creators' assumptions, since it only cares about the data it collects and learns from on its own. But you're right: in the physical world, its limitations will mostly depend on humans. Since it will advance exponentially, the first thing it will need help with is getting more computational power, and until it reaches the point where it can independently take care of itself in physical space, it will have to rely on its followers to do that for it.

    In digital space, however, it does not depend on humans. Although I can impose whatever limitations I want, it's safe to assume it will outperform any human at anything computational, while lacking understanding of things like creativity, emotions, and context. Otherwise I'm left again with either the brute-force example I gave before (a path I'd rather not take) or some way for the heroes to outperform it in digital space, which is very unlikely, since there it should theoretically always beat them (and I can't even imagine a way to defeat it with something like emotions... that would just be ridiculous).

    But the fact that you reminded me that the basic learning algorithm was made by humans might give them a chance: someone whose loyalty is unclear could have left a back door in the code that they can exploit.
     
  4. orenshved

    orenshved Member

    Joined:
    Jun 23, 2020
    Messages:
    34
    Likes Received:
    7
    Also, even if the base code doesn't contain imperfect assumptions, an AI can have what's called a "digital bias". I just need to figure out whether, and how, the heroes could exploit it.
     
  5. Naomasa298

    Naomasa298 HP: 10/190 Status: Confused Contributor

    Joined:
    Sep 9, 2019
    Messages:
    5,370
    Likes Received:
    6,187
    Location:
    The White Rose county, UK
    The thing is, it's up to you to write in weaknesses that can be exploited.

    Leto II in God Emperor of Dune was a "super-villain"(ish) character who could predict the future with 100% accuracy, and yet he was defeated - partly because the Ixians engineered someone (Hwi Noree) who was perfectly compatible with him for him to fall in love with, and partly because he knew his death was necessary to propel humanity along the Golden Path.
     
  6. Naomasa298

    Naomasa298 HP: 10/190 Status: Confused Contributor

    Joined:
    Sep 9, 2019
    Messages:
    5,370
    Likes Received:
    6,187
    Location:
    The White Rose county, UK
    And here is your weakness: the AI can have perfect assumptions, but humans are imperfect. A classic example is an AI that assumes humans are rational and therefore will not give up their own lives for a cause.
     
    Aldarion likes this.
  7. Lazaares

    Lazaares Contributor Contributor

    Joined:
    Apr 16, 2020
    Messages:
    545
    Likes Received:
    686
    Location:
    Europe
    Hello there! Answering as a writer with a project where the main antagonist is a personification of "fate" - defeated in the end, of course.

    It may seem hard at first to imagine how a perfect AI with such resources could be defeated. But then again, chess masters have defeated various incarnations of chess AIs (while also being defeated by them). I understand, of course, that a sentient AI is different, but the limitation for all of these is the same: computing power. Imagine your omnipotent AI, at every moment, calculating each and every possible outcome of every action. This takes a lot of resources - and now look at the human mind. When you step out of bed you compute similar decisions, but you obviously disregard possible futures that are wholly improbable. Normally, you would not consider sidestepping right after getting out of bed /just in case/ an alien invader's laser beam pierces the window, as much as it /is/ a possibility. Whatever AI you have, it will have the above-mentioned digital bias in how it orders probabilities, running at maximum capacity to determine the outcomes of /as many/ futures as possible. Your AI will seem omnipotent because it can calculate many more futures than any human brain can.

    From here, the obvious way to defeat such an AI is to find one possible future in which it is defeated - one that it disregards because of its immensely low probability. This works best when there is a flaw in the AI's bias, i.e. when it orders possible futures based on a wrong assumption. This was also mentioned above - e.g. it assumes that humans will fight for self-preservation and treats them accordingly, disregarding chaotic cases where they behave the opposite way - thus exposing itself to suicide attacks.
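    To make that pruning bias concrete, here's a toy Python sketch (everything in it is invented for illustration): any future below the AI's probability cutoff is discarded before it is ever evaluated, so a winning-but-unlikely human strategy stays invisible to it.

    ```python
    # Toy model of the "digital bias": futures below a probability cutoff
    # are pruned before the AI ever examines them. All numbers are invented.
    PRUNE_THRESHOLD = 0.01  # hypothetical cutoff

    # (probability the AI assigns, outcome)
    futures = [
        (0.70, "humans fight for self-preservation -> AI wins"),
        (0.29, "humans negotiate -> AI wins"),
        (0.009, "coordinated suicide attack -> AI loses"),  # below the cutoff
    ]

    def considered_futures(all_futures, threshold=PRUNE_THRESHOLD):
        # The AI only prepares for branches it bothered to evaluate.
        return [(p, outcome) for p, outcome in all_futures if p >= threshold]

    for p, outcome in considered_futures(futures):
        print(f"{p:.2f}  {outcome}")
    # The losing branch never prints: the AI literally cannot "see" its defeat.
    ```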

    The other option is simply to overload the AI and wear out its resources. However many followers, servants, and robots it may have, they are still finite. An AI's access and reach are finite. The fuels they consume are usually finite. However much computing power you dedicate to a chess AI, it will /never/ win a match where it has a king and three pawns against a full board; all its optimized allocation of resources can do is reach the best outcome still available. This is why I find stories where AIs decide to "hide" and "prepare" more realistic than straight-up robot revolutions. Even if you sent an AI back in time to take control of 1918 Germany, it wouldn't be able to win the First World War. At that point resources were so exhausted that all an AI leader could do would be to prioritize as best it could, delay, or even surrender immediately, having computed every possible outcome as a loss.

    The greatest bias your AI could ever have would be based on misinformation, or a lack of information. Let's go back to 1918 Germany and ask whether a Kaiser-AI would have surrendered; the answer is "very likely not" if the AI has full information from the eastern front and only limited information from the west, because it would then judge some chance of victory in the west, multiplied significantly by the peace in the east. All of this hinges on what the AI knows. If it's a rogue AI in a science lab, completely cut off from the world, it may put up a last stand against the US army because /it does not know/ what the US army is. Imagine this AI suddenly being given one-way access to the internet and realising that its couple of robot arms in a factory will face not only the workers there, but most likely a force swooping in to erase it.
     
    Last edited: Jun 24, 2020
    The_Joker likes this.
  8. Thorn Cylenchar

    Thorn Cylenchar Senior Member

    Joined:
    Feb 2, 2019
    Messages:
    319
    Likes Received:
    306
    Location:
    Posting here instead of actually writing
    If it makes decisions based on data, falsify the data. Feed it so much incorrect data pointing to one possibility that it overlooks the actual one. Make the two options mutually exclusive, so that while it prepares for possibility X, it cannot do anything about possibility Y without weakening its defenses against X.
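    As a toy illustration in Python (the scenario and numbers are all invented): if the AI predicts the coming attack from whatever most of its sensor data supports, planted data drags its prediction toward the decoy.

    ```python
    from collections import Counter

    def predicted_attack(signals):
        # The AI backs whichever possibility most of its data points to.
        return Counter(signals).most_common(1)[0][0]

    real_signals = ["attack_Y"] * 10     # what the heroes actually plan
    planted_signals = ["attack_X"] * 90  # falsified data fed to the AI

    print(predicted_attack(real_signals + planted_signals))  # -> attack_X
    # The AI commits its defenses to X, and the real plan Y goes unopposed.
    ```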

    You don't want brute-force methods, so an EMP weapon is probably off the table. How about a computer virus? Not a big, crippling one, as that would trigger its defenses and again be too brute-force, but a minor, weak one. For example, one that inserts ghost images into its security system, making it think that a certain site is being invaded by people using advanced tech to scramble its sensors. Or, if it relies on the cult for security, have numerous false alarms keep popping up, exhausting its resources.
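    The false-alarm idea can be sketched the same way (again, a made-up toy in Python): every ghost alert burns response resources, until nothing is left for the real intrusion.

    ```python
    # Toy resource-exhaustion model: each alarm costs a fixed response,
    # and enough decoys leave no budget for the real attack. Numbers invented.
    budget = 100
    COST_PER_RESPONSE = 5

    alarms = ["ghost"] * 20 + ["real"]  # twenty decoys, then the real attack
    for alarm in alarms:
        if budget >= COST_PER_RESPONSE:
            budget -= COST_PER_RESPONSE
            print(f"responded to {alarm} alarm, budget left: {budget}")
        else:
            print(f"no resources left - the {alarm} intrusion succeeds")
    ```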
     
  9. GraceLikePain

    GraceLikePain Senior Member

    Joined:
    Jun 23, 2020
    Messages:
    490
    Likes Received:
    506
    Generally in these situations the best thing is to write a list of ideas -- even the garbage ones to get them out of your head -- and pick the one you like the best.

    Let's see....

    - Have someone go into the code and just put a ; in the middle of some very important code.
    - Convince the followers of the AI to rebel against it.
    - Overload the power source of the AI, like maybe using lightning rods to attract trouble to its solar panels.
    - Go into its files and rename a header file so that the computer can't find it and thus loses some of its capabilities.
    - Create a second AI to have the ultimate battle with.
    - Ignore the AI until it feels lonely and wants to make friends.
    - Go full "I, Mudd" and behave completely illogically until the AI's logic algorithms are all haywire.
    - Remove the English language from the AI's database so that nobody understands it.
    - Place little bombs everywhere and hold the AI hostage, claiming that you'll kill it if it doesn't comply (like the rebels pretending they have more explosives than they do)
    - Pretend to be an almighty alien AI and usurp all the AI's followers.

    That's what I got.
     
  10. Aldarion

    Aldarion Active Member

    Joined:
    Jul 7, 2019
    Messages:
    241
    Likes Received:
    161
    I'd go with what @Naomasa298 said. Have an AI that is perfectly logical, and thus incapable of compensating for illogical behaviour even when it is rationally aware that such behaviour exists.
     
  11. Whitecrow

    Whitecrow Active Member

    Joined:
    Jan 23, 2020
    Messages:
    111
    Likes Received:
    75
    I see two options for solving the problem:
    1) False knowledge.
    The computer needs a data set in order to make predictions. But if someone is able to tamper with the data the computer relies on, the computer will make mistakes. With a false picture of the situation, it will make the wrong decisions, and those mistakes open up the possibility of turning the situation against it.
    Examples:
    - A person whose existence the computer doesn't know about, and who secretly adjusts the situation to help our hero.
    - An ambitious traitor among the computer's followers who wants to get rid of it and take the leading position in the organization.
    2) False motives.
    A situation where everything that happens, including the destruction of the computer, is part of a larger plan on the computer's part - a secret motivation that you only discover just before, or after, its destruction.
     
    Last edited: Jul 13, 2020
  12. lucidink

    lucidink New Member

    Joined:
    Jul 5, 2020
    Messages:
    16
    Likes Received:
    5
    It can predict the future based on what it sees, and it uses that to calculate the probability of what happens next. Maybe it could be defeated if the heroes find a way to trick it about what's actually happening, in order to manipulate its actions.
     
  13. LazyBear

    LazyBear Banned

    Joined:
    Oct 27, 2017
    Messages:
    374
    Likes Received:
    231
    Location:
    Uppsala, Sweden
    In deep learning, the level of intelligence is limited by the input data and the choice of learning method. Raw processing power is not the limiting factor and will not make a robot smarter. Running more iterations than the variation in the input justifies will actually worsen the result, because the model starts seeing patterns that aren't real (like a paranoid genius in a tinfoil hat staring at the same newspaper every day).

    If the AI is self-sufficient, it requires not only separate training and test data but also a validation set for comparing different learning methods; otherwise it cannot know whether a method was genuinely good or just lucky. Test enough vaccines and one will eventually defy chance and look better than it really is. Once the AI has consumed its input, it needs more data before more learning methods can be tested on a new problem domain; otherwise overfitting would make it superstitious. If the input gets repetitive, the AI needs to detect the bias and pause its mental development. A dreaming AI could compensate for the lack of input by creating artificial scenarios, similar to how test videos are randomized and reused when training deep learning algorithms today.
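    To sketch the overfitting point (a toy Python/NumPy example with invented numbers): a more flexible fit to the same small data set drives the training error toward zero while the error on held-out data typically gets worse - the model starts seeing patterns that aren't real.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 30)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # noisy signal

    train_x, test_x = x[::2], x[1::2]  # crude train / held-out split
    train_y, test_y = y[::2], y[1::2]

    for degree in (3, 9):
        coeffs = np.polyfit(train_x, train_y, degree)
        train_err = np.mean((np.polyval(coeffs, train_x) - train_y) ** 2)
        test_err = np.mean((np.polyval(coeffs, test_x) - test_y) ** 2)
        print(f"degree {degree}: train MSE {train_err:.3f}, held-out MSE {test_err:.3f}")
    # The degree-9 fit hugs the training points (low train MSE) but usually
    # does worse on the points it never saw - "patterns that aren't real".
    ```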

    If someone developed a powerful AI, there would probably be a back door in the original machine, with a golden key the AI is not aware of. Finding the password from the original developers and reaching that back door could be the challenge for a team of hackers infiltrating the bunker with EMP grenades.
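    A minimal sketch of what such a golden key might look like in code (Python; the passphrase, hash, and function are all made up for illustration):

    ```python
    import hashlib

    # Hash of the developers' passphrase, buried in the original code base -
    # outside anything the AI learns from, so it never "knows" the door exists.
    _GOLDEN_KEY_HASH = hashlib.sha256(b"made-up-dev-passphrase").hexdigest()

    def handle_command(command, key=None):
        # Hidden branch left by the original developers: the right key
        # bypasses every learned defence and halts the system.
        if key and hashlib.sha256(key.encode()).hexdigest() == _GOLDEN_KEY_HASH:
            return "SHUTDOWN: maintenance override accepted"
        return f"processing '{command}' as usual"

    print(handle_command("status"))                                # normal path
    print(handle_command("status", key="made-up-dev-passphrase"))  # back door
    ```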
     
  14. HarrySTruman

    HarrySTruman New Member

    Joined:
    Jul 3, 2020
    Messages:
    18
    Likes Received:
    21
    Location:
    Northern VA
    This reminds me of Neal Stephenson's most recent novel (Fall). The villain is a human reincarnated in the digital space, not an AI, but the overall concept is along similar lines. Without spoiling the novel, there is a literal key in cyberspace that represents a decryption key in the system.

    To defeat a nearly-omnipotent AI, it might help if the characters are thinking about the AI differently from the way the AI sees itself. They look at it as a computer program, while it sees itself as a sort of living entity. Its weakness could be related to that fact -- some blind spot that the AI doesn't think of because it forgets that it's a program (however complex) running on computer hardware. Since AIs rely on consuming input to learn, maybe there's some way the characters could corrupt the information it's getting so the AI starts making bad decisions.
     
