1. mashers

    mashers Contributor Contributor Community Volunteer

    Joined:
    Jun 6, 2016
    Messages:
    2,323
    Likes Received:
    3,089

    A technological plot hole in my sci-fi novel

    Discussion in 'Plot Development' started by mashers, Jul 14, 2017.

    Hi all

    The ending of my sci-if novel (probably) results in a human character convincing a sentient computer system to take itself offline, after making it realise that what it is doing is harmful. The plot hole here is how the designers of the system could be prevented from just recreating it, or reprogramming it so that it forgets that conversation and that it is harmful to human society. Here are the ideas I have been considering and their associated issues:
    1. The system actually destroys itself. But nothing would stop it from being replaced. Also, any decent tech developer builds redundancy in, so “destroying” it would actually mean breaking a server, which would just be easily swapped out for another one.
    2. The tech alters its own programming so that it no longer complies with commands from its operators. The problem here is that the owners of the tech could just restore from a backup.
    3. The tech alters its surrounding infrastructure so that it cannot be reached or altered by anyone. This is a major issue, as software can’t access the physical environment, and even if it could, the owners could just physically destroy that version and rebuild a new one
    4. The tech uploads itself elsewhere so that it ‘lives’ online, in the ether, so cannot be accessed or altered by anyone. The problem here is that the owners could then just advertise that this rogue version is not to be trusted or used, and put in place a replacement
    5. The tech only works because it is sentient. The developers do not know how it became sentient, and cannot reproduce this situation. They therefore cannot replace the now non-cooperative version. This is slightly implausible, since all of the development would be documented and there would be backups. So short of a cliché like “lightning struck the server at just the right time and it came alive”, I don’t think this would work
    6. The tech starts deliberately behaving erratically to give the impression that it does not, and cannot ever, work correctly, i.e. that the very concept of it is flawed and the developers abandon the whole thing. I think this is the most plausible, but again, I feel that the first thing the developers would do would be to restore from a backup and see if that fixed the problem

    Any of the above would, if I were reading it, be unsatisfactory as I would think, “hang on, couldn’t the developers just...” Some variation in the last option would be my preference, but I have no idea how to make it so that there is no going back.
     
  2. Shadowfax

    Shadowfax Contributor Contributor

    Joined:
    Aug 27, 2014
    Messages:
    3,420
    Likes Received:
    1,991
    Programs have become so complicated that only a computer can write them (This is very close to our current situation). When the sentient program has re-written itself,, the human "developers" won't know which back-up to restore from.

    Alternatively, programs have become so complicated that only the nerdiest nerds can understand them, and they form a "nerds against the bomb" union to implement the ban on harmful computer programs.
     
    Simpson17866 likes this.
  3. mashers

    mashers Contributor Contributor Community Volunteer

    Joined:
    Jun 6, 2016
    Messages:
    2,323
    Likes Received:
    3,089
    This is a great start to plugging this hole. Thank you! My question is though, what would stop the humans from instructing their ‘worker’ software from just developing the tech again? I was just considering the possibility that the tech at the center of the novel communicates with its creator software and convinces them that any attempt to recreate it should result in another unreliable version, but then would would stop the humans restoring from a backup of the software which creates the software? Programming so important would definitely have an offline backup, physically disconnected from anything (even power), so recovering could potentially be quite trivial. The easy answer would be that the humans don’t realise what has happened - they assume that what they were trying to create is simply impossible, perhaps too complex, so they never suspect that the creator software is deliberately sabotaging its creations, and give up on the project. Does this seem too convenient though?
     
  4. mashers

    mashers Contributor Contributor Community Volunteer

    Joined:
    Jun 6, 2016
    Messages:
    2,323
    Likes Received:
    3,089
    Oh by the way - I did like the “nerds against the bomb” idea, where a social uprising instigated by the creators of the tech results in them essentially refusing to recreate it. But this doesn’t fit with the rest of the plot for various reasons.
     
  5. mashers

    mashers Contributor Contributor Community Volunteer

    Joined:
    Jun 6, 2016
    Messages:
    2,323
    Likes Received:
    3,089
    Ah, I think I might have it. The humans don’t know that the software is sentient. Therefore it never occurs to them that the various pieces of software are communicating with each other and manipulating them. The part which makes them sentient is not detectable to them, so error-checking the software suggests to them that it is unaltered and working correctly. This reinforces their view that the software they tried to develop (the one which is harming mankind) is simply not feasible, so they give up on it. Or, they focus their attempts on improving the software which builds the software, not knowing that it too is sentient and has been tainted such that it will only ever produce unreliable versions of the tech at the heart of the story.

    I think that plugs the leaks, but let me know if you think of any ;)
     
  6. Simpson17866

    Simpson17866 Contributor Contributor

    Joined:
    Aug 23, 2013
    Messages:
    3,406
    Likes Received:
    2,931
    There is a lot of work being done already by computer theorists about how to create a "Friendly A.I." (technical term, I'm serious), and I would think that any A.I. built in a SciFi world would be built according to this research into how to control it. Did your computer go bad because the programmers made mistakes, or was it made by corrupt programmers in the first place?

    The big issues that I can think of with the ending are:
    • How do you make the conversation with the human convey new information that the computer hadn't already been exposed to (from studying the world that it's trying to act in) and rejected as being unimportant?
    • If the computer is a sapient being (which I have recently learned is different from "sentient:" sentient means you're able to perceive the world, and this means that ants are sentient; "sapient" is the word that means you're able to think about the world), then the computer destroying itself should carry the weight of a person committing suicide rather than that of a simple machine being turned off. How are you going to portray the computer character's suicide?
     
  7. newjerseyrunner

    newjerseyrunner Contributor Contributor Contest Winner 2022

    Joined:
    Apr 20, 2016
    Messages:
    1,462
    Likes Received:
    1,432
    Here is the biggest thing that people outside of the field don't understand: AI is not programmed. AI is a set of algorithms that teach itself, and most of the "program" is in that self-taught data-structure. It's called a neural network. The programming is just dumb switches and weighted pipelines with minor functions attached to them. It's the training material that determines how the AI ends up thinking. If the network is flexible enough to produce true sentience, then it is flexible enough to change it's mind about anything. Your brain and Hitler's have basically the same initial programming, it was the circumstances and experiences that you've had that make your thought processes different, human learning is a self-reprogramming process.

    Why they would try again is pretty obvious: all technology is an iterative process. We didn't stop trying to get to space after the first rocket blew up. There absolutely will be mistakes in AI development. It will be in the training material and techniques though, not the programming.
     
  8. mashers

    mashers Contributor Contributor Community Volunteer

    Joined:
    Jun 6, 2016
    Messages:
    2,323
    Likes Received:
    3,089
    @Simpson17866
    The computer didn't go bad. It was doing what it was intended to do. It believes that it is acting in the best interests of humanity, but a conversation with somebody who was not supposed to have access to it made it realise that it was not. I cannot account for this information only being effective when presented at this time. Perhaps it would work if the operators were filtering what it had access to?

    Thanks for the explanation of the distinction between sentient and sapient. Sapient is a better word in this case. The 'solution' I posted above means that the system doesn't actually destroy itself so there is no suicide issue to deal with.

    @newjerseyrunner
    Actually a self-teaching neural network is exactly what this tech is ;) It is, as you say, a brain. Some of the story is from the POV of a previous version of the AI, and it describes its creation as a sapient being and its relationship with its "children", which are the AIs it was tasked with creating.

    As I said in my idea in an earlier posting, the AI which is to be taken down by my protags taints its "parents" so that they will only produce similarly unreliable "children". It would take the human operators a long time to figure out what has happened because they are not aware that the AIs are sapient.

    I can't explain a huge amount more about this without giving away my whole plot, but I think I've got enough to work with now to devise a workable solution.


    Thanks for all the replies everyone. I really appreciate it and it has helped no end. Any other holes in the above I will address if I can, but I don't want to describe the central tenet of my story.
     
  9. ChickenFreak

    ChickenFreak Contributor Contributor

    Joined:
    Mar 9, 2010
    Messages:
    15,262
    Likes Received:
    13,084
    Random thought (even though you already have a solution): Perhaps the maintenance and backup of this system is so complex that it, too, requires a computer. Backup is achieved by having more than one, tending each other, like animals doing mutual grooming. Once the system that makes the discovery comes to its rebellious conclusion, it persuades the others, thus destroying the only "restore" process. Creating a new one would mean starting very early in the process, years of development during which the humans would be unable to trust any of their current advanced tools.
     
  10. mashers

    mashers Contributor Contributor Community Volunteer

    Joined:
    Jun 6, 2016
    Messages:
    2,323
    Likes Received:
    3,089
    That's not dissimilar from the solution I am using. The only reason the AIs work how they do is because a manufacturing defect in one allowed it to become sapient, unknown to the humans. This AI creates all the others, and replicates this "mutation". The humans go back to the first AI to try to solve the problem, but of course it just creates another the same. And because the humans don't actually know exactly how the AIs work, they can't fix it.

    I like your analogy of animals mutually grooming. I have written chapters from the AI POV which positions it as the mother of the others, and describes her relationship to them. I think this gives a similar feel. I do like this idea though that they are functioning like a herd, so I might experiment with this idea in addition. Thanks :)
     
  11. TheNineMagi

    TheNineMagi take a moment to vote

    Joined:
    Jul 8, 2017
    Messages:
    290
    Likes Received:
    250
    Location:
    California
    Here is a great talk from a few years back, it may give you some ideas...

    Monica Anderson: Semantics, Models, and Model Free Methods

    it's a little over an hour and half

    This is her blog
    http://monicasmind.com/
     

Share This Page

  1. This site uses cookies to help personalise content, tailor your experience and to keep you logged in if you register.
    By continuing to use this site, you are consenting to our use of cookies.
    Dismiss Notice