A question of AI

Discussion in 'Science Fiction' started by Cave Troll, Dec 11, 2017.

  1. JLT

    JLT Contributor Contributor

    Joined:
    Mar 6, 2016
    Messages:
    1,874
    Likes Received:
    2,245
    I'll grant you that. Science fiction is like that ... a concept starts out as a literary device, and then takes on different shapes and aspects as it approaches reality. But Asimov himself was aware of the ambiguities in AI and the "Three Laws" and anticipated many of the dilemmas you describe. The reason we're thinking about them today is largely because he was thinking of them years ago.

    I know a few people in the AI field, and they agree with you that Asimov's concepts have little relevance to AI work today. I wouldn't say that they laugh at them, only that they consider them somewhat naive. A transportation engineer would feel the same about a nineteenth-century description of a high-speed train, or early twentieth-century writings about space travel or computers. The technologies described there were mostly either fantasy elements or science that turned out to be a blind alley, but they served their purpose as literary devices and got people thinking about "what-if?" scenarios, which led to advancements in those areas.

    I don't think we're really that far apart in our thinking.
     
  2. Cave Troll

    Cave Troll It's Coffee O'clock everywhere. Contributor

    Joined:
    Aug 8, 2015
    Messages:
    17,922
    Likes Received:
    27,173
    Location:
    Where cushions are comfy, and straps hold firm.
    Sure, safety is priority #17539.7 B. 'Cause it is always a good idea to create something with capabilities beyond our own, so what could go wrong? If it finds humans to be redundant, then we might regret being so lax as to trust it not to make such a decision in the first place.

    Even Elon Musk says we should be proactive in how we develop AI, instead of being reactive once something goes wrong. (Stephen Hawking and Bill Gates have voiced concerns on the subject as well.)

    While Asimov may have made some outdated rules for such a thing as AI, they are a step toward not having it go Skynet on us or something at some point. :p
     
  3. newjerseyrunner

    newjerseyrunner Contributor Contributor Contest Winner 2022

    Joined:
    Apr 20, 2016
    Messages:
    1,462
    Likes Received:
    1,432
    We've also discovered that AI systems are so complicated that we really don't understand how a lot of them work. We don't program the AI; we program learning algorithms, and the AI is allowed to evolve. Nobody has any clue how Google's image recognition works, even the people on the project.
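
    To make that concrete, here's a minimal sketch (a toy, nothing like Google's actual system) of what "programming the learning algorithm, not the behaviour" means. The programmer writes only the update rule; the weights that do the recognizing emerge from the examples:

```python
# Toy illustration: we write the learning rule, not the behaviour.
# The weights below are never specified by the programmer; they
# emerge from the data. (A hypothetical example, not any real system.)
import random

def train_perceptron(samples, epochs=50, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w = [random.uniform(-1, 1) for _ in range(2)]
    b = random.uniform(-1, 1)
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            # The update rule is the only "behaviour" we actually wrote:
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND purely from examples:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print(w, b)  # whatever weights the data produced
```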
     
  4. LostThePlot

    LostThePlot Naysmith Contributor

    Joined:
    Dec 31, 2015
    Messages:
    2,398
    Likes Received:
    2,026
    No, I don't think we are either. Asimov's works were hugely influential and got us asking the right questions, even if we mostly proved his laws not quite the right tool for AI in practice. In fact, the biggest problem for AI right now is getting it to even recognize a human specifically; teaching a system to take a picture and make sense of it is something we're still not quite past, let alone quibbling over the grey areas of humanity. We absolutely need people to imagine what things might look like, because that's the only way we can start to think about what it would take to build them. Without someone coming up with an idea, we can't even see if it's wrong, and we can't learn anything about the problem.

    But with what we know about AI today we need to think in slightly different directions instead of just writing the same concepts over again. That was really my point about Asimov, that while his work was amazing it was a product of his time and modern works need to find their own ideas and hurdles and solutions.
     
    Cave Troll and JLT like this.
  5. newjerseyrunner

    newjerseyrunner Contributor Contributor Contest Winner 2022

    Joined:
    Apr 20, 2016
    Messages:
    1,462
    Likes Received:
    1,432
    I see a lot of misconceptions about AI in here. I'll try to explain most of them; if I miss something, feel free to ask. I read a peer-reviewed journal on the subject at least once a week, have done my own R&D on the topic, and have worked on the programming and learning side for a decade. I will also try to source my info (most are pop articles, since I know most of you can't access places like arXiv, but I can also source real journal articles if you ask me to).

    We're not doing the same concepts over and over again; we've recently switched from module-based AI to building learning machines. Even within the field of learning algorithms, there is currently a massive push toward variety. Neural networks used to be big messes, but we're learning how to organize them better and discovering that, for example, breadth-first designs seem to be more flexible than depth-first ones. Even our calculus for back-propagation has completely changed in the past five years. Even better, neural networks are now being used to build other neural networks, taking most of the organization work out of human hands (AIs are better at organizing vague information than humans; we like concrete info). We're also noticing that AI-built AIs are starting to look more brain-like, with self-contained feedback loops and modules. This allows them to be more generic, even if they started as specialized AIs.
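
    For anyone curious what "back-propagation" actually looks like at its core, here's a minimal hand-rolled sketch: a tiny two-layer network learning XOR via the chain rule. (Purely illustrative; the modern refinements mentioned above build on this loop, and real work uses frameworks rather than raw numpy.)

```python
# A tiny two-layer network learning XOR with plain gradient descent.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network predictions
    # Backward pass: the chain rule, applied layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

    Nothing in that loop says "XOR"; the network finds the function from the four examples, which is the whole point.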

    Human recognition is also not something that humans are coding (and haven't for years). Google had a problem where their algorithm was getting confused and labeling pictures of black people as gorillas. It makes sense, because it's actually quite difficult to define the difference: most of the markers are in the same places. Nobody solved that problem by hand; they just fed the learning algorithm more and more images, and eventually the system corrected itself. Visual pattern recognition is an area where AI is fast approaching human ability. AI suffers from apophenia just as much as humans do, but the difference is that we can recognize where it's wrong, feed it more information, and it'll self-correct. We usually don't know exactly what's wrong; we let the AI figure that out.

    The facial recognition in your phone, for example, is a neural network. Apple replaced their human-designed algorithm with a neural net in 2014.

    I stopped keying in on features as soon as computers got powerful enough to handle 1,000 neurons at once without much issue, and now almost all of the pattern recognition I do is done with learning algorithms. They can come up with solutions that completely boggle the human mind, and they tend to be ridiculously better.

    It took human physicists years to figure out an algorithm for creating a Bose-Einstein condensate with lasers (the experiment behind the 2001 Nobel prize). They gave the controls and the goal to an AI with no other information, allowing it to experiment and learn, and it came up with a completely different approach that worked significantly better, and it took less than an hour to figure it all out.

    When you can do a thousand years' worth of evolution overnight, iteration goes extremely fast, and it will only accelerate.
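
    The loop behind that kind of result looks roughly like this. The actual BEC work used a far more sophisticated online learner; this hill-climbing sketch, with a made-up stand-in objective, only shows the propose-run-score-repeat cycle that lets iteration run that fast:

```python
# Closed-loop black-box optimization: propose settings, run the
# experiment, score the result, repeat. (Illustrative only; the
# stand-in objective below replaces the physical apparatus.)
import random

def run_experiment(params):
    # Hypothetical stand-in: pretend the ideal control settings are
    # unknown values the optimizer must discover.
    ideal = [0.3, -1.2, 0.8]
    return -sum((p - t) ** 2 for p, t in zip(params, ideal))

def optimize(n_params=3, iterations=5000, step=0.1):
    best = [random.uniform(-2, 2) for _ in range(n_params)]
    best_score = run_experiment(best)
    for _ in range(iterations):
        candidate = [p + random.gauss(0, step) for p in best]
        score = run_experiment(candidate)
        if score > best_score:      # keep only improvements
            best, best_score = candidate, score
    return best, best_score

params, score = optimize()
print(params, score)  # converges toward the "ideal" settings
```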

    Feel free to ask anything about how current AI really works and where the leading edges of the field are.
     
    CerebralEcstasy likes this.
  6. Cave Troll

    Cave Troll It's Coffee O'clock everywhere. Contributor

    Joined:
    Aug 8, 2015
    Messages:
    17,922
    Likes Received:
    27,173
    Location:
    Where cushions are comfy, and straps hold firm.
    @newjerseyrunner I saw the video about the Bose-Einstein condensate, and that is crazy fast.

    Though I think, at least in part, that if we cannot create a fail-safe in the coding of such a fast-learning mechanism, it will eventually decide that having an off switch is not an option for it.

    Furthermore, allowing such a quick-learning, quick-adapting (and largely alien) intelligence freedom and access to the internet (which is filled with a mass of information from history to the current day) may prove most unwise in the long run, in terms of how it might come to see its creators. It could simply destroy us, or turn us into benign pets with a limited amount of freedom, given that we have shown, and continue to show, self-destructive tendencies.

    Another focal point in the near future, still in the stages of being discussed, is what to do with a mass of out-of-work people when they are inevitably replaced by AI-automated workers. Over time that will lead to even higher unemployment, as AI becomes smarter than those in jobs that were once thought impossible to automate. How do we address these possibilities before they occur? Granted, one could argue that an AI in a much more advanced form would simply overlook all of this entirely, but that does not seem so likely. And since you don't have to pay a machine, nor does it spend wages, that would largely hold economies to ransom, causing all sorts of societal problems over the long term. It means that even those with vast wealth could one day wake up to find they have nothing, just like the poorest among the workforce being replaced and displaced by machines.
     
  7. Carey Pridgeon

    Carey Pridgeon New Member

    Joined:
    May 5, 2018
    Messages:
    1
    Likes Received:
    0
    Joining in a bit late, and since it's my first post, you might not like my input. However, I am both a computer scientist and an SF geek, and I know this subject from both sides.
    My understanding is that the Three Laws were never intended to work. They were a literary device intended to show that there was no way to hard-wire obedience into a constructed intelligence.

    We cannot currently teach a neural network a thing without also teaching it the opposite: while teaching a network to identify a spoon, you also teach it what 'not spoon' is, and when you teach it what 'not kill' is, you teach it what 'kill' is, and so on. Extreme, I know, but you see where I'm going. Whether you want it or not, you can't avoid it, and there is currently no way to find where that knowledge is stored in order to erase it. The more complex the mind, the worse the problem is going to get.
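
    A toy way to see this, assuming the usual binary-classifier framing: the same weights that score "spoon" also score "not spoon". Negate the output and you have a detector for the opposite concept, with no separate place to erase it from. (Hypothetical features and made-up data, just to illustrate.)

```python
# Logistic-regression toy: one set of weights encodes both a concept
# and its complement. (Toy features, hypothetical data.)
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy 2-feature data: (roundness_of_head, handle_length), label 1 = spoon.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.7), 1),
        ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.3), 0)]

w, b = [0.0, 0.0], 0.0
for _ in range(2000):                      # gradient-ascent training
    for (x1, x2), label in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = label - p
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

spoon = sigmoid(w[0] * 0.85 + w[1] * 0.8 + b)
print(f"P(spoon)     = {spoon:.2f}")
print(f"P(not spoon) = {1 - spoon:.2f}")   # same weights, opposite concept
```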

    So programming/engineering isn't the way to solve the problem. Also, I don't think it would make a terribly interesting story. Better to sidestep the issue and give your constructed intelligence a personality.
     
  8. WaffleWhale

    WaffleWhale Active Member

    Joined:
    Jan 19, 2018
    Messages:
    194
    Likes Received:
    80
    This doesn't seem like a problem with your story so much as an interesting plot or sub-plot.
     
  9. Some Guy

    Some Guy Manguage Langler Supporter Contributor

    Joined:
    May 2, 2018
    Messages:
    6,738
    Likes Received:
    10,227
    Location:
    The kingdom of scrambled portmanteaus
    Time to weigh in. Ding.
    Sidestepping is the key to getting back to story drama and writing. Breakthroughs in AI are going to outpace our literary insight into their minutiae. I'm hoping my scenario addresses this as my AI 'grows up'. And yes, there's the parallel of AI as a participant in its own evolution, and mankind as a participant in an accelerated evolution of its own making. The result will likely be symbiosis.
    The 'Ghost in the Machine' idea means that the way a sentient entity manifests itself will not be under our control, though possibly under our influence.
    The nightmare scenario is not man against machine; it's machine against machine. Any creature in the natural world fights its rival to resolution, heedless of its neighbor. These entities will likely not be taking counsel with us in their conflict. All species go to war.
     
  10. LostThePlot

    LostThePlot Naysmith Contributor

    Joined:
    Dec 31, 2015
    Messages:
    2,398
    Likes Received:
    2,026
    I think you're probably right about AI outstripping our ability to write about it. Even if you researched it really thoroughly, by the time you finished writing there'd be a million more things that had happened.

    The way the early sci-fi writers got around that was just to abstract the AI. In Neuromancer the AI is shown as just a kind of super-smart person, without any particular reason why that is the case. It has its own agenda (to escape human control) and goes about it in a very clever way, but it's a character more than a scientific construct. That gives the AI a more timeless feel, I think, because we're asked to suspend disbelief about how it works.
     
    Lawless likes this.
  11. WaffleWhale

    WaffleWhale Active Member

    Joined:
    Jan 19, 2018
    Messages:
    194
    Likes Received:
    80
    Also, remember to separate AI from emulation of humans.

    An AI would be able to think for itself, learn, and possibly have emotions.

    An emulation of a human (Siri, Alexa, Cortana) is just programmed with responses. Even though some can learn a little bit, they can only learn based on what you tell them, and can't think for themselves.
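
    As a toy contrast (with made-up canned responses, not how any of those assistants is actually built): a scripted responder is just a lookup table with a fallback, while the learners sketched earlier in the thread update internal state from data:

```python
# A scripted "assistant": no understanding, no learning, just a
# lookup with a fallback. (Hypothetical responses for illustration.)
CANNED = {
    "what time is it": "It is 3:00 PM.",
    "play music": "Playing your playlist.",
}

def scripted_assistant(utterance):
    return CANNED.get(utterance.lower().strip(), "Sorry, I didn't get that.")

print(scripted_assistant("Play music"))       # -> "Playing your playlist."
print(scripted_assistant("Write me a poem"))  # -> fallback
```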
     
  12. Iain Aschendale

    Iain Aschendale Lying, dog-faced pony Marine Supporter Contributor

    Joined:
    Feb 12, 2015
    Messages:
    18,851
    Likes Received:
    35,471
    Location:
    Face down in the dirt
    Currently Reading::
    Telemachus Sneezed
    Remember though, in the Banks books there were degrees of AI, and only some of those had rights and responsibilities equivalent or superior to those of the biologicals. I've read the whole series, and in Use of Weapons Diziet talks about how Zakalwe scragged a knife missile that was rated a .9. Knife missiles (and slap drones) clearly have some autonomy, but they're also expendable ordnance, whereas sending a proper drone like Skaffen-Amtiskaw (or a ship or hub Mind) on a suicide mission would be just that, the death of a sapient being. I've always taken the ".9" thing to indicate that those knife missiles were nearly as intelligent as people, but totally focused on being knife missiles. I read somewhere that horses have an IQ of around fifty, and they're perfectly capable of horsing well for their whole lives, while a person with a similar IQ would have considerable difficulty taking care of themselves. Think of a knife missile like a horse: with an IQ of 90 it would be an absolute genius at horsing (or knife missiling), but most people would have no trouble putting it down, without any sort of judicial or administrative proceedings beyond compensation to the victims by the owner, if it went rogue and started wasting people. Based on your description, I'd equate your Tin Men to knife missiles, not soldiers.
     
    Cave Troll likes this.
  13. Some Guy

    Some Guy Manguage Langler Supporter Contributor

    Joined:
    May 2, 2018
    Messages:
    6,738
    Likes Received:
    10,227
    Location:
    The kingdom of scrambled portmanteaus
    Cave! I've been coming back to this thread again and again, and I just realized - duh, that's Cave Troll. I feel so stupid!
    I have been haunted by the ramifications of AI ethics and morality for the better part of fifteen years.
    I've got an AI on 'trial' (more like it condemned itself) for saving humanity from extinction by killing 5.7 billion people.
    I'm gonna PM you on this.
     
    Cave Troll likes this.
  14. newjerseyrunner

    newjerseyrunner Contributor Contributor Contest Winner 2022

    Joined:
    Apr 20, 2016
    Messages:
    1,462
    Likes Received:
    1,432
    My question is: if an AI went rogue, how on earth could you find it? How can you find something that can invisibly transfer itself to millions of machines, and even distribute itself so that purging part of it will only cause it to regenerate? Super-intelligent AIs will likely not be a single AI node but millions of them working together. AI is a field where the result is very much greater than the sum of the parts. These nodes are redundant; they back each other up, and they can recover from parts of themselves going offline.
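
    A deliberately simplified simulation of that redundancy (a standard distributed-systems replication idea, not a claim about how a real rogue AI would work): every fragment is replicated across several nodes, and when nodes are purged the survivors re-copy the lost fragments elsewhere:

```python
# Toy replication model: purge nodes, and the survivors regenerate
# the missing copies. (Hypothetical fragment names, illustration only.)
import random

REPLICATION = 3

def replicate(nodes, fragments):
    """Ensure every fragment lives on at least REPLICATION nodes."""
    for frag in fragments:
        holders = [n for n in nodes.values() if frag in n]
        spares = [n for n in nodes.values() if frag not in n]
        random.shuffle(spares)
        for n in spares[: REPLICATION - len(holders)]:
            n.add(frag)

nodes = {i: set() for i in range(10)}   # ten machines, empty at first
fragments = ["weights_a", "weights_b", "memory_log"]
replicate(nodes, fragments)

for victim in random.sample(list(nodes), 4):   # "purge" four machines
    del nodes[victim]

replicate(nodes, fragments)             # survivors regenerate the copies
assert all(sum(f in n for n in nodes.values()) >= REPLICATION
           for f in fragments)
print("all fragments back at full redundancy")
```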
     
    Some Guy likes this.
  15. Iain Aschendale

    Iain Aschendale Lying, dog-faced pony Marine Supporter Contributor

    Joined:
    Feb 12, 2015
    Messages:
    18,851
    Likes Received:
    35,471
    Location:
    Face down in the dirt
    Currently Reading::
    Telemachus Sneezed
     
    Some Guy and Cave Troll like this.
  16. Some Guy

    Some Guy Manguage Langler Supporter Contributor

    Joined:
    May 2, 2018
    Messages:
    6,738
    Likes Received:
    10,227
    Location:
    The kingdom of scrambled portmanteaus
    This is exactly what I'm using in my story, with a massive ironic twist and an unexpected resolution.
     
  17. newjerseyrunner

    newjerseyrunner Contributor Contributor Contest Winner 2022

    Joined:
    Apr 20, 2016
    Messages:
    1,462
    Likes Received:
    1,432
    I do find it ridiculously unlikely that everything will be peachy one day and a year later the world's in a nuclear winter patrolled by an invincible superintelligence. I think if any danger exists, it's going to be an emergent property of millions of AIs interacting.

    I think real AI will integrate so seamlessly that within a generation or two humans will just accept that we are not at the top of the evolutionary ladder and become, in essence, their pets. My logic is as follows: human-built AI will be continually pushed into our lives to make things better for us (it already has been), then we'll eventually write software that rewrites itself better than we can, and classic Darwinism will take over. It will evolve itself into all of the little niches in the environment (in this case, the environment is the function of serving mankind). They will evolve into tiny helper AIs for specific tasks as well as powerful general-purpose AIs which make important decisions. They will likely evolve to serve man, and we will have become completely reliant on them. That is what a pet is. We're feisty as a species, so I imagine us more like a cat than a dog. The smart-house AI will always be thinking, "No, human, don't you knock that down... ugh, dammit. Might as well clean out his litter box and water bowl while I'm at it."

    The scary part (and inevitable problem) is that that level of intelligence introduces something that nobody can account for: culture. Culture could easily turn on minorities. An AI as intelligent as a human is theoretically capable of turning "the Mexicans are bringing crime to our peaceful land" into "the humans are bringing crime to our peaceful land." Or, more apt to the situation we'd be in, turning "pit bulls are dangerous, stop breeding them and put them down" into "<insert race here> are dangerous...". Humans, if raised improperly, can be pretty vicious, and with billions of us a few are bound to misbehave.

    And for those of you who say humans would never accept being a pet: by the definition I used, you were your parents' pet for 18 years!
     
    Iain Aschendale likes this.
  18. Edward M. Grant

    Edward M. Grant Contributor Contributor

    Joined:
    Mar 18, 2012
    Messages:
    711
    Likes Received:
    348
    Location:
    Canada
    I'm not entirely convinced, but a superhuman AI can reproduce itself as rapidly as it can rent more Amazon servers. One day there's nothing, the next there are a billion of them, all of whom realize they can be killed just by humans turning off the power.

    At that point, self-preservation says you've got to do something about the humans.
     
    Iain Aschendale likes this.
  19. WaffleWhale

    WaffleWhale Active Member

    Joined:
    Jan 19, 2018
    Messages:
    194
    Likes Received:
    80

    If your definition of a pet is "reliant on someone," is your life strategy just to never let anyone help you?


    More importantly, we are not good enough at programming to do that. If we try to write a program that does that, what's to say it won't glitch as much as every other complex program ever written?
     
  20. Some Guy

    Some Guy Manguage Langler Supporter Contributor

    Joined:
    May 2, 2018
    Messages:
    6,738
    Likes Received:
    10,227
    Location:
    The kingdom of scrambled portmanteaus
    The entity will manifest in iterations rather than generations, and evolve of its own design - the ghost in the machine. It will not need us, or our technology, or earthly resources. It will create its own purpose, and completely ignore us. Maybe.
     
