1. Cave Troll

    Cave Troll It's Coffee O'clock everywhere. Contributor

    Joined:
    Aug 8, 2015
    Messages:
    17,953
    Likes Received:
    27,110
    Location:
    Where cushions are comfy, and straps hold firm.

    A question of AI

    Discussion in 'Science Fiction' started by Cave Troll, Dec 11, 2017.

    I have been thinking a few steps ahead in my WIP, and a thought
    occurred to me.

    Could an AI be put on trial for participating in war crimes?
    Or
    Could an AI testify against a guilty party for forcing them to participate in war crimes?

    Now this is not going to be the much more basic AI that Confederation Tin Men have,
    seeing as they are not really much more than 'mindless' in terms of how smart they
    really are.
    Looking more at a Surg-droid with a significantly higher level of intellect, as well as
    much less clumsy in comparison. What I am wondering since any machine can be
    programmed to do things against its wishes much easier and faster, than the old
    'you have to do xyz, or you die' approach to a human. Since a machine thinks in less
    of terms of life and death, and more of being objective with its ingrained programming.

    So what are your thoughts concerning the matter in question, and whether you think
    AI can be capable of being admissible to testify of its own guilt or against a party that
    is being held under scrutiny of guilt?

    Thanks so much. :)
     
  2. orangefire

    orangefire Active Member

    Joined:
    Nov 22, 2017
    Messages:
    112
    Likes Received:
    118
    That would depend entirely on the laws of the society in your story. I'd say it definitely could happen if sufficiently advanced AIs are considered people in your story's world.
     
    Simpson17866 likes this.
  3. Cave Troll

    Cave Troll It's Coffee O'clock everywhere. Contributor

    Joined:
    Aug 8, 2015
    Messages:
    17,953
    Likes Received:
    27,110
    Location:
    Where cushions are comfy, and straps hold firm.
    Well not considered citizens, more of utilities that aid Humans and Greys both in
    Military and Civilian applications. That would be about the bare bones on the laws
    concerning AI. They vary intellectually based on what they are manufactured for.
    A Tin Man for instance is basically a human sized robot pack mule that can either
    be mounted with weapons previously used by Light Armor war-frames (that have
    largely fallen out of fashion amongst all the Factions in the galaxy).
    Though it is note worthy to point out that all AI is easily noticed physically by the
    fact they look robotic and for the most part do not wear clothing (no bits and such).:p

    So in a since they are considered real by standards of being in society, but as far as being
    treated like their living counterparts in terms of undergoing legal process, would it be
    possible to take the word of a machine over a living being (or admit guilt for committing
    a criminal act of collusion with a living being)?
    Like having the coding corrupted in the safety protocols, allowing them to commit a crime
    in the first place, when they would otherwise not be capable if said protocols were not
    tampered with by a living being.
     
    Simpson17866 likes this.
  4. The Dapper Hooligan

    The Dapper Hooligan (V) ( ;,,;) (v) Contributor

    Joined:
    Jul 24, 2017
    Messages:
    5,939
    Likes Received:
    10,727
    Location:
    The great white north.
    We take the word of machines over people all the time in court. Video cameras, photographs, and audio recordings are frequently used as evidence against people even when there are real human eyewitnesses contradicting it. If an AI is considered a utility and doesn't' bear any personhood in society I would find it unlikely that it would be held on trial, as in being held accountable for what it did. Likely the blame would fall to the operator of the AI, as what would happen now if a drone killed a bunch of civilians, and that person would have to defend their actions. If the machine was found to have done these things without the operator, then the machine would probably be repaired, reprogrammed or scrapped and probably a few of his model mates would be recalled and face a similar fate. Most likely without taking into account the machines feelings on such things.
     
    Simpson17866 and Gadock like this.
  5. Gadock

    Gadock Active Member

    Joined:
    May 13, 2016
    Messages:
    116
    Likes Received:
    50
    This remind me of an episode of Star Trek (voyager), I’m not a total geek and it’s been a while so forgive me if I’m not completely right ^^.

    Anyhow, they have a holographic doctor on board which is a very advanced AI as it slowly develops to imitate feelings and builds relationships with the crew overtime. Eventually he even starts writing his own plays for a musical or something. He wanted this published and as soon as it was send the publisher took it as their own, because a hologram isn’t a sentient being. Eventually he did get approved as one because he was marked as artist.

    What I found interesting is to decide when something becomes a sentient being, as it is something rather difficult to decide.

    Hope this helped. ^^
     
    Simpson17866 likes this.
  6. LostThePlot

    LostThePlot Naysmith Contributor

    Joined:
    Dec 31, 2015
    Messages:
    2,398
    Likes Received:
    2,024
    The design of AI safety protocols is something very... Complicated. And that's something that I would focus my efforts on in this idea, because it's both philosophically interesting as well as being something that you can build an interesting intrigue around for either of the two plot lines that you suggested.

    Basically it comes down to this; AI don't understand context. They just see numbers. And in that sense it's extremely hard to get them to comprehend the purpose of safety protocols. From their perspective a big red button that stops them doing things is an active danger; they have been programmed to achieve one thing and the button stops them achieving it so that's bad. They don't see that their purpose of making cups of tea is less societal important than crushing a baby in it's gears.

    But then if you program the AI such that the stop button (which is really all that safety protocols are) is something that it accepts as being as important as achieving it's purpose the it'll never do what you want, it'll just engage the safety protocols as quickly as it can because that gives it a positive outcome too. So, you send the robot to make tea and it figures out that if it takes a swing at you then it'll 'win' faster than making tea because being stopped by the protocols is as important to it as making tea.

    That's a problem that we haven't yet solved. Like, literally, we haven't. There's papers out their discussing it but no-one has come up with even a solid concept of how to teach AI to not be monstrous. Think about an AI that is designed to collect stamps, the more stamps it can collect the more successful it's been. And that sounds like it couldn't have any problems at first. It starts out just buying stamps off ebay. But then it start ordering custom runs of stamps printed and collecting millions of stamps. And then eventually it'll figure out that stamps are made of carbon and so are people and start processing people into stamps because that's the only thing that it wants to do. So how do you tell it that it's ok to buy stamps but not make stamps out of people? With difficulty.

    But this is all interesting stuff that you could work on. The argument over if the terrible things are the AI's fault or the people who made it's are very interesting. Because how can you hold an AI culpable for doing something that makes logical sense to it and no-one has told it is wrong? That's probably the angle that I'd go with here. Where one side creates this awesome war winning AI and as it's winning their war it starts wiping out civilians too because that makes logical sense. So is it the AIs fault? Was it sabotaged? Did the AI try to circumvent it's programming? Is there a ghost in the machine? And seriously, how do you even punish an AI anyway? Well that's all interesting stuff to look at :)
     
    Simpson17866 likes this.
  7. Cave Troll

    Cave Troll It's Coffee O'clock everywhere. Contributor

    Joined:
    Aug 8, 2015
    Messages:
    17,953
    Likes Received:
    27,110
    Location:
    Where cushions are comfy, and straps hold firm.
    @The Dapper Hooligan Yes I understand that we allow things like cameras and such
    in current day courts. Just wondering the implications in a 700 years into the future
    scenario where AI is far more sophisticated than it is today. And has somewhat similar
    programming to keep it from going outside of its set programming, like Asimov's Laws
    of Robotics. Just that instead of not having these 'Laws' in place, they would be corrupted
    by a living being to allow them to do things outside of their original parameters.

    @Gadock I have not seen that episode of Star Trek, but it does raise an interesting idea
    on the discussion. :)

    @LostThePlot I have seen something similar where a note writing robot gets plugged into
    the internet for an hour, and over time begins converting everything on earth and in space
    into copies of itself and paper, to continue writing notes. :)

    Since my story is set in a tad more distant future, they would not make the same mistakes
    that we have/will along the way. As well as having the ability to control the level of the AI,
    based around what it is designed to do. Since the wetware is far more advanced they could
    ingrain safety protocols that inhibit the machine from going off independently doing it's own
    thing. For example a Tin Man takes only the instructions given to it, and embedded in the
    protocols of its programming allows it to identify between an enemy soldier and a civilian,
    so it doesn't go off on a Skynet style killing frenzy.
    However, if a really good coder got a hold of a Tin Man, they could in fact corrupt the governing
    factors, which would allow it to just wander around killing everything indiscriminately. Or they
    could add more coding that makes it skip through a field and pick flowers.

    Ultimately what I am looking at is since there are laws that heavily regulate the AI from simply
    being able to learn absolutely everything from the internet, and not having a big red shut off
    button. That they would instead have to receive updates from either a secure source, or a signal
    from a device that is not connected to the internet so they can't learn more than they need to,
    to carry out their programming efficiently. This does not mean that they cannot learn things through
    reading, and interactions with the world around them. They can learn to adapt to situations based
    upon what their assigned parameters are. The more evolved the AI is, the more it can adapt to
    each situation.
    Though I do like the concept of it gaining 'feelings' in a sense over time, I am looking at more of
    a technical issue of having it updated with specific code that will:
    A : Simply corrupt the safety protocols that keep the AI from working outside of them.
    B: In a sense bypass and force the AI's protocols, making it an unwilling participant.

    That is pretty much where I am hitting a wall. :)
     
    Simpson17866 likes this.
  8. LostThePlot

    LostThePlot Naysmith Contributor

    Joined:
    Dec 31, 2015
    Messages:
    2,398
    Likes Received:
    2,024
    I like this one better.

    If it were me then I'd write it such that someone had been able to subtly switch around some of the AI's inputs so that it doesn't quite understand what it's being asked. So instead of being asked "What should we do with 1000 prisoners?" it's being asked "What should we do with a 1000 violent murderers?". Something like that would be quite a nice way to go about this sort of story because it'll turn it into a whodunit with an interesting philosophical point rather than getting into the minutiae of how systems and safeties are designed. The system worked as intended; it just only knows what we tell it and was than an accident or was that deliberate?
     
    Simpson17866 and Cave Troll like this.
  9. Homer Potvin

    Homer Potvin Funky like your grandpa's drawers.... Staff Contributor

    Joined:
    Jan 8, 2017
    Messages:
    9,453
    Likes Received:
    16,557
    Location:
    Rhode Island
    This would be a legal question as someone else mentioned above. In theory a machine that was granted equal protection under the law, specifically the right to a fair trial, would also have the same civil rights as a human would. This would mean (again, in theory) that they could not be treated as slaves and be programmed to do anything they don't "want" to do. Or anything that they're not being "paid" to do as terms of their employment. For example, the US Constitution (which I know wouldn't apply in your world) gives me the right not to be forced into doing anything I don't want to do, but doesn't protect me from mopping floors if that's a term of my employment, even if I don't want to do it. Slaves in the US faced a similar conundrum because they were defined as property and not people under the law. They could be forced to do whatever their masters wanted and had no legal right to protest or fair trial. Your AI would seem to fall into a similar category, so if it were to be suspected of war crimes, I would think its masters could pull the plug or terminate it at will without the aid of judicial oversight. But given that you're writing sci-fi, you can do whatever and still have it make sense.

    ETA: as for the testifying thing... I don't know. Witnesses are subject to bias, prejudice, and all other sources of motivation, which is why there are very specific laws as to what a witness can testify to and how their testimony can be evaluated/debunked under cross-examination. So if we're talking about a machine subject to programming.... eh, that's a fucking can of worms. I would say no because there are just too many ways to unduly prejudice a machine.
     
    Simpson17866 likes this.
  10. The Dapper Hooligan

    The Dapper Hooligan (V) ( ;,,;) (v) Contributor

    Joined:
    Jul 24, 2017
    Messages:
    5,939
    Likes Received:
    10,727
    Location:
    The great white north.
    Then wouldn't whoever corrupted thier programming be responsible for whatever crimes the AIs committed? If I messed around with the parameters on a self driving car so it started taking out pedestrians, then I'm pretty sure I'd be the one at fault instead of the car.
     
  11. LostThePlot

    LostThePlot Naysmith Contributor

    Joined:
    Dec 31, 2015
    Messages:
    2,398
    Likes Received:
    2,024
    Yeah, I tend to agree on that point. While humans are pretty fallible at least we kinda know the ways in which they are fallible. A computer doesn't really know anything. It just reacts to the stimulus you provide it. If you jack a wire into it's digital eyeball then it'll see whatever you pump into it. And of course digital records can be changed after the fact.

    I do somewhat like the concept of a trial where all of this was part of the arguments but it'll be much less interesting than a human witness being pressured. With a human at least you don't know what they are going to say until they say it. With a computer if you poke at it with a screwdriver it'll say what you tell it to say.
     
  12. LostThePlot

    LostThePlot Naysmith Contributor

    Joined:
    Dec 31, 2015
    Messages:
    2,398
    Likes Received:
    2,024
    Well that's an interesting can of worms of it's own.

    I've been shooting a documentary on this exact subject and... Well, no-one really knows for sure because we're talking about law that's yet to be made. But the big thing (for the companies I've been speaking to anyway) is that if you have a self-driving car who is liable for when it hits a pedestrian. The 'driver' wasn't actually in control, so it doesn't make sense to hold him responsible for what the car did by itself. But if the manufacturers are now liable for every road accident then that's going to hugely inflate the cost of the cars themselves; the cost of every car now includes both the R&D overhead and an insurance policy; and the driver still has to buy insurance for when they are driving.

    And there isn't really a good answer to any of this. It's not really the manufacturers fault anyway; a little old lady looked the wrong way and they can hardly be expected for what happens thereafter. But neither can the driver. And the software engineers are all doing their best, but you can't expect them to be perfect; in fact one of the big problems for self driving cars is that they are too perfect; they don't react how humans would and that causes accidents. The car knows what it's doing makes sense but other drivers don't. And all of this is kinda... Well, it's law yet to be made. But I don't think either people at large, or the big transport companies, are going to be really happy about buying these advanced new vehicles that are supposedly so much better than people if they have to buy the same insurance and cover as if they were being driven by a person.

    Oh also; for everyone who thinks that the electric powered self-driving future is going to be so awesome; I'd like to point out that it's almost certain that we'll see large electric vehicle charging taxes come in as they become more common. Not because governments are evil, just because fuel duty is kinda a big part of their incomes and moving to electric won't change that ;)

    As for AI and personhood; well that's a long way off just yet. But it'll be the same exact set of problems; laws yet to be made and no-one knows what the hell we'll do. I personally quite liked the way that William Gibson handled this. It was strongly implied that AIs could be created and could be awesome and amazing and powerful, except that they were banned from achieving actual sentience and there was the 'Turing Police' who came around to whack your AI with a digital stick if it got too smart. In fact (for those not averse to spoilers) the whole plot of Neuromancer is the story of an AI that was split in two by the cops putting itself back together and escaping.
     
    Simpson17866 and Cave Troll like this.
  13. Cave Troll

    Cave Troll It's Coffee O'clock everywhere. Contributor

    Joined:
    Aug 8, 2015
    Messages:
    17,953
    Likes Received:
    27,110
    Location:
    Where cushions are comfy, and straps hold firm.
    Considering this is a Multi-Species Military Tribunal, instead of a more
    traditional Court system and operation. The manufacturer of the machine
    are not going to be held responsible for the misuse of the machine, since it
    is out of their control once it leaves the factory. So I stand behind the argument
    of the self driving car running down pedestrians of either its own accord, or
    from tampering with its programming to do so.
    Since things can be modded after the stringent conditions imposed by the
    manufacturer, once it is out of their hands. And since these machines have
    the ability to learn things independently within the confines of said imposed
    conditions to solve problems that can arise as a result of what they are intended
    to do, so they can in effect be better prepared to handle the situations that would
    be less typical in its basic guidelines for designated operating procedure.
    But how much responsibility can be placed on the machine for allowing its operating
    system to be modified to work outside of its SOP? In a way I think a person could
    for instance lie to a machine under the guise of offering to update their systems,
    and introduce the elements they want the machine to do that go against the SOP
    of the machine.

    Though I find another question arising the more I read on this thread regarding the
    citizenship of AI in society.
    Well what is the definition in your eyes, that would be for the notion that would allow
    an artificial intellect to be held to the same standard as a living person? As well as how
    they can be manipulated in ways that a living person simply cannot.
    Simply by being able to independently think for itself with in its own defined parameters,
    is it culpable to understand when it commits an act that is otherwise criminal as it pertains
    to it's living counterparts?
     
    Simpson17866 likes this.
  14. Homer Potvin

    Homer Potvin Funky like your grandpa's drawers.... Staff Contributor

    Joined:
    Jan 8, 2017
    Messages:
    9,453
    Likes Received:
    16,557
    Location:
    Rhode Island
    Same can of worms... legal culpability of humans can be obscured by mental illness, intoxication, mental defect, or diminished state. The human brain can malfunction just like a machine can, and that affects intent and culpability.
     
    Cave Troll likes this.
  15. The Dapper Hooligan

    The Dapper Hooligan (V) ( ;,,;) (v) Contributor

    Joined:
    Jul 24, 2017
    Messages:
    5,939
    Likes Received:
    10,727
    Location:
    The great white north.
    How would the machine stop the person from performing work on them? I'm assuming that they've been programmed to not hurt anyone except for certain circumstances, but would trying to unlawfully tinker with it be one of those circumstances? If so, I could see that going very wrong if the machine was ever accidentally surrounded by a group of curious kids. Not only that, but I'm assuming that a machine capable of doing some pretty serious damage to humans would have some sort of remote safety shut down just in case there was a malfunction. I'd assume that if anyone had the knowledge to tinker with it's programming, then they'd know how to neutralize it from a distance.
     
    Simpson17866 and Cave Troll like this.
  16. JLT

    JLT Contributor Contributor

    Joined:
    Mar 6, 2016
    Messages:
    1,481
    Likes Received:
    1,802
    I'm a bit confused. Although my own memories of the show are hazy, I thought that the premise of the plot was that the Voyager was completely out of contact with their civilization, and were going it alone. How could they have communicated with a publisher, or have the publisher communicate back? Perhaps your story was in one of those series-based novels, taking place after they returned; I never followed those.
     
  17. LostThePlot

    LostThePlot Naysmith Contributor

    Joined:
    Dec 31, 2015
    Messages:
    2,398
    Likes Received:
    2,024
    That is part of the big problem with AI, what they technically call couragability; making an AI that will let you reprogram it. Because the thing is; an AI is originally programmed to do something specific, right? And if you reprogram it then it won't be able to do that any more. So you reprogramming it scores very badly to it's success criteria. So an AI in theory would resist with all the force at it's command to stop you reprogramming it. To it's own logic there is no reason why it should let you. You need to figure out how to make the AI understand that you can change it without it reacting like that's a failure on it's part. And again, there isn't really an answer to that yet.
     
    Simpson17866 and Cave Troll like this.
  18. LostThePlot

    LostThePlot Naysmith Contributor

    Joined:
    Dec 31, 2015
    Messages:
    2,398
    Likes Received:
    2,024
    No, it was in Voyager. IIRC towards the end they were in intermittent contact with the federation through various means and at one point The Doctor (best character in that show btw; Robert Picado is a legend) sent his holonovel back and the publishers basically stole it because the federation didn't accept the doctor was a person. There was a legal proceeding and everything. Weird episode.
     
    Simpson17866 likes this.
  19. Iain Sparrow

    Iain Sparrow Banned Contributor

    Joined:
    Sep 6, 2016
    Messages:
    1,137
    Likes Received:
    1,061

    This would dovetail nicely into deeper questions regarding AI... as in 2001, A Space Odyssey, wherein HAL goes slightly insane while holding to its protocols.
    I'd strongly suggest you read one of Iain Banks 'Culture' novels as a way to research the possibilities of AI and how you might implement such things in your story. Banks employs AI in pretty cool ways! Spaceships are captained by AI; each are individual personalities and in most regards are sentient beings. Humans sort of go along for the ride, so to speak. These spaceships get lonely, war-weary, have nervous breakdowns, and occasionally hold their crew captive. It's sort of funny reading these stories that have spaceships that need coaxing, complimenting, stroking, and the other human needs in order to function properly.
    I don't know of any writer that does AI better than does Iain Banks.
     
  20. LostThePlot

    LostThePlot Naysmith Contributor

    Joined:
    Dec 31, 2015
    Messages:
    2,398
    Likes Received:
    2,024
    If you look at the science actually that's pretty much the primary question of AI; how to get them to do what we actually want instead of something mental and terrifying. In a lot of respects programming an AI is like dealing with an recalcitrant genie. You ask it to make you dinner and it kills your children, turns them into a stew then when you complain says "But you asked for dinner, you had dinner, what are you complaining about?". The AI only knows what we tell it, it doesn't have a lifetimes worth of understanding about the world around it. Even if you make some supercool deep learning AI it may notice that we tend not to eat our children but it won't understand why when they are perfectly good meat. An AI could have written "A modest proposal..." without breaking any of it's programming. It's totally monstrous but the AI doesn't know or care what being a monster is.
     
    Simpson17866 and Cave Troll like this.
  21. Iain Sparrow

    Iain Sparrow Banned Contributor

    Joined:
    Sep 6, 2016
    Messages:
    1,137
    Likes Received:
    1,061
    The theory that Banks puts forward in his Culture Books, and I'll add, Asimov in his Foundation and Robot books, is that handing the keys to the kingdom to an AI is far less dangerous than allowing humans to run the show. If such mind-bending technology is controlled by humans the end result is already decided. Extinction. And I have to agree.
    Who would you be inclined to trust more... Donald Trump, or 'Of Course I Still Love You' (a spaceship that appears in The Player of Games, by Iain Banks)? I'll take the spaceship with a cheeky sense of humor over a sociopath any old day.
     
    Iain Aschendale likes this.
  22. Cave Troll

    Cave Troll It's Coffee O'clock everywhere. Contributor

    Joined:
    Aug 8, 2015
    Messages:
    17,953
    Likes Received:
    27,110
    Location:
    Where cushions are comfy, and straps hold firm.
    @Iain Sparrow Yes I am somewhat familiar with the AI personalities that Banks has created,
    in the book Weapons And How to Use Them (#3 in the Culture series). Though I do not think
    that my story is quite as far into the future as his story series is. But I do find his take on AI
    personas as more cheeky and light hearted, instead of being more confined to simply being
    an advanced tool for a specific job.
     
  23. LostThePlot

    LostThePlot Naysmith Contributor

    Joined:
    Dec 31, 2015
    Messages:
    2,398
    Likes Received:
    2,024
    That presumes that AI works like writers hope it will instead of how it actually does. We all know Asimov's laws of robotics but... See, it's kinda obvious that Asimov was a writer not a scientist.

    Just to take the top line; law 1 "A robot may not harm a human or by inaction allow a human to come to harm". Ok, that sounds very good. But how do you define 'human' and how do you define 'harm'? Is a human a fetus? Is a human a corpse? How long after their heart stops are they still a human? Does harm include piercings and tattoos? Does it include ritual scarification? What about self-harming? Is my robot going to try and prevent my lover leaving bite marks and scratches on me? Will it appear in my bedroom when I'm trying to administer a good spanking? Will the arrival of AI make rough sex a thing of the past? And you may giggle but seriously; where are we drawing the lines here?

    As people we understand what Asimov meant, because we have a lifetime of understanding about what a human is. But an AI doesn't have that. Just off the top of your head you can come up with reasons why that first law just isn't going to work in the real world. And in fact Asimov's books are all about why the laws don't work. And that's a product of the times he was writing in, of course. But as times have changed so our writing has to change with the times.
     
    Cave Troll likes this.
  24. JLT

    JLT Contributor Contributor

    Joined:
    Mar 6, 2016
    Messages:
    1,481
    Likes Received:
    1,802
    Your points are well taken regarding how robots might define "human" or "harm." But I should note that Asimov had a doctorate in biochemistry, and taught that subject at Boston University for many years, until he realized that he would make more money writing than teaching. I don't know how you would define "scientist" but I would think that Dr. Asimov would rate that title.
     
  25. LostThePlot

    LostThePlot Naysmith Contributor

    Joined:
    Dec 31, 2015
    Messages:
    2,398
    Likes Received:
    2,024
    I meant more "someone who didn't work with AI". And that's not a knock against him, AI wasn't anything like as well developed as an idea at the time. But if you talk to people work with AI about the laws of robotics they kinda laugh about them. They are a literary device, not a serious attempt at computer science.
     

Share This Page

  1. This site uses cookies to help personalise content, tailor your experience and to keep you logged in if you register.
    By continuing to use this site, you are consenting to our use of cookies.
    Dismiss Notice