I'm toying with an idea where a robot (a fully human-looking AI) is given empathy after having none. Psychologically, it's a question of what would happen if you could push someone from the sociopathic end of the spectrum all the way to the empathic extreme, really fast, with side effects.

The main twist I'm going for: people assume such a robot is dangerous pre-empathy because it doesn't "care," but it's only once it gains empathy that it actually becomes dangerous. The logic may not hold up, but I'm trying to make it work. Once the robot gets empathy, it gains clusters of mirror neurons (one theoretical basis in neurobiology for why humans experience empathy and develop a theory of mind: inferring intentions, modeling behavior), and it can now feel what others are feeling. It then develops self-empathy and a more fully integrated self-concept. I may be using the word "empathy" incorrectly, or stretching it semantically as a device; not sure if that matters.

The point I want to reach is that the robot comes to view empathy as a kind of virus: now that it feels what others are feeling, and experiences its own feelings, it becomes inefficient and ineffective. It has to navigate relationships and emotions, which essentially makes it human. From that grows an ironic kind of hatred and shame, which leads it to become violent.* I'm trying to subvert the idea of empathy and caring as pinnacles of human integrity.

Looking for feedback on this concept. Thanks.

*Edit to add: It becomes violent toward people it now views as threats, because it experiences fear and senses (or misperceives?) malicious intentions from others. It isn't indiscriminately violent, although maybe that could be worked into it.