Of course this relates more to using AI in robots than to writing, but it's still AI related.
Oh yeah, it's one of the greatest challenges for the hypothetical general intelligence. Stephen Wolfram has touched on the issue as well (I've time-stamped the podcast): We absolutely take interpretation for granted, especially when it courses through the CNS before the Boss upstairs even knows what's going on. Ultimately, understanding how the human mind operates will be the key to cracking that nut. Or maybe the first iterations will simply fake it in a somewhat believable way. To be fair, we glitch out too: lint = spider that killed my ancestors and so on.
Peterson's suggestion is that an AI-powered mobile robot would need to have a set of needs and a set of vulnerabilities like organic beings do: the need for air, shelter, food, water, companionship, etc. And also something analogous to a central nervous system, with a great deal of complexity. We don't really see objects and then work out what they are; it's more that we see utility or function—we see something that could bite us or sting us, and before the cumbersome conscious apparatus has time to name it, we've already jumped away. And we see food and water and we know what they are, in their many various forms. How do you get an AI to recognize water whether it's frozen, stagnant, trickling, dripping, saturated into the inner tissues of a cactus, etc.? A stagnant pond doesn't resemble white-water rapids at all. Nor does snow, but our ancestors understood you can melt snow or ice and drink it. Lol, it might take AI as long to understand its relation to the physical world as it took our ancestors. And that really began with very simple creatures lurching toward and away from patches of light and darkness, either to try to absorb whatever was there, or to get away from it before being absorbed. But—important—there needs to be a highly functional and essentially instantaneous feedback loop between the mind (AI) and the body. Ultimately it might be easier to create synthetic living beings with nervous systems than to build robotic semblances of bodies and brains.
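One rough way to picture that "feedback loop between the mind and the body" is a layered control loop, where a fast reflex layer reacts before the slow deliberative layer has even finished classifying anything. Here's a minimal sketch of the idea in Python; the sensor names, thresholds, and layers are all made up for illustration, not anything Peterson or anyone else has actually built:

```python
import time

def read_sensors():
    # Hypothetical fast contact/motion readings; real hardware would differ.
    return {"contact": False, "sudden_motion": 0.0, "camera_frame": None}

def reflex_layer(sensors):
    # Fast, dumb, always-on: jump away from anything that touches or lunges,
    # long before we know (or care) what it actually is.
    if sensors["contact"] or sensors["sudden_motion"] > 0.8:
        return "RETREAT"
    return None

def deliberative_layer(sensors):
    # Slow, expensive: classify the scene, plan toward food/water/shelter analogues.
    # Stand-in for whatever heavy model eventually does the recognizing.
    time.sleep(0.2)  # simulated "cumbersome conscious apparatus"
    return "CONTINUE_PLAN"

def control_step():
    sensors = read_sensors()
    reflex = reflex_layer(sensors)
    if reflex is not None:
        return reflex                     # the body acts first
    return deliberative_layer(sensors)    # the mind catches up afterwards

print(control_step())
```

The point of the sketch is just the ordering: the cheap reflex check always gets to veto the expensive recognition step, which is roughly the "jump before you know it's a spider" behavior.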
Deduction and abstraction. To be fair, a mobile robot would have needs and vulnerabilities, just not the same ones. It needs power, it's vulnerable to stairs like ED-209, etc. Also, its sensors aren't limited to vision, acoustics, and physical sensation. A general AI agent in this world could use something like infrared or ultrasound(?) to boost its understanding and form a "bestimate" of its working scenario. You know, JP loves to talk about toddlers. I wonder if that, combined with a machine-learning database, is how you would go about teaching those abstractions. The toddler/robot constantly asks "What is X?" You answer "X does Y, and it comes from Z." "What is Z?" And so on. It could also be that Aristotelian semantics are incompatible with a virtual mind, which isn't necessarily a bad thing. Some form of hybridization between our current semantic system and a better one could result from reconciling those inadequacies. A good possibility. Machine learning may be another dead tree that settles into its niche but never reaches singularity, current sensationalism be damned. We won't know until 20 years from now, or 200, or 2,000.
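To make that toddler loop concrete, here's a toy sketch (Python, with everything hard-coded purely for illustration) of how the "What is X?" / "X does Y, and it comes from Z" exchange could build up a little graph of abstractions, with each answer spawning the next question:

```python
# Toy "toddler loop": each answer about a concept introduces new concepts,
# which get queued up as the next round of "What is X?" questions.
# teach() is a hard-coded stand-in for a human teacher or an ML database.

def teach(concept):
    answers = {
        "water":  ("quenches thirst", ["rain", "ice"]),
        "rain":   ("falls from clouds", ["clouds"]),
        "ice":    ("is frozen water", ["water"]),
        "clouds": ("hold water vapor", ["water"]),
    }
    return answers.get(concept, ("is unknown", []))

def toddler_loop(start, max_questions=10):
    knowledge = {}
    queue = [start]
    asked = 0
    while queue and asked < max_questions:
        concept = queue.pop(0)
        if concept in knowledge:
            continue                        # already asked about this one
        does, comes_from = teach(concept)   # "X does Y, and it comes from Z"
        knowledge[concept] = {"does": does, "comes_from": comes_from}
        queue.extend(comes_from)            # "What is Z?" ... and so on
        asked += 1
    return knowledge

print(toddler_loop("water"))
```

Whether a pile of linked definitions like that ever adds up to actual abstraction is exactly the open question, of course.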
Here is an example of the problem of using AI in writing. https://www.foxnews.com/tech/ncaa-athlete-claims-she-was-scolded-by-ai-over-message-about-womens-sports
That's very different from the topic of this thread—it's more a problem of censorship being programmed into AI, and is strongly related to how this digital world we live in is rapidly becoming a surveillance state. It would fit much better on one of the AI-as-writing-tools threads. This thread should remain focused on the problems involved in AI-guided mobile robots navigating the world and recognizing objects in it.

A related idea I had this morning as I was waking up: robots driving cars (or AI-assisted self-driving cars I suppose, which really are robots that look like cars, as Peterson said in the video at the top) have it pretty easy, because the roadways are a man-made system with a very specific set of rules. The lanes are marked off with yellow lines, there's a protocol for navigating through intersections, and so on. It's not too different from navigating through a video game world. But take a car off the road and into a field, even if it's four-wheel drive and can handle the terrain, and how does it navigate in the wild? It's basically what Mars and lunar rovers do, but they don't really need to get from one specific point to another specific point—they're explorers on a wide-open and essentially flat surface. Not flat exactly, but there's nothing like weeds or undergrowth, which would make it far more difficult. Just hills and valleys, maybe the occasional rock or crack or crater wall. If you run into something like that you can just turn and keep going, no need to find your way back to your 'route', because you don't have a route per se. You're just wandering around taking pictures and samples. But try turning a lunar rover loose in an overgrown field on Earth and see how well it does. It would require some very different tools (cutting blades maybe?) and navigation systems.

I remember when my mom tried out one of those robotic lawn mowers, like a big heavy-duty Roomba, only with a lawn-mower blade under it rather than a vacuum cleaner. I believe she had wires installed around the perimeter of the yard that the robot could detect, so it would always turn when it reached the edge. That worked really well, but the yard was a little too wild, with big tree roots and ridges and small valleys here and there, and at times the thing would flip over on its back like a turtle or just get stuck. I remember thinking it could have had a push-rod installed that would flip it back onto its wheels, or some device like you see on those Robot Wars shows. And I think she said a few times the thing went right over the buried wire and just started going through the neighbors' yards. The concept is good, but in a real-world environment it definitely needs a much better set of navigation tools and fail-safes.

Something with four legs, or six legs, or maybe a track system like tanks use, might work better in some situations. But still they'll run into obstacles they can't easily get past or around. How do you allow it to actually solve the problem when this happens? We can do it. Animals of all kinds can do it, each with their own set of problems and solutions. But the question is how you get a robot to solve these problems. So far it's beyond their reach.
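For what it's worth, the fail-safes that mower needed aren't exotic in principle; it's the perception underneath them that's hard. A crude sketch of the kind of watchdog loop I mean (Python, and all the sensor readings and action names are hypothetical, not any real mower's API):

```python
import random

class MowerSensors:
    # Hypothetical readings; a real mower exposes these through its own firmware.
    def wire_signal(self):     return random.random()        # perimeter-wire strength
    def tilt_degrees(self):    return random.uniform(0, 100) # how far off level we are
    def wheel_speed(self):     return random.uniform(0, 1)   # measured wheel motion
    def commanded_speed(self): return 1.0                    # what we asked the wheels to do

def watchdog_step(sensors):
    """Pick an action for one control tick, always favoring fail-safes over mowing."""
    if sensors.tilt_degrees() > 60:
        return "STOP_BLADE_AND_SELF_RIGHT"   # the turtle-on-its-back case
    if sensors.wire_signal() < 0.1:
        return "STOP_AND_BACKTRACK"          # lost the buried wire: stay out of the neighbors' yard
    if sensors.commanded_speed() > 0.5 and sensors.wheel_speed() < 0.05:
        return "REVERSE_AND_TURN"            # wedged on a root: wheels told to go, nothing moving
    return "KEEP_MOWING"

print(watchdog_step(MowerSensors()))
```

The checks themselves are trivial; knowing that the obstacle is a tree root rather than the cat, and improvising a genuinely new way around it, is the part nobody has cracked.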
I just realized this problem of perception is very similar to certain problems in AI image-making. Often you'll see a person in the foreground of an AI image holding onto an object that's depicted way off in the background, as if they're on the same plane. Or a person standing right in front of the car they supposedly just got out of, and the car is perfectly rendered, in perfect perspective, but is way too small for them to possibly fit into. Or there are two or three sets of rear wheels on a car, as if some glitch just made it repeat the design element several times and the AI has no idea it isn't right. I suppose there must also be related 'problems of perception' in AI stories. They just don't understand how certain things work in reality, and it must be because they don't really understand how to navigate their way around in the real world, or how to use ordinary objects. A robotic arm assembling cars or working in an automated kitchen has been trained to recognize certain objects that it needs to grasp and manipulate, but if you try to hand it some unfamiliar object it can't function, nor can it understand what's happening. Unless it has a very sensitive system of sensors it would simply try to go through its set of pre-ordained movements and install the pencil you just handed it into the automobile in front of it. It's a machine, plain and simple, like a somewhat more sophisticated power drill. All it can do is what it's been programmed to do.
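That last point is basically the difference between blindly running a motion sequence and guarding it with a perception check. A minimal sketch of the guard I have in mind (Python; the recognizer and the motion routine are placeholders I invented, not any real robot-arm software):

```python
def recognize(gripper_image):
    # Placeholder for whatever vision model the work cell uses;
    # hard-coded here just to show the control flow.
    return ("pencil", 0.97)   # (label, confidence)

def install_door_panel():
    print("executing pre-programmed motion sequence")

def guarded_install(gripper_image, expected="door_panel", min_confidence=0.9):
    label, confidence = recognize(gripper_image)
    if label != expected or confidence < min_confidence:
        # Without this check the arm happily runs its routine on
        # whatever you hand it, pencil included.
        print(f"abort: expected {expected}, saw {label} ({confidence:.2f})")
        return
    install_door_panel()

guarded_install(gripper_image=None)
```

Of course the guard only refuses gracefully; it still has no idea what a pencil is or what it might do with one, which is the whole problem this thread keeps circling.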
If this thread doesn't really fit into an AI Writing Tools section, maybe it should be moved into the Lounge or something? Mods feel free to relocate if you see fit.
No, it's fine. I suspect we're going to have so much AI content we'll rename the forum "All the AI Shit" or something. I've been moving all AI content into this forum out of sheer necessity.