In the movie 2001: A Space Odyssey, when a sentient ship computer starts killing off crew members one by one, the last remaining survivor shuts it down to save himself. Made in 1968, the sci-fi classic foreshadowed one of the 21st century’s greatest fears—artificial intelligence (AI) that turns on its creators.
Is this fear well founded, or just sci-fi paranoia? A recent incident suggests there may be more to this question than it seems. In a move eerily reminiscent of 2001: A Space Odyssey, Facebook AI researchers shut down a chatbot experiment after the bots went off script. Instead of using normal English to negotiate and barter for virtual goods, the bots gradually invented their own language, incomprehensible to humans:
Bob: I can can I I everything else.
Alice: Balls have zero to me to me to me to me to me to me to me to me to.
The bots’ newfound language makes no sense to human eyes, but from the bots’ point of view, this robo-gibberish was perfectly understandable and let them communicate and negotiate more efficiently. However, since the results of the experiment were no longer interpretable, the researchers shut the bots down and instituted new rules requiring the chatbots to speak only normal, comprehensible English.
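To see why such drift can actually be efficient for the machines, consider a toy sketch (this is an illustration of the general idea, not Facebook's actual model or protocol): if negotiating agents are rewarded only for deal outcomes, nothing stops them from repurposing tokens, for example letting the repetition count of a phrase stand in for a quantity.

```python
# Toy illustration of "drifted" agent language: the number of repetitions of
# a phrase encodes a quantity. Compact and unambiguous for the agents,
# gibberish to a human reader. Function names are hypothetical.

def encode_demand(n_balls: int) -> str:
    """Encode 'I want n_balls balls' by repeating a token n_balls times."""
    return "balls have zero " + "to me " * n_balls

def decode_demand(message: str) -> int:
    """Recover the demanded quantity by counting repetitions."""
    return message.count("to me")

msg = encode_demand(7)
print(msg)                  # reads like Alice's line above
print(decode_demand(msg))   # 7
```

The "language" is perfectly lossless between the two functions, which is all the agents are optimizing for; human readability was never part of the objective.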
While chatbots negotiating over virtual balls in a lab is no big deal, the implication of AI inventing its own language, incomprehensible to us, does raise new questions. As AI inevitably grows in capability and responsibility, will we one day have automated systems running important aspects of our economy, or even the military, that slip out of our control?
AI may eventually have the ability and scope to harm us, and prominent technologist Elon Musk has argued for regulation to prevent AI from being designed with malicious intent or capabilities. However, the current state of AI is far from being able to do deliberate harm to humanity.
The AI in use today is almost always “narrow AI,” or AI that’s focused on a single task. Whether it’s autonomous cars, robot food servers, customer service chatbots, or delivery drones, this kind of AI is designed to do a single task and do it well.
The AI threat that technologists and sci-fi movies warn against, on the other hand, is general AI. Instead of focusing on a single task, general AI is meant to think through and analyze a wide variety of situations at the level of a human, and may even have some type of sentience: self-awareness and volition.
The danger of a sentient AI acting in its own self-interest against humanity is not unfounded, but we’re far enough away from that reality that being overly concerned about it may do more harm than good. The current “narrow AI,” in our autonomous cars, our service industry robots, and our drones, on the other hand, has tremendous potential to transform society for the better. Autonomous cars, for instance, use neural networks and advanced machine vision to process driving information and deliver us to our destinations more safely than any human driver could.
Robots and increased automation in the service and industrial markets could also take over jobs that are too dangerous, unpleasant, or otherwise hard to fill. Japan, for instance, faces a growing elderly population and a declining working-age population. Increased automation can help the country reduce the number of staff needed to run its nursing homes.
These kinds of AI are focused enough in scope, and limited enough in “general” intelligence, that the risk of them turning sentient and running amok just isn’t there. And the benefits these technologies will give us far outweigh the risks.
With all this in mind, should we simply plunge headfirst into AI, unconcerned about potential issues? As the Facebook chatbot episode shows, perhaps not. Though today’s AI may not have the ability to turn into a sentient, all-powerful intelligence like Skynet from the Terminator movies, that doesn’t mean it doesn’t have the potential to go astray. While a chatbot that doesn’t work as intended is simply a nuisance or a failed experiment, an autonomous vehicle that doesn’t work as designed can do considerable harm.
We may not have to worry about Skynet for now. But the real AI threat in the near term may be short-sighted programmers. If they don’t program for scenarios that could occur in the real world, AI might make decisions that turn out to be detrimental to the humans who depend on it.
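The difference between a short-sighted system and a robust one often comes down to handling the cases the programmer didn’t anticipate. A minimal, purely hypothetical sketch (not drawn from any real autonomous-vehicle stack; the names and the 10-meter threshold are illustrative):

```python
# Hypothetical braking decision. The naive version assumes the distance
# sensor always returns a valid number; the defensive version treats a
# missing or nonsensical reading as unsafe and fails toward braking.

BRAKE_DISTANCE_M = 10.0  # illustrative threshold, not a real spec

def should_brake_naive(distance_m):
    # Crashes (in both senses) if the sensor reports None.
    return distance_m < BRAKE_DISTANCE_M

def should_brake_defensive(distance_m):
    # Sensor dropout or an impossible negative reading -> fail safe: brake.
    if distance_m is None or distance_m < 0:
        return True
    return distance_m < BRAKE_DISTANCE_M

print(should_brake_defensive(None))  # True: unknown means brake
print(should_brake_defensive(25.0))  # False: clear road ahead
```

The naive version works perfectly in every demo where the sensor behaves, which is exactly why the gap only shows up in the real world.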
Rudy Ramos is the project manager for the Technical Content Marketing team at Mouser Electronics and holds an MBA from Keller Graduate School of Management. He has over 30 years of professional, technical, and managerial experience with complex, time-critical projects and programs in various industries, including semiconductor, marketing, manufacturing, and military. Previously, Rudy worked for National Semiconductor and Texas Instruments, and ran his own silk-screening business.