What Do Killer Robots Dream Of?

WHY YOU SHOULD CARE

Because being chased by death-dealing robots is no one’s idea of a good time.

  • Artificial intelligence doesn’t love you. It doesn’t hate you. But you’re made of atoms that it can use.
  • There’s a campaign to stop killer robots. And no, it’s not a joke.

The cinematic depictions are pretty clear. From I, Robot to the Terminator series to The Matrix, humans, wittingly or unwittingly, end up clashing with the machines that previously served them. To the death. Or, at the very least, in some sort of global enslavement scenario.

It’s a narrative that, like some of our best mythos, puts us at the center of the action and, more often than not, shows the supremacy of human ingenuity under pressure. Because it’s Hollywood, and Hollywood specializes in fictions. Well, at least part of it is fiction. More than likely, the part where we win.

“Killer robots are definitely something we should be concerned about,” says Elijah Joseph Weber-Han, who manages Cornell’s Communication and Collaborative Technologies Lab with Susan Fussell. “Of course, if we’re talking about malevolent AI, it would have to purposely be built that way so the question becomes: ‘Would they?’ I hope not.”


And yet, asks Shane Quinlan, a 33-year-old who works in robotic process automation for a group partnered with a very large Ukraine-based company he’s not allowed to name, would we even know if we were building it that way? “I perform the steps a human would in constructing medical records,” says Quinlan. “The AI watches me while I do it and records the process. I don’t see that the system could be tilted to totally malicious ends, but if medical records were purposefully mixed up? Well, it’s a definite possibility.”
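If the record-and-replay idea behind Quinlan’s work sounds abstract, a minimal sketch of it might look like the following. This is illustrative only: the step names and medical-record fields are invented for the example, and a real RPA system captures live UI events rather than scripted calls.

```python
# Illustrative sketch of record-and-replay robotic process automation (RPA).
# All step names and fields here are invented for the example.
import json

recorded_steps = []

def record(action, **fields):
    """Log one step, in order, as the human performs it."""
    recorded_steps.append({"action": action, "fields": fields})

# The human "performs the steps" once while the recorder watches.
record("open_form", name="patient_intake")
record("fill_field", field="patient_id", value="12345")
record("fill_field", field="diagnosis_code", value="J45.909")
record("submit_form", name="patient_intake")

# The bot later replays the stored process verbatim. This is the risk
# Quinlan flags: tamper with the stored steps (swap two patient_id
# values, say) and the bot reproduces the corruption just as faithfully.
for step in recorded_steps:
    print(json.dumps(step))
```

The bot reproduces whatever process was recorded, intended or not.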

Which would largely account for why the threat is being taken seriously in sunnier places than the corners of the internet that scream about bloodthirsty bots already enacting robo-takeovers. Human Rights Watch, most notably, helped launch the Campaign to Stop Killer Robots, which has thus far corralled almost 30 countries into calling for a ban on fully autonomous lethal weapons. Though China supports a ban on the use of such weapons, it draws the line at their development and production. Those, apparently, it’s totally OK with. And in a “twist” that should surprise no one, the United States and Russia are nowhere near this list.

“The history of military development is a history of nonhuman systems for killing people,” says Justin Joque, author of Deconstruction Machines: Writing in the Age of Cyberwar and the soon-to-be-published Infidel Mathematics: Algorithms, Statistics and the Logic of Capitalism. Since military systems, justified by a nation’s right to self-defense, will always be part of the way humans do business with each other, the goal, perhaps, should be to “focus on what we choose to incentivize and then generate more just and humane systems.”

“The problem is that AI, cyberwar and really computers are anti-anthropocentric,” Joque says. It’s a problem that people working on AI have tried to address with human-in-the-loop approaches to machine learning, in which people like Quinlan form part of a “virtuous circle”: the models they train and tune benefit from human judgment being inserted at key points in the loop.
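To make that circle concrete, here is a toy sketch of the pattern: a model handles what it’s confident about and routes everything else to a person, whose answers are folded back in. The keyword “model,” the threshold, and the human_review stand-in are all invented for illustration.

```python
# Toy sketch of human-in-the-loop learning: low-confidence predictions
# go to a person, and the person's answers are folded back into the model.

def predict(model, text):
    """Toy classifier: score text by the share of known keywords it contains."""
    words = text.split()
    hits = sum(1 for word in words if word in model)
    confidence = hits / max(len(words), 1)
    return ("relevant" if hits else "irrelevant"), confidence

def human_review(text):
    """Stand-in for a person like Quinlan supplying the correct label."""
    return "relevant" if "records" in text else "irrelevant"

def human_in_the_loop(model, stream, threshold=0.5):
    for text in stream:
        label, confidence = predict(model, text)
        if confidence < threshold:
            label = human_review(text)      # human intelligence enters the loop
            if label == "relevant":
                model.update(text.split())  # fold the correction back in
        print(f"{text!r} -> {label}")
    return model

model = {"medical", "records"}
human_in_the_loop(model, ["update medical records",
                          "file patient records today",
                          "order office lunch"])
```

Nothing in the sketch is smart; that’s the point. The human supplies the judgment the loop can’t.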

The best example of this “human intelligence” coming into play? When researchers at Facebook shut down an experiment with the company’s AI-fueled negotiation chatbots after the bots drifted into a shorthand of their own devising. The shorthand made the bots’ haggling more efficient. The catch, though, was that the humans “running things” no longer knew what the machines were saying to each other.

“Well, the history of AI is still being written,” says an AI expert at Facebook who is not authorized to speak publicly. “But it is certainly open to malicious involvement.”

Which, if you’re keeping track, has us here: DOOM: 1, Humans: 0.

And even if the Defense Advanced Research Projects Agency has dropped Boston Dynamics, the creator of all of those “cute” YouTube videos of robot dogs and warehouse robots, presumably because its robots weren’t practical enough, that hasn’t stopped the show. Weber-Han regales us with tales of a Russian robot that can drive a car by itself before jumping out and firing two guns, one in each “hand,” with an extremely high level of accuracy at downrange targets.

Sounds wild until you consider that the Department of Defense is also looking at pouring a ton of money into genetically engineered soldiers and artificial superintelligence (ASI). Add the fact that China espouses a radically different view of AI in warfare (presumably the South China Sea kerfuffle isn’t about bragging rights so much as access to the rare earth minerals useful in robot development), as does Russia, and you have a quickly warming AI cold war. Alarmist? No. Technology designed to neutralize current American military hardware is already in use.

So should we all start learning machine languages or natural language processing to toady up to our robotic overlords?

“The more complicated something is,” Weber-Han explains, “the more open it is to being compromised. And AI is very complicated.” So, yes, AI is open to malicious intent and outside actors, but it’s just as likely to spinning-ball-of-death itself out of service before it gets anywhere near you.

Yes, yes, but what if…

“Humans have a drive to survive in most cases,” Weber-Han says. “Do robots have this? I don’t think there’s any AI design based on this.” So in the Monday morning postgame analysis, human incompetence and stubbornness seem to outpace technological sophistication and brute force.

It brings to mind a famous line from Pogo, a cartoon character created by the estimable Walt Kelly, parodying an old military dispatch: “We have met the enemy and he is us.” Indeed.
