The U.S. Military's Sensible Machine Project

By Richard Waters


It’s the Holy Grail for AI researchers: computers with intuition.

The U.S. Defense Department is preparing to take on one of the biggest challenges in artificial intelligence: how to endow computers with common sense. The effort could lead to military systems with greater awareness of the world around them and the ability to adapt in the way humans do, says Dave Gunning, a program manager at DARPA, the U.S. defense research agency best known for funding early work on the internet and autonomous vehicles.

One result would be systems that “don’t drive off a cliff and have the sense to come in out of the rain,” he says. It could also lead to flexible machines that communicate more naturally with people and can adapt to unexpected events.

The pursuit of machines with common sense highlights design weaknesses in today’s AI that could severely limit the usefulness of the technology. “This is the elephant in the room, the 800-pound gorilla. If we don’t make progress on this, we’ll never get beyond the brittle [AI] systems we currently have,” says Gunning.

Programming computers to have the sort of intuitive understanding of the world that comes as second nature to humans was a central hope when the field of AI was founded more than half a century ago. The goal was abandoned early on but has seen a recent revival of interest in academic circles. Microsoft co-founder Paul Allen doubled the investment in his own AI research institute earlier this year to push the idea.

“This is one of the oldest, if not the oldest, dream of AI researchers,” says Joshua Tenenbaum, a professor of cognitive science and computing at Massachusetts Institute of Technology. “We think of this as at the heart of what it means to be intelligent.”


Many of the recent breakthroughs in AI have come from systems that crunch vast amounts of data in search of patterns, enabling them to do things such as identify images or make predictions. But with no real understanding of the world, these so-called deep learning systems are unable to take on problems outside the narrow areas they were designed for.

“They don’t generalize well across different topics and aren’t robust in unforeseen situations,” says Yejin Choi, an associate professor at the University of Washington who is among the AI researchers working on common sense.

DARPA typically backs a range of academic and commercial research groups when it takes on a new field of research, hoping the benefits will flow back indirectly to the military. That has made it an important funder of basic research in the U.S. However, the importance of military backing for new technologies has also proved controversial, most recently when Google’s image-recognition work for the Pentagon caused a storm of internal protest.

“It would be a good thing if money wasn’t just coming from the Defense Department,” says Tenenbaum, whose research has had backing from a number of military research arms. He adds that while AI researchers actively discuss the ethical issues raised by new work like this, the field needs more open debate.

DARPA is open about the wider implications of far-reaching technologies it backs. Building common sense into computers “would certainly make an AI system more intelligent or more capable — and that could be used for good or for evil,” says Gunning. He also calls it “another rung on the ladder” toward computers with full human intelligence, known as artificial general intelligence.

The renewed hopes for bringing common sense to computers stem from advances in a number of fields. These include the availability of more data and the ability to “crowdsource” intelligence from people over the internet to help machines develop a basic understanding, says Choi.

Some researchers have started to tap theories about how the human brain develops in the hope of building new types of learning system. Recent psychology research suggests the brains of babies “have a lot of the structure built in” from the start, rather than being blank slates that are formed by exposure to the world, says Tenenbaum.

This foundational understanding includes a basic grasp of physics, as well as an intuitive sense of psychology that enables very young infants to understand that other agents in the world have their own goals, he says. This has raised the hope of programming a similar foundation for common sense into machines, although that will require the invention of new programming techniques that draw on fields of AI beyond deep learning.

Gunning points to this work as one of the most promising starting points for forging truly intelligent systems. If computers could develop the same basic building blocks for learning that are present in a 1-year-old infant, that could prepare the machines for the next step toward learning language, he says.

DARPA called together outside researchers this year for a brainstorming session to consider which avenues of research into common sense it wanted to back. The agency is now working on a formal proposal before moving ahead with the project, Gunning says.

OZY partners with the U.K.'s Financial Times to bring you premium analysis and features. © The Financial Times Limited 2020.