Why you should care
Because in a near future of smoothly slick artificial helpmates, we’re going to build robots to hang out with, not robots that creep us out.
When Guy Hoffman first saw Pixar’s computer-animated short film starring two desk lamps, Luxo Jr., a lightbulb turned on — and not just onscreen. He marveled at how even a few simple hops and swivels had made the everyday appliances remarkably lifelike, even endearing. “You actually feel something for these two pieces of furniture,” he said. “I thought, ‘I have to learn how to do this.’”
Today he’s the co-director of the Media Innovation Lab at IDC Herzliya in Israel, where he combines animation and method-acting techniques to build robots that mimic our own fluid, intuitive movements — creating what he calls robots with “soul.” So far, Hoffman’s creations include a real-life Luxo Jr. desk lamp, which starred in the world’s first human-robot joint theater performance, and a robotic improvisational jazz musician named Shimon. And in 2012, Hoffman unveiled Travis, a smartphone speaker dock that bobs its head and dances to music.
Traditional robots analyze human actions, then plan and execute a response, resulting in movements that are often stiff, jerky — and, well, robotic. That’s a problem now that robots are poised to become a major part of our lives; engineers are already designing robotic assistants for nursing homes, hospitals and classrooms — all places where they need to get up close and personal with humans. To reap the full benefits of robots’ precision and efficiency, we first need to trust them. That’s where Hoffman’s expressive, sidekick-like robots come in. “He’s trying to change the thinking in the field, and will be the leader of a movement,” said Wendy Ju, executive director for interaction design research at Stanford University.
In less than a year, Google has bought eight robotics companies.
Hoffman is part of a growing group of human-robot interaction researchers at MIT, Carnegie Mellon University and other universities, who program robots to predict human movement and act accordingly. Anticipating the full range of these movements is tough, and Hoffman’s robots sometimes guess incorrectly. Still, studies show that people prefer these robots to their more traditional, reactive counterparts. It’s the difference between a conversation with a best friend who completes your sentences because he or she knows what you’re going to say, and talking to a stranger who needs to listen to your words before responding to them.
Hoffman speaks softly but wryly, and his pierced ears and long blond hair ooze a rock-star edginess. He caught the tech bug in high school, when a neighborhood kid in Jerusalem taught him to program a 3x3 magic square — an arrangement of numbers in which each row, column and diagonal adds up to the same number. “It was the first time I saw the power of a clever algorithm,” he said.
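The puzzle that hooked him is easy to check by machine. The snippet below is my own illustration, not Hoffman’s original program: it verifies that a classic 3x3 magic square really does have equal sums along every row, column and diagonal.

```python
# A classic 3x3 magic square (hypothetical example -- the article
# doesn't record which square Hoffman actually programmed).
square = [[2, 7, 6],
          [9, 5, 1],
          [4, 3, 8]]

target = sum(square[0])  # the "magic constant"; 15 for the digits 1-9
rows_ok = all(sum(row) == target for row in square)
cols_ok = all(sum(square[r][c] for r in range(3)) == target
              for c in range(3))
diag_ok = (sum(square[i][i] for i in range(3)) == target and
           sum(square[i][2 - i] for i in range(3)) == target)

print(rows_ok and cols_ok and diag_ok)  # True
```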
Insatiably curious, he flitted from one field of study to the next, taking jazz piano lessons and working part-time as a film production assistant. “I always felt that there was value in different ways of thinking,” he said. He briefly settled on his first love, computer programming, enjoying a stellar career as a product manager for a security software company.
But after he saw Luxo Jr., he couldn’t shake his amazement. He took the plunge, quitting his cushy job and enrolling in an animation program at Parsons School of Design in Manhattan.
His friends and family called him crazy when Hoffman withdrew his life savings and moved into a crumbling apartment in Harlem; its ceiling collapsed on him not long afterward. But he came to a realization one night during an ice-cold shower. “Suddenly I felt a sense of freedom from being tied down to physical comfort,” he said. Hot water was a small price to pay for feeling inspired every day.
Over the next year, Hoffman learned two crucial lessons. First, eliciting an emotional response has more to do with how an object moves than how it looks. And second, animators should use their own bodies to act out movements and re-create them in their characters.
He realized he needed to blend his artistic interests with engineering. So he quit Parsons and headed to MIT to study human-robot interaction, where he used his animation training to design a robotic desk lamp that moved fluidly — “like a quiet apprentice” — pointing its light based on its users’ movements.
He also learned that robots’ responses to humans are often calculated reactions, whereas humans predict each other’s movements and adjust their own actions accordingly. Then he remembered what he learned at Parsons — that animators were also actors. Maybe he could apply acting techniques to his robots, too. He took a trip to the library and borrowed a dusty 19th-century book on method acting, a technique that involves trying out different postures and gestures to determine which ones convey the intended emotion, rather than planning these movements beforehand.
“It was like a lightning bolt,” Hoffman said. He enrolled in acting lessons and decided to stage a play with human actors and a robotic desk lamp for his Ph.D. project. The robot would use motion sensors to track an actor’s location, which it would then plug into an algorithm to predict his or her actions and respond appropriately — much like how an actor plays off of a scene partner.
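The anticipation at the heart of that design can be sketched very simply. The code below is my own minimal illustration of the idea, not Hoffman’s actual algorithm: the robot extrapolates from an actor’s recent motion-sensor readings so it can start responding before the movement finishes.

```python
# Minimal sketch of anticipatory motion (an illustration of the idea,
# not Hoffman's algorithm): predict where an actor will be next by
# linearly extrapolating the last two motion-sensor samples.
def predict_next(positions, dt=1.0):
    """Extrapolate the next (x, y) position from the last two samples."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return (x1 + (x1 - x0) * dt, y1 + (y1 - y0) * dt)

# An actor leaning steadily toward the robot: the predicted position
# keeps moving closer, so the robot can "lunge" in response early.
samples = [(0.0, 0.0), (0.1, 0.0), (0.25, 0.0)]
print(predict_next(samples))
```

A real system would smooth noisy sensor data and predict richer actions than position alone, but even this crude extrapolation captures the shift from reacting to anticipating.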
But Hoffman had only six weeks to build and program the robot. Two days before show time, he was still grappling with technical difficulties. He fixed them the day before — and the robot worked seamlessly. In one scene, an actor leaned toward the robot and shouted at it. Predicting that the actor would lunge forward even further, the robot shoved its “head” toward his face, as if engaged in a heated argument.
Hoffman used similar technology to design Shimon, a robotic marimba player that can improvise music by listening to a human player and adjusting its timing and rhythm accordingly. Shimon’s head can also sway to the music and make “eye contact” with other musicians.
Of course, predictions leave more room for error. But when Hoffman designed a robot with two different “brains” — traditional robotic algorithms paired with his own — human subjects overwhelmingly preferred to complete a tedious task with the robot when it used its more intuitive brain. They reported perceiving it as smarter and more committed to the task, and referred to it as “he” or “she.” “By the end of the task we were good friends and high-fived mentally,” said one subject. Of course, robots don’t actually have souls; they don’t experience the emotions or loyalty that make someone a good friend. But we can engineer them into convincing actors who display emotions that engender the same trust, and maybe even affection, so that we’re more likely to welcome them into our personal space.
In other words, maybe robots need to make mistakes to gain wider acceptance. After all, humans generally interact with each other just fine, despite our imperfections, Hoffman said. “Maybe as humans, robots that are a little less than perfect are just perfect for us.”