Why you should care
You can fool your friend but not your MacBook.
Think your friend looks fat in that dress but don’t want to tell her? It’s unlikely she’ll know you’re faking that supportive smile. After all, humans are notoriously bad at detecting insincerity. But ask 100 people to interpret that look on your face and the jury will probably come back with a swift verdict: You’re lying. That’s swarm intelligence at work: the idea that many minds together outperform any one of them (hence the ask-the-audience lifeline on Who Wants to Be a Millionaire).
As Stephen Hawking, Elon Musk and others sound the alarm over fears that artificial intelligence will eventually subjugate humans, other scientists and entrepreneurs are developing ways to harness swarm intelligence to reap the benefits while keeping people in the equation, along with their creativity, intuition, judgment and morality. In turn, an increasing number of companies, in fields ranging from marketing and medicine to the military and beyond, are tapping into the products and services being offered by swarm-intelligence startups in the U.S., Italy, Singapore and elsewhere.
A swarm lets you build an artificial super expert that outperforms a typical human expert.
Dr. Louis Rosenberg, CEO, Unanimous A.I.
One such outfit is Unanimous A.I., a Silicon Valley–based company that meshes human swarms with complex algorithms. In a recent study, one of its swarms collectively made 46 percent fewer errors when identifying fake smiles than individual participants. “Humans are not very accurate at telling if someone is being honest or deceptive,” says CEO Dr. Louis Rosenberg, who finds swarm intelligence not only a better predictor of truth but also an effective way of solving problems. He references nature’s swarms, such as birds, bees and fish. When it comes to food, shelter and survival, they outperform individuals and collectively make decisions for the greater good. “If a swarm acted like Congress, it would die,” he says, emphasizing that survival depends on cohesion.
On Rosenberg’s social platform, UNU, human clusters of five to 75 members answer a variety of questions. Operating remotely, participants drag magnetized markers toward possible answers arranged around a hexagon, and the cluster has 60 seconds to converge on a group answer. More than 150,000 people are registered on the platform, and the results speak for themselves: Average Joes make better Oscar predictions than film critics, and a TechRepublic reporter who made a $1 Kentucky Derby bet based on UNU’s collective judgment pocketed $542 in winnings. “A swarm treats people as a data processor,” Rosenberg says. “A swarm lets you build an artificial super expert that outperforms a typical human expert.”
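The mechanic is easy to sketch in code. Below is a toy model of that tug-of-war, not Unanimous A.I.'s actual algorithm: answer options sit at the vertices of a regular polygon (UNU uses a hexagon), a shared "puck" starts at the center, each participant pulls it a small step toward their choice, and whichever option the puck ends up nearest wins. The step size, round count and polygon layout are all illustrative assumptions.

```python
import math

def swarm_decide(options, votes, rounds=60, step=0.05):
    """Toy swarm-consensus model: participants collectively drag a
    shared marker toward competing answers; nearest option wins."""
    # Place each answer option at a vertex of a regular polygon.
    n = len(options)
    pos = {opt: (math.cos(2 * math.pi * i / n),
                 math.sin(2 * math.pi * i / n))
           for i, opt in enumerate(options)}

    puck = [0.0, 0.0]  # the shared marker starts at the center
    for _ in range(rounds):
        # Each participant pulls the puck slightly toward their choice;
        # the puck moves by the average of all pulls.
        fx = sum(pos[v][0] - puck[0] for v in votes) / len(votes)
        fy = sum(pos[v][1] - puck[1] for v in votes) / len(votes)
        puck[0] += step * fx
        puck[1] += step * fy

    # The swarm's answer is the option the puck ends up nearest.
    return min(options, key=lambda o: math.dist(pos[o], puck))
```

Even in this crude version, the geometry captures one real property of swarms: a lone dissenter slows the puck down but cannot stop a cohesive majority from reaching its answer inside the time limit.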
Detecting emotional states with software alone — known as affective computing — is also a growing field. This relies on physical cues like temperature and sweat to take readings. In one potentially far-reaching application, University of Padova researchers in Italy have developed a novel way of thwarting online identity theft. In a paper published in May, they explained that extra security questions can be used to separate legitimate users from internet con artists. The answers they provide are irrelevant; the extra questions keep users on the screen longer so the software can analyze the real tell — the mannerisms of the users’ mouse movements.
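The core idea of the Padova approach, that the behavior rather than the answers is the signal, can be sketched as a simple feature check. The features and threshold below are illustrative stand-ins, not the ones from the paper: reduce a mouse trail to a couple of summary numbers and flag a session whose numbers stray too far from the account owner's stored profile.

```python
import math

def mouse_features(trail):
    """Reduce a mouse trail [(x, y, t), ...] to two illustrative
    features: mean cursor speed and mean absolute turning angle."""
    speeds, turns = [], []
    for (x0, y0, t0), (x1, y1, t1) in zip(trail, trail[1:]):
        if t1 > t0:
            speeds.append(math.hypot(x1 - x0, y1 - y0) / (t1 - t0))
    for a, b, c in zip(trail, trail[1:], trail[2:]):
        # Angle change between consecutive movement segments,
        # wrapped into [-pi, pi] before taking the magnitude.
        ang1 = math.atan2(b[1] - a[1], b[0] - a[0])
        ang2 = math.atan2(c[1] - b[1], c[0] - b[0])
        turns.append(abs(math.atan2(math.sin(ang2 - ang1),
                                    math.cos(ang2 - ang1))))
    return (sum(speeds) / max(len(speeds), 1),
            sum(turns) / max(len(turns), 1))

def looks_like_owner(trail, profile, tolerance=0.5):
    """Flag a session as suspicious if its features deviate too far
    from the owner's stored profile (hypothetical threshold)."""
    speed, turn = mouse_features(trail)
    owner_speed, owner_turn = profile
    return (abs(speed - owner_speed) / max(owner_speed, 1e-9) < tolerance
            and abs(turn - owner_turn) < tolerance)
```

A real system would use many more features and a trained classifier, but the design point survives the simplification: the extra security questions buy screen time, and screen time is what makes the behavioral measurement possible.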
And then there’s the relatively uncharted affective computing territory of emotional intelligence. While at MIT, computer scientist Ehsan Hoque designed an A.I. that could identify smiles within a context — a smile of frustration, for example, is not the same as one of joy. For Hoque, who now leads the Human-Computer Interaction Lab at the University of Rochester, his original A.I. smile-detection experiment has brought some unexpected guests to his lab door. “The first application that came to mind was helping individuals with autism, since they look at smiles the way machines [do],” he says.
But the U.S. Army sees other uses, such as espionage, and is now funding Hoque’s deception-detection study. Such technology could potentially be used in the judicial system and in corporate human resources departments. Hoque’s other projects include an A.I. that evaluates social skills and one that analyzes physical mannerisms. “Ethically, I still want humans to make decisions, using attributes A.I. [provides] as evidence,” Hoque says. “A.I. is good at quantifying — [things like] how many times people hesitate, how many filler words [they use].”
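The kind of tally Hoque describes is simple to sketch. The filler list and the use of "..." as a pause marker below are illustrative assumptions, not his lab's actual pipeline: given a transcript, count the words, the fillers and the hesitations, and hand those numbers to a human to interpret.

```python
import re

# Illustrative filler list; a real system would use a validated lexicon.
FILLERS = {"um", "uh", "like", "basically", "actually"}

def quantify_speech(transcript):
    """Count words, filler words and pause markers in a transcript.
    'Hesitations' here are the '...' markers a transcriber might
    insert for a pause (an assumption for this sketch)."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return {
        "words": len(words),
        "fillers": sum(1 for w in words if w in FILLERS),
        "hesitations": transcript.count("..."),
    }
```

For example, `quantify_speech("So, um, I think... like, basically it works.")` reports three fillers and one hesitation across eight words, exactly the kind of surface-level evidence Hoque wants the A.I. to supply while a person draws the conclusions.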
While it may be disconcerting for a computer to know your moods better than your BFFs do, the upside is that such systems could help eliminate unconscious bias and provide honest feedback. Hoque saw this play out firsthand when an investor visited his lab and checked out ROC Speak, an A.I. speech-analysis system for public speakers. The potential moneyman was an awkward speaker, and the program picked up on his use of filler words in its critique. “I was worried he’d get embarrassed and not [invest],” Hoque says, “but he appreciated it because no one gives him constructive [criticism] anymore.”
Looking to the future, critics visualize swarms and emotional A.I. used in a number of troubling contexts, including politics, health care and law enforcement. But Katja Grace, an A.I. researcher at the Future of Life Institute, believes those concerns are blown out of proportion. “I expect this eventually will have some social implications, but it does not herald especially humanlike intelligence in general,” Grace says. “Being able to recognize a scared demeanor is very different from understanding what fear is.” But should the technology become that sophisticated, she speculates that it would not be a win for humanity. If an A.I. can “infer a person’s internal state from their external appearance,” people might lose the privacy of their own mind.
Rosenberg, whose company has an enterprise swarm division for marketers, says such anxieties are unfounded. “We are building an emergent intelligence that has its own personality, intuition and insights,” he says. “What comes out can be different from every single participant.”
Let’s give the final word on the subject to Aristotle: “It is the mark of an educated mind to be able to entertain a thought without accepting it.”