Presidential Debates: Ready for Bidenator vs. Trumpinator?

(Image: The podiums on stage prior to the third U.S. presidential debate at the Thomas & Mack Center in Las Vegas, Nevada, on October 19, 2016. Photo by Joe Raedle/Getty Images; composite by Sean Culligan/OZY)

By Harith Khawaja


Future generations of presidential candidates may debate machines that are better at it than their brightest aides.

Imagine an artificial intelligence version of Joe Biden that wouldn’t stumble over answers. Or an AI debating aide that helps the contenders onstage slice through arguments and call one another’s bluffs, in real time.

Although it won’t happen in 2020, we’re on the cusp of a milestone in the development of intelligent machines. Computers can now extract evidence from reams of data, reconstruct arguments and detect spurious ones better than ever. Soon, they’ll be directly informing our opinions — as debaters.

Driving these efforts is IBM’s Project Debater. The program relies on a neural network, an architecture that mimics the brain by stacking layers of algorithms — or neurons — on top of one another. However, the system learned to debate very differently from the way humans do: It was shown about 10 billion assertions — a data set roughly 50 times larger than Wikipedia — and then told which were good arguments.

In 2018, Project Debater defeated Israeli high school debate champion Dan Zafrir.

Armed with enormous amounts of computing power, it learned to statistically correlate rigorous propositions in the data set with valid conclusions. When later shown a non sequitur, the system’s parameters correctly flagged it as a flawed argument. Thankfully, today’s debate coaches do not have to show their human students all of Wikipedia 50 times, or we would have very few competitions indeed.
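The learning mechanism described above — showing a system labeled examples until its parameters come to associate certain textual features with good arguments — can be sketched in miniature. The toy below is not IBM's system or anything close to it: it is a single logistic "neuron" over a bag-of-words, trained on a handful of invented statements, and every statement, word weight and score in it is made up purely for illustration.

```python
# A toy version of supervised argument scoring: a single logistic unit
# learns, from labeled examples, which words correlate with sound reasoning.
# (Invented data; a real system like Project Debater uses billions of
# assertions and deep networks, not six sentences and one neuron.)
import math

LABELED = [
    # (statement, 1 = sound argument, 0 = non sequitur)
    ("studies show exercise improves health therefore schools should fund sports", 1),
    ("the evidence indicates vaccines reduce disease so vaccination should be encouraged", 1),
    ("crime fell after streetlights were added therefore lighting likely deters crime", 1),
    ("my neighbor owns a cat therefore taxes should be lower", 0),
    ("bananas are yellow so the moon landing was faked", 0),
    ("it rained on tuesday therefore the senator is corrupt", 0),
]

def featurize(text, vocab):
    """Turn a statement into a bag-of-words count vector."""
    vec = [0.0] * len(vocab)
    for w in text.split():
        if w in vocab:
            vec[vocab[w]] += 1.0
    return vec

def train(data, epochs=300, lr=0.1):
    """Fit one logistic unit by gradient descent on log-loss."""
    vocab = {}
    for text, _ in data:
        for w in text.split():
            vocab.setdefault(w, len(vocab))
    weights = [0.0] * len(vocab)
    bias = 0.0
    for _ in range(epochs):
        for text, label in data:
            x = featurize(text, vocab)
            z = bias + sum(wi * xi for wi, xi in zip(weights, x))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted "soundness"
            err = p - label                  # gradient of log-loss w.r.t. z
            bias -= lr * err
            weights = [wi - lr * err * xi for wi, xi in zip(weights, x)]
    return vocab, weights, bias

def argument_score(text, vocab, weights, bias):
    """Probability-like score in (0, 1); higher = more argument-like."""
    x = featurize(text.lower(), vocab)
    z = bias + sum(wi * xi for wi, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

vocab, weights, bias = train(LABELED)
good = argument_score(
    "the evidence indicates lighting deters crime so cities should fund streetlights",
    vocab, weights, bias)
bad = argument_score(
    "bananas are yellow therefore the senator is corrupt",
    vocab, weights, bias)
```

After training, the evidence-laden statement scores higher than the non sequitur: words the model only ever saw in labeled-good examples ("evidence," "indicates") acquire positive weight, while words unique to the labeled-bad examples pull the score down — a statistical correlation between features and labels, which is all the "flagging" described above amounts to.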

IBM’s program has already outperformed humans. In 2018, it defeated Israeli high school debate champion Dan Zafrir. In February 2019, it faced off against Harish Natarajan, the world record holder for debate victories. Despite narrowly losing, Project Debater mesmerized many in attendance. Since then, the system has attained a different caliber of performance: It can contend with a vastly greater range of topics and cite much more appropriate evidence. If pitted against Natarajan today, Project Debater would likely win.

Machines already consistently beat us at games like chess and Go. They detect certain diseases more accurately than radiologists. They recognize risks in nondisclosure agreements better than attorneys. Debating appears to be the next frontier where they will overtake even the smartest humans. Just like its predecessors, Deep Blue and AlphaGo, which dethroned chess and Go masters Garry Kasparov (in 1997) and Lee Sedol (in 2016), respectively, Project Debater is leading the charge against today’s debating champions.

Such rapidly improving artificial systems force us to reckon with central problems regarding intelligent machines. For one, what does it mean for a machine to be intelligent? Can IBM’s debating system think? Can it feel? Does intelligence require thought or feeling? Or are those benchmarks too stringent to be of practical importance?

For several decades, philosophers have been in raging disagreement about these questions.

One camp believes that intelligence is purely behavioral: If a machine performs exceedingly well at tasks like writing or composing or debating, then it must effectively be intelligent. This position was forcefully articulated by British computer scientist Alan Turing, who proposed the Turing test: If a computer communicates well enough that a human interlocutor is unable to identify it as a machine, it should be deemed intelligent.

Diametrically opposed are the experientialists, who hold that intelligent behavior is insufficient for intelligence. It simply does not matter how well the algorithm appears to talk, or write, or play the banjo, they contend. What matters instead are the system’s internal mental states — its emotions, sentience and consciousness.

Experientialists, like pioneering philosopher John Searle, hold that machines blindly abide by rules that engineers have programmed into them and do not have conscious experiences like humans do. So even if Project Debater can argue issues like paying college athletes more effectively than any human debater, the system is wholly unintelligent, according to this school of thought.

Today’s AI systems bring new relevance to this debate between the behaviorists and experientialists.

Shouldn’t systems like Alexa and Siri, which communicate seamlessly with humans, but which don’t seem conscious, count as intelligent? What about Netflix’s magical algorithm that personalizes our content? Or Project Debater, which may very well argue points of view better than the late Christopher Hitchens? These admittedly narrow-domain systems have acquired such mastery in their respective tasks that they deserve at least some intellectual recognition.

On the other hand, if machines are far too primitive to be intelligent, then it’s worth asking whether we are really going to allow our dumb creations to take over millions of blue- and white-collar jobs. We might then want to pause and reconsider the direction in which our society is headed — or let the machines debate it out.

(Harith Khawaja researches the intersection of philosophy and technology, and is a software engineer.)