Could Robots Develop Prejudice on Their Own? - OZY | A Modern Media Company

Could Robots Develop Prejudice on Their Own?


By Sean Braswell


Because prejudice in groups can develop easily, and that has some scary consequences for how the artificially intelligent machines of the future might act.


Some of the smartest humans on Earth are worried about the dangers posed by robots and artificial intelligence. “I fear that AI may replace humans altogether,” the late Stephen Hawking once said. Tesla and SpaceX founder Elon Musk similarly believes that AI poses a “fundamental risk to the existence of civilization.” Dr. Ian Pearson, a leading futurologist, recently told The Sun that robots would someday treat us like “guinea pigs.”

And if that’s not enough to get you thinking about the potential for a master race of cruel robot overlords, a new study points to another startling possibility: it may be far easier than we imagined for autonomous machines to develop one of humanity’s less attractive features, prejudice. Among other things, the study, from computer science and psychology experts at Cardiff University and MIT, finds that:

Prejudice requires only limited intelligence and cognitive ability to develop and spread in populations of artificially intelligent machines.


Prejudice, as the study defines it, “is a human attitude involving generally negative and unsubstantiated prejudgment of others.” It can lead to sexism, ageism, racism, nationalism, religious extremism and more. To explore how prejudice can evolve organically in groups, the researchers ran computational simulations in which virtual agents play a “give and take” donation game: each agent decides whether to donate to a member of its own group or to an outsider, based on its own donating strategy and how that choice affects its reputation. As the game unfolds, the agents learn new donation strategies by copying those employed by other agents, including shunning outsiders to bolster their own reputation and takings.
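The general shape of such a simulation can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors’ actual model: every parameter (group count, payoffs, mutation size) is invented here, and the real study’s reputation dynamics and partner-selection rules are simplified away. Agents hold separate donation probabilities for in-group and out-group partners, and social learning works by copying the strategy of a better-scoring agent.

```python
import random

random.seed(42)

# All of these values are illustrative assumptions, not taken from the study.
NUM_AGENTS = 50
NUM_GROUPS = 5
ROUNDS = 200
BENEFIT, COST = 3.0, 1.0  # recipient's gain vs. donor's cost per donation

class Agent:
    def __init__(self, group):
        self.group = group
        self.p_in = random.random()   # probability of donating to own group
        self.p_out = random.random()  # probability of donating to outsiders
        self.payoff = 0.0

agents = [Agent(i % NUM_GROUPS) for i in range(NUM_AGENTS)]

for _ in range(ROUNDS):
    # Each agent meets a random partner and decides whether to donate.
    for donor in agents:
        partner = random.choice([a for a in agents if a is not donor])
        p = donor.p_in if partner.group == donor.group else donor.p_out
        if random.random() < p:
            donor.payoff -= COST        # donating is costly to the donor...
            partner.payoff += BENEFIT   # ...but benefits the recipient
    # Social learning: copy a better-scoring agent's strategy, with a
    # small random mutation so strategies keep drifting.
    for agent in agents:
        model = random.choice(agents)
        if model.payoff > agent.payoff:
            agent.p_in = min(1.0, max(0.0, model.p_in + random.gauss(0, 0.05)))
            agent.p_out = min(1.0, max(0.0, model.p_out + random.gauss(0, 0.05)))

avg_in = sum(a.p_in for a in agents) / NUM_AGENTS
avg_out = sum(a.p_out for a in agents) / NUM_AGENTS
print(f"average in-group donation rate:  {avg_in:.2f}")
print(f"average out-group donation rate: {avg_out:.2f}")
```

The key point the study makes survives even in this stripped-down form: the learning rule is nothing more than “identify a better-scoring agent and copy it,” which requires no sophisticated cognition. Whether in-group favoritism actually emerges depends on payoff and reputation details that this sketch deliberately omits.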

As the simulations demonstrate, prejudice is truly a force of nature, one that lies dormant in any society or large group and can easily surface given the right incentives. Prejudice also often evolves alongside more positive forces like cooperation, and in some ways, they even help sustain each other. “When those with prejudicial behaviors start grouping together and discriminating, it promotes others to form similarly prejudicial groups,” says study co-author Roger Whitaker, a professor of collective intelligence at Cardiff. “Cooperation can still exist in this context, but it becomes focused within groups, rather than between them, effectively resulting in islands of cooperation.”



But here is what is perhaps most concerning about how prejudice spreads: Because it is a strategy that can be learned by merely identifying and copying the behavior of another agent, adopting prejudicial attitudes requires no very sophisticated cognitive abilities. This may not be all that surprising: Prejudice is not something many of us consider the mark of sophistication. But the implications are nonetheless jarring. The more readily prejudice can develop independently of humanity’s distinct social and psychological capabilities, the more conceivable it becomes that future forms of AI involving some level of autonomy and interaction with other machines, including the internet of things and self-driving vehicles, could be susceptible to the same types of biases we see among humans.

Does this mean we can expect racist or sexist AI robots shaping our lives in the near future? In some ways, this is already happening. Remember Tay, the AI-powered Twitter chatbot that Microsoft had to take offline shortly after its debut once it started rattling off racist sentiments it had learned from other Twitter users? Whitaker cautions that AI robots developing their own damaging set of prejudices is likely still a long way off. The study’s findings also point to factors that can help limit the effects of prejudice: greater diversity in the interactions between simulated agents, a wider variety of agent types and the ability to learn from a broader range of the population. In other words, societies that maintain in-group diversity and value global learning from interactions with out-group populations are the best equipped to stem the proliferation of prejudice.

Still, it’s hard not to worry about a future in which prejudicial robots go rogue. And what happens if the “outsiders” that they are prejudiced against turn out to be us?
