Why you should care

Tech founders have great imagination about how a product should be used, and zero imagination about how someone else might use it.

A few months ago, I reread my childhood copy of A Wrinkle in Time. The 1962 novel pits three kids against an oversize brain, IT, that overwhelms humans into giving up their free will, until planet after planet trades freedom for brutal conformity. Nothing in it resembles our present world, except for the question the novel asks: What happens when you introduce a technology so powerful that people lose the ability to think for themselves?

Sci-fi is designed to look creatively at humanity and technology and ask, “what if?” That’s why Madeleine L’Engle’s novel anticipates, by more than five decades, our current debate about the power of technology to influence what we think, feel and do.

Given the disturbing revelations of recent years (Russia using social media platforms like Facebook and Twitter to disrupt the 2016 election, Facebook’s efforts to downplay it, the ease with which hate speech spreads online), it’s clear that many tech companies are unable or unwilling to ask difficult “what ifs?” about the technology they create. That’s why they need to hire war rooms of science fiction writers to craft stories about what can go right, and what can go very wrong.

Ari Popper is CEO of SciFutures, a firm that specializes in design fiction: creative, narrative works by sci-fi writers that imagine how future products or technologies might be used, for clients ranging from greeting card companies to NATO. In Silicon Valley in particular, he explains, the competitive advantage of creating scale (getting as many users as possible as quickly as possible) makes sense from a business perspective, but can be dangerous because these technologies are so powerful. In early 2017, the company made a concerted effort to take on more ethically oriented projects. “No one will deliberately quash any ethical conversation,” Popper says. “But it’s not a priority.”

Take big data. It’s quite possible that soon we’ll be able to cross-reference ride-share information with other data sets to predict crimes, a kind of JV Minority Report. This could make cities safer, but what happens when the algorithm says an individual has an 80 percent chance of committing a violent crime? Who, exactly, decides what to do with that?

Another technology that raises questions: algorithms that can read, and shape, our moods. “There are companies working on algorithms that could measure our emotions based on our biometrics, and then map that onto external data — traffic, weather, calendar — and find relationships even the smartest scientist couldn’t find,” Popper says. “That could have a tremendously positive impact for humans, but it raises profound ethical issues.”

Like what kind of ethical issues? Well, what if a hostile foreign power hacked into everyone’s emotion algorithm and started manipulating their feelings? What if parents had access to their children’s algorithms? They could intervene to help a depressed child, but they could also try to protect their kid from every bad feeling so she never learns how to cope with hardship.

When I present the algorithm dilemma to Mike Buckley, a sci-fi writer who freelances for SciFutures, he focuses on the storytelling. “In a dystopian version, I’d have a character wake up, jack into a VR interface that connects him to the technology and then show how that character slowly gives up agency — whether it’s political, financial, familial — toward some unknown force,” he says. Perhaps it’s that the algorithm manipulates your feelings around a certain idea, but it doesn’t just do it to you, it does it to your children, your spouse, your friends, your children’s teachers. “It would be hard to parse that from reality,” says Buckley. “At a certain point, that becomes your reality.”

What’s sharp about Buckley’s take is how incremental it is. Twenty years ago, if you’d told people they would willingly give up personal information like the names of their romantic partners or their online searches for weird medical ailments, they’d have said that was crazy. But inch by inch, here we are.

SciFutures has often identified ethical issues with emerging technology before the company behind it realized there was a problem. While developing Alexa skills for a third-party client (not Amazon) at the beginning of 2016, Popper and his team realized that Alexa’s relationship with children was tricky: kids started treating her like a slave, and that behavior spilled over into how they acted in public. Through storytelling, SciFutures created a prototype in which Alexa has a Mary Poppins mode and acts more like a guardian or a coach.

It took Amazon until 2018 to adopt the same idea, based on feedback from users. When I asked Popper why SciFutures hadn’t contacted Amazon before then, he said Amazon hadn’t been a client, but that “we probably should have.”

Founders have a tremendous amount of vision about building technology. They have much less imagination about the ways others might use it. Closing that gap may call for government oversight, but it’s also a job for science fiction writers. After all, it’s a problem they can hack.
