Why you should care

Because she’s imagining what the AI-driven workforce will look like.


Rumman Chowdhury is nothing if not a dreamer. She believes that there is no reason why the tools of artificial intelligence should be held by an elite few in Silicon Valley — putting her on a personal mission to “democratize” AI. “People will ask me, ‘Do you agree with Elon Musk or Mark Zuckerberg? Do you think we’re headed this way or that?’ And my answer always is, ‘We are headed in the direction that we send ourselves,’” Chowdhury says.

She points to the open source community as an example of the positive effects of putting tech in the hands of the people. Open source essentially means that the programming tools are free and “open” to anyone, and everyone builds on code that is already written. The same process should translate to AI, Chowdhury argues. “So if we start building AI today to educate people, to close barriers to entry, to make the hiring process less racist, that’s the future we will have,” she says. “It’s up to us.”

You look at Siri and Cortana, they’re women, right? But they’re assistants, not the attorney or the accountant.

Alexandra Whittington, foresight director, Fast Future

It’s also up to the 37-year-old Chowdhury. Armed with two undergraduate degrees from the Massachusetts Institute of Technology, a master’s in statistics from Columbia University and a doctorate in political science from the University of California, San Diego, she works as global responsible artificial intelligence lead at Accenture (she joined the company in January after working in data science at Metis and Quotient). Big title, big vision, which she breaks into two parts: “The first is to think about what the AI-driven workforce is going to look like. We need a better system of retaining employees, training them and filling gaps in talent that we’re gonna have,” she says. “The second part is what I call responsible operations: How do we implement processes to combat algorithmic bias [bias embedded in computer programs by humans] and make sure people understand the data they’re working with? We’re still figuring it all out.”

Chowdhury loves figuring things out. She became a data scientist because she knew she wanted to understand humanity through data. “Now data science is part of our everyday lives, but it’s amazing to me that we’re able to compile all this data and figure out trends that are specific to one human being.” Her graduate work in political science and her shift to AI were a result of that same human-centric curiosity: to use data to understand people’s preferences and biases, and to evaluate how tech can affect humanity.

Now living in San Francisco, Chowdhury was born in Rockland County, New York, to Bangladeshi immigrants. “I grew up in this magical place where your race and ethnicity and gender didn’t matter and all people cared about was if you were a good person and if you did your work,” she says. There were so many girls in her high school AP classes that it came as a shock years later to witness gender inequality in Silicon Valley. She points to disparities in pay and problematic company culture, topics that recently made headlines.

Chowdhury’s other personal goal, making AI accessible to everyone, is noble, but if the technology’s ramifications are not yet fully known, might it not also be dangerous? The doomsday scenarios put forward in the media, with AI as a rapacious monster devouring all our jobs, may not be in our immediate future. But Alexandra Whittington, a futurist and foresight director at Fast Future, does worry that implicit human biases could make their way into the AI of the future, a problem that could be exacerbated if it isn’t accounted for early on, before any democratization of the tools occurs. She points to a recent example of AI in law, where the “robot-lawyer” was named Ross and the legal assistant had a woman’s name, Cara. “You look at Siri and Cortana, they’re women, right?” Whittington says. “But they’re assistants, not the attorney or the accountant.” It’s the whole garbage-in, garbage-out theory, she says, cautioning against an overly idealistic approach to the technology.


Rumman Chowdhury in New York City. She sees common ground between political science and data science: They both study behavioral patterns.

Source: Frances Denny

Chowdhury is aware of such challenges and is therefore heavily invested in the ethics of AI and developing a framework whereby bias is taken out of the equation, and people dealing with AI recommendations in the future — for everything from court verdicts to hiring processes — are trained to detect any problem that creeps in. “We need to design these solutions with human beings in mind,” she says. “I would rather have a fire department in place before there’s a fire.”

John Havens, executive director of IEEE’s Global AI Ethics Initiative, has been particularly impressed by what he calls Chowdhury’s “human in the loop” approach to AI. “It also helps that she is really good at simplifying complex terms and ideas,” he says, laughing. His insistence on grounding AI decisions in ethics and values is a conviction Chowdhury shares.

Chowdhury applies her human-centric lens outside the office as well: Along with a friend, she launched X Institute, a school in the Bay Area where refugees learn data science and marketing skills to find freelance jobs through sites like Upwork. “I truly believe that data science, AI, all this technology, especially with education, is intended to close gaps and be a great equalizer,” she says.

OZY Rising Stars

People who are accelerating our culture and advancing the conversation – for good or for ill. You may not have heard of them yet – but you'll soon need to know 'em.