Why you should care
Hard-to-fool AI systems are the latest shields financial institutions are deploying to guard against fraud.
Imagine if you could stop rogue trading when it was just the glimmer of an idea — a stray thought sparked by a trader’s expensive divorce, a big loss suffered at a poker game or growing disillusionment with the daily grind. Imagine if, instead of being bogged down by 10,000 emails a day with words like “fraud,” compliance teams could instead detect changes in tone and other subtle tics that show a trader’s behavior is changing.
In a world where a Japanese company has launched artificial intelligence cameras that are designed to predict shoplifting before it happens, it’s not so hard to believe that the world’s biggest banks are closing in on advances that will allow them to do the above and more. Banks have already made major leaps in trader surveillance in the past few years, embracing communication monitoring tools that look for obvious flash phrases and keywords as well as less obvious ones like “let’s take this conversation offline.”
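The rule-based monitoring described above can be illustrated with a toy sketch. This is not any bank's actual system; the phrase list contains only the examples quoted in this article, and real deployments scan far more patterns across chat, email and voice transcripts.

```python
# Toy illustration of rule-based communication monitoring: scan a
# message for known "flash phrases". Phrase list is illustrative only.
FLASH_PHRASES = [
    "fraud",
    "let's take this conversation offline",
]

def flag_message(text):
    """Return the flash phrases found in a message, if any."""
    lowered = text.lower()
    return [phrase for phrase in FLASH_PHRASES if phrase in lowered]

hits = flag_message("Let's take this conversation offline after lunch")
```

The obvious weakness, and the motivation for the machine-learning tools described below, is that anyone who knows the phrase list can simply avoid it.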
They have set stricter limits on traders’ activities, making it harder for anyone to make the kind of enormous bets that led to one-off losses of as much as $6 billion (in JPMorgan Chase’s “London Whale” scandal). Losses and fines in the past decade across the top 13 global banks add up to more than $10 billion, according to analytics provider Corlytics.
As people start changing their behavior, the models will learn.
Marc Andrews, vice president, IBM Watson Financial Services
Now, banks’ efforts are entering a new era, powered by AI and machine learning. “We have opened up the doors of what’s possible,” says Marc Andrews, vice president of IBM’s Watson Financial Services division, as he outlines its tools that do everything from monitoring conversations for tone to using changes to credit scores to predict which traders are likely to go rogue. About a dozen banks are already deploying Watson, IBM’s AI software, which they use to monitor everyday communications.
“We’re looking at their emails and identifying their communication patterns, the tone of their emails in addition to what they’re saying,” says Andrews. “One of the benefits of … applying machine learning is that you’re not implementing specific rules that someone can just work their way around. As people do start changing their behavior … the models will learn over time and will be able to adapt much more quickly.”
One important outcome is cutting down the number of “false positives” banks have to deal with under traditional systems, which flag hundreds of thousands of potentially suspect messages a month, leaving banks to find a needle in a haystack. The IBM software differentiates between higher- and lower-risk alerts, helping banks cut through the mass they receive. The idea, Andrews says, is to reduce the effort and cost spent on low-risk incidents.
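The triage Andrews describes can be sketched in a few lines. This is a minimal illustration, not IBM's implementation: the risk scores and the 0.7 threshold are invented, standing in for whatever probability a trained model would assign each alert.

```python
# Illustrative alert triage: split surveillance alerts into high- and
# low-risk buckets by a model-assigned score (scores here are invented).
def triage(alerts, threshold=0.7):
    """Partition alerts so reviewers focus on the high-risk bucket."""
    high = [a for a in alerts if a["risk"] >= threshold]
    low = [a for a in alerts if a["risk"] < threshold]
    return high, low

alerts = [
    {"id": 1, "risk": 0.92},  # e.g. unusual tone plus an off-channel phrase
    {"id": 2, "risk": 0.10},  # e.g. a routine personal message
    {"id": 3, "risk": 0.75},
]
high, low = triage(alerts)
```

In practice the high-risk bucket would go to compliance analysts for individual review, while the low-risk bucket might only be sampled.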
Erkin Adylov, whose company Behavox also provides surveillance tools to banks, describes one case where a bank was getting 450,000 alerts per month for “silly things” like a trader asking his wife for a favor.
“When we came in we reduced that by 95 percent,” Adylov says.
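The arithmetic behind that claim: a 95 percent cut of 450,000 monthly alerts leaves 22,500 for human review.

```python
# The reduction Adylov describes, as arithmetic.
monthly_alerts = 450_000
reduction = 0.95
remaining = round(monthly_alerts * (1 - reduction))
print(remaining)  # 22500
```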
Andrews says banks are already pulling other metrics into their surveillance systems, including human resources reviews and credit scoring reports, to identify traders with either the motive or predisposition for rogue activity. IBM has separately developed tools that would allow other information, such as public filings, to be integrated into its trader surveillance system. A court conviction or sizable divorce settlement could be used as a red flag. “We’ve had a lot of inquiries, but none have started putting it into production,” says Andrews of the newest technology. “Lots of these firms are very much in that experimental stage … trying to figure out which of these techniques are worth investing in.”
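One way to picture folding those non-communications signals into a trader's risk profile is a simple weighted score. This is a hypothetical sketch of the idea Andrews describes, not a vendor's design: the signal names and weights are invented for illustration.

```python
# Hypothetical red-flag scoring from non-communications signals.
# Signal names and weights are invented for illustration.
RED_FLAG_WEIGHTS = {
    "credit_score_drop": 2.0,  # sudden financial stress
    "poor_hr_review": 1.5,     # disciplinary or performance issues
    "court_record": 3.0,       # e.g. conviction or large settlement
}

def risk_flags(trader):
    """Sum the weights of whichever red-flag signals are present."""
    return sum(w for signal, w in RED_FLAG_WEIGHTS.items()
               if trader.get(signal))

trader = {"credit_score_drop": True, "court_record": True}
score = risk_flags(trader)  # 2.0 + 3.0 = 5.0
```

A real system would feed such signals into a trained model alongside communications data rather than use fixed weights, but the principle, combining independent weak indicators into one risk picture, is the same.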
Typical costs of less than $10 million for rogue trading detection systems are small compared with the multibillion-dollar losses banks have taken on rogue trading, but Andrews says banks are still “price-sensitive” on what they will adopt. He believes this is partly because there have been no big rogue trading cases in recent years and regulators are currently more focused on requirements such as “know your client” and anti-money-laundering practices.
Banks do not ask for solutions that eliminate rogue trading altogether, Andrews says, but do ask questions like: “Can you cut out 20 percent of our alerts and guarantee there won’t be any events in that bottom 20 percent?” His answer is always no.
The big question is whether these sophisticated tools, alongside other factors, including changes in bank culture, stricter trading limits and regulatory pressure, will lead to an era where rogue trading is an anachronism.
Lex Sokolin, a fintech analyst at research provider Autonomous, says the combination of machine learning tools and so-called regtech, which aligns what machines are able to do with regulators’ requirements, makes it likely that large banks will see suspicious internal activity before it hits the market. But, he adds, “no system is foolproof.”
The head of one large U.S. investment bank says that while rogue trading is seen as less of a live problem than it was a decade ago, it has not gone away and never will — a sentiment echoed privately by other banks. Many decline to speak publicly about it, though, for fear of tempting fate.
“Fundamentally it [rogue trading] is always a concern,” says Eoin Cumiskey, of U.K.-based financial services advisory firm FSCom. “The application of technology allows for greater controls to be in place, but … no one we’ve come across has really said: ‘It’s fine, we have a little black box in the corner, that stamps it out, it’s yesterday’s issue.’”