Hunting money launderers? There’s AI for that
Banks explore artificial intelligence to better detect fraud after go-ahead from federal regulators
Encouraged by a recent green light from regulators, the financial services industry is exploring new ways of using artificial intelligence to comply with banking regulations and to better detect fraudulent transactions carried out by criminals and terrorists.
This move toward new approaches to banking compliance comes amid growing concern that heavier government scrutiny could cause the United States to fall behind similar efforts already underway overseas.
Last December, federal regulators, including the Federal Reserve System’s board of governors and the Federal Deposit Insurance Corporation, issued a joint statement encouraging the industry “to consider, evaluate and, where appropriate, responsibly implement innovative approaches” to detect money laundering operations and terrorist financing.
“The agencies realize that private sector innovation, including new ways of using existing technologies or adopting new technologies,” such as artificial intelligence, “can help banks identify and report money laundering, terrorist financing and other illicit financial activity by enhancing the effectiveness and efficiency” of their compliance programs, the regulators said.
The go-ahead resulted in a flurry of activity by banks, consulting firms and fintech companies, all of which are seeking less expensive, more streamlined ways to monitor banking transactions for possible money laundering. The cost of that compliance work has risen to roughly $25 billion per year since new requirements were put in place following the Sept. 11 terror attacks.
Tim Mueller, the managing director for global investigations and compliance at Navigant, a Chicago-based consulting firm, said the current enthusiasm reminds him of the late 1990s, when he was advising banks on how to enhance their businesses using the World Wide Web.
“Although you could make some pretty significant arguments that AI is going to be bigger than the internet,” said Mueller, who gave a presentation on Navigant’s efforts to harness the technology at the annual meeting of the World Economic Forum last month in Davos, Switzerland.
“It’s kind of impossible to ignore,” he added. “If you don’t start to understand how AI can help you serve your clients better, […] you’re going to be out of business and irrelevant in the very near future.”
Still, the industry is proceeding with caution. While banks and their partners feel emboldened by the December notice from regulators, they remain wary of its stipulation that banks may not be fully exempt from punishment if pilot programs expose existing gaps in their compliance operations.
“If I’m an institution, that makes me a little nervous,” said David Stewart, who leads the financial crimes and compliance division at SAS, a North Carolina-based software company.
“Some of the things we’re doing with our clients in Hong Kong are substantially more innovative than what our clients have an appetite to do in the United States because they feel they’re under such heavy regulatory scrutiny,” Stewart said.
In partnership with Ayasdi, a Silicon Valley-based machine-learning company that has also worked with Citibank and HSBC, Navigant is seeking to challenge the traditional means of weeding out possible money laundering when monitoring a large number of transactions, which Mueller described as “pretty dicey.”
Currently, transaction monitoring typically scrutinizes a limited data set and relies heavily on humans trained to spot red flags. Transactions are segmented by broad categories such as a client’s business type, location or risk level as determined by the bank, which allows significant data to fall through the cracks.
The strategy tends to result in a high number of false positives — normal banking behavior initially flagged as suspicious. According to Mueller’s presentation at Davos, an estimated 95 percent of alerts generated in the first phase of a transaction review are found to be false positives, and 98 percent of alerts do not lead to the filing of a suspicious activity report.
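The rule-based approach described above can be illustrated with a minimal sketch. All of the segment names, thresholds and sample transactions here are hypothetical, chosen only to show how fixed per-category rules generate alerts on ordinary activity:

```python
# Hypothetical sketch of traditional rule-based transaction monitoring:
# transactions are bucketed by a few broad attributes and checked against
# fixed thresholds. Names and thresholds are illustrative, not from any
# real compliance system.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    client_segment: str   # broad category, e.g. "retail" or "cash-intensive"
    country_risk: str     # bank-assigned risk level: "low", "medium", "high"

# Fixed per-segment limits: anything above the limit raises an alert.
THRESHOLDS = {"retail": 10_000, "cash-intensive": 50_000}

def flag(tx: Transaction) -> bool:
    """Alert if the amount exceeds the segment limit or the country is high-risk."""
    limit = THRESHOLDS.get(tx.client_segment, 10_000)
    return tx.amount > limit or tx.country_risk == "high"

txs = [
    Transaction(12_000, "retail", "low"),          # flagged: over segment limit
    Transaction(60_000, "cash-intensive", "low"),  # flagged: over segment limit
    Transaction(5_000, "retail", "high"),          # flagged: high-risk country
    Transaction(8_000, "retail", "low"),           # not flagged
]

alerts = [tx for tx in txs if flag(tx)]
print(f"{len(alerts)} alerts from {len(txs)} transactions")  # 3 alerts from 4 transactions
```

Because the rules know nothing about whether a given client's activity is normal for that client, most of these alerts would be cleared on review — the false-positive problem the article describes.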
Enter Ayasdi, which used artificial intelligence to mine four years’ worth of transaction data belonging to two of Navigant’s clients, Scotiabank of Canada and Intesa Sanpaolo of Italy, for instances of possible money laundering.
Instead of analyzing transactions using 20 or 30 categories, the banks were able to see data generated across 500 data points, said Alex Baghdjian, Ayasdi’s financial services strategy lead. False positives plummeted as a result, while the number of alerts that were chosen for further review rose.
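One way richer data can cut false positives is by comparing each client against its own behavioral baseline rather than a coarse category. The following sketch is purely illustrative — it is not Ayasdi's method, and the function and figures are invented for the example — but it shows the general idea:

```python
# Hypothetical sketch of behavior-based monitoring: instead of a static
# category threshold, each client is compared against its own transaction
# history, so routine activity for that client is not flagged.
# Illustrative only; not any vendor's actual algorithm.

from statistics import mean, stdev

def flag_by_behavior(history: list[float], amount: float, z_cut: float = 3.0) -> bool:
    """Alert only when an amount is far outside the client's usual range."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_cut

# A cash-intensive business routinely moves large sums. A flat $10,000
# threshold would flag every one of these transactions, but a behavioral
# baseline flags only the genuine outlier.
history = [48_000, 52_000, 49_500, 51_000, 50_500]
print(flag_by_behavior(history, 50_000))   # False: typical for this client
print(flag_by_behavior(history, 250_000))  # True: far outside usual range
```

A production system would use hundreds of features per client rather than a single amount statistic, which is the shift from "20 or 30 categories" to 500 data points that Baghdjian describes.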
“Not only were we able to get rid of the noise — all these alerts that were non-productive — but we also identified all these new areas of risk that were being escalated,” said Baghdjian. “Not only did we increase efficiency, but we drastically increased effectiveness as well.”
Navigant and Ayasdi aren’t alone in their pursuits. Bigger banks like Wells Fargo are also experimenting with machine learning in the anti-money laundering sector. But firms are reluctant to charge too far ahead, lest they run afoul of regulators grappling with the promise and pitfalls of artificial intelligence.
“It’s not an industry that lends itself to being first movers,” said Mueller.
The December notice by regulators, which struck a largely optimistic tone while still warning banks against getting greedy, is just one recent sign the government is growing more curious about, and yet suspicious of, innovation in compliance procedures. And it follows a larger trend in fintech: The industry’s rapid growth is prompting regulators to closely track innovation that could change forever the way Americans bank.
For instance, the Federal Reserve Bank of New York on Friday launched a 10-member “Fintech Advisory Group” of bankers, attorneys and academics in order to “establish clear points of contact with senior representatives and thought leaders” in the fintech industry.
The group, which is scheduled to have its first meeting next Monday, “will also gather insights that may inform our interaction with market participants and institutions, our training and hiring efforts, and the application of innovative approaches for internal business use,” said Kevin Stiroh, executive vice president and head of the supervision group at the New York Fed.
The move marks a departure for the Federal Reserve System as a whole, which has historically taken a hands-off approach to fintech innovation compared with other banking regulators.
Other regulators are taking on their own initiatives. The Office of the Comptroller of the Currency and the FDIC have both signaled they’re willing to work with fintech innovators, and the Consumer Financial Protection Bureau recently proposed a “regulatory sandbox” in order to allow companies to experiment with new financial products without needing to worry about breaking the law.
Still, the stakes are higher when it comes to anti-money laundering, and firms have trained a nervous eye on the December notice’s condition that U.S.-based firms would not “necessarily” face penalties for past violations uncovered by current experimentation. For Stewart of SAS, the U.S. regulatory stance stands in stark contrast to what he considers more lenient approaches by the United Kingdom and Singapore.
“It’s great that the regulators are putting these advisories out there,” he said. “But if we want to continue to be competitive within the financial services industry, our institutions can’t be at a disadvantage.”
Steven Harras contributed to this report.