As banks push AI, worry about worsening inequality follows

AI raises concern about weakening fair lending rules

Barefoot, CEO of the Alliance for Innovative Regulation, sees 'very disturbing potential' in poorly designed AI. (Tom Williams/CQ Roll Call file photo)

Banks, consumer advocates and think tanks are weighing in to federal bank regulators about potential pitfalls in the use of artificial intelligence and machine learning in making loan decisions.

In responses to regulators’ call for comments, many expressed interest in an increased use of AI and machine learning in the banking business, along with caveats about fair lending and unlawful discrimination concerns.

FinRegLab, a Washington-based research group that says it has launched a broad inquiry into the use of AI in financial services, told the agencies that machine learning could be “transformational,” as current gaps “increase the cost or risk of serving particular consumer and small-business populations using traditional models and data.”

At the same time, the predictive power of machine learning models can increase potential risks “due to the models’ greater complexity and to their potential to exacerbate historical disparities and flaws in underlying data,” FinRegLab said. The group warned that AI and machine learning might amplify patterns of historical discrimination and financial exclusion through reliance on flawed data or mistakes in development.

The Boston-based National Consumer Law Center was even more blunt, warning in its July 1 letter to regulators that “the use of complex, opaque algorithmic models in consumer credit transactions also heightens the risk of unlawful discrimination, and unfair, deceptive, and abusive practices.”

The law center noted that there isn’t even agreement on the definition of “artificial intelligence,” which adds to the concerns about how it is used.

“The lack of a definition for AI is understandable, but it is also problematic,” the group wrote. “There may be incorrect assumptions that the use of AI necessarily makes a system more accurate or predictive, or that it is unbiased and unquestionably fair.”

The pro-consumer legal organization said public perception of what constitutes AI has been heavily influenced by Hollywood with movies such as “2001: A Space Odyssey” or the Terminator series. “Many think of AI as incredibly human-like and sentient, which is very far from current reality,” it said.

State Street Corp., one of the largest banks in the U.S. with nearly $317 billion in assets, told regulators that in its experience, AI and machine learning models may face data quality challenges, including bias introduced by mislabeled data or embedded in data provided by a third-party vendor.

‘Hard issue to regulate’

Jo Ann Barefoot, a former deputy comptroller of the currency and Senate Banking Committee staff member who now leads the Alliance for Innovative Regulation in Washington, said there are numerous possible benefits to the use of AI in credit underwriting. But regulators need to ensure that banks comply with fair lending laws and that machine learning doesn’t lead to denials of credit based on prohibited reasons such as race and gender, she said.

She warned of a “very disturbing potential” for the use of poorly designed AI.

“This is a very hard issue to regulate,” Barefoot said. “They are going to have to develop smart and informed policies on this issue. I don’t envy them.”

Former Comptroller of the Currency Thomas J. Curry, a Barack Obama appointee and strong proponent of innovative financial technologies during his tenure from 2012 to 2017, said he is encouraged that several agencies — the Federal Reserve, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, Consumer Financial Protection Bureau and National Credit Union Administration — are working together in gathering the information because it will help promote uniformity and clarity in how regulators approach these issues.

“The fact they put out an RFI [request for information] as the vehicle for collecting this data importantly shows their desire to have a unified approach” when it comes to studying and understanding the issues surrounding AI and machine learning, he said.

Curry acknowledged that there is “real concern” in credit underwriting about AI and machine learning perpetuating biases. Unless the financial industry gets clearer guidance from regulators on fair lending, he said, the issue could hold some financial institutions back from employing the technologies to their full potential.

“There is a lot of controversy around Big Data and its potential abuses, and that likely partially drove the decision to issue the RFI,” he said.

Like Curry, Melissa Koide, CEO of FinRegLab, said in an email it was gratifying to see regulators looking to learn more about the roles that artificial intelligence and machine learning can play in the banking system.

“Policymakers are actively building their understanding of the implications of AI/ML on model governance, fairness, explainability, and financial inclusion,” said Koide, who spent more than four years in the Obama administration as the Treasury Department’s assistant secretary for consumer policy. “It’s exciting to see they are working together to build a shared understanding.”

Regulators say the RFI responses will help them determine whether any clarifications are needed for banks to use AI and machine learning in a manner that fully complies with laws and regulations, including consumer protection statutes.

The banking industry appears all in on new technologies, saying it welcomes the government’s research as a step in clarifying existing rules and guidelines to address the risks and opportunities presented by AI.

While it welcomes clarification, the American Bankers Association said it doesn’t believe that new regulations are necessary or warranted.

The Bank Policy Institute, another industry group, echoed those sentiments, writing, “AI is a technology like any other, and the risks posed by AI as outlined in the RFI can be managed within existing laws and regulations on the activities in which AI is applied across the financial industry. BPI believes that new regulations are not necessary.”

The agencies accepted responses through July 1 and must now go through the comments. Curry said it will take a while to digest all the information, but the process is off to a good start toward developing a unified regulatory approach.
