House Dems: Use of AI Must Not Result in Discrimination

The use of Artificial Intelligence and algorithms by financial institutions must not result in discriminatory practices, two key House Democrats warned financial regulators Monday.

“As you assess the use of AI from financial institutions you oversee, you must prioritize principles of transparency, enforceability, privacy, and fairness and equity,” House Financial Services Chairwoman Maxine Waters, D-Calif., and Rep. Bill Foster, chairman of the committee’s Task Force on Artificial Intelligence, wrote in letters to financial regulators.

“AI must be used in a way that serves the American public, its consumers, investors, and labor workforce, first and foremost,” the lawmakers wrote. Recipients of the letters include National Credit Union Administration Chairman Todd Harper and Consumer Financial Protection Bureau Director Rohit Chopra.

The two House members warned that historical data used as inputs for AI can reflect longstanding biases, producing models that discriminate against protected classes.

They said, for example, lenders using alternative data to offer private student loans have been accused of penalizing borrowers of color who attend Historically Black Colleges and Universities or other minority-serving institutions.

Rather than helping to take human biases out of decision-making, new types of algorithmic underwriting techniques may exacerbate disparate impacts on protected groups, they added.
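One way auditors and regulators measure that kind of disparate impact is by comparing approval rates across groups, for instance with the "four-fifths" (80%) rule commonly used in fair-lending and employment analysis. The sketch below is purely illustrative and is not drawn from the lawmakers' letters or any regulator's methodology; the function names and the toy approval data are assumptions made for the example.

```python
# Illustrative disparate-impact check (four-fifths rule): flag any group whose
# approval rate falls below 80% of the most-favored group's approval rate.
# Toy data and function names are hypothetical, not from the letters.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def adverse_impact_ratios(decisions, threshold=0.8):
    """Return each group's approval rate relative to the highest-rate group,
    plus a flag when that ratio drops below the four-fifths threshold."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: (rate / best, rate / best < threshold) for g, rate in rates.items()}

# Example: a model trained on biased historical lending data might approve
# group "B" applicants far less often than group "A" applicants.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 55 + [("B", False)] * 45

for group, (ratio, flagged) in adverse_impact_ratios(decisions).items():
    print(f"group {group}: impact ratio {ratio:.2f}" + (" -- FLAG" if flagged else ""))
```

Run on the toy data, group B's approval rate (55%) is about 69% of group A's (80%), so the check flags it, which is the kind of pattern an algorithmic audit would be expected to surface.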

“Regulators must subject financial institutions using AI to comprehensive audits of their algorithmic decision-making processes, and be staffed with enough expertise as appropriate,” they wrote, emphasizing that the agencies themselves need staff with the technical expertise to monitor these institutions.
