
June 2, 2022

By: Brett J. Ashton

Since his appointment on October 12, 2021, Consumer Financial Protection Bureau (“CFPB” or the “Bureau”) Director Rohit Chopra has embarked on an aggressive campaign to identify and punish discriminatory practices in the financial services industry. While heightened regulatory scrutiny of fair lending compliance is nothing new to lenders, the Bureau’s March 16, 2022 blog post titled “Cracking down on discrimination in the financial sector” (the “UDAAP Blog Post”)[1] and accompanying press release[2] announcing “changes to its supervisory operations to better protect families and communities from illegal discrimination, including in situations where fair lending laws may not apply” signal a regulatory expansion that all financial institutions should be aware of.

Financial institutions should also be aware of the CFPB’s recent focus on the use of artificial intelligence to engage in what it calls “Algorithmic Redlining” or “Robo-Discrimination,” and of the Bureau’s latest comments on the use of “black-box” underwriting tools and compliance with the Equal Credit Opportunity Act (“ECOA”) in its May 26, 2022 press release titled “CFPB Acts to Protect the Public from Black-Box Credit Models Using Complex Algorithms” (the “Black Box Press Release”)[3] and accompanying Consumer Financial Protection Circular 2022-03 (“Circular 2022-03”),[4] which addresses “Adverse action notification requirements in connection with credit decisions based on complex algorithms.”

Expanded Oversight of Illegal Discrimination as an Unfair Practice

In the UDAAP Blog Post, the Bureau announced, “[in] the course of examining banks’ and other companies’ compliance with consumer protection rules, the CFPB will scrutinize discriminatory conduct that violates the federal prohibition against unfair practices. The CFPB will closely examine financial institutions’ decision-making in advertising, pricing, and other areas to ensure that companies are appropriately testing for and eliminating illegal discrimination.” By expanding the scope of what is considered “unfair,” the Bureau asserts it can review practically any activity in the consumer finance process and initiate enforcement action against a financial institution for discriminatory practices under its broad unfair, deceptive, or abusive acts or practices (“UDAAP”) authority. An act or practice is unfair when: (1) it causes or is likely to cause substantial injury to consumers; (2) the injury is not reasonably avoidable by consumers; and (3) the injury is not outweighed by countervailing benefits to consumers or to competition.

The CFPB also released an updated UDAAP section of its examination manual (the “Exam Manual”) that financial institutions should carefully review. The updated Exam Manual notes that “[c]onsumers can be harmed by discrimination regardless of whether it is intentional. Discrimination can be unfair in cases where the conduct may also be covered by ECOA, as well as in instances where ECOA does not apply.” “The CFPB will examine for discrimination in all consumer finance markets, including credit, servicing, collections, consumer reporting, payments, remittances, and deposits. CFPB examiners will require supervised companies to show their processes for assessing risks and discriminatory outcomes, including documentation of customer demographics and the impact of products and fees on different demographic groups. The CFPB will look at how companies test and monitor their decision-making processes for unfair discrimination, as well as discrimination under ECOA.”

This expanded oversight also extends to marketing activities. Commenting in the UDAAP Blog Post, Bureau enforcement and supervision staff stated, “[c]ertain targeted advertising and marketing, based on machine learning models, can harm consumers and undermine competition. Consumer advocates, investigative journalists, and scholars have shown how data harvesting and consumer surveillance fuel complex algorithms that can target highly specific demographics of consumers to exploit perceived vulnerabilities and strengthen structural inequities. We will be closely examining companies’ reliance on automated decision-making models and any potential discriminatory outcomes.”

Robo-Discrimination or Algorithmic Redlining

The CFPB describes “Robo-Discrimination” or “Algorithmic Redlining” as the practice of applying artificial intelligence and other technology to a financial institution’s underwriting process to achieve a discriminatory outcome, regardless of how facially neutral that underwriting process may be. CFPB Director Chopra, speaking at the announcement of the Trustmark National Bank settlement,[5] commented:

[w]e will also be closely watching for digital redlining, disguised through so-called neutral algorithms, that may reinforce the biases that have long existed. . . . While machines crunching numbers might seem capable of taking human bias out of the equation, that’s not what is happening. Findings from academic studies and news reporting raise serious questions about algorithmic bias. . . . Too many families were victimized by the robo-signing scandals from the last crisis, and we must not allow robo-discrimination to proliferate in a new crisis. I am pleased that the CFPB will continue to contribute to the all-of-government mission to root out all forms of redlining, including algorithmic redlining.

Then, on May 26, 2022, the Bureau issued the Black Box Press Release along with Circular 2022-03. Commenting on the release of Circular 2022-03, CFPB Director Rohit Chopra stated, “Companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions. . . . The law gives every applicant the right to a specific explanation if their application for credit was denied, and that right is not diminished simply because a company uses a complex algorithm that it doesn’t understand.” The Bureau indicates in the press release, “Law-abiding financial companies have long used advanced computational methods as part of their credit decision-making processes, and they have been able to provide the rationales for their credit decisions. However, some creditors may make credit decisions based on the outputs from complex algorithms, sometimes called ‘black-box’ models. The reasoning behind some of these models’ outputs may be unknown to the model’s users, including the model’s developers. With such models, adverse action notices that meet ECOA’s requirements may not be possible.” (emphasis added)

Unfortunately, neither the Black Box Press Release nor Circular 2022-03 provides further detail or concrete examples of what kinds of automated underwriting tools may be too complex to use without violating ECOA, and what kinds may be acceptable. In fact, to date the Bureau has not provided any clear guidance as to which complex algorithmic underwriting systems it believes may be discriminatory.


Financial institutions should re-examine their compliance management systems in light of the expanded UDAAP standard, focusing on the potential discriminatory impact of artificial intelligence use throughout their operations. Financial institutions should have a clearly defined process to assess the risks and potential discriminatory outcomes of all activities (including marketing), document customer demographics, and assess the impact of products and fees on different demographic groups. Financial institutions too small for direct CFPB examination should nonetheless take notice of these developments and take steps to ensure compliance, given the civil remedies (including class action liability) available under ECOA. Not only are CFPB rules, regulations, and interpretations often adopted by other prudential regulators, but Bureau enforcement actions are also frequently the result of consumer complaints. Financial institutions should also closely consider the variables contained in any automated underwriting tools, and carefully review the language used in adverse action notices issued in connection with denials of credit. While it appears unlikely the Bureau is targeting many of the “off the shelf” underwriting programs that community banks and credit unions rely on, financial institutions should review any agreements with these service providers to ensure the providers are required to supply accurate descriptions of the reasons for any adverse action.


The Krieg DeVault Financial Services team is continuing to monitor the CFPB’s activities on these important issues, and can answer any questions you may have regarding what they may mean for your financial institution. 

Disclaimer.  The contents of this article should not be construed as legal advice or a legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult with counsel concerning your situation and specific legal questions you may have.

[1] Cracking down on discrimination in the financial sector | Consumer Financial Protection Bureau

[2] CFPB Targets Unfair Discrimination in Consumer Finance | Consumer Financial Protection Bureau

[3] CFPB Acts to Protect the Public from Black-Box Credit Models Using Complex Algorithms | Consumer Financial Protection Bureau

[4] Consumer Financial Protection Circular 2022-03: Adverse action notification requirements in connection with credit decisions based on complex algorithms | Consumer Financial Protection Bureau

[5] Trustmark National Bank | Consumer Financial Protection Bureau