
March 21, 2022

By: Brett J. Ashton

Since his appointment on October 12, 2021, Consumer Financial Protection Bureau (“CFPB” or the “Bureau”) Director Rohit Chopra has embarked on an aggressive campaign to identify and punish discriminatory practices in the financial services industry. While heightened regulatory scrutiny of fair lending compliance is nothing new to lenders, the Bureau’s March 16, 2022 press release (the “Press Release”) and accompanying blog post announcing “changes to its supervisory operations to better protect families and communities from illegal discrimination, including in situations where fair lending laws may not apply” signal a significant regulatory expansion with far-reaching consequences. This expanded interpretation of Bureau authority should give financial institutions even more concern given the CFPB’s recent focus on the use of artificial intelligence to engage in what it calls “Algorithmic Redlining” or “Robo-Discrimination.”

Expanded Oversight of Illegal Discrimination as an Unfair Practice

In the Press Release, the Bureau announced, “[i]n the course of examining banks’ and other companies’ compliance with consumer protection rules, the CFPB will scrutinize discriminatory conduct that violates the federal prohibition against unfair practices. The CFPB will closely examine financial institutions’ decision-making in advertising, pricing, and other areas to ensure that companies are appropriately testing for and eliminating illegal discrimination.” By expanding the scope of what is considered “unfair,” the Bureau asserts it can review practically any activity in the consumer finance process, and initiate enforcement action against a financial institution for discriminatory practices under its broad unfair, deceptive, or abusive acts or practices (“UDAAP”) authority. An act or practice is unfair when: (1) it causes or is likely to cause substantial injury to consumers; (2) the injury is not reasonably avoidable by consumers; and (3) the injury is not outweighed by countervailing benefits to consumers or to competition.

Alongside the Press Release, the CFPB also released an updated UDAAP section of its examination manual (the “Exam Manual”), which financial institutions should carefully review. The updated Exam Manual notes that “[c]onsumers can be harmed by discrimination regardless of whether it is intentional. Discrimination can be unfair in cases where the conduct may also be covered by ECOA, as well as in instances where ECOA does not apply.” The Bureau further explained: “The CFPB will examine for discrimination in all consumer finance markets, including credit, servicing, collections, consumer reporting, payments, remittances, and deposits. CFPB examiners will require supervised companies to show their processes for assessing risks and discriminatory outcomes, including documentation of customer demographics and the impact of products and fees on different demographic groups. The CFPB will look at how companies test and monitor their decision-making processes for unfair discrimination, as well as discrimination under ECOA.”
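
Neither the Press Release nor the updated Exam Manual prescribes a specific testing methodology for this kind of outcome monitoring. Purely as an illustration, the short Python sketch below shows one common form such testing can take: comparing approval rates and average fees across demographic groups and flagging large disparities for review. The column names, the sample data, and the 0.80 review threshold (borrowed from the familiar “four-fifths” rule of thumb used in other disparate impact contexts) are assumptions for this example, not Bureau requirements.

```python
# Illustrative sketch of demographic outcome testing -- not a Bureau-prescribed
# methodology. Column names ("group", "approved", "fee") and the 0.80 review
# threshold (the "four-fifths" rule of thumb) are assumptions for this example.
import pandas as pd

def outcome_disparities(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize approval rates and average fees by demographic group."""
    summary = df.groupby("group").agg(
        approval_rate=("approved", "mean"),
        avg_fee=("fee", "mean"),
        n=("approved", "size"),
    )
    # Adverse impact ratio: each group's approval rate relative to the
    # most favorably treated group.
    summary["adverse_impact_ratio"] = (
        summary["approval_rate"] / summary["approval_rate"].max()
    )
    summary["review_flag"] = summary["adverse_impact_ratio"] < 0.80
    return summary

if __name__ == "__main__":
    # Hypothetical sample data; a fee of 0 here denotes a denied application.
    applications = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   1,   0,   0,   1],
        "fee":      [250, 300, 0,   275, 400, 0,   0,   450],
    })
    print(outcome_disparities(applications))
```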

This expanded oversight also extends to marketing activities. Commenting in the blog post, Bureau enforcement and supervision staff stated, “[c]ertain targeted advertising and marketing, based on machine learning models, can harm consumers and undermine competition. Consumer advocates, investigative journalists, and scholars have shown how data harvesting and consumer surveillance fuel complex algorithms that can target highly specific demographics of consumers to exploit perceived vulnerabilities and strengthen structural inequities. We will be closely examining companies’ reliance on automated decision-making models and any potential discriminatory outcomes.”

Robo-Discrimination and Algorithmic Redlining

The CFPB has indicated that Robo-Discrimination, or Algorithmic Redlining, is the practice of applying artificial intelligence and other technology to a financial institution’s underwriting process to achieve a discriminatory outcome, regardless of how facially neutral that underwriting process may be. CFPB Director Chopra, speaking at the announcement of the Trustmark National Bank settlement,[1] commented:

[w]e will also be closely watching for digital redlining, disguised through so-called neutral algorithms, that may reinforce the biases that have long existed. . . . While machines crunching numbers might seem capable of taking human bias out of the equation, that’s not what is happening. Findings from academic studies and news reporting raise serious questions about algorithmic bias. . . . Too many families were victimized by the robo-signing scandals from the last crisis, and we must not allow robo-discrimination to proliferate in a new crisis. I am pleased that the CFPB will continue to contribute to the all-of-government mission to root out all forms of redlining, including algorithmic redlining.

Chopra has made several similar comments on the issue of Robo-Discrimination in recent months. The CFPB’s Chief Technologist recently issued a blog post reminding technology workers that the Bureau has an established whistleblower process, commenting, “I encourage engineers, data scientists and others who have detailed knowledge of the algorithms and technologies used by companies and who know of potential discrimination or other misconduct within the CFPB’s authority to report it to us.”[2]

Then, in perhaps the clearest indicator of the CFPB’s intentions on this issue, on February 24, 2022, the Bureau issued a “SBREFA Outline”[3] in preparation for a rulemaking on the use of automated valuation models (“AVMs”), in which it indicated it is considering including an AVM quality control factor focused on nondiscrimination, given the risk of bias in algorithmic systems. Among the proposals under consideration is the following: “[p]rescriptive requirements could address risks that lending decisions based on AVM outputs generate unlawful disparities, by specifying methods of AVM development (e.g., data sources, modeling choices) and AVM use cases. As explained in the previous part, we are considering proposing to include such requirements in an appendix or official commentary appended to the CFPB’s rule.”
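
The SBREFA Outline likewise does not specify how a nondiscrimination quality control factor would be tested in practice. As a purely hypothetical sketch, an institution might back-test AVM outputs against realized sale prices by geography to look for systematic valuation disparities; the column names below are illustrative assumptions, not anything drawn from the Outline.

```python
# Hypothetical AVM back-test -- an illustrative sketch only, not a method taken
# from the SBREFA Outline. Column names ("tract", "avm_value", "sale_price")
# are assumptions for this example.
import pandas as pd

def avm_error_by_tract(df: pd.DataFrame) -> pd.DataFrame:
    """Median signed valuation error and sample size per census tract.

    A persistently negative median error concentrated in particular tracts
    could signal systematic undervaluation warranting fair lending review.
    """
    pct_error = (df["avm_value"] - df["sale_price"]) / df["sale_price"]
    return (
        df.assign(pct_error=pct_error)
          .groupby("tract")["pct_error"]
          .agg(median_error="median", n="count")
    )
```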

Conclusion

Financial institutions should re-examine their compliance management systems in light of the expanded UDAAP standard and focus on the potential discriminatory impact of artificial intelligence use throughout their operations. Financial institutions should have a clearly defined process to assess the risks and potential discriminatory outcomes of all activities (including even marketing), document customer demographics, and assess the impact of products and fees on different demographic groups. Financial institutions too small for direct CFPB examination should nonetheless take notice of these developments and take steps to ensure compliance. Not only are CFPB rules, regulations, and interpretations often adopted by other prudential regulators, but Bureau enforcement actions are also frequently the result of consumer complaints.

The Krieg DeVault Financial Services team is continuing to monitor the CFPB’s activities on these important issues and can answer any questions you may have regarding what these developments may mean for your financial institution.

 

Disclaimer.  The contents of this article should not be construed as legal advice or a legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult with counsel concerning your situation and specific legal questions you may have.

[1] Trustmark National Bank, consumerfinance.gov
[2] https://www.consumerfinance.gov/about-us/blog/cfpb-calls-tech-workers-to-action/
[3] Consumer Financial Protection Bureau Outlines Options To Prevent Algorithmic Bias In Home Valuations, consumerfinance.gov
 
