
Federal Reserve Warns Banks About Social Biases in AI

by delta

Not everyone agrees with my anti-AI stance in ESG. Fair enough, but even AI supporters need to recognize the limitations and risks of relying on AI. This memo from Arnold & Porter covers one specific example from what might be viewed as an unusual source:

“On July 18, 2023, Federal Reserve Vice Chair for Supervision Michael Barr cautioned banks against fair lending violations arising from their use of artificial intelligence (AI). Training on data reflecting societal biases; data sets that are incomplete, inaccurate, or nonrepresentative; algorithms specifying variables unintentionally correlated with protected characteristics; and other problems can produce discriminatory results.

… because AI use also carries risks of violating fair lending laws and perpetuating disparities in credit transactions, Vice Chair Barr called it ‘critical’ for regulators to update their applications of the Fair Housing Act (FHA) and Equal Credit Opportunity Act (ECOA) to keep pace with these new technologies and prevent new versions of old harms. Violations can result both from disparate treatment (treating credit applicants differently based on a protected characteristic) and disparate impact (apparently neutral practices that produce different results based on a protected characteristic).” 
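To make the disparate-impact concept concrete, here is a minimal sketch of the kind of outcome test an analyst might run against a model's lending decisions. The data, group labels, and the 0.8 threshold (borrowed from the well-known "four-fifths" rule of thumb) are illustrative assumptions on my part, not anything drawn from Vice Chair Barr's remarks or the Arnold & Porter memo.

```python
# Minimal sketch: measuring disparate impact in model approval decisions.
# The data and the 0.8 ("four-fifths") threshold are illustrative
# assumptions, not a regulatory standard from the Fed's guidance.

# Hypothetical model decisions (1 = approved), tagged with applicant group.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def approval_rate(group: str) -> float:
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("group_a")  # 0.75
rate_b = approval_rate("group_b")  # 0.25

# Adverse impact ratio: the less-favored group's approval rate relative
# to the most-favored group's. Ratios below ~0.8 are a common red flag.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rates: a={rate_a:.2f}, b={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: facially neutral model, unequal outcomes.")
```

Note that a model can fail this kind of check even though no protected characteristic appears anywhere in its inputs, which is exactly the disparate-impact scenario described above.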

There are very real questions about AI's validation, governance and underlying data controls, and about how weaknesses in those areas may perpetuate biases and fraud embedded in the data the technology is trained on. To illustrate gaps in ChatGPT's controls, consider this from Tuesday's issue of The Economist about new bombs being developed in Ukraine:

“Some ‘candy shops’ use software to model the killing potential of different shrapnel types and mounting angles relative to the charge, says one soldier in Kyiv with knowledge of their efforts. ChatGPT, an AI language model, is also queried for engineering tips (suggesting that the efforts of OpenAI, ChatGPT’s creator, to prevent these sorts of queries are not working).”

Companies using AI for any aspect of ESG data collection or analysis, or planning to, must be aware of the limitations and potential risks of doing so. If you plan on relying on AI in ESG, conduct due diligence on your data sources and learn as much as you can about the algorithm's validation, governance and underlying data controls. One simple example of such a check is sketched below.
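As one concrete form of that due diligence, the sketch below checks whether an apparently neutral model input tracks a protected characteristic, the "variables unintentionally correlated with protected characteristics" problem flagged in the memo quoted above. The data, feature name, and correlation threshold are hypothetical illustrations, not a prescribed test.

```python
# Minimal due-diligence sketch: flag model inputs that correlate with a
# protected characteristic before relying on them. All data, names, and
# thresholds here are hypothetical illustrations.
from statistics import correlation  # requires Python 3.10+

# Hypothetical applicant records: an apparently neutral feature
# (a "neighborhood score") alongside a protected group indicator.
neighborhood_score = [0.9, 0.8, 0.85, 0.2, 0.3, 0.25, 0.15, 0.7]
protected_group =    [0,   0,   0,    1,   1,   1,    1,    0]

# Pearson correlation between the feature and group membership.
r = correlation(neighborhood_score, [float(g) for g in protected_group])
print(f"correlation with protected characteristic: {r:.2f}")
if abs(r) > 0.5:  # illustrative threshold, not a regulatory standard
    print("Warning: feature may act as a proxy for a protected class.")
```

A strong correlation does not by itself prove a fair lending violation, but it identifies the inputs that deserve scrutiny before a model is put into production.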

The post Federal Reserve Warns Banks About Social Biases in AI appeared first on PracticalESG.
