Not everyone agrees with my anti-AI stance in ESG. Fair enough, but even AI supporters need to recognize the limitations and other risks of relying on AI. This memo from Arnold & Porter covers one specific example from what might be viewed as an unusual source:
“On July 18, 2023, Federal Reserve Vice Chair for Supervision Michael Barr cautioned banks against fair lending violations arising from their use of artificial intelligence (AI). Training on data reflecting societal biases; data sets that are incomplete, inaccurate, or nonrepresentative; algorithms specifying variables unintentionally correlated with protected characteristics; and other problems can produce discriminatory results.
… because AI use also carries risks of violating fair lending laws and perpetuating disparities in credit transactions, Vice Chair Barr called it ‘critical’ for regulators to update their applications of the Fair Housing Act (FHA) and Equal Credit Opportunity Act (ECOA) to keep pace with these new technologies and prevent new versions of old harms. Violations can result both from disparate treatment (treating credit applicants differently based on a protected characteristic) and disparate impact (apparently neutral practices that produce different results based on a protected characteristic).”
There are very real questions about AI’s validation, governance and underlying data controls, and about how those may perpetuate biases and fraud embedded in the technology’s data universe. To illustrate gaps in ChatGPT’s controls, consider this from Tuesday’s edition of The Economist about new bombs being developed in Ukraine:
“Some ‘candy shops’ use software to model the killing potential of different shrapnel types and mounting angles relative to the charge, says one soldier in Kyiv with knowledge of their efforts. ChatGPT, an AI language model, is also queried for engineering tips (suggesting that the efforts of OpenAI, ChatGPT’s creator, to prevent these sorts of queries are not working).”
Companies planning to use, or already using, AI for any aspect of ESG data collection or analysis must be aware of the limitations and potential risks of doing so. If you plan on relying on AI in ESG, consider conducting some form of due diligence on data sources and learn as much as you can about the algorithm’s validation, governance and underlying data controls.
The post Federal Reserve Warns Banks About Social Biases in AI appeared first on PracticalESG.