
How to develop ethical standards for A.I.


Elizabeth Holmes convinced investors and patients that she had a prototype of a microsampling machine that could run a wide range of relatively accurate tests using a fraction of the volume of blood usually required. She lied; the Edison and miniLab devices didn’t work. Worse still, the company was aware they didn’t work, but continued to give patients inaccurate information about their health, including telling healthy pregnant women that they were having miscarriages and producing false positives on cancer and HIV screenings.

But Holmes, who has to report to prison by May 30, was convicted of defrauding investors; she wasn’t convicted of defrauding patients. This is because the principles of ethics for disclosure to investors, and the legal mechanisms used to take action against fraudsters like Holmes, are well developed. They are not always well enforced, but the laws are on the books. In medical devices, things are murkier—in encouraging innovation, the legal standards give a wide berth to people trying to develop technologies, with the understanding that sometimes even people trying their best will screw up.

We’re seeing a similar dynamic play out with the current debate over A.I. regulation. Lawmakers are grasping at straws over what to do, while doomsday predictions clash with breathless sales pitches for how A.I. tech will change everything. Either the future will be a panacea of algorithmically created bliss and students will never be forced to write another term paper, or we’ll all be reduced to radioactive rubble. (In which case, students still wouldn’t have to write term papers.) The problem is that new technologies without ethical and legal regulation can do a lot of damage, and often, as was the case for Theranos’ patients, we don’t have a good way for people to recoup their losses. At the same time, because of its fast-moving nature, tech is particularly hard to regulate, with loose standards and more opportunity for fraud and abuse, whether through flashy startups like Holmes’ or sketchy cryptocurrency and NFT schemes.

Holmes is a useful case to think about in developing ethical standards in A.I. because she built a literal opaque box and claimed people couldn’t look at or report what was inside. Doing so, she said, would violate her intellectual property, even as the technology told healthy patients they were dying. We’re seeing many of these same dynamics play out in the conversation around developing ethical standards and regulation for artificial intelligence.

Developing ethical standards to form the basis of A.I. regulation is a new challenge. But it’s a challenge we have the tools to tackle—and we can apply the lessons we’ve learned from failures to manage other tech.

Like Theranos’ devices, artificial intelligence technologies are little boxes generally understood by their designers (at least as well as by anyone) but often not subject to scrutiny from the outside. Algorithmic accountability requires some degree of transparency; if a black box makes a decision that causes harm or has a discriminatory impact, we have to open it up to figure out whether those mistakes are attributable to an occasional blind spot, a systematic error in design, or (as in the Holmes case) an outright fraud. This transparency matters both to prevent future harms and to determine accountability and liability for existing ones.

There is a lot of urgency around A.I. regulation. Big A.I. firms and researchers alike are pushing lawmakers to act quickly, and while the proposals vary, they consistently include some transparency requirements. To prevent systemic problems and fraud, even intellectual property law should not insulate big A.I. companies from showing how their technology works. Sam Altman’s recent congressional testimony on OpenAI and ChatGPT included discussion of how the technology operates, but it only scratched the surface. And while Altman appears eager to help craft regulation, he has also threatened to pull OpenAI’s services out of the European Union over the A.I. regulations proposed before the European Parliament.

In early May, the Biden administration announced developments in its proposal for addressing artificial intelligence; the most significant was a commitment from major A.I. companies (Alphabet, Microsoft, and OpenAI, among others) to opt in to “public assessment,” which would subject their technology to independent testing and an evaluation of its potential impact. The assessment is not exactly “public” in the way Altman’s congressional testimony was; rather, experts from outside the companies would be given access to assess the technologies on the public’s behalf. If companies live up to these commitments, experts will be able to catch problems before products are widely implemented and used, hopefully protecting the public from dangerous consequences. It is still an early-stage proposal: we don’t yet know who these experts will be or what powers they will have, and companies may not want to play by the rules, even ones they helped craft. Still, it’s a step forward in establishing the terms for greater scrutiny of private tech.

The Biden administration’s broader proposal, the “Blueprint for an AI Bill of Rights,” identifies a range of areas where we already know A.I. technologies cause harms—facial recognition algorithms misidentifying Black people; social media algorithms pushing violent and sexual content—and adopts (in broad strokes) ethical principles for addressing those issues, which can then be codified into law and made enforceable. Among these principles are nondiscrimination, safety, the right to be informed about data gathered by the systems, and the right to refuse an algorithmic service (and have access to a human alternative).

The horror stories that make the case for these principles are widespread. Researchers including Joy Buolamwini have extensively documented problems with racial bias in algorithmic systems. Facial recognition software and autonomous driving systems trained overwhelmingly on data sets of white subjects fail to recognize or differentiate Black subjects. This poses obvious dangers, from a person being wrongly identified as a criminal suspect based on faulty facial recognition to someone being hit by an autonomous car that can’t see Black people at night. People shouldn’t be subject to discrimination (or hit by cars) because of biased algorithms. The Biden administration’s proposal holds that designers have an obligation to engage in predeployment testing.

This obligation is critical. Lots of technologies have error and failure rates; COVID tests, for example, have false positive and false negative rates that vary with a range of conditions. That’s exactly why it’s important that technologies are tested, so their failure rates can be evaluated and disclosed. There’s a difference between the false positive from an antigen test and the machine hawked by Holmes, which simply didn’t work. Responsibility and liability should be responsive to what the designer actually did. If the designers adhered to best practices, then they shouldn’t be liable; if they were grossly negligent, then they should be. This is a principle in engineering and design ethics across fields, from medical tests to algorithms to oil wells.

There’s a long road between proposing an obligation and implementing a legal framework, but in an ideal world, the potential for discrimination can be addressed during the testing phase, with companies submitting their algorithms for independent auditing before bringing them to market. As work like Buolamwini’s becomes part of the standard for developing and evaluating these technologies, companies that don’t test for bias in their algorithms would be acting negligently. These testing standards should have legal implications and help establish when consumers injured by a product can recover damages from the company—this is what was missing in the case of the Theranos fraud and is still missing from standards around medical testing and devices in startups.
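To make that concrete: the kind of pre-market audit that work like Buolamwini’s points toward boils down to measuring a system’s error rates separately for each demographic group and comparing them. The sketch below is a minimal illustration in Python, using hypothetical group labels, made-up evaluation results, and an arbitrary disparity threshold; it is not any regulator’s or company’s actual methodology, only the basic shape of such a check.

```python
# Minimal sketch of a per-group error-rate audit.
# Group labels, results, and the 1.25 disparity threshold are hypothetical.

from collections import defaultdict


def error_rates_by_group(records):
    """Compute per-group error rates from (group, predicted, actual) records."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}


def disparity_report(rates, max_ratio=1.25):
    """Compare each group's error rate against the best-performing group."""
    baseline = min(rates.values())
    report = {}
    for group, rate in rates.items():
        if baseline > 0:
            ratio = rate / baseline
        else:
            ratio = float("inf") if rate > 0 else 1.0
        report[group] = {
            "error_rate": round(rate, 3),
            "ratio_to_best": round(ratio, 2),
            "flagged": ratio > max_ratio,  # flag groups the system serves markedly worse
        }
    return report


if __name__ == "__main__":
    # Hypothetical evaluation records: (demographic group, model output, ground truth)
    results = [
        ("group_a", "match", "match"), ("group_a", "no_match", "no_match"),
        ("group_a", "match", "no_match"), ("group_a", "no_match", "no_match"),
        ("group_b", "match", "no_match"), ("group_b", "match", "match"),
        ("group_b", "no_match", "match"), ("group_b", "match", "match"),
    ]
    print(disparity_report(error_rates_by_group(results)))
```

In this toy run, the flagged group’s error rate is twice the best group’s, which is exactly the kind of disparity an independent auditor would want disclosed and remedied before a product reaches the market.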

Companies, for their part, should support clear, well-founded standards for A.I. like those outlined in the Biden administration’s proposal, because doing so provides grounds for public trust. That grounding isn’t absolute; if we find out the toothpaste is tainted, then we’re going to look sideways at the toothpaste company—but knowing that there’s foundational regulatory control does help establish that products we use on a regular basis are safe. Most of us feel safer because our doctors and lawyers have a code of ethics and because the engineers who build bridges and tunnels have professional standards. A.I. products are embedded in our lives already, from recommendation algorithms to scheduling systems to voice and image recognition systems. Ensuring that these things don’t have severe, inappropriate biases is a bare minimum.

Algorithms, like medical tests, provide us with information we need to make decisions. We need regulatory oversight of algorithms for the same reason we needed it for Holmes’ boxes: If the information we’re getting is produced by machines that make systematic errors (or worse, don’t work at all), then they can and will endanger people who use them. If we know what the errors are, then we can work to prevent or mitigate the harms.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
