On October 4, 2022, the White House released the Blueprint for an AI Bill of Rights (the “Blueprint”), which provides non-binding “principles” for organizations in both the public and private sectors to use when developing or deploying artificial intelligence (“AI”) or other automated systems.
The Blueprint does not include many new ideas for AI compliance. Instead, it represents a collection of principles that have been included in laws and guidance published by governments and organizations around the world. But unlike many of those guidelines, it takes a rights-based approach that is focused on AI’s potential harm, rather than a risk-based approach, which means that the Blueprint’s recommendations apply to all covered automated systems, largely regardless of their risk.
This approach significantly undermines the likely value of the Blueprint as a model for future AI regulation in the United States. Many organizations that have adopted AI are currently running hundreds, if not thousands, of models that make decisions that range from consequential to relatively insignificant. Requiring those organizations to put each model through a complicated and time-consuming compliance process is not an effective way to reduce the risks associated with automated systems. Instead, it will result in a misallocation of resources, with too much effort spent on low-risk AI (e.g., spam filters, graphics generation for games, inventory management, cybersecurity monitoring, etc.) and not enough effort spent on high-risk AI (e.g., hiring, lending, insurance underwriting, law enforcement, education admissions, etc.).
In response to this critique, the drafters will likely point out that the Blueprint only applies to automated systems that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services,” and therefore does not apply to low-risk AI at all. They likely would also note that the Blueprint includes an appendix of examples of covered automated systems, which largely include AI applications that would be considered high-risk under other regulatory frameworks. But this scope limitation is not a prominent feature, and nowhere in the Blueprint can you find examples of low-risk automated systems that are expressly out of scope. In addition, the phrase “the potential to meaningfully impact” is likely to sweep in a lot of low-risk AI that has the potential to impact Americans, but, as a practical matter, is unlikely to do so. As a result, there will be many low-risk automated systems for which the application of the Blueprint is unclear, and there will be pressure from regulators and compliance professionals to bring those systems under the compliance regime as a matter of prudence because of their theoretical potential for causing harm, even if that harm is not likely to occur.
If the Blueprint were only applicable to an identified list of high-risk AI (which is the approach that the EU has adopted with the draft AI Act), it would be a more valuable policy tool for promoting organizations’ responsible use of automated systems. As discussed below, this is especially true because the Blueprint does make effective use of “AI Storytelling” by providing concrete examples to demonstrate the risks of certain AI use cases and the ways those risks can be mitigated.
One additional drawback of the Blueprint’s rights-based approach is that it focuses almost exclusively on the risks of AI and therefore leaves little room to balance the potential benefits of automated systems against their possible drawbacks. Although the Blueprint’s Foreword does mention the “extraordinary benefits” of AI and its potential to “make life better for everyone,” the document fails to acknowledge that many of the risks it associates with automated systems can be equally applicable to human decision-making, which can also be flawed, opaque, and biased.
The Blueprint’s Five Principles
Below is a summary of the Blueprint’s five principles, along with a checklist of actions that the White House believes will advance each principle.
I. Safe and Effective Systems (You should be protected from unsafe or ineffective systems).
A. Protect the public from harm in a proactive and ongoing manner
- Public consultation
- Pre-deployment testing
- Risk identification and mitigation
- Ongoing monitoring
- Clear organizational oversight
B. Avoid inappropriate, low-quality, or irrelevant data use and the compounded harm of its reuse
- Use relevant and high-quality data
- Derived data sources tracked and reviewed carefully
- Data reuse limits in sensitive domains
C. Demonstrate the safety and effectiveness of the system
- Independent evaluation
- Clear and regular reporting
II. Algorithmic Discrimination Protections (You should not face discrimination by algorithms and systems should be used and designed in an equitable way).
A. Protect the public from algorithmic discrimination in a proactive and ongoing manner
- Conduct equity assessments, including input data
- Use robust representative data
- Remove proxies
- Ensure accessibility to people with disabilities
- Conduct disparity assessments and mitigate disparities identified
- Engage in ongoing monitoring and mitigation
B. Demonstrate that the system protects against algorithmic discrimination
- Independent evaluation
- Clear and regular reporting
III. Data Privacy (You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used).
A. Data privacy should be protected by design and by default
- Privacy risks include risks to third parties
- Collect and retain data only as-needed to meet specific, narrow goals
- Identify harms and mitigate risks arising from the use, sharing, or storage of data
- Follow industry-standard best practices for privacy and security
B. Protect the public from unchecked surveillance and monitoring
- Heightened oversight of surveillance and monitoring systems, including risk assessments
- Avoid surveillance unless necessary and use least invasive means
- Limit surveillance and monitoring to prevent impingement on civil rights or liberties
C. Create appropriate and meaningful mechanisms for consent, access, and control
- Seek consent for narrow, specific use-cases for a specific duration
- Consent requests should be plain, brief, direct, and understandable by laypeople
- Provide people whose data is collected with the ability to access their data and metadata
- Provide an ability to correct the data and metadata as necessary
- Allow people to withdraw consent, resulting in deletion of their data
- Individuals should be able to use automated systems for consent, access, and control decisions
D. Demonstrate that data privacy and user control are protected
- Independent evaluation
- Clear and regular reporting
E. Data related to sensitive domains should carry additional protections
- Only use sensitive data for strictly necessary functions
- Consent for non-necessary functions should be optional
- Conduct periodic ethical reviews of any use of sensitive data
- Conduct regular audits of data quality
- Sensitive data should not be sold, transferred, or made public
- Publicly report lapses or breaches that result in sensitive data leaks, wherever appropriate
IV. Notice and Explanation (You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you).
A. Provide clear, timely, understandable, and accessible notice of use and explanations
- Make system documentation public, in plain language, and include impact assessments
- Identify who is responsible for design of the automated system and who is utilizing it
- Users should receive notice of automated systems before or when the system is impacting them
- Notices and explanations should be improved through user testing to ensure clarity
B. Provide explanations as to how and why an automated decision was made or an action was taken
- Tailor explanations to a specific purpose and make the explanation useful for users
- Tailor explanations to relevant audiences, and assess explanation through research
- Tailor explanations to risk and give users advance explanation for high-risk systems
- Ensure that explanations reflect the factors and influences that led to particular decisions
C. Demonstrate protections for notice and explanation
- Clear and regular reporting
V. Human Alternatives, Consideration, and Fallbacks (You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter).
A. Provide for opting out of automated systems in favor of a human alternative, as appropriate
- Give brief, clear notice of opt-out rights, along with information about how to opt out
- Provide human alternatives when there is a reasonable expectation of human involvement
B. Institute fallback and escalation systems to address appeals and system failures or errors
- Availability of human involvement proportional to system’s impact on rights and opportunities
- Mechanisms for human involvement should be easy to use, tested to confirm that they work, and available on a timely basis, proportionate to the time-critical nature of the decisions at stake
C. Training, assessment, and oversight to combat automation bias
- Provide training for everyone interacting with the system, with regular assessments
- Incorporate lessons learned from assessments to mitigate system bias into governance
D. Additional human oversight capabilities and safeguards for sensitive domains
- Institute human oversight to ensure automated systems in sensitive domains are narrowly scoped and tailored to specific goals, and safe and effective for that specific situation
- Ensure human oversight in any high-risk decision, such as sentencing decisions or medical care
- Establish meaningful oversight of the system, including possible limited waivers of confidentiality for designers and developers of automated systems
E. Demonstrate access to human alternatives, consideration, and fallbacks
- Clear and regular reporting
The Need to Treat AI Like We Treat Employees
As discussed above, the decision in the Blueprint to take a rights-based approach (that focuses on the potential for impact), rather than a risk-based approach (that focuses on the likelihood of impact), means that there may be pressure to apply this substantial list of requirements to all automated systems, even those that pose a relatively low risk of harm. This will likely have the effect of stifling innovation and lead to a misallocation of compliance resources, especially for organizations with a large number of models. To illustrate why, it is helpful to think of automated systems in the same way we think about human resources.
Nearly every employee has the potential to cause a significant amount of harm to, and thus meaningfully impact, an organization. They can steal sensitive information, alienate clients, destroy valuable property, and undermine core company objectives. Even the most junior employee has the potential for significant damage. But organizations cannot function without their employees, and most organizations that are investing heavily in automated systems need hundreds, if not thousands, of employees. It would be unworkable for these organizations to require a lengthy and robust vetting process before each employee at every level of the company is allowed to do their job. Instead, the hiring process for most employees is limited to a resume review, one or two interviews, and a background check.
Companies, however, all have some employees who hold sensitive or higher-profile positions, whose mistakes or malfeasance would be likely to cause significant financial harm, reputational damage, or legal liability to the company. The vetting process for these jobs is therefore more involved, and often includes a detailed submission from the candidate, informal and formal reference checks, and multiple rounds of interviews, which can take several months. But for most companies, this only applies to a relatively small number of positions. It would be a waste of time and resources to subject candidates for a mailroom opening to the same vetting process as the new CEO, even though mailroom employees have the potential to do an enormous amount of damage to an organization by not delivering important packages or by leaking sensitive documents to the press. For similar reasons, the Blueprint’s principles should be focused primarily on the small number of automated systems that are most likely to significantly impact Americans in a negative way and, therefore, pose the highest risk, rather than the much broader category of automated systems that merely have the potential to do so.
The Value of AI Storytelling
One area where the Blueprint does excel, however, is in illustrating the value of “AI Storytelling,” which plays an important role in building an effective compliance culture around AI and other emerging technologies. A persistent difficulty in AI governance and compliance is a lack of shared comprehension: concrete, practical concerns about particular AI applications are rarely conveyed effectively to the audiences that need to act on them. Regulators, academics, developers, and AI users often talk past one another about how the AI is actually being used and how it might cause harm. AI Storytelling helps address this problem by using plain language and concrete examples to illustrate the value of the AI tool at issue, the specific risks associated with that use case, and concrete ways that an organization can avoid or mitigate those risks.
For example, concerns have been raised about bias in automated tools that are used to screen the resumes of job applicants. But regulators have struggled to come up with concrete examples that help frame the issue in a way that everyone (1) understands the problem and (2) agrees that it is a problem, such as the following (a sketch of the kind of disparity check these examples call for appears after the list):
- Including “travel” or other similar hobbies as an input in the resume screening tool, when doing so favors affluent candidates but does not meaningfully improve the quality of the applicant pool for the particular job; and
- Penalizing candidates who have a one-year or more gap on their resume, when doing so negatively impacts women candidates who took time off to raise children.
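To make these examples more concrete, below is a minimal sketch, in Python, of the kind of selection-rate disparity check that the Blueprint’s call for “disparity assessments” contemplates for a tool like a resume screener. Everything in it is hypothetical: the applicant data, the group labels, and the decision values are made up, and the 0.8 threshold is the common “four-fifths” rule of thumb from U.S. employment-selection guidance rather than anything the Blueprint itself requires.

```python
# Hypothetical disparity assessment for an automated resume-screening tool.
# Selection rates are compared across applicant groups; a group whose rate
# falls below 0.8x the highest group's rate (the "four-fifths" rule of thumb)
# is flagged for review and possible mitigation. All data below are made up.

from collections import Counter

# (group, screener_decision) pairs for a hypothetical applicant pool.
decisions = [
    ("group_a", "advance"), ("group_a", "advance"), ("group_a", "reject"),
    ("group_a", "advance"), ("group_b", "reject"), ("group_b", "advance"),
    ("group_b", "reject"), ("group_b", "reject"), ("group_b", "advance"),
    ("group_b", "reject"),
]


def selection_rates(pairs):
    """Return the share of applicants the screener advanced, per group."""
    totals, advanced = Counter(), Counter()
    for group, decision in pairs:
        totals[group] += 1
        if decision == "advance":
            advanced[group] += 1
    return {group: advanced[group] / totals[group] for group in totals}


def disparity_flags(rates, threshold=0.8):
    """Flag groups whose rate is below threshold times the highest group's rate."""
    reference = max(rates.values())
    return {group: (rate / reference) < threshold for group, rate in rates.items()}


rates = selection_rates(decisions)
flags = disparity_flags(rates)
for group in sorted(rates):
    status = "REVIEW" if flags[group] else "ok"
    print(f"{group}: selection rate {rates[group]:.2f} -> {status}")
```

In practice, a check like this would be run on real applicant data and paired with a review of the individual input features (such as the “travel” hobby or resume-gap signals described above) to identify the proxies driving any disparity the assessment surfaces.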
AI Storytelling uses these kinds of concrete examples of AI risks to allow policymakers and AI developers to engage more effectively in the areas of automated systems that are most in need of attention, rather than talking about bias in the abstract, which means one thing to civil rights lawyers, but may mean something very different to data scientists.
The Blueprint includes many examples of effective AI Storytelling, some of which are drawn from groundbreaking studies and reporting on AI risks. These include:
- Principle: Protection from Unsafe and Ineffective Systems
- Concrete Example: A proprietary model was developed to predict the likelihood of sepsis in hospitalized patients and was implemented at hundreds of hospitals around the country. An independent study showed that the model’s predictions underperformed relative to the designer’s claims, while also causing “alert fatigue” by generating frequent false alerts for sepsis.
- Principle: Protection from Discrimination by Algorithms
- Concrete Example: A search for “beautiful girls” using some search engines returns mostly pictures of white women, while searches for “Black girls,” “Asian girls,” or “Latina girls” return predominantly sexualized content. Some search engines have been working to reduce the prevalence of these kinds of results, but the problem remains.
- Principle: Protection from Abusive Data Privacy Practices
- Concrete Example: A data broker gathered millions of personal records of Americans, without their knowledge or consent, by scraping data from public social media profiles, and then suffered a breach, exposing hundreds of thousands of people to potential identity theft.
- Principle: The Right to Know that an Automated System Is Being Used
- Concrete Example: A lawyer representing an older client with disabilities who had been cut off from Medicaid-funded home healthcare assistance couldn’t determine why, especially since the decision went against historical access practices. In a court hearing, the lawyer learned from a witness that the state in which the older client lived had recently adopted a new algorithm to determine eligibility. The lack of a timely explanation made it harder to understand and contest the decision.
- Principle: The Right to Opt-Out or Have Human Review of Automated Decisions
- Concrete Example: A large corporation automated performance evaluation and other HR functions, leading to workers being fired by an automated system without the possibility of human review, appeal, or other form of recourse.
Although these examples are effective at illustrating the risks posed by AI, the Blueprint missed an opportunity to provide a more useful template for AI regulation in the United States by not also focusing on the benefits of automated systems and by not limiting its application to identified high-risk use cases.
This post comes to us from Debevoise & Plimpton LLP. It is based on a post on the firm’s Data Blog, “The White House’s Blueprint for an AI Bill of Rights: What It Gets Right and What It Gets Wrong About Artificial Intelligence Regulation,” dated October 26, 2022, and available here.