The AI Bill of Rights
The White House recently unveiled the AI Bill of Rights. What is it, why is it needed, and what does it do? In this post, I offer some perspective on this document from a technical and business standpoint, and on what it can or should mean for a business.
Context – why this is needed
First, some background. As many of you know, AI is now deployed, or is being deployed, in almost every business context, from finance (think credit card approvals) to healthcare (think disease diagnosis and risk assessment) and more. While many advanced technologies are far removed from consumers – when was the last time you thought about how your hospital could use the latest database research to treat you? – AI is not like that. Advances in AI directly touch humans, whether by using their information, making decisions that affect them, or both. Furthermore, AI is everywhere: anyone from a large bank to a high school student can now leverage the most advanced AI available and put the resulting applications in front of anyone. How do we ensure this ferocious pace of technological advancement is safe?
Enter AI Ethics
AI ethics is the field that focuses on the ethical development and application of artificial intelligence, especially in areas that touch humans and society. It covers topics such as AI bias (ensuring AI treats everyone fairly) and AI privacy (ensuring people understand and control how their information is used). As explained here, AI ethics is a key area. The AI Bill of Rights is AI ethics in action: it outlines the government’s view of which human rights should be protected by organizations building and deploying AI.
What’s in the AI Bill of Rights?
Detailed documentation on the AI Bill of Rights can be found here. The document outlines five fundamental rights. I list them below:
- Safety. The key point here is that automated systems can (and do!) make mistakes. In AI, these mistakes can happen in many ways – see the article here on how COVID-19 disrupted many AI models globally. While not all errors are predictable, operational ML (MLOps) techniques can be used to detect and mitigate AI errors before they cause further damage; a minimal monitoring sketch follows this list.
- Privacy. Artificial intelligence thrives on information. Combined with ubiquitous sensors, cameras, and records of online activity, it is now possible for organizations to use vast amounts of personal information without the individual’s knowledge. This element focuses on the need for individuals to have ways to access, understand, and control how their personal information is used.
- Fairness. AI learns patterns from data. Without proper data scrutiny, AI can (and does!) learn biases and treat people unequally. This element focuses on fairness in designing and testing AI; a simple fairness check is sketched after this list.
- Explanation. Privacy rights focus on individuals being able to understand what information about them is used by algorithms. The right to an explanation is complementary: it says individuals also have the right to know how algorithms use the data they are permitted to use. For example, if an individual agrees to let a bank use their personal data (per the privacy protection), an explanation would show whether the bank uses their age, gender, or other information to determine their loan interest rate; a sketch of such a check also appears after this list.
- Alternatives. This element focuses on the need to give individuals choices. The choice can be to opt out of a system making automated decisions, or to have access to a person or other recourse to resolve issues caused by automated systems.
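To make the MLOps point under Safety concrete, here is a minimal sketch of a drift check, assuming a Python stack with NumPy and SciPy and entirely synthetic data: it compares the distribution a model was trained on against what it sees in production and raises an alert when they diverge. Real monitoring pipelines track many features and metrics, but the core idea is the same.

```python
# A minimal drift check: compare a feature's live distribution against its
# training distribution and flag the model for review when they diverge.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical data: what the model saw at training time vs. what it sees today.
training_spend = rng.normal(loc=100, scale=20, size=5_000)   # pre-shift behavior
live_spend = rng.normal(loc=60, scale=35, size=5_000)        # shifted behavior

# Kolmogorov-Smirnov test: a small p-value means the distributions differ.
statistic, p_value = ks_2samp(training_spend, live_spend)

DRIFT_THRESHOLD = 0.01
if p_value < DRIFT_THRESHOLD:
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.2e}): "
          "alert the team and consider retraining before more bad decisions are made.")
else:
    print("No significant drift detected.")
```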
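For the Fairness element, the sketch below shows one common rule-of-thumb test, the disparate impact (or "80%") ratio, applied to made-up approval decisions, assuming pandas is available. Real fairness testing involves many more metrics and a good deal of domain judgment; this is only the smallest possible example of "testing AI for fairness."

```python
# A minimal fairness check: compare approval rates across a protected attribute
# using the disparate impact ("80% rule") ratio. All data here is made up.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "approved": [ 1,   1,   0,   1,   0,   1,   0,   0,   1,   0 ],
})

approval_rates = decisions.groupby("group")["approved"].mean()
disparate_impact = approval_rates.min() / approval_rates.max()

print(approval_rates)
print(f"Disparate impact ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:   # common rule-of-thumb threshold
    print("Potential unequal treatment: investigate data and model before deployment.")
```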
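And for the Explanation element, this hypothetical sketch uses scikit-learn's permutation importance on synthetic loan data to ask which inputs a model actually relies on when setting an interest rate. The feature names, data, and model are illustrative assumptions, not any real bank's system.

```python
# A minimal explanation check for the loan example above: which inputs does the
# model actually rely on when setting an interest rate?
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2_000
X = pd.DataFrame({
    "age":          rng.integers(21, 70, n),
    "income":       rng.normal(60_000, 15_000, n),
    "credit_score": rng.integers(300, 850, n),
})
# Synthetic "ground truth": the rate depends on credit score and income, not age.
y = 12 - 0.008 * X["credit_score"] - 0.00002 * X["income"] + rng.normal(0, 0.2, n)

model = LinearRegression().fit(X, y)

# Permutation importance: how much worse does the model get when we scramble
# each feature? Large drops mean the model depends on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda t: -t[1]):
    print(f"{name:>12}: {importance:.3f}")
```

Here, a negligible importance for "age" would support an explanation that age is not driving the rate, which is exactly the kind of statement this protection asks organizations to be able to make.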
Takeaways
In my opinion, the AI Bill of Rights outlines a set of interrelated principles that can be applied to all stages of the AI lifecycle through a combination of AI ethics and MLOps techniques. How they are applied is very domain specific – for example, applications in healthcare will impose different constraints on privacy than web-based retail applications – but the principles themselves apply to every area. It’s worth examining each of these pillars and understanding how it should fit into the operational AI practices of your organization.