The New AI Bill of Rights Needs to Go Bigger

Developing ethical and regulatory guidelines for AI is critical, but AI research is moving so rapidly that a legally binding approach is urgently needed.

After a year of exchanging ideas and meeting with tech companies, leaders, and artificial intelligence (AI) experts, the White House Office of Science and Technology Policy (OSTP) announced the blueprint for an “AI Bill of Rights.” This step has been in the works for a long time, but was it worth the wait?

The AI Bill of Rights is intended as a guide for tech giants, the government, and citizens to use AI safely and to protect society and American civil rights. It identifies five principles, aimed at protecting the American public in the age of AI: safe and effective systems; protection from algorithmic discrimination; data privacy; notice and explanation; and human alternatives, consideration, and fallback. The seventy-three-page report includes detailed recommendations from civil society on how to use AI, make it more human-centric, and ensure that its use respects human rights and the rule of law.

"Technologies will come and go, but foundational liberties, rights, opportunities, and access need to be held open, and it's the government's job to help ensure that's the case," Alondra Nelson, the OSTP’s deputy director for science and society, told Wired

The Biden administration issued the report to push for greater regulation and a more human-centric AI; in other words, it wants to bring a socio-technical approach to AI. In that vein, the National Institute of Standards and Technology (NIST) announced its intention to issue a playbook of AI best practices. NIST is doing foundational work organizing and categorizing the cyber arena, and, in line with a broader trend in the Biden administration, it wants to guide the public and raise awareness of efforts to control AI so that it serves the public safely. The NIST playbook and the AI Bill of Rights connect the need for technology with the necessity of enforcing our values in ways that respect human rights and democratic norms.

However, many experts believe these documents are powerless precisely because they are just guidelines. The OSTP specified no legal mechanism for enforcement, and a non-binding white paper cannot force stakeholders to abide by it. The Biden administration mistakenly presents the AI Bill of Rights as though it carried legal weight. Unlike the U.S. Bill of Rights, which protects civil rights and the rule of law with the force of law, this AI blueprint has no teeth.

Public awareness of systemic biases is needed in this time of increasing reliance on emerging technologies, because technological tools, such as facial recognition, can reflect systemic racial biases if they are not regulated promptly. Some tech giants understand the need for initiatives to reduce AI risks, but others do not; Microsoft, for one, took the lead in retiring some of its facial recognition tools, a step toward responsible AI. For now, raising awareness of AI risks substitutes for the lack of adequate regulation. Experts, activists, and academics play a tremendous role in pushing tech companies toward the safe use of technology and AI, but advocacy should not be the only way to address the issue. Society needs effective and enforceable rules that define rights and responsibilities for users and service providers.

The absence of any enforcement mechanism for AI regulation gives tech giants, and even attackers with malign intentions, little reason not to violate human and civil rights. Consider, for example, a job seeker who applies for a position while living in a poor or minority-majority neighborhood. An applicant tracking system (ATS), AI-powered software that scans hundreds of applications in seconds, could filter on applicants' zip codes and reject those who are poor or belong to a minority.
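To see how easily such proxy discrimination can be written into screening software, consider a minimal Python sketch. Everything here is hypothetical: the zip-code prefixes, the screening rule, and the experience threshold are invented for illustration and do not reflect any real ATS product.

```python
# Hypothetical sketch of how an applicant tracking system (ATS)
# can encode proxy discrimination. All data and rules below are
# invented for illustration.

from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    zip_code: str
    years_experience: int

# A "neutral-looking" rule that penalizes certain zip codes. Because
# zip codes correlate with race and income in the United States, this
# filter discriminates by proxy without ever naming a protected class.
PENALIZED_ZIP_PREFIXES = {"112", "481"}  # hypothetical examples

def screen(applicant: Applicant) -> bool:
    """Return True if the applicant advances to human review."""
    if applicant.zip_code[:3] in PENALIZED_ZIP_PREFIXES:
        return False  # silently rejected; the applicant never learns why
    return applicant.years_experience >= 2

applicants = [
    Applicant("A", "11201", 8),   # rejected despite strong experience
    Applicant("B", "94105", 2),   # advances
]
for a in applicants:
    print(a.name, "advances" if screen(a) else "rejected")
```

Nothing in this code mentions race or income, yet the zip-code filter reproduces both, which is exactly the kind of harm a non-binding guideline cannot prevent and an enforceable rule could.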

What should be done?

Maintaining AI ethics and ensuring that new technology does not violate basic civil or human rights requires substantial international and domestic cooperation. The United States, for instance, should have a dedicated entity focused on AI and its regulation. Such an agency would gather experts and leaders from tech companies, communities, civil rights organizations, and government to discuss new rules; it would facilitate Congress’ efforts to pass AI legislation and serve as a communication channel between technocrats with AI expertise, human rights activists, and the policymakers tasked with drafting and enforcing effective AI laws.

This step could be a robust start toward a critical long-term goal: a global agreement on AI. With binding AI rules of its own, the United States could bring them to like-minded countries and work toward a collective agreement. Regulating AI to respect human rights and democratic values must be a priority on Western countries' agendas. In the age of cyberattacks and AI, state actors must act rapidly and collectively to keep values, ethics, and norms safe in the cyber arena and to send tech giants a decisive message that they must develop more human-centric technologies. Reaching that collective agreement is a race against time: experts warn that, without regulation, rapid AI development could spiral beyond control within a few years.

Marco Mossad has a master's degree in international relations with a focus on cybersecurity, international security, and the MENA region. You can find him on Twitter (@MarcoJimmy3) and LinkedIn.

Image: Reuters