The European Union is now considering what could become the world’s first comprehensive legislation on artificial intelligence (AI). The aim is to promote European leadership in emerging technology. While the proposed regulation rightly seeks to limit government surveillance, it threatens innovation in several ways. American policymakers should pay attention to avoid making similar mistakes.
The EU’s proposed Artificial Intelligence Act (AIA) would classify AI activities into different categories according to their perceived level of risk. For example, it would ban some intrusive AI-enabled government activities that pose high risks to individual privacy—such as social credit scoring systems and biometric surveillance—and would limit the use of AI in law enforcement facial recognition systems. These provisions are a step in the right direction.
But the AIA also poses significant challenges to innovative European businesses. First, it adopts an overly vague and expansive definition of AI, which would potentially allow regulators to impose costly requirements on a wide variety of businesses. For example, the EU’s definition of AI includes software that employs machine learning approaches “using a wide variety of methods.” According to the Commission, software using “statistical approaches” and “Bayesian estimation”—a common field of statistics—could become subject to regulation as artificial intelligence. That is especially problematic because as many as three-fourths of all EU businesses might use AI by 2030.
Companies would face expensive compliance requirements if they offer products involving high-risk activities such as biometric identification. But the EU’s proposed definition of high-risk activities is so broad that it would classify the use of AI in a wide range of sectors—including “education, finance/insurance, health, IT, technical/scientific activities, social work, and critical infrastructure”—as potentially high-risk. Together, these sectors account for thirty-five percent of the EU’s combined GDP. According to one estimate, the proposed legislation could reduce AI investment in Europe by twenty percent and cost the EU economy €31 billion ($36.4 billion) over the next five years.
Take education, for example. An AI-enabled English proficiency test recently developed by the language-learning app Duolingo would be subject to regulation under the EU’s high-risk category simply because it uses an algorithm to evaluate “participants in tests commonly required for admission to educational institutions.”
The AIA will particularly hobble innovation by startups and small businesses. According to EU estimates, small- and medium-sized enterprises (SMEs)—defined as businesses with up to fifty employees or €10 million ($11.74 million) in turnover—that offer a single high-risk AI product could face a total compliance burden of up to €400,000 ($469,000). Given that the average SME has a profit margin of ten percent, the AIA is expected to reduce SME profits by forty percent for such products. In the face of such an increased compliance burden, many European businesses might simply turn away from innovating in many sectors.
Even worse, the most innovative European businesses could simply relocate to places with friendlier regulatory environments, such as London or Silicon Valley. If so, the proposed act would achieve the opposite of what the EU needs: at a time when the bloc is falling behind China and the United States, the continent needs more technological innovation and investment, not less. By imposing these costly requirements, the EU risks a brain drain of its smartest entrepreneurs and most innovative businesses.
The EU’s recent efforts at developing AI legislation hold important lessons for American policymakers. America’s hands-off approach to technology regulation has proven more conducive to innovation than Brussels’ top-down one. While the United States has yet to enact federal AI legislation, individual states—and ultimately Congress—could pass laws related to privacy and artificial intelligence. If they do, they should avoid the EU’s overly expansive definition of AI and the burdensome regulations it entails.
Ryan Nabil is a Research Fellow at the Competitive Enterprise Institute in Washington, D.C., and a Fox International Fellow at the Institut d’Etudes Politiques (Sciences Po) in Paris.