Can the U.S. and China Cooperate on AI?

Artificial intelligence will continue to become more general, more powerful, and more ubiquitous. Washington and Beijing must work together to mitigate the risks.

The second reason for optimism is the historical precedent set by Cold War diplomacy, especially the balance between competition and cooperation with the Soviet Union pursued by President Richard Nixon and Henry Kissinger. The Nixon administration accomplished significant diplomatic breakthroughs with the Soviet Union during the détente of the 1970s, even while it continued to compete with it. For instance, at the Moscow Summit in 1972, the United States and the Soviet Union signed the historic Anti-Ballistic Missile Treaty and the first Strategic Arms Limitation Treaty (SALT I). That same year, after years of talks between the Johnson and Nixon administrations and Soviet Premier Alexei Kosygin, the United States and the Soviet Union founded the International Institute for Applied Systems Analysis (IIASA), a scientific cooperation initiative meant to build bridges between the Western and Soviet spheres through scientific and policy research. The IIASA continues to conduct pioneering research on complex systems, governance, biodiversity, sustainability, migration, demography, and a host of other topics to this day.

While the two superpowers were increasing cooperation, Nixon and Kissinger were prosecuting a war against Soviet-aligned North Vietnam. The historic diplomatic breakthroughs in Moscow were thus not the result of a complete confluence of interests between the rivals but rather the recognition of a pool of shared interests amid an ocean of differences.

Toward an International Policy Framework

Given this historical precedent, there is no reason the United States and China cannot cooperate on AI safety research for the benefit of both countries (not to mention the rest of humanity). The two superpowers can model their joint research efforts on the IIASA. In its strongest form, this joint venture would be an International AI Safety Institute, jointly funded by the United States and China and staffed by top researchers from around the world. An alternative, if weaker, form of cooperation would be a joint commitment to funding AI safety research grants in their respective countries, with annual bilateral conferences and informal cooperation among researchers and government officials.

Because this approach is narrowly focused on technical alignment research, it requires minimal trust between the two countries. One obvious concern about this approach might be the risk of espionage and theft of intellectual property. Some officials had similar concerns about the IIASA. However, because it only conducted non-secret research, espionage and theft never materialized as major problems. A similar dynamic would be at play in a joint AI safety institute. Most alignment research is discussed in scientific journals and is therefore available to the public. Solutions to safety concerns, therefore, tend not to be classified or proprietary. The Chinese and Americans could continue competing in the AI development arena while cooperating on alignment research. The groundwork for such an initiative has arguably already been set by the recent bilateral summit, though the discussions did not lead to substantive commitments.

As trust builds between the two countries, the United States should push for the signing of a treaty to formalize the principles of the Bletchley Declaration. The end goal would be a series of formal agreements between the two countries to implement the policies discussed above. Taken as a whole, this approach would pool Chinese and American resources to solve the technical problems of AI misalignment while setting international standards for safety testing, red-teaming, and auditing.

Fleshing out the details of this framework will require further research. For instance, the scientists who eventually receive the grants should have broad latitude to pursue the research avenues they view as most promising. However, policymakers, in consultation with AI safety experts, should design a set of standards to ensure that they prioritize the most pressing problems in AI alignment. Similarly, policymakers will need to define the exact safety standards and red-teaming requirements alongside experts with extensive knowledge of and experience in the field.

This proposed framework is meant to get the ball rolling on tackling the AI alignment problem. While it leaves out some key details, the direction it proposes is sound. Policymakers in both the United States and China need to begin prioritizing this problem now in their spending, regulatory, and diplomatic strategies. The cat is out of the bag, and the clock is ticking.

Anthony De Luca-Baratta is an intern at the Center for the National Interest, where his research centers on technology and defense policy. He is a Public Service Fellow at the Johns Hopkins School of Advanced International Studies (SAIS). When he is back home in Montreal, he is a proud member of JC’s morning hockey league.

Image: KaimDH / Shutterstock.com.