Why Are American Companies Helping China Build an Artificial Intelligence Authoritarian State?

If the future looks like China’s digital autocracy, then lawmakers ought to set a new standard for business ethics on the internet now, before it’s too late.

There are no rules of the road, never mind regulation, governing the safe use of AI, algorithms, or, for that matter, deep learning. CEOs might say this unchecked expansion is a good thing. More users, after all, means more data—more personal data—which feeds advertisers’ AI. And well-crafted AI pinpoints consumers so that products can be delivered accurately and on time.

But as we’ve seen, it can also be used to unfairly single out ethnic minorities for re-education. How much of this activity should be regulated in nominally free-market societies? “I think the real question,” Facebook CEO Mark Zuckerberg said during his Senate testimony last April, “as the internet becomes more important in people’s lives, is what is the right regulation, not whether there should be or not.” Facebook’s corrosive effect on American, and even global, society is well documented. Hemant Taneja and Kevin Maney argue in their book Unscaled that its problematic News Feed algorithm is still “optimized for making money for Facebook, not for ensuring fairness and civility.” Facebook has forked over $5 billion in fines for abusing users’ personal data and for serving as a mouthpiece for Russian propaganda during the 2016 election. It has almost irreparably exacerbated America’s political divides. Communist China has banned its use, but if the Mandarin-speaking social media magnate had his way, Facebook and Google would probably regulate themselves.

Taneja and Maney also raise the idea of “algorithmic accountability.” While the CCP dominates the corporations that abet its control of Chinese society, Silicon Valley here in America has all of the reach that the government ought to have but none of the accountability. It bears repeating that the internet began as a publicly funded project led by the Defense Advanced Research Projects Agency (DARPA). In that spirit, the internet’s Silicon Valley and Chinese beneficiaries ought to be accountable to the public. The right regulation—and the right organizations—might help China’s Baidu and others live up to Google’s original “Don’t be evil” ethos. There should also be regular, open testimony before new House and Senate Subcommittees on Algorithmic Accountability, so that free people understand the effect algorithms have on them, as well as Google’s business dealings with police states like China.

Otherwise, one shudders to think, it might be up to the states’ attorneys general to act and expose the risks of AI at home and abroad. Their 1998 tobacco settlement forced Philip Morris to finally own up to the undeniable risk cigarettes posed to society, after it paid billions in settlement money. The settlement set the precedent for almost two dozen chief prosecutors to file suit against Purdue Pharma, the producer of OxyContin, a highly addictive painkiller said to be a primary cause of the current nationwide opioid epidemic. The drug company, owned by the Sackler family, which is worth an estimated $13 billion, has since filed for bankruptcy.

An antitrust suit that breaks up Google or any other American internet giant would bog Silicon Valley down, not to mention dull the nation’s competitive edge against China in the AI sector. A better start toward algorithmic, or even internet, accountability might be reviving the independent Office of Technology Assessment (OTA). Established in 1972, the OTA was eliminated in 1995 as part of the Republicans’ blitz to excise government bloat, the “Contract with America.” Too bad, because its nonpartisan assessments lent legislators considerable expertise and gave valuable context on technology’s global effects, on everything from terrorism to health care and missile defense.

There is still time for us to understand and to control AI’s harmful effects with smart regulations that give Silicon Valley the room it needs to compete globally. But if the future looks like China’s digital autocracy, then lawmakers ought to set a new standard for business ethics on the internet now, before it’s too late.

William Giannetti is a defense contractor, U.S. Air Force Reserve officer, and an Afghanistan War veteran. He has spent twenty-three years as a civil servant, a Philadelphia police officer, and a Department of Defense analyst for the Joint Staff.

Image: Reuters