Government Must Be Careful Not to Stifle Innovation When Weighing AI Restrictions, Think Tank Says

Date: February 11, 2019
Source: Wall Street Journal
U.S. economic growth, social progress and global competitiveness will suffer if federal and state governments put restrictions on the development and adoption of artificial intelligence-powered products, according to an independent Washington, D.C.-based think tank.
 
Tech-industry executives, public servants and civil liberties groups are increasingly calling for bans or limits on artificial intelligence tools. The bulk of their attention has focused on autonomous vehicles, robots and facial recognition devices. Initial hype around driverless cars, for instance, gave way to caution after a fatal accident last year involving an Uber Technologies Inc. test vehicle and separate crashes of Tesla Inc. vehicles using driver-assistance features.
 
However, exaggerated fears about the safety, privacy and security of AI systems could pressure state and federal regulators to take a heavy-handed approach to the technology and create unnecessary barriers to its development, warns the Information Technology and Innovation Foundation in a report titled “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence.”
 
“It’s so easy to imagine how a new technology can go wrong, and if you just stop there, the innovation stops there, too,” Daniel Castro, an ITIF vice president who co-wrote the report, told WSJ Pro AI.
 
Castro said regulators should approach AI with the same spirit of innovation they brought to the early internet, when policy focused on the Web's potential upside. That approach gave early internet developers and startups room to innovate, setting the stage for an era of economic growth led by a new digital economy, he added.
 
However, groups like the New York University-affiliated AI Now Institute have recommended strong national laws to oversee, limit and bring transparency to the development of facial recognition capabilities. Likewise, the American Civil Liberties Union has sought to ban law enforcement's use of AI-enabled facial recognition tools.
 
Some regulations have been enacted. The ITIF report cites a New York state law requiring autonomous vehicle developers to pay for police escorts for road tests, a policy that raises costs and has prompted many local AI ventures to relocate. Likewise, a biometric information-privacy law in Illinois has led to lawsuits against Facebook, Google, Shutterfly and Snapchat for scanning users' faces without their consent, typically to tag them in photos, the report said.
 
In addition, U.S. Senate lawmakers in 2017 introduced the Future of Artificial Intelligence Act, which would create an advisory committee on national AI issues. The committee's mandate would include developing guidelines for a federal response when AI systems violate existing laws. The advisory group also would consider cultural and societal norms in addressing machine learning bias, among other issues.
 
The bill is currently in committee.
 
Elon Musk, chief executive officer of Tesla and Space Exploration Technologies Corp., stoked public fears five years ago in an online message predicting "something seriously dangerous happening" as a result of AI by 2019. He doubled down on that warning last year, calling AI more dangerous than nuclear weapons at the South by Southwest conference in Austin, Texas.
 
The ITIF report recommends that policymakers wait for specific AI problems to emerge and then craft targeted responses.
 
Bernard Marr, a strategic business and technology adviser to governments and companies, said he supports fast-paced innovation but not when it allows companies to violate reasonable expectations for data privacy and security.
 
“Regulators are still catching up in the AI and data space, where many companies have moved ahead with little concern about exposing and exploiting data that people have given companies, often without an understanding of how their data could be used,” Marr told WSJ Pro AI.
 
He said regulations such as Europe's General Data Protection Regulation privacy law, which limits the use of personal information, aim to address these issues by ensuring companies obtain permission to use people's data and clearly state how the data will be used.
 
Castro, who acknowledges that AI has sparked some legitimate concerns, said many AI worries are already addressed by existing laws, or can be quickly corrected by market forces. When that is not the case, he added, “we should step in with new guardrails,” but always with an eye to moving forward.
 
Creating laws in advance risks overreaching and curbing innovation, he said.
 
Whit Andrews, a Gartner Inc. analyst, said one problem is how AI is portrayed in popular culture. "People are afraid of robots, they aren't afraid of blockchain," he said.
 
He said regulators should focus efforts on real-world outcomes of AI deployments rather than trying to prevent hypothetical worst-case scenarios.