In early June, China’s State Council announced a draft AI law would be submitted to the Standing Committee of the National People’s Congress (NPC), China’s highest legislative body. In July, China’s Cyberspace Administration joined six other agencies in issuing tentative management measures on generative AI services, which experts believe will lay the foundation for future AI legislation.
China began planning AI governance in 2017, when the State Council rolled out an AI development program. In this document, the government proposed working out ethics norms and regulations for AI technologies in specific fields by 2020 and establishing a suite of laws and policies for AI-related safety by 2025.
According to the Artificial Intelligence Index Report 2023, issued by Stanford University on April 3, the number of laws mentioning AI passed worldwide has increased nearly 6.5-fold since 2016.
“The haste to legislate is a result of the heated competition and development of AI technologies,” Peng said. “Data has increasingly become a crucial strategic element, and all countries hope to lead the legislation... Meanwhile, new social problems and contradictions brought about by the fast development of AI technologies like ChatGPT also spur legislation,” she added.
The EU’s AI Act has been in the works since April 2021, with revisions added to cover generative AI services.
One revision requires transparency for general-purpose AI like ChatGPT. For example, developers must label AI-generated content for users and curb illegal content. They must also disclose what data they used to train their models, especially copyrighted material.
Risk assessment is a primary feature of the AI Act – it categorizes risk into four levels, the highest being “unacceptable.” For example, a system that classifies people according to social behaviors or personality is banned.
In the latest draft, the EU expanded the highest-level risk category to include AI that is “invasive” or “discriminatory.” For example, it bans use of biometric identification in public places, emotion and sentiment analysis, predictive policing based on profiling, location or criminal records, and harvesting facial data from the internet.
The latest version also raised the fine cap from 30 million euros (US$32.8m) or 6 percent of a company’s operating income from the previous year to 40 million euros (US$43.7m) or 7 percent, much higher than the caps in the EU’s General Data Protection Regulation.
“This shows the EU’s resolution to supervise and manage AI technologies. Tech giants like Google, Microsoft and Apple could face tens of billions of dollars in fines if they violate the law,” Peng said.
China’s latest tentative measures, which took effect on August 15, provide for “deliberate” and “classified” supervision of generative AI services. The document states that such services should not harm China’s national security and should respect others’ legal rights and interests. It stresses that no illegal or tortious data should be used to train AI models. Like the EU, China requires AI-generated content to be labeled and puts the onus of monitoring user input on service providers.
“China’s current AI management is scattered across different fields and departments... and measures and policies usually target a specific technology or service... normally this is designed and released by competent departments, but they have not yet made it law,” Peng told NewsChina.
According to Zhao Jingwu, compared to the EU and China, the US’s management prioritizes commercial development to maintain its competitiveness. Schumer’s framework aims to realize the potential of AI technologies and support US-led innovation.
“US management of AI development remains weak, and its society is inclined to be open and to encourage the innovation and expansion of AI technologies,” Peng said, adding that regulation is handled state by state and remains “general” and “non-specific” at the federal level.
“The AI Bill of Rights, a milestone in the US’s management of AI development, for example, proposes only five basic principles without more detailed articles or measures... It’s only a framework for guiding the design, use and planning of AI systems,” Peng said.
“Such documents are not compulsory... since intensified management will surely obstruct the development and innovation of an emerging industry like AI,” she added.
Despite signing the FLI open letter calling for a suspension of AI training, Musk has launched an AI project on X (formerly Twitter) and recruited AI experts, leading some to question whether he intended to hobble the progress of OpenAI, which Musk left in 2018 and now competes with.
Some scientists and AI leaders have denied they ever signed the FLI open letter. Thomas G. Dietterich, an American pioneer of machine learning, tweeted that the letter is “such a mess of scary rhetoric and ineffective/non-existent policy prescriptions.”
Yann LeCun, chief AI scientist of Facebook’s parent Meta, tweeted on March 29: “Nope. I did not sign this letter. I disagree with its premise.” During an April 8 livestream on tech news site VentureBeat about the FLI letter, LeCun criticized the call for a “pause” as backward, saying people cannot slow down the progress of science and knowledge.
In the same livestream, renowned British-born AI scientist Andrew Ng argued that AI will create enormous value for many industries and that pausing its progress would prevent AI from benefiting the world.
LeCun and Ng suggested regulating content rather than development and research, arguing that concerns about threats to human safety are premature.