Keep current with global developments in AI rulemaking with The Global Regulation Tracker by Raymond Sun (content creator, AI/software developer, and Australian tech lawyer at Herbert Smith Freehills). Raymond provides extremely comprehensive updates on all major national efforts to regulate artificial intelligence. (A few things are still being worked out; and yes, Singapore is on there too, but you need to zoom way in!)
While the Global Regulation Tracker's coverage is nearly flawless everywhere I can verify, double-check its entries for your own jurisdiction. Send Raymond or me your news and comments to fill in anything missing from a local perspective. Raymond is generously extending his passion to help others beyond his firm, so let's collaborate and coordinate to improve all our work.
Here are a few specific signs of the worldwide regulatory speedup in the past week:
US academic & industry-backed global AI risk statement
The Center for AI Safety has released a one-sentence statement on AI risk signed by the CEOs of the three most advanced AI labs, DeepMind (now part of Google), OpenAI, and Anthropic; the world's two most-cited AI scientists (Geoffrey Hinton and Yoshua Bengio); over 100 other scientists and professors; and figures like Bill Gates. The statement reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
UK-led AI political summit
Britain will host the world's first political summit on AI. (See attached – thanks to Soo Chye Lee at Wee Swee Teow for the article and to Nigel Rowley at Mackrell Solicitors for keeping an eye on this!) Rishi Sunak has acknowledged that the UK government's pro-innovation white paper, published two months ago, is already out of date due to newly raised concerns, and Sunak is echoing calls by OpenAI's CEO to create an IAEA for AI safety.
EU-led G-7 stopgap AI International Code
To fill the legislative void while the EU AI Act slowly moves through EU institutions, the EU plans to soon advance a draft AI code of conduct with US and industry input, and hopes that other countries, such as Canada, the UK, Japan, and India, will back the effort. Talks between EU institutions on the EU AI Act start soon and may reach a deal by the end of 2023, but the legislation will take at least two to three more years to have effect.
EU-US transatlantic AI governance
On May 31, the fourth ministerial-level meeting of the EU-US Trade and Technology Council (TTC) released a list of common definitions of 65 key technical and policy AI terms and launched three expert groups: AI terminology and taxonomy; joint AI standards and risk management tools; and "monitoring and measuring existing and emerging AI risks". While US policymakers favor a less stringent approach than the EU's, both sides agreed on a "risk-based" approach in the joint statement released after the TTC summit.
Singapore-based AI Verify Foundation
As part of Asia Tech x Singapore (ATxSG), Singapore's Minister for Communications and Information announced the launch of the AI Verify Foundation to promote the development of tools for the responsible use of AI and to boost AI testing capabilities for companies and regulators. (See attached – thanks to Soo Chye Lee at Wee Swee Teow for the article!)