The State of AI Report is now in its seventh year. Compiled by Air Street Capital, a venture capital firm that invests in AI companies, it is a good summary of the major news in the AI industry in 2024. Since its release in October, it is interesting to see how many of the report's 213 slides are already at least somewhat out of date; cataloging them is a task I leave to diligent readers. For everyone else, I've prepared an 11-page digest of the slides most relevant to lawyers and related professionals at the end of 2024 here:
Notable news since the report's release includes significant product launches from major AI labs, though the full impact of these innovations is not yet understood; there will be plenty to digest and discuss in the coming months.
Major Themes in 2024
The original State of AI Report organizes news into areas of research, industry, politics, safety, and predictions for the coming year. To simplify for a quicker overview, I approached the information through the following four lenses:
AI: Smaller, Better, Faster, and More Important in 2024
NVIDIA remains the most powerful company in the world, enjoying a stint in the $3T club, while regulators probe concentrations of power within GenAI.
AI’s New Superpowers in 2024
Foundation models expand beyond language as multimodal research drives into mathematics, biology, genomics, the physical sciences, and neuroscience.
AI’s Dangers in 2024
Every proposed jailbreaking ‘fix’ has failed, and researchers are increasingly concerned about sophisticated, long-term attacks.
AI’s Law and Regulation in 2024
While global governance efforts stall, national and regional AI regulation has continued to advance, with controversial legislation passing in the US and EU.
Refer to the included PDF for detailed insights, and continue reading below for a more narrative overview of AI regulation trends.
Evolving AI Regulation
Globally, the regulatory focus has shifted over the past five years: from soft principles aimed at citizens and consumers (e.g., fairness, transparency), to technical standards and organizational regulations, and ultimately to law, as the horizontal AI layer of the economy becomes more of a reality. This evolution is good; it heralds new AI economies on the horizon, along with a global foundation of regulation to manage them. The evolution can be broken into four stages:
Stage 1: Principles (2018–)
Governments and other organizations began publishing high-level, non-binding principles to guide AI development. Examples include:
Google's AI Principles (June 2018) are available here.
The Montreal Declaration for a Responsible Development of AI (December 2018) is available here.
The OECD AI Principles (May 2019) are available here.
The Beijing AI Principles (May 2019) are available here.
The G20 AI Principles (June 2019) are available via official G20 documentation here.
Stage 2: Frameworks and Guidelines (2020–)
Governments and international organizations began drafting frameworks that outlined high-level governance for AI development pipelines. Examples include:
The EU White Paper on AI (February 2020) is available here.
The U.S. White House Guidance for Regulation of AI (January 2020) is archived here.
Canada's Directive on Automated Decision-Making (April 2020) is posted here.
Singapore's Model AI Governance Framework (2nd Edition, January 2020) is available here.
UNESCO's Recommendation on the Ethics of AI (November 2021) is available here.
Israel's draft policy on regulation and ethics in the field of AI, published for public comment (November 2022), is available here.
Stage 3: Technical Standards (2021–)
Technical standards became central as frameworks matured. These standards anticipated gaps in the general requirements of later legislation and filled them with actionable specifications.
Japan: the Machine Learning Quality Management Guidelines and the Consortium of Quality Assurance for Artificial-Intelligence-based Products and Services: here (AIST) and here (QA4AI).
ISO/IEC 23053:2022 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML) (Edition 1, 2022) here.
ISO/IEC TR 24028:2020 Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence (Edition 1, 2020) here.
ISO/IEC TR 24029-1:2021 Artificial intelligence (AI) — Assessment of the robustness of neural networks — Part 1: Overview (Edition 1, 2021) here.
The IEEE Global Initiative 2.0 on Ethics of Autonomous and Intelligent Systems here.
IEEE Standards Activities in Autonomous and Intelligent Systems (A/IS) here.
NIST Special Publication 1270 (March 2022) is available at here.
CEN/CLC/JTC 21, the joint CEN/CENELEC technical committee on artificial intelligence, here.
Stage 4: Laws and Regulations (2023–)
Europe leads in AI regulation, setting trends with the EU AI Act. Other regions are following suit, such as Korea’s AI law (effective 2026) and Japan’s recently announced AI Bill (December 26, 2024). The prior stages quickly evolved from one to the next, but this stage should be longer-lasting. Highlights include:
References for the Korean AI Act are available on the National Assembly portal here.
Japan's AI Bill (announced December 26, 2024) here; its progress may be tracked via government bulletins here.
The UK AI White Paper (March 2023) is posted here.
The reintroduced U.S. Algorithmic Accountability Act (2022) is on Congress.gov here.
Canada’s Artificial Intelligence and Data Act (AIDA) is making its way through the Canadian parliament.
The EU AI Act is in effect and can be found here. Notable upcoming dates under the act, which will take effect in stages, include:
Feb. 2, 2025—Prohibitions on AI systems posing unacceptable risks, including subliminal manipulation, social scoring, and biometric categorization.
Aug. 2, 2025—General-purpose AI model documentation requirements.
Aug. 2, 2026—Obligations for high-risk AI systems, covering critical infrastructure, education, and biometric uses.
Preparing for the AI Era
Large clients that have invested sufficiently in technical and compliance engineering should be able to transition smoothly to a post-standardization world governed by AI laws. All other clients, regardless of jurisdiction, should be developing AI quality assurance and risk management processes now, so that their engineering and compliance teams are not overwhelmed by the regulatory burden of AI systems.