AI Regulations in 2024 We Should All Know About
Recent data from Google Trends indicates that interest in artificial intelligence (AI) has surged remarkably over the last year, following a decade of steady hype around the technology. The trend data reveals a marked escalation in AI-related searches, suggesting that public curiosity and engagement with AI topics have reached unprecedented levels. This spike in search interest aligns with the broader observation of AI's growing prominence and its impact across industries and sectors.
Photo from Google Trends taken on January 10, 2024
As we navigate through 2024, the rapid advancement and widespread integration of AI into various sectors have precipitated a critical challenge: the need for effective AI regulation. This dilemma, a focal point of both discussion and development, is driven primarily by the accelerating pace at which AI technologies are evolving and becoming more deeply entrenched in our daily lives. Consequently, the landscape of AI regulation in 2024 is complex and dynamic, shaped by numerous developments and trends.
AI Regulations in 2024 in Different Countries
In July 2023, the bipartisan CREATE AI Act was introduced in Congress, aimed at enhancing AI development by providing students and researchers with essential AI resources, data, and tools. The Act gained significant support due to its potential to expand access to AI development. Following this, in late October 2023, President Biden signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI. The order represents the administration's dedication to fostering a robust AI ecosystem while ensuring effective governance of the technology.
CREATE AI Act
The CREATE AI Act, introduced in the US Congress, is a comprehensive piece of legislation designed to transform the field of artificial intelligence. Here are its key features:
- Enhancing Access to AI Resources: It provides students and researchers with increased access to AI tools, data, and resources, thereby democratizing AI research and development.
- Fostering AI Innovation: The Act aims to stimulate innovation in AI by making advanced tools and data sets more accessible to a broader range of researchers and institutions.
- Bipartisan and Bicameral Support: It has received support from both major political parties, indicating a strong consensus on the strategic importance of advancing AI technology.
- Impact on AI Ecosystem: By providing these resources, the Act is expected to accelerate AI advancements and foster a more vibrant AI ecosystem in the US.
Biden’s Executive Order
President Biden's Executive Order on AI focuses on ensuring the technology's safe, secure, and trustworthy development and use. This order reflects the administration's commitment to fostering a responsible AI ecosystem. It includes measures for enhanced transparency and security in AI technologies, aiming to protect the public's interest while encouraging innovation. The order also underscores the need for AI systems to be developed in a way that is consistent with democratic values and ethical principles.
Key points include:
- Enhancing Transparency and Security: It focuses on ensuring AI systems are transparent and secure, safeguarding public interests.
- Adherence to Democratic Values: The order stresses the importance of developing AI in alignment with democratic and ethical principles.
- Balancing Innovation and Protection: While encouraging AI innovation, it also prioritizes protecting citizens from potential AI-related risks.
In 2024, the initiatives from President Biden's executive order on AI are expected to be implemented more fully. A key development will be the establishment of the US AI Safety Institute, which is set to oversee the execution of many policies outlined in the order. On the legislative front, the situation is less certain. Senate Majority Leader Chuck Schumer has hinted at potential new AI-related laws, complementing the executive order. Several legislative proposals are being considered, covering issues like AI transparency, deepfakes, and platform accountability. However, it's unclear which of these proposals will gain significant momentum this year.
The European Union's upcoming implementation of the AI Act marks a significant milestone in the field of artificial intelligence regulation. This pioneering law, which has been meticulously refined and ratified by EU member states and the European Parliament, is poised to take effect soon. The AI Act stands out as the world’s first comprehensive legislation specifically aimed at AI, setting a precedent for global AI governance.
This Act is particularly notable for its swift implementation timeline, with certain prohibitions potentially becoming effective by the year's end. It's a year of substantial activity for the AI sector as businesses gear up to align with the new regulations. The AI Act will apply rigorously to developers of foundational models and AI applications categorized as "high risk," especially those employed in critical sectors like education, healthcare, and law enforcement.
One of the notable provisions includes restricting the use of AI in public surveillance by law enforcement, except under specific circumstances authorized by a court, such as for anti-terrorism efforts. Furthermore, the AI Act aims to completely prohibit certain AI applications in the EU, like the creation of expansive facial recognition databases and the deployment of emotion recognition technology in workplaces and educational settings.
The AI Act emphasizes transparency in AI development and introduces stringent accountability measures for companies and users of high-risk AI systems. Under this Act, companies developing foundational models will need to comply within a year, while other tech companies are granted a two-year window for implementation.
The Act also requires companies to be more meticulous in system design and documentation for audit purposes. AI systems deemed high-risk must be trained and tested on diverse data sets to minimize biases. The AI Act also recognizes the potential "systemic" risks posed by powerful AI models, requiring companies to proactively assess and mitigate these risks, ensure the cybersecurity of their systems, and report significant incidents.
Open-source AI companies enjoy certain exemptions under the AI Act, except when their models reach the complexity of systems like GPT-4. Non-compliance could result in hefty fines or market exclusion in the EU. In parallel, the EU is also working on the AI Liability Directive to facilitate financial compensation for individuals affected by AI technologies. This directive is still under negotiation and is expected to gain momentum this year.
The EU's leadership in AI regulations in 2024 is likely to have a global impact. Companies outside the EU, aiming to operate within this major economic bloc, will have to adhere to the AI Act's stipulations. This phenomenon, known as the "Brussels effect," reflects the EU's ability to set global standards.
China's approach to regulating AI and algorithms is comprehensive, with three major regulations that stand out: the 2021 regulation on recommendation algorithms, the 2022 rules on deep synthesis, and the 2023 draft rules on generative AI. Each of these regulations aims to control information while addressing specific issues in AI technology.
The 2021 regulation targets recommendation algorithms, focusing on fair practices and workers' rights in algorithmic decision-making. The 2022 deep synthesis rules mandate clear labeling of synthetically generated content to distinguish it from organic content. The draft rules for generative AI, introduced in 2023, set stringent standards for the accuracy of training data and outputs, challenging the capabilities of AI chatbots.
These regulations are part of a broader strategy by Chinese authorities to build a robust regulatory framework for AI. In fact, Chinese AI companies face stringent regulations, with a mandate that all foundational AI models must be registered with the government before public release. By the end of 2023, 22 companies had complied with this registration requirement. This regulatory environment moves away from a laissez-faire approach, but the specifics of enforcement are still unclear. In the next year, generative AI companies will be navigating these regulations, focusing on safety reviews and avoiding IP infringements.
The current regulatory stance also insulates domestic AI companies from foreign competition, potentially giving them an advantage over Western counterparts but possibly limiting competitive dynamics and reinforcing government control over online discourse.