Approaching deadline for the second enforcement phase of the EU AI Act and its associated General-Purpose AI Code of Practice: Key information for businesses
The European Union's (EU) AI Act, with its approaching enforcement deadlines, is set to have a significant impact on businesses worldwide. A key instrument accompanying the Act is the voluntary Code of Practice for General-Purpose AI (GPAI) models, which aims to advance AI safety, transparency, security, and risk management.
The Code of Practice sets out more robust guardrails for training AI models, with a focus on greater transparency, safety, and security. By adopting the Code, companies can gain a presumption of conformity with key AI Act requirements, a strong regulatory incentive to sign up.
Businesses must assess and document the potential harms their AI models might pose to the public or to enterprises, and they are expected to take appropriate action to reduce those risks, particularly for general-purpose models deemed to pose systemic risk. This reflects a lifecycle, security-by-design approach, integrating continuous security and safety considerations from development through deployment and use.
Providers are required to increase transparency through documentation and disclosures about training data, processes, and datasets. Public summaries of training content are required so that stakeholders can understand model origins and exercise their rights under EU law.
The Code also creates a harmonized, EU-wide security baseline, mandating that security be embedded throughout the AI system's lifecycle to mitigate the risk of breaches or misuse and aligning AI providers on common security standards.
In addition to safety and security, the Code prohibits training AI on pirated content and requires compliance with content owners' opt-out requests, anchoring ethical AI practices around data.
Following the Code can help companies demonstrate compliance with the AI Act to regulators, lowering legal risk and potentially avoiding substantial fines of up to 7% of global annual turnover for the most serious violations. This prospect has led leading tech firms such as Google and OpenAI to declare their intent to comply with the Code.
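To put the 7% figure in perspective, here is a minimal Python sketch of how the Act's maximum penalty cap scales with revenue. The turnover figure is hypothetical, and the EUR 35 million floor reflects the Act's "whichever is higher" formula for its most serious violations; lesser breaches carry lower caps.

```python
def max_fine_eur(global_annual_turnover_eur: float,
                 cap_rate: float = 0.07,
                 floor_eur: float = 35_000_000) -> float:
    """Illustrative only: the AI Act caps its most severe fines at the
    higher of a fixed amount (EUR 35M) or a share (7%) of a company's
    global annual turnover."""
    return max(floor_eur, cap_rate * global_annual_turnover_eur)

# Hypothetical example: a provider with EUR 10 billion in annual turnover.
print(f"Maximum exposure: EUR {max_fine_eur(10e9):,.0f}")  # EUR 700,000,000
```

For large providers the percentage term dominates, which is why the turnover-based cap, rather than the fixed floor, drives compliance decisions at firms of Google's or OpenAI's scale.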
However, it is important to note that not all of the authorities responsible for enforcing the EU AI Act, including the EU AI Office, are fully operational yet. Nor is there yet a solid baseline for enterprises to work from on AI-related security risks, which raises questions about how compliance will be assessed in practice.
Despite these challenges, the EU AI Act's requirements represent the only binding set of AI rules with global reach and the only realistic route to trustworthy AI and responsible innovation. Companies that operate AI technology in the EU, or that use AI-generated insights within the EU market, should pay close attention to the Act's enforcement deadlines.
It's worth noting that Meta has announced that it will not sign up for the EU AI Act's Code of Practice. This decision, however, does not exempt Meta from the broader obligations under the AI Act.
In conclusion, the voluntary Code of Practice serves as a practical compliance framework that helps businesses responsibly develop and deploy AI while fostering innovation under a safer, more transparent, and more secure regulatory environment in the EU. Companies are encouraged to familiarize themselves with the Code and consider its implications for their AI strategies.