EU's AI Act Enters Force: New Rules, Bodies, and Categories for AI Safety
The EU's AI Act entered into force on 1 August 2024. It complements the GDPR: where the GDPR protects privacy rights, the AI Act focuses on product safety. This landmark legislation establishes a regulatory framework for the development, deployment, and use of AI within the EU.
The AI Act creates several bodies to support its implementation. The AI Office oversees enforcement of the regulation, while the AI Board, made up of representatives from each member state with the European Data Protection Supervisor as an observer, advises on its uniform application. A Scientific Panel of independent experts and an Advisory Forum representing stakeholders such as industry and civil society also play key roles. National competent authorities, including market surveillance bodies, and agencies such as the CNPD and ENISA will enforce and support the AI Act.
The AI Act categorizes AI systems by their potential impact on safety, human rights, and societal well-being: systems posing unacceptable risks are prohibited, high-risk systems face stricter requirements, and low-risk systems carry lighter compliance obligations. These obligations phase in on a fixed schedule, as the sketch below illustrates. Bans on AI systems posing unacceptable risks take effect six months after the AI Act enters into force; codes of practice for AI developers and deployers are to be finalized after nine months; rules for providers of General Purpose AI (GPAI) apply after twelve months; and organizations using high-risk AI systems must follow a standard template for monitoring those systems from eighteen months.
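For concreteness, here is a minimal Python sketch that computes each milestone date from the entry-into-force date of 1 August 2024. The add_months helper and the milestone labels are illustrative assumptions condensed from the timeline above, not an official compliance tool; the regulation's exact application dates may differ by a day or two.

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Return the date `months` calendar months after `d`, clamping the day."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Assumption: entry into force on 1 August 2024, per the text above.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Months after entry into force -> obligation (labels paraphrased from the article).
MILESTONES = {
    6: "Bans on AI systems posing unacceptable risks apply",
    9: "Codes of practice for AI developers and deployers finalized",
    12: "Rules for providers of General Purpose AI (GPAI) apply",
    18: "Standard template for monitoring high-risk AI systems required",
    24: "Remaining obligations under the phased two-year schedule apply",
}

for months, obligation in sorted(MILESTONES.items()):
    print(f"{add_months(ENTRY_INTO_FORCE, months):%d %b %Y}: {obligation}")
```

Running the sketch prints the milestones in chronological order, starting with the bans six months in.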
The AI Act applies to organizations that develop, deploy, or use AI systems within the EU's single market, whether or not they are established in the EU. With a phased implementation schedule spanning roughly two years, the AI Act aims to ensure the safe and responsible development and use of AI within the EU.