# South Korea's New AI Legislation: Navigating Innovation and Oversight
South Korea's **Framework Act on the Development of Artificial Intelligence and Establishment of a Foundation for Trustworthiness**, commonly known as the **AI Basic Act**, is a groundbreaking legal framework passed in December 2024. Set to take effect on 22 January 2026, following a one-year transition period, the law makes South Korea the second jurisdiction worldwide, after the European Union, to enact an overarching statute dedicated to AI development and trustworthiness.
### Key Features and Content of the Framework Act
The Act aims to boost national AI competitiveness while addressing societal issues related to AI, such as safety, ethics, and trustworthiness. To achieve this, the law mandates the creation of several key bodies, including an AI Policy Center, an AI Safety Research Institute, and a National AI Commission. These institutions are tasked with coordinating AI development, safety research, and policy oversight.
The Act explicitly focuses on establishing a foundation for trust in AI systems, ensuring that AI technologies are developed and deployed in ways that are ethical, transparent, and socially responsible.
### Regulatory Requirements for Organizations
Organizations developing or deploying AI systems will have to comply with the soon-to-be-published enforcement regulations. These regulations are expected to ensure transparency and safety in AI deployment, promote cooperation with government investigations and guidance, and align AI development with ethical standards and national competitiveness goals.
The Act emphasizes a cooperative relationship between AI business actors and government bodies, in line with other Asian approaches to AI governance. It also requires AI Business Operators without a domestic address or place of business that exceed certain user-count or revenue thresholds (to be specified in the enforcement regulations) to appoint a "domestic representative" with an address or place of business in South Korea.
### Implications for Organizations
Companies must prepare for regulatory measures coming into force in January 2026, which will likely require adjustments in AI development, deployment, and governance practices to meet transparency, safety, and ethical standards. Organizations should anticipate working closely with newly established government institutions, such as the AI Policy Center and National AI Commission, which will oversee AI policy and compliance.
The Act attempts to strike a balance between encouraging AI innovation and ensuring accountability and public trust. Organizations need to innovate within this regulatory environment, which could influence AI research priorities and business strategies. The delay in releasing detailed enforcement rules creates some uncertainty, but companies and users are expected to engage actively in public comment periods once draft regulations are released.
In addition, the Act mandates government-led initiatives to support the entire lifecycle of AI training data, including its production, collection, management, distribution, and utilization. The Ministry of Science and ICT (MSIT) is empowered to investigate potential violations of the Act, including non-compliance with notification obligations, failures to implement safety measures, and breaches of safety and reliability standards for high-impact AI systems.
The Act also introduces specific obligations for "high-impact" AI systems in critical sectors, such as healthcare, energy, and public services. It promotes the advancement and safe use of AI by encouraging technological standardization, supporting SMEs and start-ups, and fostering AI-driven innovation.
Finally, the Act requires AI Business Operators whose computational learning capacity exceeds prescribed thresholds to establish a risk management system for AI safety incidents and to submit their findings to the MSIT. Administrative fines of up to KRW 30 million (roughly USD 21,000) may be imposed for failure to comply with corrective or cease-and-desist orders issued by the MSIT, for non-fulfillment of notification obligations relating to high-impact or generative AI systems, and for failure to designate a required domestic representative.
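For teams building internal compliance tooling, the triggers described above can be modeled as a simple rules check. The sketch below is illustrative only: the user-count, revenue, and compute thresholds are hypothetical placeholders (the actual figures will be fixed by the forthcoming enforcement regulations, not by this sketch), and all field and function names are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical placeholder thresholds -- the real figures will be set
# by the enforcement regulations; these values are NOT from the Act.
USER_THRESHOLD = 1_000_000          # domestic users
REVENUE_THRESHOLD_KRW = 10**10      # annual revenue, KRW
COMPUTE_THRESHOLD_FLOPS = 1e26      # cumulative training compute

@dataclass
class AIOperator:
    has_korean_presence: bool       # domestic address or place of business
    domestic_users: int
    annual_revenue_krw: int
    training_compute_flops: float
    is_high_impact: bool            # e.g. healthcare, energy, public services

def obligations(op: AIOperator) -> list[str]:
    """Return the obligations this operator plausibly triggers under the Act."""
    duties = []
    # Foreign operators above user or revenue thresholds need a local representative.
    if (not op.has_korean_presence
            and (op.domestic_users >= USER_THRESHOLD
                 or op.annual_revenue_krw >= REVENUE_THRESHOLD_KRW)):
        duties.append("appoint domestic representative")
    # High-impact systems carry notification and safety obligations.
    if op.is_high_impact:
        duties.append("fulfil high-impact AI notification and safety obligations")
    # Large-compute operators must run a risk management system and report to MSIT.
    if op.training_compute_flops >= COMPUTE_THRESHOLD_FLOPS:
        duties.append("establish risk management system and report to MSIT")
    return duties
```

A quick check with a hypothetical foreign operator serving two million Korean users of a high-impact system would return all three duties, while a small domestic operator below every threshold would return none; actual applicability will of course turn on the final decree text and legal advice, not on a checklist like this.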
In sum, South Korea’s AI Basic Act positions the country as a normative leader in responsible AI while establishing a structured governance framework that organizations must navigate carefully to ensure lawful, ethical, and competitive AI development and deployment.