In 2025, regulatory frameworks for AI are coming into force in countries around the world, keeping pace with the rapid development of AI technology. In particular, the phased enforcement of the EU AI Act has an undeniable impact on companies operating globally. This article explains the latest AI regulation trends that developers and engineers should know, along with practical response points.
Full Enforcement of the EU AI Act
What Is the Risk-Based Approach?
The EU AI Act classifies AI systems into four levels based on “risk level” and imposes different regulatory requirements on each.
| Risk Level | Target Examples | Regulatory Content |
|---|---|---|
| Prohibited | Social scoring, real-time remote biometric identification in publicly accessible spaces (narrow law-enforcement exceptions apply) | Prohibited in principle |
| High Risk | Hiring AI, credit scoring, medical diagnosis support | Conformity assessment and registration required |
| Limited Risk | Chatbots, deepfake generation | Transparency obligations (AI disclosure) |
| Minimal Risk | Spam filters, game AI | No regulation |
Requirements for High-Risk AI Systems
When developing or operating AI systems classified as high-risk, the following requirements must be met.
```
Required elements for High-Risk AI Systems:
├── Establishment of risk management system
├── Data governance (training data quality management)
├── Preparation and storage of technical documentation
├── Implementation of logging functionality
├── Ensuring transparency and explainability
├── Human oversight functionality
├── Accuracy, robustness, and security
└── Obtaining EU conformity assessment
```
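The required elements above lend themselves to a simple compliance checklist. The following is an illustrative sketch; the identifiers are my own shorthand for the elements listed, not terms from the Act's text.

```python
# Illustrative checklist of the high-risk requirements listed above.
# The string identifiers are this sketch's own shorthand, not legal terms.
HIGH_RISK_REQUIREMENTS = [
    "risk_management_system",
    "data_governance",
    "technical_documentation",
    "logging",
    "transparency",
    "human_oversight",
    "accuracy_robustness_security",
    "eu_conformity_assessment",
]

def missing_requirements(completed: set[str]) -> list[str]:
    """Return the required elements not yet marked complete."""
    return [r for r in HIGH_RISK_REQUIREMENTS if r not in completed]

# Example: a project that has only logging and data governance in place
print(missing_requirements({"logging", "data_governance"}))
```

A checklist like this is no substitute for a conformity assessment, but it makes gaps visible early in a project's lifecycle.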
Penalties for Violations
Violations of the EU AI Act carry penalties even stricter than those under the GDPR.
- Use of prohibited AI: up to 35 million euros or 7% of global annual turnover, whichever is higher
- High-risk AI requirement violations: up to 15 million euros or 3% of global annual turnover, whichever is higher
- Supplying incorrect or misleading information to authorities: up to 7.5 million euros or 1% of global annual turnover, whichever is higher
Japan’s AI Regulation Trends
AI Business Operator Guidelines
In Japan, the Ministry of Economy, Trade and Industry and the Ministry of Internal Affairs and Communications established the “AI Business Operator Guidelines” in April 2024. While not legally binding, they function as de facto industry standards.
```javascript
// 10 Principles of the AI Business Operator Guidelines
const aiPrinciples = [
  "Human-centric",
  "Safety",
  "Fairness",
  "Privacy protection",
  "Security",
  "Transparency",
  "Accountability",
  "Education & literacy",
  "Fair competition",
  "Innovation",
];
```
Copyright Law and AI Training
The Cultural Affairs Council's March 2024 report clarified how copyrighted works may be used in AI training.
Important Point: Under Article 30-4 of Japan's Copyright Act, using copyrighted works for AI training purposes is permitted in principle, but an exception applies where the use would "unjustly harm the interests of the copyright holder." Training aimed at imitating a specific creator's style risks being judged copyright infringement.
US AI Policy
Executive Order 14110
The executive order signed by the Biden administration in October 2023 laid the foundation of US federal AI policy. Note that the order was rescinded in January 2025, although several initiatives it launched, notably the NIST-led safety standards work, continue to shape practice.
```
Key elements of US AI Executive Order:
├── Development of safety standards (NIST-led)
├── Large model reporting obligations
│   └── Reporting required for training over 10^26 FLOP
├── Recommendation of AI red teaming
├── Guidelines for AI use in federal government
└── Investment in AI talent development
```
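The 10^26 FLOP reporting threshold can be put in perspective with the common "6 × parameters × tokens" rule of thumb for dense-transformer training cost. That rule is an industry estimation convention, not part of the executive order itself:

```python
REPORTING_THRESHOLD_FLOP = 1e26  # threshold named in the executive order

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training cost via the common
    6 * N * D rule of thumb (an estimate, not a regulatory formula)."""
    return 6 * n_params * n_tokens

# A hypothetical 1-trillion-parameter model trained on 20 trillion tokens
flop = estimated_training_flop(1e12, 20e12)
print(flop)                            # 1.2e+26
print(flop > REPORTING_THRESHOLD_FLOP) # True
```

In other words, only frontier-scale training runs cross the line; typical fine-tuning or mid-size pretraining falls orders of magnitude below it.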
State-Level Regulations
Several states have passed or debated their own AI bills.
| State | Bill | Main Content |
|---|---|---|
| California | SB 1047 | Safety requirements for large AI models (passed the legislature; vetoed in September 2024) |
| Colorado | Colorado AI Act (SB 24-205) | Duties for developers and deployers of high-risk AI systems, including consumer disclosures |
| Illinois | Artificial Intelligence Video Interview Act | Notice and consent requirements for AI analysis of video interviews |
Practical Responses for Engineers
1. AI System Inventory
First, inventory the AI systems being developed or used in your company and classify their risks.
```python
# AI System Risk Assessment Example
class AISystemRiskAssessment:
    def __init__(self, system_name: str, domain: str,
                 requires_human_interaction: bool = False):
        self.system_name = system_name
        self.domain = domain
        self.requires_human_interaction = requires_human_interaction
        self.risk_factors = []

    def evaluate_risk_level(self) -> str:
        """Risk level evaluation based on the EU AI Act."""
        high_risk_domains = [
            "employment",       # Hiring/HR
            "credit_scoring",   # Credit evaluation
            "education",        # Educational evaluation
            "law_enforcement",  # Law enforcement
            "border_control",   # Immigration control
            "healthcare",       # Medical diagnosis
        ]
        if self.domain in high_risk_domains:
            return "HIGH_RISK"
        elif self.requires_human_interaction:
            return "LIMITED_RISK"
        else:
            return "MINIMAL_RISK"

# Example usage (illustrative):
assessment = AISystemRiskAssessment("cv-screener", domain="employment")
print(assessment.evaluate_risk_level())  # HIGH_RISK
```
2. Technical Documentation Preparation
High-risk AI systems require the following technical documentation.
- General description of the system and intended purpose
- Design specifications and architecture
- Training data details (sources, preprocessing, bias measures)
- Test results and performance metrics
- Records of risk management measures
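The documentation items above can be kept together in a structured record so they stay versionable alongside the model. A minimal sketch; the class and field names are this article's own, not terms defined by the Act:

```python
from dataclasses import dataclass, field

# Illustrative container for the technical-documentation items listed above.
# Field names are assumptions for this sketch, not regulatory vocabulary.
@dataclass
class TechnicalDocumentation:
    system_description: str
    intended_purpose: str
    architecture: str
    training_data_sources: list[str] = field(default_factory=list)
    bias_mitigations: list[str] = field(default_factory=list)
    test_metrics: dict[str, float] = field(default_factory=dict)
    risk_measures: list[str] = field(default_factory=list)

doc = TechnicalDocumentation(
    system_description="CV screening model",
    intended_purpose="Rank applications for human review",
    architecture="Gradient-boosted trees over structured features",
    training_data_sources=["internal ATS exports (2019-2023)"],
    test_metrics={"auc": 0.87},
)
print(doc.intended_purpose)
```

Storing this record in version control next to the model artifacts makes it straightforward to show which documentation matched which deployed version.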
3. Implementing Logging Functionality
```typescript
// AI System Logging Example
interface AIAuditLog {
  timestamp: Date;
  systemId: string;
  inputDataHash: string;   // Hash of input data (avoid storing raw PII)
  outputDecision: string;
  confidenceScore: number;
  modelVersion: string;
  humanOverride?: boolean; // Whether a human overrode the decision
}

const logAIDecision = async (log: AIAuditLog): Promise<void> => {
  // The EU AI Act requires providers of high-risk systems to keep logs
  // for at least six months; other laws may require longer retention.
  await auditStorage.save(log, { retentionMonths: 6 });
};
```
Future Outlook
From 2025 onwards, AI regulations are expected to become more concrete and strict.
- International Standardization: Spread of ISO/IEC 42001 (AI Management Systems)
- Mutual Recognition: Possibility of mutual recognition of AI regulations between the EU and Japan
- Generative AI-Specific Regulations: Consideration of additional regulations specific to generative AI
- Open Source AI: Clarification of regulatory scope for open source models
Summary
AI regulation is best viewed not as "something that restricts development freedom" but as "a framework for building trustworthy AI." Getting ahead of compliance can even become a competitive differentiator.
As engineers, we need to look beyond technical implementation: keep an eye on regulatory trends and help build appropriate governance structures.