
AI and Legislation
The European AI Regulation (AI Act) entered into force in August 2024, imposing new obligations on companies regarding the safe and ethical use of AI, depending on the risk level of their applications. By complying with these rules in time, businesses can avoid fines, build trust, and strengthen their competitive position.

Reading time: 4 minutes
AI and Legislation in 2025: What Does It Mean for Your Business?
The European AI Regulation (AI Act) entered into force in August 2024. This law has significant implications for how businesses handle artificial intelligence (AI). But what does it mean specifically for your company in 2025 and beyond?
What is the AI Act?
The AI Act is designed to ensure the safe and ethical use of AI. Depending on the risks posed by AI systems, specific requirements are imposed, such as:
- Unacceptable risk: AI systems that can be harmful (e.g., surveillance without consent) are prohibited.
- High risk: AI systems in sectors such as healthcare, transportation, and justice must comply with strict requirements, including human oversight and extensive documentation.
- Limited risk: Less critical applications, such as chatbots, face fewer restrictions but must still meet transparency obligations.
Want to learn more about the EU AI Act? Check out our blog ‘The EU AI Act’.
Key Deadlines for Businesses
February 2025:
- AI systems with unacceptable risks must be removed from the market. These are systems that, regardless of benefits, violate fundamental rights, safety, or societal values.
- Start with an internal AI audit to identify applications that pose risks. While the AI Act does not explicitly mandate audits, an internal audit or risk assessment is a practical tool to identify risks, document compliance, and avoid fines or reputational damage.
August 2025:
- Developers of generative AI models (such as GPT) must maintain technical documentation and ensure transparency about how the system works.
- Respect copyright when using AI. Developers must comply with EU copyright law, ensuring that neither the training data nor the output of generative AI models infringes existing copyrights.
August 2026:
- High-risk AI systems must fully comply with the AI Act, including oversight, safety tests, and ethical safeguards.
August 2027:
- By this date, the final transition periods end and all AI systems must fully comply with the AI Act. Fines can reach up to 35 million euros or 7% of global annual turnover, depending on the severity of the violation.
What Should Your Business Do?
- Analyze Your AI Usage:
Identify which AI systems you use and assess their risk level according to the AI Act; a simple inventory sketch follows this list. Are there applications that may need to be prohibited or adjusted?
- Compliance and Documentation:
Ensure transparency in your systems. Document how your AI models make decisions and implement mechanisms for human oversight.
- AI Training for Employees:
Improve AI knowledge within your team. This helps employees use AI correctly and reduces legal risks.
- Legal Advice:
Consult experts to ensure full compliance with the new regulations. This prevents fines and protects your reputation.
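To make the first two steps concrete, here is a minimal sketch of what an internal AI inventory and risk check could look like, written in Python. The risk labels, fields, and example systems are illustrative assumptions for this sketch, not definitions taken from the AI Act itself; a real audit would also involve legal review.

```python
from dataclasses import dataclass
from enum import Enum

# Risk tiers as summarized earlier in this article (illustrative labels).
class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict requirements: oversight, documentation
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # no specific obligations

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_level: RiskLevel
    human_oversight: bool          # is a human in the loop for critical decisions?
    documentation_available: bool  # is technical documentation maintained?

def audit(systems: list[AISystem]) -> None:
    """Print a simple gap report: which systems need attention before the deadlines."""
    for s in systems:
        if s.risk_level is RiskLevel.UNACCEPTABLE:
            print(f"[REMOVE] {s.name}: prohibited practice, must be withdrawn (Feb 2025).")
        elif s.risk_level is RiskLevel.HIGH:
            gaps = []
            if not s.human_oversight:
                gaps.append("human oversight")
            if not s.documentation_available:
                gaps.append("technical documentation")
            status = "compliant" if not gaps else "missing: " + ", ".join(gaps)
            print(f"[HIGH RISK] {s.name}: {status} (deadline Aug 2026).")
        else:
            print(f"[{s.risk_level.value.upper()}] {s.name}: check transparency obligations.")

# Hypothetical example entries
inventory = [
    AISystem("Customer chatbot", "first-line support", RiskLevel.LIMITED, False, True),
    AISystem("CV screening model", "recruitment", RiskLevel.HIGH, False, False),
]
audit(inventory)
```

The printed gap report mirrors the deadlines above: prohibited systems have to go first, while high-risk systems need human oversight and documentation in place before August 2026.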
Why Is This Important?
The AI Act is not just a legal obligation; it is an opportunity to build trust with customers and partners. Companies that handle AI responsibly will be better prepared for future developments and gain an edge in a competitive market.
By taking action now, you minimize risks, strengthen your position, and capitalize on the opportunities AI offers. Take the first step toward compliance today and secure your company’s future in the AI era.
Want to know how your business can prepare for the AI Act? Contact us and discover how we can help you.