
What the EU AI Act Really Means

The EU AI Act was passed to redefine the landscape for AI development and AI governance in Europe. But what does it really mean for enterprises, AI innovators, and industry leaders? In this post, we’ll look at AI regulations, the EU AI Act in particular, examine their impact on AI development, and discuss how enterprises can get ahead of compliance.

The EU AI Act

The EU AI Act is poised to impact the AI industry in a manner similar to how GDPR shaped data privacy. Its broad reach means that even enterprises outside the EU must comply if their products or services engage with EU customers in any significant way. While some may see the Act as a potential barrier to innovation, it offers clear guidelines and can become a competitive advantage for businesses that proactively align with these regulations. Gaining certification and showcasing responsible AI practices can serve as a strong signal to potential customers, especially as enterprises increasingly demand compliance from their vendors.

Key Features of the EU AI Act:

  1. Regulation of AI Products, Not Technology: The EU AI Act regulates AI products, not the underlying technology. It doesn't focus on technical specifications (e.g., neural network layers or F1 scores) but governs products that incorporate AI, regardless of the specific AI technology used.
  2. Risk-Based Approach: The obligations scale with the risk of the AI application. Higher-risk applications (e.g., credit scoring systems) face stricter requirements than lower-risk ones (e.g., spam detection); a minimal illustrative sketch of this tiering follows the figure below. This risk-based approach is now widely adopted across the industry.
  3. Modeled After Medical Device Regulation: The structure of the Act draws from the EU's medical device regulation, meaning enterprises must meet similar compliance steps to bring AI products to market.
The EU AI Act categorizes AI systems into four distinct risk levels
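
To make the risk-based approach concrete, here is a minimal sketch in Python of how an organization might tag its systems against the Act's four tiers. The tier names reflect the Act's structure (unacceptable, high, limited, minimal risk); the example systems and their assignments are illustrative assumptions, not an official mapping.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, from most to least restrictive."""
    UNACCEPTABLE = "prohibited outright (e.g., social scoring by public authorities)"
    HIGH = "conformity assessment and certification required before market entry"
    LIMITED = "transparency obligations (e.g., disclosing that users are interacting with AI)"
    MINIMAL = "no additional obligations beyond existing law"

# Illustrative mapping only: real classification depends on the Act's annexes
# and on how the system is actually used, not on the product's name.
example_inventory = {
    "credit_scoring_model": RiskTier.HIGH,
    "customer_support_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

for system, tier in example_inventory.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```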

Enterprises that want to operate in the EU or serve EU customers must be prepared for these obligations, much like GDPR for data privacy. Within the next 10 months, enterprises must comply with the new disclosure obligations, and within 22 months, all high-risk AI systems must go through certification to remain on the market in the EU.

The EU AI Act as a Blueprint for Global AI Regulations

The EU AI Act is influencing AI governance worldwide, with many countries using it as a model for crafting their own laws. While the Act offers valuable best practices, caution is needed because its language and concepts are tailored to the European context. Countries should adapt the core principles, such as risk-based regulation and product-focused AI compliance, to fit their unique legal and cultural environments. Meanwhile, in the US, the lack of an overarching federal AI law results in a fragmented approach, with state-level regulations like California’s 17 AI bills passed in 2024 leading the way. Although the US has published a Blueprint for an AI Bill of Rights, the absence of a unified framework makes it more challenging for companies to navigate the regulatory landscape effectively. As countries adopt the EU Act's principles as a benchmark, regulations must still be customized to ensure clarity, practicality, and support for innovation.

Vendor vs. Developer Responsibilities in AI Governance

Some of the specific challenges of the EU AI Act’s AI governance framework center on the division of responsibility between vendors (providers of pre-trained models) and developers (those building applications on top of these models). When a company uses a pre-trained model from a third-party vendor (e.g., OpenAI, Microsoft, or Google), it still assumes responsibility for how that model is integrated and applied in its products or services. The Act requires organizations to thoroughly assess the risks associated with the pre-trained models they use and to ensure transparency about how those models were trained and tested.

Even if a company did not develop the original model, it cannot rely solely on the vendor’s assurances for compliance. Instead, it must actively implement governance measures, such as:

  • Conducting due diligence on the model's training data and safeguards.
  • Documenting the model's use in a comprehensive model card, which includes details from both the original training and any fine-tuning or adaptation (a minimal sketch follows this list).
  • Monitoring the AI system's behavior in production to ensure it aligns with legal and ethical standards.
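
As a rough illustration of the model-card point above, here is a minimal sketch in Python. The `ModelCard` dataclass and its fields are assumptions made for illustration; the EU AI Act does not prescribe this exact schema, and real documentation would follow whatever template your compliance team adopts.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative model card combining vendor disclosures with in-house details.

    Field names are assumptions, not a schema mandated by the EU AI Act.
    """
    model_name: str
    vendor: str                # provider of the pre-trained model
    base_training_data: str    # vendor's disclosure of training data sources
    vendor_evaluations: list = field(default_factory=list)  # safety/bias tests reported by the vendor
    fine_tuning_data: str = ""     # data used for in-house adaptation
    intended_use: str = ""         # the product context the model is deployed in
    known_limitations: list = field(default_factory=list)
    monitoring_plan: str = ""      # how behavior is tracked in production

# Hypothetical example: names and values are placeholders
card = ModelCard(
    model_name="support-assistant-v2",
    vendor="ExampleVendor",
    base_training_data="Web corpus as disclosed in the vendor's documentation",
    vendor_evaluations=["toxicity benchmark", "bias audit summary"],
    fine_tuning_data="Internal support tickets, anonymized",
    intended_use="Drafting responses for human review in customer support",
    known_limitations=["May produce inaccurate answers outside the support domain"],
    monitoring_plan="Weekly review of sampled outputs plus automated policy checks",
)
print(f"{card.model_name} documented for: {card.intended_use}")
```

The value of a card like this is that it records the vendor's disclosures and the in-house adaptation side by side, so responsibility can be traced across the AI supply chain.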

Ultimately, this means that the responsibility for regulatory compliance extends beyond the developers to all parties in the AI supply chain, creating a shared liability framework. Enterprises will need to collaborate closely with vendors to obtain necessary disclosures and ensure compliance, as failing to meet these standards could lead to significant legal and financial consequences.

Compliance Challenges for Startups and Large Enterprises

Startups may find the cost and complexity of compliance daunting, but implementing responsible AI practices can actually provide a competitive advantage. Achieving certifications like ISO 42001 and demonstrating compliance can signal quality and trustworthiness to potential customers. For startups, getting ahead of the curve can set them apart from competitors who may not yet be addressing these requirements.

For larger enterprises, certifications such as ISO 42001 are becoming essential for due diligence when purchasing AI products. Responsible AI governance is quickly becoming the baseline expectation across all industries, with enterprises increasingly demanding compliance assurances from their vendors.

Steps for Implementing AI Governance

Enterprises must establish a comprehensive AI governance framework to ensure compliance with evolving AI regulations:

  • Create an AI Inventory: Enterprises should take stock of all the AI applications they are using. The Act’s definition of AI is broad, encompassing even simple algorithms and rules-based systems. In the US, the term "automated decision systems" is often used, which can cover more types of software than are traditionally considered AI.
  • Risk Management and Quality Management Systems: Enterprises need to establish AI-specific risk management and quality management systems. These should continuously monitor risks posed by AI models, such as discrimination or errors, and trigger prompt action if any risk exceeds acceptable thresholds (a minimal sketch of such a check follows this list).
  • Monitoring and Technical Safeguards: Enterprises should implement processes to monitor their AI models in production, ensuring that potential risks are managed and mitigated. Kevin noted in the fireside chat that while many organizations already have monitoring infrastructure in place, they need to incorporate regulatory checks to meet new compliance requirements.
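
As a rough sketch of what such a threshold check might look like in practice, the snippet below compares production metrics against acceptable-risk thresholds and flags anything that exceeds them. The metric names and threshold values are illustrative assumptions, not figures taken from the Act or any standard.

```python
# Illustrative only: metric names and thresholds are assumptions,
# not values prescribed by the EU AI Act or any standard.
RISK_THRESHOLDS = {
    "demographic_parity_gap": 0.10,  # max tolerated outcome gap between groups
    "error_rate": 0.05,              # max tolerated production error rate
}

def check_production_metrics(metrics: dict) -> list[str]:
    """Return an alert for every metric above its acceptable threshold."""
    alerts = []
    for name, threshold in RISK_THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > threshold:
            alerts.append(f"{name}={value:.3f} exceeds threshold {threshold:.3f}")
    return alerts

# Example run with hypothetical numbers from a monitoring job
observed = {"demographic_parity_gap": 0.14, "error_rate": 0.03}
for alert in check_production_metrics(observed):
    print("ALERT:", alert)  # in practice, route to the risk-management process
```

A check like this is only the technical half; the governance half is defining who owns each threshold and what happens when an alert fires.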

As AI regulations continue to evolve, enterprises must start preparing now to ensure smooth compliance and avoid potential penalties. Early action to implement robust AI governance frameworks and adhere to evolving standards will not only help enterprises meet regulatory requirements but also provide a competitive edge in the marketplace. Enterprises that view AI governance as an enabler of AI trust and transparency will be well-positioned to thrive in the increasingly regulated global AI landscape.

Watch the full fireside chat below: