
Everything You Need To Know (So Far) About The EU AI Act

In the last 12 months, artificial intelligence (AI) has evolved from a futuristic concept into a transformative technology integrated across virtually every industry. From healthcare and finance to retail and manufacturing, AI is already reshaping how businesses operate, make decisions, and serve customers. However, with this rapid growth come significant challenges around transparency, ethical use, and managing risks, particularly in areas like privacy, information security, and data protection.

Enter the EU AI Act, the world’s first comprehensive legislative framework specifically designed to regulate AI technologies.

Understanding and adhering to this regulation is now more critical than ever for businesses operating within or interacting with the EU market. Failure to comply could result in severe penalties and damage brand reputation and consumer trust. This blog will explain everything you need to know about the EU AI Act and what businesses should be doing to prepare.

What is the EU AI Act?

The EU AI Act is legislation introduced by the European Union to create a comprehensive framework for regulating artificial intelligence. It aims to set global standards for how AI systems are developed, deployed, and monitored, focusing on managing AI technology’s risks to individuals and society.

Objectives of the EU AI Act:

  • Risk Management: One of the core objectives of the EU AI Act is to create a regulatory framework that addresses the risks associated with AI systems, which includes safeguarding privacy, preventing discrimination, and avoiding risks to physical or mental well-being.
  • Balancing Innovation and Safety: The Act seeks to strike a balance between encouraging the continued innovation of AI technologies and protecting public safety, ensuring that AI advancements do not come at the cost of transparency, fairness, or ethical standards.
  • Transparency and Accountability: Another key goal is to promote transparency in AI use, requiring companies to disclose essential information about their AI systems when they impact high-risk areas like healthcare, law enforcement, or employment.

 

By creating a clear and enforceable regulatory structure, the EU AI Act aims to lead the global conversation on AI governance and provide a model for other nations to follow.

Key Components of the EU AI Act

Risk-based Approach

The EU AI Act employs a risk-based approach that classifies AI systems into four categories based on their potential harm:

  • Unacceptable Risk: AI applications that severely threaten people’s rights and safety, such as AI-based social scoring by governments or systems that exploit vulnerable populations, are outright banned.
  • High Risk: AI systems used in critical areas like biometric identification, healthcare, and essential infrastructure are subject to strict oversight. Compliance requirements for high-risk systems include data governance, record-keeping, and detailed risk assessments.
  • Limited Risk: These systems face fewer obligations but must adhere to basic transparency requirements, such as notifying users when interacting with an AI system.
  • Minimal or No Risk: AI systems in this category, such as spam filters or AI-enabled video games, are largely exempt from the regulatory framework.
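As a purely illustrative way to think about this triage, the four tiers behave like a decision ladder checked from most to least restrictive. The use-case keywords in the sketch below are simplified assumptions for demonstration only, not the Act's legal definitions (Article 5 and Annex III of the Act are the authoritative lists):

```python
# Illustrative only: a simplified triage helper mapping an AI use case
# to one of the Act's four risk tiers. The keyword sets are assumptions
# for demonstration, NOT legal definitions.

PROHIBITED_USES = {"social_scoring", "exploitative_manipulation"}
HIGH_RISK_USES = {"biometric_identification", "healthcare",
                  "critical_infrastructure", "employment",
                  "education", "law_enforcement"}
LIMITED_RISK_USES = {"chatbot", "content_generation"}

def classify_risk_tier(use_case: str) -> str:
    """Return the (simplified) EU AI Act risk tier for a given use case,
    checking the most restrictive categories first."""
    if use_case in PROHIBITED_USES:
        return "unacceptable"
    if use_case in HIGH_RISK_USES:
        return "high"
    if use_case in LIMITED_RISK_USES:
        return "limited"
    return "minimal"
```

In practice this triage requires legal analysis of the system's intended purpose and deployment context, but the ordering matters: a system is assessed against the prohibitions first, then against the high-risk list, before lighter tiers apply.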

 

How to Identify If Your AI Solutions Fall Under “High-Risk” or “Limited-Risk” Categories

One of the first steps in navigating the EU AI Act is determining where your AI solutions fall within this risk-based framework. Here’s a quick top-level guide:

High-Risk AI Systems

AI systems that fall under the high-risk category are subject to stringent compliance obligations due to their potential to cause significant harm if they malfunction or are misused. High-risk systems include:

  1. Biometric identification systems (such as facial recognition) used in public spaces.
  2. AI tools used in critical sectors like healthcare, education, and employment, where decisions based on AI may significantly affect people’s lives.
  3. Critical infrastructure management, including AI systems that control energy grids, water supplies, and transportation systems.

 

For these high-risk systems, companies must conduct thorough risk assessments, implement human oversight mechanisms, and ensure the AI systems are safe, reliable, and transparent.

Limited-Risk AI Systems

These systems carry fewer potential risks and thus face lighter obligations. Examples include:

  • AI systems that interact with users but do not make decisions affecting rights or safety (e.g., chatbots or virtual assistants).
  • AI used for automated decision-making in customer service or recommendation engines.

Transparency Obligations

The Act introduces several transparency obligations, especially for high- and limited-risk AI systems:

  • Businesses must provide clear documentation on how their AI systems function and how they were trained.
  • Users interacting with AI systems must be informed they are engaging with AI, particularly when those systems make decisions that impact people’s rights or well-being.
  • Specific disclosures are required for AI systems involved in data processing to ensure users are aware of the potential privacy implications.

 

These transparency requirements aim to build public trust in AI technologies by making the systems easier to understand and scrutinise.

Prohibited AI Practices

Specific AI applications are banned under the EU AI Act due to their potential to cause harm to society. These include:

  • AI-based social scoring systems, which profile individuals based on their behaviour, socioeconomic status, or other personal data, particularly when used by governments.
  • Real-time biometric identification systems used in public spaces for mass surveillance, with narrow exceptions for law enforcement under specific, high-necessity conditions.
  • AI systems that manipulate human behaviour in ways that exploit vulnerabilities, such as those that target children or people with disabilities.

 

These prohibitions reflect the EU’s commitment to preventing the misuse of AI in ways that could undermine human rights, dignity, and privacy.

How Does the EU AI Act Affect My Business?

The EU AI Act has far-reaching implications for businesses that develop or deploy AI systems within the European Union. Companies must understand and meet the regulation’s compliance requirements, whether directly operating in the EU or offering AI products and services to EU citizens.

General Compliance Requirements for All AI Providers

Regardless of the risk category of their systems, all AI providers must adhere to specific baseline requirements to ensure safety, transparency, and accountability. These general obligations include:

Transparency Obligations:

Informing Users: AI providers must ensure that individuals are notified when interacting with an AI system. For example, if users are engaging with a chatbot or another system that could potentially manipulate their behaviour, they must be clearly informed of its AI nature.
Labelling AI-Generated Content: Any content (e.g., text, audio, or images) generated by AI must be labelled so it is easily identifiable as AI-produced.

Risk Management Systems:

Risk Identification: All AI providers must implement risk management procedures to assess and mitigate the risks associated with deploying their AI systems. While these obligations are less stringent than those for high-risk systems, every provider must have some form of risk mitigation in place.

Data Governance:

Data Quality & Integrity: Providers must take steps to ensure the quality and integrity of the data their AI systems rely on. Although high-risk systems have more specific requirements (discussed below), all AI systems must maintain a certain level of accuracy and bias management.

Continuous Monitoring and Testing:

Providers must regularly monitor their AI systems to ensure they remain reliable, accurate, and secure throughout their lifecycle. This is especially important for AI systems that evolve through machine learning.
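In practice, continuous monitoring often reduces to tracking quality metrics over time and flagging drift for human review. A minimal sketch of that idea, where the metric, window, and threshold are illustrative assumptions rather than anything prescribed by the Act:

```python
from statistics import mean

def needs_review(recent_accuracy: list[float],
                 baseline_accuracy: float,
                 max_drop: float = 0.05) -> bool:
    """Flag an AI system for human review when its rolling accuracy
    drops more than `max_drop` below the accuracy recorded at the
    last risk assessment. All thresholds here are hypothetical."""
    return baseline_accuracy - mean(recent_accuracy) > max_drop

# e.g. baseline accuracy 0.92 at assessment time; recent daily scores
print(needs_review([0.88, 0.85, 0.86], 0.92))  # True: drift exceeds 5 points
```

Real monitoring pipelines would track several metrics (accuracy, bias indicators, security events) and log every flagged incident, since those records feed the documentation obligations discussed below.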

Additional Compliance Requirements for High-Risk AI Providers

Providers of high-risk AI systems, such as those involved in biometric identification, critical infrastructure, healthcare, law enforcement, and other sensitive sectors listed in Annex III of the Act, are subject to much more stringent regulations, including:

Fundamental Rights Impact Assessments (FRIA):

Assessing Impact on Fundamental Rights: Before deployment, organisations must assess a high-risk AI system’s potential impact on fundamental rights (e.g., privacy and non-discrimination). If a Data Protection Impact Assessment (DPIA) is required, it should be conducted in conjunction with the FRIA.

Conformity Assessments (CA):

Pre-Market Compliance Checks: High-risk AI systems must undergo conformity assessments before being placed on the market. These assessments verify the system meets the EU AI Act’s safety and transparency requirements. If the AI system is significantly modified, the CA must be updated.
Third-Party Audits: Certain high-risk AI systems, such as those used in biometric identification, may require external audits and certifications from independent bodies to ensure compliance.

Human Oversight:

Ensuring Human Control: High-risk AI systems must have mechanisms for human oversight, allowing operators to intervene or override the AI’s decisions if necessary. This safeguard ensures that AI decisions impacting individuals’ rights or safety can be reviewed and corrected by humans.

Data Quality and Governance:

Higher Standards for Data: High-risk AI systems must meet stricter data governance standards, ensuring the accuracy, reliability, and fairness of the data used. This includes minimising potential biases and ensuring the integrity of training datasets.

Documentation and Traceability:

Comprehensive Record-Keeping: High-risk AI providers must keep detailed records of how the AI system was developed, tested, and trained. This documentation must be transparent and accessible to regulators for audits, ensuring the traceability of the AI’s decision-making processes.

Public Database Registration (for Public Authorities):

To promote transparency, public authorities deploying high-risk AI systems must register them in a public EU database, with exceptions for certain sensitive uses such as law enforcement or migration.

These additional layers of compliance reflect the increased potential for harm in sensitive sectors and are critical for ensuring that AI systems operate safely, ethically, and accountably.

Potential Penalties for Non-Compliance

Non-compliance with the EU AI Act could lead to substantial penalties, similar to the fines imposed under the General Data Protection Regulation (GDPR). Penalties for violating the EU AI Act can reach up to:

  • €35 million or 7% of a company’s global annual turnover, whichever is higher, for the most serious breaches (such as using AI for prohibited practices).
  • Up to €15 million or 3% of global annual turnover for non-compliance with most other obligations.
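The “whichever is higher” rule means the effective ceiling scales with company size: for large firms the turnover percentage, not the fixed amount, sets the cap. A minimal sketch of the arithmetic, using hypothetical example values rather than figures from the Act:

```python
def max_fine(global_annual_turnover_eur: float,
             fixed_cap_eur: float,
             turnover_pct: float) -> float:
    """Penalty ceiling in the EU AI Act's style: the higher of a fixed
    cap or a percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_annual_turnover_eur)

# Hypothetical illustration: for a company with EUR 2bn turnover, a tier
# defined as "EUR 10m or 2% of turnover" caps at EUR 40m, because 2% of
# EUR 2bn exceeds the EUR 10m floor.
print(max_fine(2_000_000_000, 10_000_000, 0.02))  # 40000000.0
```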

These penalties exceed even the maximum fines under the GDPR, highlighting the EU’s commitment to enforcing its AI regulation with strict accountability. Businesses must ensure they are compliant to avoid the financial and reputational damage that could result from non-compliance.

Balancing Regulation and Growth: Will the Act Stifle or Stimulate AI Development?

One concern surrounding the EU AI Act is whether the regulation will stifle innovation by imposing too many restrictions. While the requirements are rigorous, the Act aims to strike a balance between regulation and growth:

  • The compliance demands for high-risk AI systems are indeed strict, but this is balanced by offering businesses a clear path to deploying safe, trustworthy AI.
  • The regulatory burden is lighter for low-risk and minimal-risk AI systems, enabling smaller businesses and startups to innovate without excessive constraints.
  • The Act encourages businesses to invest in AI governance early in development, which may help avoid costly regulatory issues later on, ultimately fostering sustainable growth.

 

Additionally, the EU is investing in AI research and development through initiatives like Horizon Europe, which provides funding for ethical AI projects. This support is intended to stimulate growth while ensuring that new AI technologies meet the highest standards of safety and accountability.

What Businesses Need to Do Now to Prepare

To ensure compliance with the EU AI Act, businesses should take immediate steps to prepare:

Legal and Ethical Review: Conduct a thorough legal review of AI systems to ensure they align with the Act’s ethical standards and legal obligations. This might involve setting up dedicated compliance teams or working with external experts.
Technical Adjustments: Implement technical safeguards, such as human oversight mechanisms, transparency features, and data protection protocols, to meet the Act’s requirements.
Training and Awareness: Educate teams across the organisation about the ethical implications of AI and ensure they are familiar with the compliance requirements. Awareness campaigns and training programs can be valuable in embedding compliance into corporate culture.
Regular Audits and Risk Management: Businesses should adopt a proactive approach by conducting regular audits of their AI systems, using risk management tools and frameworks like an Information Security Management System (ISMS) structured around ISO 27001 for information security and ISO 42001 for AI to ensure ongoing compliance.

Leveraging ISO 27001 and ISO 42001 to Streamline EU AI Act Compliance

By integrating their processes with ISO 27001 and ISO 42001, businesses can meet the current requirements of the EU AI Act and future-proof themselves against emerging AI regulations that are likely to be introduced in other jurisdictions.

These standards provide a comprehensive framework that addresses general information security and AI-specific risks, offering an efficient path to compliance for multiple regulatory environments.

Security and Data Privacy: ISO 27001 ensures robust security and data protection practices, while ISO 42001 addresses the ethical and operational challenges specific to AI. Together, they help businesses meet the EU AI Act’s stringent requirements around data governance, privacy, and AI transparency.
Risk Management: By implementing both ISO 27001 and ISO 42001, businesses can streamline their risk management efforts, ensuring they can effectively manage both information security risks and the distinct risks AI systems pose. This alignment makes it easier to integrate AI-specific controls and maintain compliance with global AI regulations.
Audit and Compliance: Following both standards simplifies the audit process required under the EU AI Act and other emerging regulations. ISO 27001 offers well-established guidelines for information security audits, while ISO 42001 adds a layer of AI-focused auditing criteria. This dual compliance approach reduces duplication of efforts, lowers costs, and efficiently positions businesses to meet regulatory demands.

Unlocking Efficiencies with ISO 27001 and ISO 42001

Adopting both ISO 27001 and ISO 42001 not only ensures compliance with the EU AI Act but also prepares businesses for forthcoming AI regulations in other regions.

Many countries are developing AI-specific laws, and companies that have already aligned with these international standards will be better positioned to meet these future requirements as the bulk of the necessary infrastructure, risk management, and auditing procedures will already be in place. By future-proofing their AI governance through these standards, businesses can stay ahead of regulatory changes, reduce compliance complexity, and confidently focus on innovation.

Key Deadlines and Milestones for the EU AI Act’s Implementation

The EU AI Act entered into force on 2 August 2024. However, there are still a few critical deadlines and milestones for its implementation:

Feb 2025: Ban on AI systems with unacceptable risk takes effect
May 2025: By 2 May 2025, codes of practice for General-Purpose AI must be ready
Aug 2025: From 2 August 2025, governance rules and obligations for General-Purpose AI (GPAI) become applicable
Aug 2026: The bulk of the EU AI Act’s obligations will start to apply, including essential requirements for high-risk AI systems (such as AI in biometrics, critical infrastructure, employment, and law enforcement) placed on the market or modified after this date
Aug 2027: Additional obligations will apply for high-risk AI systems that are also regulated as safety components in other EU product safety legislation (e.g., medical devices, aviation systems). This gives companies handling these particular AI systems more time to comply.

Preparing for the Future of AI Governance

The EU AI Act marks a pivotal moment in the regulation of artificial intelligence, with far-reaching implications for businesses across industries. Understanding this legislation and preparing for its compliance requirements will help companies avoid penalties and build trust with consumers and stakeholders by ensuring that AI systems are ethical, transparent, and safe.

Final Tips for Businesses to Ensure AI Practices are Ethical, Compliant, and Sustainable:

Adopt a Proactive Approach: Waiting until the EU AI Act is fully implemented could lead to rushed, reactive efforts. Begin aligning your AI systems with the Act’s requirements now, particularly by adopting ISO 27001 and ISO 42001 to establish a strong foundation for compliance.
Invest in Compliance Infrastructure: Set up the necessary processes, such as regular risk assessments, transparency tools, and human oversight mechanisms. By incorporating ISO 27001 for information security and ISO 42001 for AI-specific governance, you ensure smooth compliance while also preparing for future regulations.
Focus on Ethical AI Development: Beyond meeting legal requirements, consider the ethical implications of your AI solutions. Implementing responsible AI practices, supported by ISO 42001, will help with compliance and enhance your reputation as a leader in ethical AI innovation.

By taking a proactive stance on AI compliance and integrating both ISO 27001 and ISO 42001, businesses can meet regulatory requirements, simplify future compliance efforts, and position themselves for long-term success.

Explore ISMS.online's platform with a self-guided tour - Start Now