EU AI Act: Key Implications and What Companies Must Know

The European Union Artificial Intelligence Act (EU AI Act) is the first comprehensive legal framework to regulate the design, development, implementation, and use of AI systems within the European Union. The primary objectives of this legislation are to:

  1. Ensure the safe and ethical use of AI
  2. Protect fundamental rights
  3. Foster innovation by setting clear rules, particularly for high-risk AI applications

The AI Act brings structure to the legal landscape for companies that rely, directly or indirectly, on AI-driven solutions. It is the most comprehensive approach to AI regulation in the world to date, and it will impact businesses and developers far beyond the European Union’s borders.

In this article, we take a deep dive into the EU AI Act: its guidelines, what is expected of companies, and the broader implications the Act will have on the business ecosystem.

About us: Viso Suite provides an all-in-one platform for companies to perform computer vision tasks in a business setting. From people tracking to inventory management, Viso Suite helps solve challenges across industries. To learn more about Viso Suite’s enterprise capabilities, book a demo with our team of experts.

Viso Suite is the only end-to-end computer vision platform

What is the EU AI Act? A High-Level Overview

The European Commission proposed the regulation in April 2021 to create a uniform legislative framework for AI applications across its member states. After more than three years of negotiation, the law was published on 12 July 2024 and entered into force on 1 August 2024.

The following is a four-point summary of the Act:

Risk-based Classification of AI Systems

The risk-based approach classifies AI systems into one of four risk categories:

Categories of AI Systems as Classified by EU AI Act

Unacceptable Risk:

AI systems in this category pose a grave threat to safety and fundamental rights and are banned outright. This encompasses any system that applies social scoring or manipulative AI practices.

High-Risk AI Systems:

This category covers AI systems with a direct impact on safety or on fundamental rights. Examples include systems in healthcare, law enforcement, transportation, and other critical sectors. These systems are subject to the most stringent regulatory requirements, including conformity assessments, mandatory human oversight, and robust risk management systems.

Limited Risk:

Limited-risk systems face lighter transparency obligations: developers and deployers must make sure end users know they are interacting with AI, for instance with chatbots and deepfakes.

Minimal Risk AI Systems:

Most of these systems, such as AI in video games or spam filters, are presently unregulated. However, as generative AI matures, changes to the regulatory regime for such systems are not precluded.
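
To make the four-tier structure concrete, here is a minimal sketch of how a compliance team might tag internal AI use cases by risk category. The tiers mirror the Act; the example register and the `classify_use_case` helper are hypothetical illustrations, not an official mapping.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # strict obligations (e.g., healthcare, law enforcement)
    LIMITED = "limited"            # transparency duties (e.g., chatbots, deepfakes)
    MINIMAL = "minimal"            # largely unregulated (e.g., spam filters)

# Hypothetical internal register; a real classification must follow the
# Act's annexes and be confirmed by legal review.
USE_CASE_REGISTER = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis_support": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def classify_use_case(name: str) -> RiskTier:
    """Look up a use case; unknown systems default to HIGH pending review."""
    return USE_CASE_REGISTER.get(name, RiskTier.HIGH)

print(classify_use_case("customer_service_chatbot"))  # RiskTier.LIMITED
```

Defaulting unknown use cases to the high-risk tier is a conservative choice: it forces a review before any unclassified system ships.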

Obligations on Providers of High-Risk AI:

Most of the compliance burden falls on providers (developers). These obligations apply to any provider, whether based inside or outside the EU, that places high-risk AI systems on the market or puts them into service within the European Union.

Conformity also extends to providers in third countries whose high-risk AI systems produce output that is used within the Union.

Healthcare Machinery— A High-Risk AI System
User’s Responsibilities (Deployers):

Users are any natural or legal persons who deploy an AI system in a professional context. Deployers have less stringent obligations than providers. They must, however, comply when deploying high-risk AI systems in the Union, or when the output of their system is used within the Union.

These obligations apply to users based in the EU as well as in third countries.

General-Purpose AI (GPAI):

Providers of general-purpose AI models must supply technical documentation and instructions for use, and must comply with copyright law. Models that present a systemic risk face additional obligations.

Providers of free and open-license GPAI models need only comply with copyright law and publish a summary of their training data, unless their model presents a systemic risk.

Whether openly licensed or not, GPAI models that present systemic risks must undergo the same model evaluations, adversarial testing, incident tracking and monitoring, and cybersecurity practices.

First Artificial Intelligence Act — European Union

What Can Be Expected From Companies?

Organizations using or developing AI technologies should expect significant changes in compliance, transparency, and operational oversight. They can prepare for the following:

High-Risk AI Control Requirements:

Companies deploying high-risk AI systems are responsible for strict documentation, testing, and reporting. They will be expected to maintain ongoing risk assessments, quality management systems, and human oversight. Regulators will, in turn, require accurate documentation of each system’s functionality, safety, and compliance. Non-compliance can attract fines that exceed even those imposed under the GDPR.

Transparency Requirements:

Companies will have to communicate clearly to users when they are interacting with an AI system, particularly in the case of limited-risk AI. This improves user autonomy and upholds the EU’s principles of transparency and fairness. The rule also covers deepfakes: companies must disclose whether content is AI-generated or AI-modified.
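
As a toy illustration of the disclosure duty, a deployer might attach provenance metadata to generated content before serving it. The field names below are hypothetical; the Act specifies the obligation to disclose, not a wire format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ContentDisclosure:
    """Hypothetical provenance record attached to served content."""
    ai_generated: bool    # content was produced by an AI system
    modified_by_ai: bool  # content was edited or altered by an AI system
    model_name: str       # illustrative field, not mandated by the Act

def serve_with_disclosure(content: str, disclosure: ContentDisclosure) -> str:
    """Bundle content with a machine-readable AI disclosure label."""
    payload = {"content": content, "disclosure": asdict(disclosure)}
    return json.dumps(payload)

print(serve_with_disclosure(
    "A synthetic product photo.",
    ContentDisclosure(ai_generated=True, modified_by_ai=False, model_name="image-gen-v1"),
))
```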

Data Governance and AI Training Data:

AI systems must be trained, validated, and tested on diverse, representative, and unbiased datasets. Businesses will therefore need to examine their data sources more carefully and move toward far more rigorous data governance so that AI models yield nondiscriminatory results.
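
A first step toward such governance is auditing how a training set is distributed across relevant groups. The sketch below is a naive, hypothetical check that uses equal shares as its baseline; real bias audits rely on domain-specific metrics and legal guidance.

```python
from collections import Counter

def audit_representation(samples, attribute, tolerance=0.2):
    """Flag groups whose share deviates from equal representation by more than `tolerance`."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    parity = 1 / len(counts)  # equal share per group, a deliberately naive baseline
    return {
        group: {"share": n / total, "flagged": abs(n / total - parity) > tolerance}
        for group, n in counts.items()
    }

# Toy dataset: an 80/20 regional split gets flagged under the parity baseline.
data = [{"region": "EU"}] * 80 + [{"region": "non-EU"}] * 20
print(audit_representation(data, "region"))
```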

Impact on Product Development and Innovation:

The Act subjects AI developers to a range of new testing and validation procedures that may slow the pace of development. Companies that incorporate compliance measures early in the lifecycle of their AI products will hold a key differentiator in the long run. Strict regulation may curtail the pace of AI innovation at first, but businesses able to adjust quickly to the new standards will be well positioned to expand confidently into the EU market.

EU AI Act (2024)

Guidelines to Know About

Companies must adhere to the following key guidelines to comply with the EU Artificial Intelligence Act:

Timeline for Enforcement

The EU AI Act sets out a phased enforcement schedule to give organizations time to adapt to the new requirements.

  • 1 August 2024: The Act officially enters into force.
  • 2 February 2025: AI systems in the “unacceptable risk” category are banned.
  • 2 May 2025: Codes of practice must be ready. These codes give AI developers guidance on best practices for complying with the Act and aligning their operations with EU principles.
  • 2 August 2025: Governance rules and obligations for general-purpose AI (GPAI) take effect. GPAI systems, including large language models and generative AI, face particular transparency and safety demands; models already on the market are given additional time to prepare.
  • 2 August 2026: The bulk of the Act’s obligations, including the rules for most high-risk AI systems, become applicable.
  • 2 August 2027: Requirements for the remaining high-risk AI systems, those embedded in regulated products, fully apply, giving companies more time to align with the most demanding parts of the regulation.
AI Transportation Management— A High-Risk AI System
Risk Management Systems

Providers of high-risk AI must establish a risk management system that provides for continuous monitoring of AI performance, periodic compliance assessments, and fallback plans in case an AI system operates incorrectly or malfunctions.
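
As a minimal, hypothetical sketch of such a fallback plan, the wrapper below tracks a rolling quality metric and switches to a safe alternative (for example, deferring to a human) when the metric degrades. The threshold and window values are illustrative, not prescribed by the Act.

```python
from collections import deque

class MonitoredModel:
    """Wraps a predictor; falls back once rolling accuracy drops below a threshold."""

    def __init__(self, predict, fallback, threshold=0.9, window=100):
        self.predict = predict    # primary model callable
        self.fallback = fallback  # safe alternative, e.g., route to a human
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # rolling record of correct/incorrect

    def record_outcome(self, correct: bool):
        """Feed back ground-truth results as they become available."""
        self.outcomes.append(correct)

    @property
    def degraded(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence to judge performance yet
        return sum(self.outcomes) / len(self.outcomes) < self.threshold

    def __call__(self, x):
        return self.fallback(x) if self.degraded else self.predict(x)
```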

Post-Market Surveillance

Companies will be required to maintain post-market monitoring programs for as long as an AI system is in use, to ensure ongoing compliance with the requirements outlined in their applications. This includes activities such as soliciting user feedback, analyzing operational data, and routine auditing.
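
One building block of such a program is a structured, append-only operational log that auditors can replay later. The event schema below is a hypothetical example of what a deployer might record; the Act mandates monitoring and incident reporting, not this exact format.

```python
import json
import time

def log_event(path: str, event: dict) -> None:
    """Append a timestamped operational event as one JSON line (easy to audit)."""
    record = {"timestamp": time.time(), **event}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Illustrative events a post-market monitoring program might capture.
log_event("ai_audit.log", {"type": "prediction", "model": "risk-scorer-v2", "confidence": 0.93})
log_event("ai_audit.log", {"type": "user_feedback", "model": "risk-scorer-v2", "complaint": True})
log_event("ai_audit.log", {"type": "incident", "model": "risk-scorer-v2", "severity": "high"})
```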

Human Oversight

The Act requires high-risk AI systems to provide for human oversight: humans must be able to intervene in, or override, AI decisions where necessary. In healthcare, for instance, an AI diagnosis or treatment recommendation has to be checked by a healthcare professional before it is applied.
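
A common pattern for this requirement is a gate that applies low-risk decisions automatically but withholds high-risk ones until a human approves them. This is a minimal, hypothetical sketch; the actual oversight mechanism for a given system is defined in its conformity documentation.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    recommendation: str
    risk_score: float  # 0.0 (benign) to 1.0 (critical)

def human_in_the_loop(decision: Decision, approve, risk_gate: float = 0.5) -> str:
    """Auto-apply low-risk decisions; route high-risk ones to a human reviewer."""
    if decision.risk_score < risk_gate:
        return decision.recommendation
    # `approve` stands in for a review queue or UI in a real deployment.
    if approve(decision):
        return decision.recommendation
    return "escalated: human reviewer rejected the AI recommendation"

# Example: a clinician callback that rejects this high-risk recommendation.
result = human_in_the_loop(
    Decision("administer treatment X", risk_score=0.8),
    approve=lambda d: False,
)
print(result)  # escalated: human reviewer rejected the AI recommendation
```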

Registration of High-Risk AI Systems

High-risk AI systems must be registered in the EU database, giving authorities and the public access to relevant information about each system’s deployment and operation.

Third-Party Assessment

Depending on the risk involved, third-party assessments of some AI systems may be required before deployment. Audits, certification, and other forms of evaluation confirm conformity with EU regulations.

Impact on Business Landscape

The introduction of the EU AI Act is expected to have far-reaching effects on the business landscape.

Equalizing the Playing Field

The Act will level the playing field by imposing the same safety and transparency rules on AI companies of all sizes. This could prove a significant advantage for smaller AI-driven businesses.

Building Trust in AI

The new EU AI Act will no doubt build consumer confidence in AI technologies by embedding the values of transparency and safety in its provisions. Firms that follow these regulations can use this trust as a differentiator, marketing themselves as ethical and responsible AI providers.

Possible Compliance Costs

For some businesses, particularly smaller ones, the cost of compliance could be substantial. Adapting to the new regulatory environment may require heavy investment in compliance infrastructure, data governance, and human oversight. Fines for non-conformity can reach 7% of global revenue, a financial risk companies cannot afford to overlook.

Increased Accountability in Cases of AI Failure

Businesses will be held more accountable when an AI system fails or is misused in a way that harms people or communities. Companies that do not test and monitor their AI applications appropriately also face increased legal liability.

Geopolitical Implications

The EU AI Act may ultimately set the leading global example for regulating AI. Non-EU companies active in the EU market are subject to its rules, which fosters international cooperation and alignment on AI standards. It may also prompt other jurisdictions, such as the United States, to take similar regulatory steps.

European Union AI Act (2024)

Frequently Asked Questions

Q1. Which AI systems count as high-risk under the EU AI Act?

A: High-risk AI systems are applications in fields that directly affect citizens’ safety, rights, and freedoms. This includes AI in critical infrastructure, such as transport; in healthcare, such as diagnosis; in law enforcement, such as biometric identification; in employment processes; and in education. These systems face strong compliance requirements, including risk assessment, transparency, and continuous monitoring.

Q2. Does every business developing AI have to follow the EU AI Act?

A: Not all AI systems are regulated uniformly. The Act classifies AI systems into four categories according to their potential risk: unacceptable, high, limited, and minimal. It imposes strict compliance requirements on high-risk AI systems and basic transparency requirements on limited-risk systems, while minimal-risk systems, such as video games and spam filters, remain largely unregulated.

Businesses developing high-risk AI must comply if their AI is deployed in the EU market, whether they are based inside or outside the EU.

Q3. How does the EU AI Act affect companies outside the EU?

A: The EU Artificial Intelligence Act applies to companies established outside the Union when their AI systems are deployed or used within it. For instance, if an AI system developed in a third country produces output that is used within the Union, it must comply with the Act’s requirements. In this way, all AI systems affecting EU citizens meet the same regulatory bar, no matter where they are built.

Q4. What are the penalties for any non-compliance with the EU AI Act?

A: The EU Artificial Intelligence Act punishes non-compliance with significant fines. For the most severe infringements, such as the use of prohibited AI systems, fines of up to €35 million or 7% of a company’s worldwide annual turnover apply; breaches of high-risk obligations carry lower, but still substantial, fines.
