The EU Artificial Intelligence Act: Analysis & Implications

Artificial Intelligence (AI) is rapidly transforming business and society, prompting the need for comprehensive regulation to ensure its ethical and responsible use.  The European Union (EU) has taken a significant step towards addressing this concern with the EU Artificial Intelligence Act (EU AI Act), making the EU one of the first jurisdictions in the world to provide a comprehensive legal framework for artificial intelligence.  The European Parliament approved the draft legislation on 14 June 2023.  The Act now enters a phase of negotiations in which the European Parliament, the Council of the European Union and the European Commission will work towards agreement on the final text of the Act.

Agreement is anticipated to be reached by the end of 2023, and the regulations are expected to come into force in mid-2024, with a 24-month transition period to help providers of AI systems prepare for compliance. 

This pioneering regulatory framework takes a horizontal approach, classifying AI systems by risk level: 1) UNACCEPTABLE, 2) HIGH, 3) LIMITED and 4) MINIMAL or NO RISK.  The primary objectives of the EU AI Act (as set out in the Explanatory Memorandum of the legislation) are to:

  1. Ensure that AI systems placed on the Union market and used within the Union are safe and respect existing law on fundamental rights and Union values;
  2. Ensure legal certainty to facilitate investment and innovation in AI;
  3. Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; and
  4. Facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.   

The EU AI Act is bound to have far-reaching implications for providers and users of AI systems.  The EU has taken the lead in regulating this fast-developing technology, providing regulatory certainty that can give investors and entrepreneurs the confidence to launch and grow new ventures or products.  

However, the Act could also be perceived as overly constraining, especially in a market undergoing rapid innovation and development.  Entrepreneurs and innovators may well choose countries where regulators are taking a more liberal approach, cautious not to constrain innovation.  Time will tell whether the EU has made the right decision and will benefit from a regulatory environment that offers the legal certainty for artificial intelligence to thrive while protecting society’s fundamental rights. 

Pioneers looking to harness the power of AI to solve business and societal problems, or build lucrative ventures around this technology, will need to understand the intricate provisions set out in the Act.  It is a laborious process to read through the Act line by line, and even more challenging to understand the implications of its legal clauses.   

The best way to learn (in my opinion) is to ask questions while reading through the Act, and then answer those questions.  Documenting such Frequently Asked Questions can help you understand the applicable laws without having to wade through over one hundred pages of legalese. 


The world of AI is rapidly evolving.  More questions will emerge as people begin to understand the detailed requirements and their implications for their specific solutions or uses of AI.  These FAQs will be kept updated along the discovery process.  Feel free to send in your questions, which I will attempt to answer to the best of my ability.  Please also share any thoughts, insights or implications for you. 

Rationale & Key Features of the EU AI Act

Artificial intelligence (AI) is recognised as a rapidly evolving technology that can bring economic and societal benefits.  It has the potential to provide a competitive advantage to companies and the European economy.

However, AI can bring about new risks or result in negative consequences for individuals and society.  The EU is therefore committed to a balanced approach: preserving the EU's technological leadership and allowing Europeans to benefit from new technologies, while protecting fundamental rights and principles and ensuring that this technology functions in accordance with Union values. 

The EU AI Act is designed to bring about this balance.  

The proposal delivers on a political commitment by President von der Leyen, who announced in her 2019-2024 political guidelines, "A Union that Strives for More", that the Commission would put forward legislation for a coordinated European approach to AI's human and ethical implications.  Chapter 3 sets out her vision for a Europe fit for the digital age, with related highlights as follows:

    • She wants "Europe to strive for more by grasping the opportunities from the digital age within safe and ethical boundaries."
    • "To lead the way on next-generation hyperscalers, we will invest in blockchain, high-performance computing, quantum computing, algorithms and tools to allow data sharing and data usage. We will jointly define standards for this new generation of technologies that will become the global norm."
    • "Data and AI are the ingredients for innovation that can help us to find solutions to societal challenges, from health to farming, from security to manufacturing."
    • "In my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence. This should also look at how we can use big data for innovations that create wealth for our societies and our businesses."
    • "I will make sure that we prioritise investments in Artificial Intelligence, both through the Multiannual Financial Framework and through the increased use of public-private partnerships."
    • "A new Digital Services Act will upgrade our liability and safety rules for digital platforms, services and products, and complete our Digital Single Market."


Key features of the proposed regulatory framework are as follows:

  • It sets out the MINIMUM necessary requirements to address the risks and problems linked to AI.
  • It aims not to unduly constrain or hinder technological development.
  • It aims not to disproportionately increase the cost of placing AI solutions on the market.
  • It establishes a robust and flexible legal framework.
  • It is designed to be comprehensive and future-proof, accommodating fast-paced changes in the AI landscape.
  • On the one hand, it provides principle-based requirements that AI systems should comply with.
  • On the other hand, it puts in place a proportionate regulatory system, centered on a well-defined risk-based regulatory approach.

Article 3: Definitions

"‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with."

Timeline for the Development of the Act

  • Various dates – Calls for Legislation: The European Parliament and the European Council repeatedly called for legislative action to ensure a well-functioning internal market for artificial intelligence.
  • 19 October 2017 – Call for a Sense of Urgency: At its meeting, the European Council called for a sense of urgency in addressing emerging trends, including issues such as artificial intelligence.
  • President's Commitment: President von der Leyen committed to put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence.
  • 11 February 2019 – Coordinated Plan: In its "Conclusions on the coordinated plan on artificial intelligence (Adoption 6177/19)", the Council of the European Union called for a review of existing legislation to make it fit for purpose for the new opportunities and challenges raised by AI.
  • 19 February 2020 – White Paper on AI: The European Commission published its White Paper on Artificial Intelligence.
  • 19 February 2020 – 14 June 2020 – Stakeholder Consultation: The Commission launched a broad stakeholder consultation on regulatory intervention to address the challenges and concerns raised by the increasing use of AI.
  • 2 October 2020 – Determination on High-Risk AI: At a Special Meeting of the European Council, the Commission was invited to provide a clear, objective definition of a high-risk AI system.
  • 21 October 2020 – Further Calls: The Presidency conclusions on the Charter of Fundamental Rights in the context of Artificial Intelligence and Digital Change made further calls to address the opacity, complexity, bias, degree of unpredictability and partially autonomous behaviour of certain AI systems, to ensure their compatibility with fundamental rights and to facilitate the enforcement of legal rules.
  • October 2020 – Adoption of Resolutions: The European Parliament adopted a number of resolutions related to AI, including on a framework of ethical aspects of AI, a civil liability regime for AI, and intellectual property rights for the development of AI.
  • 21 April 2021 – Proposal for Regulation of AI: The European Commission published its proposal for a Regulation laying down harmonised rules on artificial intelligence.
  • 19 May 2021 – AI in Education Resolution: The European Parliament adopted a resolution on artificial intelligence in education, culture and the audiovisual sector.
  • 6 October 2021 – AI in Criminal Law Resolution: The European Parliament adopted a resolution on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters.
  • 14 June 2023 – AI Act Adopted: The European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act ahead of talks with EU countries in the Council on agreeing the final form of the law.

Scope of Application

Article 2: Scope

The Act applies to PROVIDERS who place AI systems on the market or put them into service within the European Union.  

It also applies to USERS of AI systems located within the Union. 

PROVIDERS and USERS located in third countries are also subject to the requirements of the Act where the output produced by the AI system is used within the Union. 

For example, foreign firms launching AI systems in any European member state, must comply with the requirements of the AI Act.

The Act applies to ANYONE who provides an AI system, whether this is a company or an individual.  

The Act also applies, regardless of whether the AI system is provided for payment or free of charge. 

Therefore, private individuals making AI systems available to anyone will also be subject to the law.

The Act applies to operators, who are defined as providers, users, authorised representatives (or agents), importers and distributors of AI systems. 

  • The Regulation does not apply to AI systems developed or used exclusively for military purposes.
  • Nor does it apply to public authorities in third countries or international organisations that use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the Union or with one or more Member States.

Risk Based Approach to Regulating AI

There are four categories of risk in the EU AI Act, namely:

  1. Unacceptable risk - AI systems whose use is considered unacceptable as contravening Union values, for example by violating fundamental rights. 
  2. High risk - AI systems posing a high risk to the health and safety or fundamental rights of natural persons. 
  3. Limited risk - AI systems subject to specific transparency obligations (for example, users must be made aware that they are interacting with an AI system).
  4. Minimal or no risk - all other AI systems, which the Act leaves largely unregulated.
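The four-tier taxonomy can be sketched as a simple ordered enumeration. This is an illustrative sketch only: the example systems in the mapping below are my own assumptions about commonly cited cases, not classifications taken from the Act itself, and real classification requires legal analysis against Article 5 and the Annexes.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """The four risk tiers of the EU AI Act, least to most severe."""
    MINIMAL = 1        # no new obligations
    LIMITED = 2        # transparency obligations (disclose AI interaction)
    HIGH = 3           # conformity assessment, risk management, logging, oversight
    UNACCEPTABLE = 4   # prohibited outright under Article 5

# Hypothetical mapping for illustration only.
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
}

def heavier_obligations(a: RiskTier, b: RiskTier) -> bool:
    """Regulatory burden is monotone in the tier ordering: returns
    True when tier b carries at least as many obligations as tier a."""
    return a <= b
```

Because the tiers form a strict hierarchy, an `IntEnum` makes the ordering comparable directly, which is handy when filtering a portfolio of systems by the obligations they attract.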

Article 5

The following AI practices are prohibited:

  1. AI systems deploying subliminal techniques beyond a person's awareness in order to materially distort their behaviour in a manner that causes or is likely to cause physical or psychological harm.
  2. Systems that exploit vulnerabilities of a specific group of people, based on their age or mental or physical disability, to materially distort their behaviour in a manner that causes or is likely to cause them physical or psychological harm.
  3. AI systems deployed by public authorities (or on their behalf) to classify the trustworthiness of natural persons based on various personality characteristics (social scoring). 
  4. The use of real-time biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless proven to be strictly necessary and a judicial authority grants prior authorisation. 

AI systems are considered High Risk when both of the following conditions are met:

  1. The AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II; and
  2. That product (or the AI system itself, where it is the product) is required to undergo a third-party conformity assessment under the Annex II legislation before being placed on the market or put into service.

Title III, Chapter 2 (Articles 8-15) - Key Highlights

  1. Establish, implement, document and maintain a risk management system in relation to High-Risk AI systems.
  2. High-Risk AI systems must be tested to identify the most appropriate risk management measures.
  3. Training, validation and testing data sets must be subject to appropriate data governance and management practices.
  4. Training, validation and testing data sets must be relevant, representative, free of errors and complete, and must take into account the characteristics or elements particular to the specific geographical, behavioural or functional setting within which the High-Risk AI system is intended to be used.
  5. High-Risk AI systems must be technically documented before going live, and the documentation kept up to date.
  6. Automatic recording of events (logs) while a High-Risk AI system is operating is required, conforming to recognised standards or common specifications.
  7. The operation of a High-Risk AI system must be sufficiently transparent to enable users to interpret the system's output and use it appropriately.
  8. High-Risk AI systems shall be designed and developed so that they can be effectively overseen by natural persons (human oversight).
  9. The systems must be developed with an appropriate level of accuracy, robustness and cybersecurity, and must perform consistently in those respects throughout their lifecycle.


Article 17

Providers of High-Risk AI systems must put in place a quality management system to ensure compliance with regulations.  

Providers must ensure that their High-Risk AI systems undergo the relevant conformity assessment procedures (as per Article 43) prior to launch. 

The initial cost of compliance of a High-Risk system with the requirements of the Act is estimated at between EUR 6,000 and EUR 7,000.

The annual cost of compliance is estimated at between EUR 5,000 and EUR 8,000 per year.

Verification (assurance or audit) costs could amount to a further EUR 3,000 to EUR 7,500 for suppliers of High-Risk systems.
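Putting the three estimates above together gives a rough first-year budget per High-Risk system. The sketch below simply sums the ranges quoted above; the figures come from the text, but treating the three components as additive is my own assumption.

```python
def first_year_cost_range_eur():
    """Rough first-year compliance budget for one High-Risk AI system,
    summing the three cost ranges quoted above."""
    initial = (6_000, 7_000)        # one-off initial compliance cost
    annual = (5_000, 8_000)         # recurring annual compliance cost
    verification = (3_000, 7_500)   # assurance / audit cost
    low = initial[0] + annual[0] + verification[0]
    high = initial[1] + annual[1] + verification[1]
    return low, high

print(first_year_cost_range_eur())  # (14000, 22500)
```

So a provider might budget roughly EUR 14,000 to EUR 22,500 per High-Risk system in year one, before internal engineering effort.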

Article 71: Penalties

  1. Up to EUR 30 million, or 6% of total worldwide annual turnover for companies, for non-compliance with the prohibited AI practices referred to in Article 5 or with the data governance requirements for High-Risk AI systems set out in Article 10.
  2. Non-compliance with other provisions will result in a fine of up to EUR 20 million or 4% of total worldwide annual turnover for companies. 
  3. Up to EUR 10 million or 2% of total worldwide annual turnover for supplying incorrect, incomplete or misleading information to notified bodies and national regulators.
Where a High-Risk AI system is found not to conform with the Act, providers must:

  1. Take Corrective Action - Immediately take the necessary corrective action to bring the system into conformity, or to withdraw or recall it, as appropriate.
  2. Duty to Inform - When a High-Risk AI system presents a risk, providers must immediately inform their national regulator.
  3. Cooperation with Regulators - Provide regulators with all the information and documentation necessary to demonstrate conformity with the regulation, and give access to logs if requested to do so.
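As a worked illustration of the Article 71 fine ceilings, the sketch below computes the maximum applicable fine for a company in each tier. The rule that the higher of the fixed cap and the turnover percentage applies reflects the draft Act's "whichever is higher" wording for companies; the tier labels and function name are my own, not terms from the Act.

```python
# Fine ceilings under Article 71 of the draft EU AI Act (illustrative
# sketch; tier labels are hypothetical, not terms from the Act).
FINE_TIERS = {
    "prohibited_practices_or_art10": (30_000_000, 0.06),
    "other_obligations": (20_000_000, 0.04),
    "incorrect_information": (10_000_000, 0.02),
}

def max_fine_eur(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Maximum fine for a company: the fixed cap or the turnover
    percentage, whichever is higher (per the draft text)."""
    fixed_cap, pct = FINE_TIERS[tier]
    return max(fixed_cap, pct * worldwide_annual_turnover_eur)

# A company with EUR 1 billion turnover breaching Article 5:
print(max_fine_eur("prohibited_practices_or_art10", 1_000_000_000))  # 60000000.0
```

For large companies the turnover percentage dominates: at EUR 1 billion turnover, 6% is EUR 60 million, double the EUR 30 million fixed cap.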
