
Various regulations, policies, and frameworks aim to govern the rapidly evolving world of AI. Rules written for traditional software are increasingly outdated or contested, with some arguing they hamper innovation. Meanwhile, jurisdictions like the European Union are taking proactive steps to protect their citizens.
Legal and regulatory documents can be intimidating for those outside the legal profession—their complexity and length often discourage people from reading and understanding them. In this section, I’ll try to break down the EU AI Act, explaining both its core principles and practical applications through examples from two fictional companies.
Two companies on different continents share similar ambitions:
InnovateAI Corp, a dynamic US tech firm, is perfecting “ConverseAI”: an AI agent with human-like conversational abilities that aims to revolutionize customer service.
TrustFund AI, a fledgling EU startup, is preparing to launch “CreditWise”: an app that uses AI to streamline loan approvals and make financial advice more accessible.
While both companies dream of market success, they face a new regulatory reality: the European Union’s Artificial Intelligence Act.
The ambition of the EU is to set a global standard for AI governance. The purpose of the EU AI Act is to foster human-centric and trustworthy AI development, while ensuring a high level of protection for health, safety, and fundamental rights, including democracy, the rule of law, and environmental protection. It aims to create harmonized rules for AI systems placed on or used within the Union market.
Chapter I
General Provisions, Articles 1-4
Consider the perspective of executives, developers, and product managers at our two fictional companies. Their first crucial task is determining whether their systems qualify as AI under the Act and understanding their responsibilities. This assessment is a vital part of the pre-design phase that every company must undertake as AI becomes globally prevalent—especially since the boundaries between prohibited and permissible AI systems can be complex and consequential.
The EU AI Act defines an ‘AI system’ (Art. 3(1)) as “a machine-based system that is designed to operate with **varying levels of autonomy** and that may exhibit **adaptiveness** after deployment, and that, for explicit or implicit objectives, **infers**, from the input it receives, how to generate **outputs** such as predictions, content, recommendations, or decisions that can **influence physical or virtual environments**” (bold added for emphasis).
- For InnovateAI Corp: ConverseAI clearly fits. It’s machine-based, designed for autonomous customer interaction (varying levels), will likely adapt based on conversations (adaptiveness), and infers from customer queries (input) how to generate responses and solutions (outputs) that influence the virtual customer service environment.
- For TrustFund AI: CreditWise also falls under this definition. It’s a machine-based app, operates with autonomy in assessing data, potentially adapts its algorithms, and infers from financial data (input) to produce credit scores or loan decisions (outputs) that significantly influence individuals’ financial environments.
This definition is crucial because it determines the Act’s applicability. It’s broad enough to cover current AI technologies and flexible for future developments, focusing on key characteristics like inference and autonomy. It intentionally excludes simpler traditional software.
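Before any legal review, an engineering team can run a rough self-screen against these definitional elements. The sketch below is a hypothetical illustration, not an official checklist; the `SystemProfile` fields and the `qualifies_as_ai_system` helper are my own shorthand for the Article 3(1) wording, and a real assessment should still be documented and reviewed by counsel.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Illustrative record of the Article 3(1) definitional elements."""
    machine_based: bool                  # runs as software/hardware, not a purely human process
    operates_with_autonomy: bool         # some independence from direct human control
    may_adapt_after_deployment: bool     # optional element ("may exhibit adaptiveness")
    infers_outputs_from_input: bool      # predictions, content, recommendations, decisions
    outputs_influence_environment: bool  # physical or virtual environments

def qualifies_as_ai_system(p: SystemProfile) -> bool:
    """Rough screening only; adaptiveness is a 'may' element, so it is not required."""
    return (p.machine_based
            and p.operates_with_autonomy
            and p.infers_outputs_from_input
            and p.outputs_influence_environment)

# Both fictional systems screen as in scope under this rough test.
converse_ai = SystemProfile(True, True, True, True, True)
credit_wise = SystemProfile(True, True, True, True, True)
print(qualifies_as_ai_system(converse_ai), qualifies_as_ai_system(credit_wise))  # True True
```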
Who’s Who? Providers, Deployers, and Territorial Scope (Articles 2, 3(3), 3(4))
The Act distinguishes between key actors:
- A ‘provider’ develops an AI system or a general-purpose AI model (more on those in Article 53) or has one developed, and places it on the market or puts it into service under its own name or trademark.
InnovateAI Corp is a provider of ConverseAI when it sells it to other businesses.
TrustFund AI is a provider of the CreditWise app.
- A ‘deployer’ is a person or body using an AI system under its authority, except for personal, non-professional activity.
An EU-based bank that licenses ConverseAI from InnovateAI Corp to handle its customer calls would be a deployer.
An individual using the CreditWise app for a loan is an “affected person”, while the entity making the loan decision based on CreditWise’s output is the deployer (which could be TrustFund AI itself if it’s a direct-to-consumer lender, or a bank using the app).
The most important first step (if you’re an engineer or someone interested in deploying an AI app) is understanding these roles, where you fit within the Act’s scope, and which specific obligations apply to you. For example, providers of high-risk AI systems must handle most compliance requirements (Chapter III, Section 3, e.g., Article 16), while deployers have their own responsibilities, such as ensuring proper use and human oversight (Article 26).
Does this apply to InnovateAI Corp in the US? Territorial Scope (Article 2)
Yes, very likely. The Act applies to:
- Providers placing AI systems on the Union market, irrespective of their location. So, if InnovateAI Corp wants to sell ConverseAI to EU clients, they are covered.
- Deployers of AI systems established or located in the Union.
- Providers and deployers established outside the Union if the output produced by their AI system is used in the Union. This is a significant point for global companies. If an EU citizen interacts with an AI system provided by a non-EU company, and the output (e.g., a decision, content) is used in the EU, the Act applies.
InnovateAI Corp Example: If a US company uses ConverseAI (hosted in the US) to serve its EU customers, and ConverseAI’s output (e.g., resolution of a complaint, provision of information) is directed at those EU customers, InnovateAI Corp (and potentially the US company deploying it, depending on specifics) could fall under the Act’s jurisdiction because the output is used in the Union. This demonstrates the Act’s extraterritorial reach, a concept also familiar from the GDPR in data protection (Recital 22 explains some of this rationale) and a topic for another post.
The Act also does not apply in certain areas, such as systems used exclusively for military, defense, or national security purposes, or AI systems and models developed solely for scientific research and development before they are placed on the market. However, any research and development activity must still be conducted ethically and responsibly.
AI Literacy: A Shared Responsibility (Article 4)
The Act also introduces the concept of ‘AI literacy’. Providers and deployers alike, so InnovateAI Corp and TrustFund AI as well as the organizations deploying their systems, must take measures to ensure their staff (and others dealing with the AI systems on their behalf) have a sufficient level of AI literacy.
This includes understanding the system’s capabilities (think System Cards or Model Cards), limitations, and how to interpret its output correctly, tailored to the context of use. This is crucial for ensuring human oversight (a key requirement for high-risk AI under Article 14) is effective. The European AI Board is tasked with promoting AI literacy tools and public awareness.
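In practice, much of this can be anchored in the kind of system or model card mentioned above. The snippet below is a minimal, hypothetical sketch of such a card used for staff onboarding; the field names and the `onboarding_summary` helper are my own invention, not a format prescribed by the Act.

```python
# A hypothetical, minimal "system card" used for staff AI-literacy onboarding.
# Field names are illustrative; the Act does not prescribe a specific format.
converse_ai_card = {
    "name": "ConverseAI",
    "intended_purpose": "Customer-service dialogue for business clients in the EU",
    "capabilities": [
        "Answers product and account questions",
        "Escalates unresolved issues to human agents",
    ],
    "limitations": [
        "May produce incorrect answers on out-of-scope topics",
        "Does not infer emotions from voice or other biometric data",
    ],
    "output_interpretation": "Responses are suggestions; staff remain responsible for final answers",
    "human_oversight": "Supervisors can pause the agent and review transcripts at any time",
}

def onboarding_summary(card: dict) -> str:
    """Render the card as a short briefing for staff who operate or oversee the system."""
    lines = [f"{card['name']} -- {card['intended_purpose']}"]
    lines += [f"Can: {c}" for c in card["capabilities"]]
    lines += [f"Caution: {l}" for l in card["limitations"]]
    lines.append(f"How to read outputs: {card['output_interpretation']}")
    lines.append(f"Oversight: {card['human_oversight']}")
    return "\n".join(lines)

print(onboarding_summary(converse_ai_card))
```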
Chapter II
Red Lines in the Code: Steering Clear of Prohibited AI Practices (Relating to Chapter II: Prohibited AI Practices, Article 5)
Companies must navigate critical design decisions carefully to stay clear of the EU AI Act’s prohibited practices. This takes real time and resources: products need clearly defined boundaries and safeguards, and a team qualified to handle the edge cases and compliance challenges that surface during development and deployment.
It is also inherently a cross-functional effort. Engineers who understand the technical constraints, designers focused on user experience, product managers steering the roadmap, legal experts ensuring regulatory compliance, and board members providing strategic oversight all help define the company’s risk tolerance and ethical boundaries, and each of these perspectives shapes how AI is built while remaining within the Act’s requirements.
As InnovateAI Corp and TrustFund AI move from concept to design, they encounter the EU AI Act’s strictest chapter: Prohibited AI Practices.
These are AI applications deemed to pose an unacceptable risk to EU values and fundamental rights, and are therefore banned outright.
For our companies, understanding these red lines isn’t just about compliance; it’s about building ethically sound and trustworthy AI from the ground up. The penalties for non-compliance here are the most severe, reaching up to EUR 35,000,000 or 7% of global annual turnover, whichever is higher.
Article 5 lists several categories of prohibited AI. These include systems that:
- Deploy subliminal, manipulative, or deceptive techniques to materially distort behavior and cause significant harm.
- Exploit vulnerabilities of specific groups (due to age, disability, socio-economic situation) to materially distort behavior and cause significant harm.
- Are used for ‘social scoring’ by public or private actors leading to detrimental or unjustified treatment.
- Perform risk assessments of natural persons to predict criminal offenses based solely on profiling or personality traits, with a narrow exception for supporting human assessment based on objective facts linked to criminal activity.
- Create or expand facial recognition databases through untargeted scraping from the internet or CCTV footage.
- Infer emotions in the areas of workplace and education, with exceptions for medical or safety reasons.
- Conduct biometric categorization based on sensitive attributes like race, political opinions, or sexual orientation to deduce or infer these, with an exception for lawful labelling/filtering in law enforcement.
- Engage in ‘real-time’ remote biometric identification in publicly accessible spaces for law enforcement, except under exhaustively listed and narrowly defined circumstances requiring strict authorization and safeguards.
Now, with this in mind, let’s see how these prohibitions directly influence the design considerations for ConverseAI and CreditWise.
InnovateAI Corp’s ConverseAI: The Case of Sentiment Analysis
The product team at InnovateAI Corp is excited about ConverseAI’s potential. During a brainstorming session, a product manager suggests:
“What if ConverseAI could perform real-time sentiment analysis on the customer’s voice during a call? It could detect frustration levels and subtly adapt its approach, or even route highly agitated customers to specialized human agents. This could de-escalate situations and improve satisfaction!”
Their internal AI ethics advisor, familiar with the EU AI Act, immediately raises a concern, pointing to Article 5(1f) which prohibits “the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons”.
Why this is problematic for ConverseAI:
- Direct Prohibition Context: While a customer interacting with a company’s AI might not traditionally be seen as being in a “workplace” or “education institution” from their own perspective, the AI agent itself is a tool used within the deploying company’s operational environment, which constitutes a workplace for its employees (who might oversee or interact with the AI). If the sentiment analysis is primarily for managing the interaction from the company’s side (e.g., optimizing agent responses, managing employee workload based on customer emotion), it leans into the workplace context.
- Spirit of the Law & Recital 44: Even if a narrow legal interpretation might argue about the customer’s context, Recital 44 clarifies the deep concerns regarding emotion recognition systems. It notes their “limited reliability, the lack of specificity and the limited generalizability,” and the risk of “discriminatory outcomes” and intrusiveness. The Recital further highlights that combining these systems with the “imbalance of power in the context of work or education” could lead to detrimental treatment. While customer interactions aren’t always workplaces for the customer, an imbalance of power often exists.
- Risk of Manipulation or Exploitation: If ConverseAI uses detected frustration not just to help, but to, for example, push a more expensive “calming” service or exploit the customer’s emotional state for sales, it could stray towards the prohibited manipulative practices under Article 5(1a) (“purposefully manipulative or deceptive techniques…materially distorting the behaviour of a person”) or Article 5(1b) (exploiting vulnerabilities if frustration is deemed a temporary vulnerability).
- Definition of “Emotion Recognition System”: Article 3(39) defines an ‘emotion recognition system’ as “an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data”. Voice patterns used to infer emotion fall under this.
- Transparency Issues: Even if not strictly prohibited in every customer service scenario, using such a system would trigger significant transparency obligations under Article 50(3), requiring clear information to the natural persons exposed. However, this article on transparency does not legitimize a practice if it’s otherwise prohibited by Article 5.
Risks if InnovateAI Corp proceeds with this feature for EU clients:
- Direct Legal Challenge: Authorities could argue it falls under the spirit or letter of Article 5(1f) or, if misused, Article 5(1a) or 5(1b).
- Hefty Fines: Violating Article 5 carries the highest tier of penalties.
- Reputational Damage: Users and advocacy groups are increasingly wary of AI that “reads” emotions, viewing it as intrusive and potentially inaccurate.
- Difficulty in Justification: The “medical or safety reasons” exception in Article 5(1f) is narrow and unlikely to apply to general customer service frustration.
Proactive Design Choice: InnovateAI Corp decides against implementing real-time emotion inference based on voice sentiment for ConverseAI in the EU market. Instead, they focus on:
- Training ConverseAI to recognize explicit cues of dissatisfaction (e.g., keywords, repetition of issues) without inferring underlying emotions from biometrics (a minimal sketch of this approach follows this list).
- Providing clear and easy escalation paths to human agents if the AI cannot resolve an issue or if the customer requests it, aligning with principles of human oversight (which will be critical if ConverseAI is ever considered to be high-risk, see Article 14).
- Ensuring transparency about interacting with an AI, as per Article 50(1).
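Here is a minimal sketch of what that first design choice could look like in code, under stated assumptions: the phrase list, the three-turn threshold, and the routing messages are invented for illustration, and a production system would need much richer handling (plus, per Article 50(1), a clear disclosure that the customer is talking to an AI system).

```python
import re

# Hypothetical escalation logic based on explicit, non-biometric cues only.
# Keyword lists and thresholds are illustrative, not values from InnovateAI Corp.
DISSATISFACTION_PHRASES = [
    "this is not working", "i already explained", "cancel my account",
    "this is unacceptable", "still not resolved",
]

def wants_human_agent(message: str) -> bool:
    """An explicit request for a person always wins."""
    return bool(re.search(r"\b(human|agent|person|representative)\b", message.lower()))

def explicit_dissatisfaction(message: str) -> bool:
    """Flag only explicit textual cues; no voice, tone, or other biometric signals."""
    text = message.lower()
    return any(phrase in text for phrase in DISSATISFACTION_PHRASES)

def route_turn(message: str, unresolved_turns: int) -> str:
    """Decide whether the AI keeps handling the conversation or hands over."""
    if wants_human_agent(message):
        return "escalate: customer asked for a human"
    if explicit_dissatisfaction(message) or unresolved_turns >= 3:
        return "escalate: explicit dissatisfaction or repeated unresolved issue"
    return "continue: AI keeps handling, with the Article 50(1) AI disclosure shown"

print(route_turn("I already explained this twice and it is still broken", unresolved_turns=2))
```

The design point is that escalation is driven by what the customer explicitly says or asks for, not by inferences drawn from their voice or other biometric data.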
TrustFund AI’s CreditWise: Social Media Profiling
The data science team at TrustFund AI is under pressure to make CreditWise the most accurate credit assessment tool on the market. One data scientist proposes:
“Traditional credit data is limited. Why don’t we augment our profiles by scraping applicants’ social media and other publicly available online data? We could look for patterns in spending habits, social connections, even lifestyle choices mentioned online, to build a richer risk profile.”
The startup’s compliance lead, who has been diligently studying the EU AI Act, immediately flags this as an idea that likely crosses into prohibited territory.
Why this is problematic for CreditWise:
- Social Scoring (Article 5(1c)): This is the most direct concern. Article 5(1c) prohibits AI systems used for “the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment…in social contexts that are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment…that is unjustified or disproportionate to their social behaviour or its gravity”.
Using diverse, unverified, and context-stripped social media data (e.g., holiday photos, political likes, online comments) to influence a credit score, which then determines access to essential financial services (loans), is highly likely to be seen as social scoring. Denying a loan based on a non-financial online persona would be a “detrimental treatment” in a context (creditworthiness) “unrelated” to the original context of the social media posts. Recital 31 emphasizes that such systems “may violate the right to dignity and non-discrimination and the values of equality and justice”.
- Profiling and Fundamental Rights: Even if it narrowly dodged the “social scoring” definition, this approach relies heavily on profiling natural persons. Profiling is defined in Article 3(52) by referencing GDPR Article 4(4). While profiling itself isn’t banned for credit scoring, using unreliable, irrelevant, and potentially discriminatory data from social media would make it virtually impossible to meet the stringent requirements for high-risk AI systems, which CreditWise will almost certainly be (as per Annex III, point 5(b) for creditworthiness assessment). These requirements include:
  - Data Quality and Governance (Article 10): Training, validation, and testing datasets must be “relevant, sufficiently representative, and to the best extent possible free of errors and complete in view of the intended purpose”. Social media data is notoriously unreliable, prone to misinterpretation, and often not representative of actual financial behavior; it would also be challenging to ensure this data is “free of errors and complete,” and bias detection and mitigation would be a nightmare. (A minimal sketch of the kind of data-quality gate Article 10 implies follows this list.)
  - Accuracy (Article 15): Achieving an appropriate and justifiable level of accuracy with such noisy data would be difficult.
  - Non-Discrimination: Recital 58 explicitly warns that AI systems for credit scoring “may lead to discrimination between persons or groups and may perpetuate historical patterns of discrimination… or may create new forms of discriminatory impacts”. Social media data is rife with proxies for protected characteristics.
- Risk Assessment for Offending (Analogy from Article 5(1d)): While Article 5(1d) specifically prohibits risk assessments predicting criminal offenses based solely on profiling or personality traits, the underlying principle is caution against broad profiling for critical decisions. Drawing inferences about financial reliability from general social behavior is analogous in its potential for unfairness.
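Picking up the Article 10 point above, here is a minimal sketch of a pre-training data-quality gate. The allowed-source list, column names, and the 5% missing-data threshold are assumptions invented for this example; real acceptance criteria would come from TrustFund AI’s own data governance process.

```python
import pandas as pd

# Hypothetical pre-training data-quality gate inspired by the Article 10 criteria.
# Sources, column names, and thresholds are illustrative assumptions.
ALLOWED_SOURCES = {"applicant_declaration", "credit_bureau", "open_banking_api"}

def data_quality_report(df: pd.DataFrame) -> dict:
    """Screen a candidate training dataset before it is used for creditworthiness modelling."""
    disallowed = set(df["source"].unique()) - ALLOWED_SOURCES
    missing_rate = df.drop(columns=["source"]).isna().mean().max()
    return {
        "only_regulated_sources": not disallowed,
        "disallowed_sources": sorted(disallowed),
        "worst_column_missing_rate": float(missing_rate),
        "completeness_ok": missing_rate <= 0.05,  # illustrative threshold
    }

df = pd.DataFrame({
    "source": ["credit_bureau", "open_banking_api", "social_media_scrape"],
    "income": [42_000, 55_000, None],
    "existing_debt": [12_000, 3_000, 8_000],
})
print(data_quality_report(df))  # flags the scraped source and the missing income value
```

In the same spirit, representativeness and bias checks against proxies for protected characteristics would be added before any model training.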
Risks if TrustFund AI proceeds with this feature:
- Direct Violation of Article 5(1c): Leading to the highest tier of fines and an order to cease operations.
- Failure of Conformity Assessment: Even if not strictly deemed “social scoring,” the system would likely fail the mandatory conformity assessment for high-risk AI systems (Article 43) due to data quality (Article 10) and bias issues.
- Legal Challenges and Reputational Ruin: Such a system would be a prime target for discrimination lawsuits and public outcry.
- Erosion of User Trust: Users would be unlikely to trust a financial app that scrapes their social media for loan decisions.
Proactive Design Choice: TrustFund AI decides firmly against using social media or unrelated public data for profiling in CreditWise. Their focus shifts to:
- Using only relevant, verified financial data provided by the applicant or obtained through legitimate, regulated financial data channels (with explicit consent).
- Investing heavily in robust data governance practices for their training and operational data, as mandated by Article 10, with a specific focus on identifying and mitigating biases related to protected characteristics. This includes careful data collection, preparation, and examination.
- Designing CreditWise for transparency and explainability, so users understand the main factors driving a decision (linking to Article 13 on transparency for high-risk AI and the right to explanation in Article 86); a minimal sketch combining this with the first point follows this list.
- Preparing for a rigorous fundamental rights impact assessment as required for deployers of such systems under Article 27.
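To make the first and third points concrete, here is a minimal, hypothetical sketch of a feature allow-list plus a simple “main factors” explanation. The feature names, weights, and scoring formula are invented for illustration; CreditWise’s real model would be trained, validated, and documented under Articles 10, 13, and 15.

```python
# Hypothetical feature allow-list (only regulated financial data) and a simple
# "main factors" explanation. Names, weights, and formula are invented for the example.
ALLOWED_FEATURES = {"declared_income", "existing_debt", "payment_history_months", "missed_payments"}

# Toy linear scoring model over normalized inputs; a real one would be trained and validated.
WEIGHTS = {"declared_income": 0.4, "existing_debt": -0.5,
           "payment_history_months": 0.3, "missed_payments": -0.8}

def score_applicant(features: dict) -> tuple:
    """Return a score and the top contributing factors, refusing non-whitelisted inputs."""
    unexpected = set(features) - ALLOWED_FEATURES
    if unexpected:
        raise ValueError(f"Refusing non-whitelisted inputs: {sorted(unexpected)}")
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    # Main factors by absolute contribution, for an Article 13 / Article 86 style explanation.
    top = sorted(contributions, key=lambda n: abs(contributions[n]), reverse=True)[:3]
    return score, top

score, reasons = score_applicant({
    "declared_income": 3.2, "existing_debt": 1.1,
    "payment_history_months": 4.8, "missed_payments": 0.0,
})
print(round(score, 2), reasons)  # 2.17 ['payment_history_months', 'declared_income', 'existing_debt']
```

Any attempt to feed in scraped social data would fail the allow-list check by construction, and every decision comes with the factors that drove it.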
By proactively considering Article 5, both InnovateAI Corp and TrustFund AI are not only aiming for legal compliance but are also embedding ethical considerations deep into their product DNA. This early attention to “red lines” will save them significant legal, financial, and reputational trouble down the line and steer them towards building AI that is genuinely trustworthy.
Chapter III
Raising the Stakes: Is Your AI High-Risk?
Having successfully steered clear of the outright prohibitions in Article 5, InnovateAI Corp and TrustFund AI now face a critical question:
Are ConverseAI and CreditWise “high-risk” AI systems?
This classification is pivotal, as it triggers a demanding suite of mandatory requirements (Articles 8-15) and obligations for providers and deployers (Articles 16-27).
The Two Gateways to High-Risk Classification (Article 6)
Article 6 lays out two primary ways an AI system is considered to be high-risk:
Pathway 1: AI as a Safety Component of Regulated Products (Article 6(1))
An AI system is high-risk if it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex I of the Act, and that product (or the AI system itself as a product) requires a third-party conformity assessment under that existing legislation.
Annex I includes legislation covering machinery, toys, medical devices, and more.
What is the relevance for our companies?
InnovateAI Corp (ConverseAI): Unlikely to be a safety component of a physical product listed in Annex I. ConverseAI is primarily a software system for interaction.
TrustFund AI (CreditWise): Also unlikely to fall under this pathway. CreditWise is a financial application, not typically a safety component of products like machinery or toys.
Pathway 2: AI Systems Listed in Annex III (Article 6(2))
Independently, AI systems used in specific areas listed in Annex III are considered high-risk. This is where most standalone AI systems affecting fundamental rights or safety will be captured. The areas in Annex III include:
- Biometrics (e.g., remote biometric identification, biometric categorization based on sensitive attributes, emotion recognition – to the extent their use is permitted).
- Management and operation of critical infrastructure (e.g., road traffic, supply of water, gas, electricity).
- Education and vocational training (e.g., determining access, evaluating learning outcomes).
- Employment, workers’ management, and access to self-employment (e.g., recruitment, promotion decisions, task allocation, performance monitoring).
- Access to and enjoyment of essential private services and essential public services and benefits (e.g., evaluating eligibility for public assistance, credit scoring, risk assessment for life/health insurance, dispatching emergency services).
- Law enforcement (e.g., polygraphs, evaluating evidence reliability, certain risk assessments of offending, profiling – to the extent permitted).
- Migration, asylum, and border control management (e.g., polygraphs, risk assessments of individuals, examining applications, identification – to the extent permitted).
- Administration of justice and democratic processes (e.g., assisting judicial authorities in research or applying law, influencing election outcomes).
TrustFund AI’s CreditWise: A Clear Case for High-Risk
The team at TrustFund AI convenes to analyze CreditWise’s classification. Their legal counsel points directly to Annex III, point 5(b): AI systems intended to be used to “evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud”.
“CreditWise is precisely this,” she explains. “Its core function is to assess creditworthiness and produce a score that will be used to make loan decisions. This means we are, by default, a high-risk AI system under Article 6(2).”
This has significant implications for TrustFund AI. They will need to comply with all the demanding requirements of Chapter III, Section 2 (Articles 8-15) and the obligations for providers in Section 3 (e.g., Article 16).
InnovateAI Corp’s ConverseAI: A More Complex Picture
For InnovateAI Corp, the classification of ConverseAI is less straightforward. As a versatile customer service agent, its standard “intended purpose” might not immediately place it in an Annex III category.
However, their Head of Product, Melina, recalls a conversation with a potential major client – a large EU-based insurance company. This client wants to use ConverseAI not just for general queries, but also to assist in the initial intake and preliminary assessment for health insurance applications.
“This changes things,” their EU legal consultant advises. “If ConverseAI is marketed or customized for, and then used by this insurance company to, for example, ‘evaluate’ aspects relevant to ‘risk assessment and pricing in relation to natural persons in the case of life and health insurance,’ it could fall under Annex III, point 5(c). The key is whether ConverseAI’s output materially influences the decision-making in that high-risk context.”
This highlights a critical aspect: an AI system’s risk classification can depend heavily on its specific deployment and the ‘intended purpose’ defined by the provider.
If InnovateAI Corp markets ConverseAI specifically for such a high-risk insurance use case, ConverseAI itself becomes a high-risk AI system. If the insurance company (the deployer) independently adapts a generic ConverseAI for this purpose, the deployer might take on provider obligations for that modified system (as per Article 25(1), points (b) or (c)).
This distinction regarding who makes the system high-risk is vital for assigning responsibility.
The “Not Significant Risk” Derogation (Article 6(3))
Is there an escape hatch from high-risk classification for systems listed in Annex III?
Article 6(3) provides a narrow derogation. An AI system under Annex III is not considered high-risk if it “does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making”. This applies if the system meets one of the following conditions:
- It performs a narrow procedural task.
- It improves the result of a previously completed human activity.
- It detects decision-making patterns or deviations from prior patterns but isn’t meant to replace or influence the human assessment without proper human review.
- It performs a preparatory task to an assessment relevant for Annex III use cases.
However, crucially, Article 6(3) also states: “Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons”. Profiling is defined in Article 3(52) by reference to GDPR Article 4(4).
- For TrustFund AI (CreditWise): Applying this derogation to CreditWise is nearly impossible. Credit scoring directly and materially influences loan decisions, often being the primary determinant. It’s not a “narrow procedural task” nor merely “preparatory” to the final credit decision; it is a core part of that decision. CreditWise inherently involves profiling natural persons to assess their creditworthiness. Thus, TrustFund AI concludes CreditWise remains firmly high-risk.
- For InnovateAI Corp (ConverseAI in the insurance scenario): If ConverseAI were only used, for example, to transcribe the initial customer interaction for a human underwriter who makes all substantive assessments (a “preparatory task”), it might qualify for the derogation, provided it doesn’t otherwise materially influence the outcome or perform prohibited profiling in that context. However, if it starts categorizing risk or suggesting premium levels, it’s materially influencing and likely high-risk. (A minimal sketch of this classification logic follows.)
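Putting the Annex III and Article 6(3) logic together, here is a minimal sketch of the decision flow for Annex III systems. This is my own simplification: the enumeration names and boolean inputs are assumptions, applying the conditions is ultimately a legal judgement, and the Article 6(1) safety-component pathway is out of scope here.

```python
from enum import Enum, auto
from typing import Optional

class Derogation(Enum):
    """Article 6(3) conditions, paraphrased; deciding whether one applies is a legal judgement."""
    NARROW_PROCEDURAL_TASK = auto()
    IMPROVES_COMPLETED_HUMAN_ACTIVITY = auto()
    DETECTS_PATTERNS_WITH_HUMAN_REVIEW = auto()
    PREPARATORY_TASK = auto()

def is_high_risk(in_annex_iii_area: bool,
                 performs_profiling: bool,
                 derogation: Optional[Derogation],
                 materially_influences_outcome: bool) -> bool:
    """Rough sketch of the Article 6(2)/6(3) logic for Annex III systems only."""
    if not in_annex_iii_area:
        return False
    if performs_profiling:
        return True  # last subparagraph of Article 6(3): profiling means no escape hatch
    if derogation is not None and not materially_influences_outcome:
        return False  # the provider must still document and register this assessment
    return True

# CreditWise: Annex III, point 5(b), and it profiles natural persons.
print(is_high_risk(True, True, None, True))                           # True
# ConverseAI used only to transcribe intake for a human underwriter (hypothetical).
print(is_high_risk(True, False, Derogation.PREPARATORY_TASK, False))  # False
```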
If a provider believes their Annex III system is not high-risk under this derogation, they must document their assessment before placing it on the market and register it in a specific section of the EU database (another subject for another day).
National competent authorities in the EU can request this documentation. The Commission is mandated to provide guidelines on the practical implementation of this derogation, including examples, by February 2026. If a provider misclassifies a system to circumvent requirements, market surveillance authorities can intervene (Article 80), and penalties may follow (Article 80(7)).
A Dynamic List: Amendments to Annex III (Article 7)
Companies like InnovateAI Corp and TrustFund AI must also be aware that the list of high-risk use cases in Annex III is not static.
The Commission is empowered to adopt delegated acts to add or modify use cases in Annex III if an AI system is intended for an Annex III area and poses a risk of harm to health, safety, or fundamental rights equivalent to or greater than existing high-risk systems.
The Commission will consider criteria such as the intended purpose, extent of use, data processed, level of autonomy, evidence of harm, potential extent of harm (especially to vulnerable groups or due to power imbalances), and the reversibility of outcomes. According to the Act, the Commission evaluates the need for amendments to Annex III annually.
For TrustFund AI, accepting their high-risk status early is crucial. For InnovateAI Corp, continuous assessment of ConverseAI’s evolving features and marketed applications will be necessary to ensure they don’t inadvertently cross into a high-risk category without preparing for the extensive obligations that follow.
This is the first part of a practical walk-through of the EU AI Act, a massive piece of legislation that everyone building with AI should study, or at least acknowledge, as part of their due diligence. Understanding its intricacies matters, as it sets out rules that govern the ethical use of artificial intelligence and impose safety and accountability standards.
As the landscape of AI technology continuously evolves, stakeholders must remain vigilant in their efforts to adapt to these regulations, thereby fostering a responsible approach to innovation that safeguards both users and society at large. Embracing these principles is not merely a legal obligation; it is a commitment to advancing a future where AI enhances human potential while minimizing risks associated with its deployment.
In Part II we will keep exploring how to apply the Act.
Disclaimer:
The views and opinions expressed in this blog are solely my own and are intended for educational and informational purposes only. They do not constitute legal, financial, or business advice. Readers are encouraged to seek independent professional advice before applying any ideas or suggestions discussed herein. Any reliance you place on the information provided is strictly at your own risk.
