
In the upcoming segment, we’ll delve deeper into Part II of the analysis.
For those who haven’t had the opportunity to review the previous blog post, here is the link to it.
This section continues the exploration of Chapter III, focusing on essential components such as risk management, data governance principles, and implementation guidelines.
WARNING: This part is a bit wordy, so get ready; the following instalment (Part III) starts exploring more technical, software-development ideas in relation to the Act.
Building Trustworthy AI: The High-Risk Compliance Piece
We need to tackle the high-risk classification for CreditWise a little differently than the one for InnovateAI Corp, because of how the regulation applies and how each company assesses and implements its risk management processes, policies, and procedures.
The “high-risk” classification for CreditWise means TrustFund AI is now subject to a comprehensive set of mandatory requirements laid out in Chapter III, Section 2 of the EU AI Act (Articles 8-15). These are not mere suggestions but legally binding prerequisites for placing their app on the EU market.
Article 8(1) makes it clear: high-risk AI systems shall comply with the requirements of Section 2, taking into account their intended purpose and the generally acknowledged state of the art, with the risk management system serving as a guiding principle.
The first major tasks to tackle are implementing a dynamic Risk Management System (Article 9) and determining the Data and Data Governance practices (Article 10) that CreditWise needs. So let’s get to it.
TrustFund AI’s newly appointed Chief Compliance Officer, Anya, pulls together a cross-functional team.
“Our journey with CreditWise,” she begins, “starts and ends with managing risk. Article 9 requires us to establish, implement, document, and maintain a continuous, iterative risk management system throughout CreditWise’s entire lifecycle”.
The team maps out the core steps mandated by Article 9(2).
Identification and Analysis of Known and Reasonably Foreseeable Risks.
They must identify risks CreditWise could pose to health, safety, or fundamental rights when used as intended (credit assessment).
For CreditWise, these risks include:
Fundamental Rights: Risk of discriminatory loan denials impacting access to credit (a key financial service, linking to Annex III, point 5(b) ), perpetuating societal biases, infringing on privacy through data use, lack of due process if decisions are opaque.
Safety/Health (Indirect): Financial distress due to unfair denial of credit or, conversely, granting credit inappropriately leading to unmanageable debt, could have health (stress, anxiety) and safety (economic instability) implications.
The Act stresses this includes risks that are “reasonably foreseeable,” not just those explicitly intended. The team then estimates the likelihood and severity of these identified risks. You can reference many risk management frameworks, but just to get you up to speed with some AI-related material, see this post.
This also includes considering “reasonably foreseeable misuse”. For example, what if deployers (banks using CreditWise) try to use the credit scores for purposes beyond what TrustFund AI intended, like employee vetting, if not contractually and technically restricted?
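The identification and estimation steps above can be sketched in code as a simple risk register. This is a hypothetical illustration, not anything prescribed by Article 9: the 1–5 scales, the multiplicative score, and the example risks are all assumptions made for the sketch.

```python
from dataclasses import dataclass

# Hypothetical Article 9-style risk register entry. The scales (1-5),
# the likelihood x severity score, and the example risks below are
# illustrative assumptions, not requirements from the Act.

@dataclass
class Risk:
    description: str
    affected_right: str      # e.g. non-discrimination, purpose limitation
    likelihood: int          # 1 (rare) .. 5 (almost certain)
    severity: int            # 1 (negligible) .. 5 (critical)

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

risks = [
    Risk("Discriminatory loan denials via proxy features",
         "non-discrimination", likelihood=3, severity=5),
    Risk("Deployer reuses credit scores for employee vetting",
         "purpose limitation", likelihood=2, severity=4),
]

# Rank risks so the highest-scoring ones get mitigation attention first.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.description}")
```

Note that the second entry captures the "reasonably foreseeable misuse" scenario from the text: misuse by deployers belongs in the same register as risks from intended use.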
Evaluation of Other Risks (from Post-Market Monitoring).
Anya emphasizes that this system must integrate with their future post-market monitoring (a requirement under Article 72). Here, you have to navigate the Act across several provisions: Article 72 references Article 98, and in turn Article 11 and Annex IV, and from those you come up with a plan to justify the possible interactions your system could have with other systems, AI or non-AI.
New risks emerging once CreditWise is in the wild must be fed back into this evaluation process.
Article 72(2): “The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and which allow the provider to evaluate the continuous compliance of AI systems.”
Adoption of Appropriate Risk Management Measures:
Based on the evaluations, TrustFund AI must adopt “appropriate and targeted risk management measures”.
- Article 9(5) dictates a hierarchy for these measures: Eliminate or reduce risks as far as technically feasible through CreditWise’s design and development (e.g., designing algorithms to minimize bias from the outset, robust data validation). Implement adequate mitigation and control measures for risks that cannot be eliminated (e.g., clear explanations of how scores are derived, robust human review processes for borderline cases by the deployer). Provide information (under Article 13) and, where appropriate, training to deployers.
After implementing these measures, any remaining risk must be deemed acceptable based on the deployer’s technical capabilities and intended usage context.
Article 9(5): “The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems, is judged to be acceptable.”
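One way to make the residual-risk idea concrete is a small sketch: start from an initial risk score, apply the mitigation hierarchy, and check the remainder against an acceptance threshold. The threshold, the reduction factors, and the multiplicative model are invented for illustration; the Act only requires that residual risk be "judged to be acceptable", not any particular arithmetic.

```python
# Illustrative sketch of the Article 9(5) idea: residual risk per hazard,
# after mitigation, must be judged acceptable. The threshold and the
# reduction factors are assumptions made for this sketch.

ACCEPTABLE_RESIDUAL = 6.0  # assumed acceptance threshold on a 1-25 scale

def residual_score(initial: float, mitigations: list[float]) -> float:
    """Apply multiplicative risk-reduction factors (0..1) to an initial score."""
    score = initial
    for factor in mitigations:
        score *= factor
    return score

# Bias risk: initial score 15, reduced by design-time debiasing (x0.5)
# and mandatory human review of borderline denials (x0.6).
residual = residual_score(15, [0.5, 0.6])
print(f"residual risk: {residual}")          # 4.5 under this sketch
assert residual <= ACCEPTABLE_RESIDUAL       # acceptable in this illustration
```

The ordering mirrors Article 9(5)'s hierarchy: design-time elimination first, then mitigation and control measures, then information and training for deployers.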
Testing as a Cornerstone (Article 9(6)-9(8))
“We can’t just theorize about risks,” Anya continues. “Article 9(6) requires rigorous testing to identify the best risk management measures and to ensure CreditWise performs consistently and compliantly”.
This testing must occur throughout development and, crucially, before CreditWise is placed on the market or put into service. It will be performed against “prior defined metrics and probabilistic thresholds appropriate to the intended purpose” (metrics which later must be disclosed in instructions for use, see Article 15(3)).
Where appropriate, this testing can even occur in real-world conditions under specific rules (Article 60, cross-referenced by Art. 9(7)).
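The phrase "prior defined metrics and probabilistic thresholds" lends itself to a short sketch: define the thresholds before testing, then gate market placement on all of them passing. The metric names and values here are illustrative assumptions, not metrics the Act itself specifies.

```python
# Sketch of Article 9(6)-style pre-market testing against metrics and
# thresholds defined in advance. Metric names and threshold values are
# assumptions made for this illustration.

THRESHOLDS = {
    "accuracy": 0.85,            # minimum acceptable overall accuracy
    "false_denial_rate": 0.05,   # maximum rate of wrongly denied applicants
    "group_accuracy_gap": 0.03,  # maximum accuracy gap between groups
}

def evaluate(measured: dict[str, float]) -> dict[str, bool]:
    """Return pass/fail per metric; rates and gaps are upper bounds."""
    results = {}
    for name, threshold in THRESHOLDS.items():
        value = measured[name]
        if name == "accuracy":
            results[name] = value >= threshold
        else:
            results[name] = value <= threshold
    return results

measured = {"accuracy": 0.88, "false_denial_rate": 0.04, "group_accuracy_gap": 0.05}
report = evaluate(measured)
print(report)
# The group accuracy gap exceeds its threshold, so under this sketch
# the system is not ready for market placement.
assert not all(report.values())
```

Because the thresholds are declared up front and later disclosed in the instructions for use, testing becomes a repeatable gate rather than an ad-hoc judgment.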
Vulnerable Groups and Existing Frameworks (Article 9(9)-9(10))
The team must also specifically consider if CreditWise is likely to adversely impact persons under 18 or other vulnerable groups.
For TrustFund AI, this means being acutely aware of how financial exclusion can disproportionately affect certain demographics. Since TrustFund AI is a financial institution, Anya notes that Article 9(10) allows them to integrate these risk management processes into their existing risk management procedures required under Union financial services law or others, which could streamline some efforts.
InnovateAI Corp (customer service AI app), observing TrustFund AI’s process, realizes that if ConverseAI is ever adapted for a high-risk scenario, they too would need such a comprehensive risk management system.
The Lifeblood of AI: Data and Data Governance (Article 10)
With the risk management framework outlined, TrustFund AI’s data science lead, Ben, takes center stage.
“CreditWise’s AI models will be trained on data. Article 10 is non-negotiable: our training, validation, and testing datasets must meet stringent quality criteria, managed through appropriate data governance and management practices”.
Core Data Governance Practices (Article 10(2))
Ben outlines the key practices they must embed:
- Relevant Design Choices: Documenting their data sourcing strategies from the start.
- Data Collection & Origin: Transparency about data sources and, for personal data, the original purpose of collection. For CreditWise, this means being clear about where applicant data comes from (e.g., applicant submission, credit bureaus with consent).
- Data Preparation: Rigorous processes for annotation, labelling (if supervised learning is used), cleaning, updating, and aggregation.
- Assumptions: Clearly formulating assumptions about what the data measures and represents (e.g., does a certain financial transaction history truly represent future creditworthiness?).
- Data Assessment: Evaluating the availability, quantity, and suitability of necessary datasets.
- Bias Examination (CRITICAL for CreditWise): This is a major focus. They must examine datasets for “possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations (feedback loops)”.
Challenge for CreditWise: Historical lending data often reflects past societal biases. If not carefully handled, CreditWise could perpetuate discrimination against certain demographics (e.g., based on gender, race, or postcode if these are proxies for protected characteristics). Check Recital 67’s warning about biases in underlying datasets.
- Bias Mitigation: Implementing “appropriate measures to detect, prevent and mitigate” identified biases. This might involve pre-processing techniques, in-processing adjustments to algorithms, or post-processing of outputs, alongside continuous monitoring.
- Data Gaps: Identifying relevant data gaps or shortcomings that prevent compliance and figuring out how to address them.
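The bias examination practice above can be sketched with a minimal disparate-impact check: compare approval rates across groups and flag large gaps. The toy records and the 0.8 ratio (the "four-fifths rule" used in some fairness practice) are illustrative assumptions; Article 10 does not mandate any specific metric.

```python
from collections import defaultdict

# Illustrative bias examination in the spirit of Article 10(2): compare
# approval rates across groups in a labelled dataset. The records and
# the 0.8 disparate-impact threshold are assumptions for this sketch.

records = [  # (group, approved) pairs from a hypothetical labelled dataset
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in records:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate impact ratio: {ratio:.2f}")

# A ratio well below 0.8 would flag the dataset for deeper bias
# investigation and mitigation before training proceeds.
assert ratio < 0.8
```

A real examination would cover many features, intersectional groups, and the feedback-loop effects the Act calls out; this sketch only shows the shape of the check.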
Dataset Quality Requirements (Article 10(3) & 10(4))
Beyond governance processes, the datasets themselves must be:
- Relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose.
- Possess appropriate statistical properties, including for the persons or groups CreditWise is intended to be used for. This means if CreditWise is for all EU adults, its training data can’t be skewed heavily towards one demographic group.
- Take into account, as required by the intended purpose, features particular to the specific geographical, contextual, behavioral, or functional setting. (e.g., financial behaviors might differ across EU regions).
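The representativeness requirement above can be sketched as a comparison between the demographic mix of the training set and that of the intended target population. The population shares and the 0.05 tolerance are assumptions invented for this illustration.

```python
# Sketch of an Article 10(3)-style representativeness check: compare the
# demographic mix of the training data with the intended target
# population. Shares and the 0.05 tolerance are illustrative assumptions.

target_population = {"18-30": 0.25, "31-50": 0.40, "51+": 0.35}
training_data     = {"18-30": 0.45, "31-50": 0.40, "51+": 0.15}

TOLERANCE = 0.05  # assumed maximum acceptable deviation per group

skewed = {
    group: {"training": training_data[group], "target": target_population[group]}
    for group in target_population
    if abs(training_data[group] - target_population[group]) > TOLERANCE
}
print("over/under-represented groups:", skewed)

# Here the 18-30 group is over-represented and the 51+ group
# under-represented, so the dataset would need rebalancing.
assert set(skewed) == {"18-30", "51+"}
```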
The Sensitive Topic: Using Special Category Data for Bias Mitigation (Article 10(5))
Anya raises a complex point from Article 10(5):
“To ensure bias detection and correction, providers may exceptionally process special categories of personal data (like data revealing racial or ethnic origin), but only if it’s strictly necessary and subject to a list of stringent safeguards.”
These safeguards include:
- Bias correction cannot be effectively fulfilled by other data (e.g., synthetic, anonymized).
- Data is subject to technical limitations on re-use, state-of-the-art security, and privacy-preserving measures (e.g., pseudonymization).
- Strict access controls and confidentiality obligations are in place.
- Data is not transmitted to other parties.
- Data is deleted once bias is corrected or the retention period ends, whichever is first.
- Detailed records justifying the processing must be kept.
“This is a powerful but carefully controlled provision. It means that if we find, for example, that CreditWise is underperforming for a particular ethnic group, and we can only verify and correct this by carefully analyzing data that might reveal ethnicity, Article 10(5) provides a narrow path, provided we meet all those conditions and document everything meticulously. This supports the Act’s goal of preventing discrimination, as highlighted in Recital 70.”
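One of the Article 10(5) safeguards, pseudonymisation, can be sketched with a keyed hash over direct identifiers, with the secret key held separately under strict access control. This is only one common technique and on its own satisfies none of the other Article 10(5) conditions (necessity, access controls, deletion, record-keeping); the function and field names are assumptions for this sketch.

```python
import hashlib
import hmac
import os

# Sketch of one Article 10(5) safeguard: pseudonymising direct
# identifiers before special-category data is analysed for bias.
# An HMAC with a separately held secret key is one common technique;
# this alone does not satisfy the other Article 10(5) conditions.

SECRET_KEY = os.urandom(32)  # held separately, under strict access control

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymise("applicant-12345")
assert token != "applicant-12345"
assert pseudonymise("applicant-12345") == token  # stable while the key lives
```

Deleting the key once bias correction is complete makes the tokens practically unlinkable, which lines up with the deletion safeguard in the list above.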
Now Ben’s team realizes the immense responsibility. For CreditWise, ensuring fair and unbiased credit assessments is not just good business; it’s a core legal requirement involving deep, ongoing work on their data. They must avoid the pitfalls of their earlier idea about scraping social media, as such data would fail almost every quality and governance criterion under Article 10 for a high-risk system.
For InnovateAI Corp, even if ConverseAI isn’t initially high-risk, if it processes personal data (like voice recordings containing biometric data as per Article 3(34)), they will still be bound by GDPR. And if they ever offer a high-risk version of ConverseAI, Article 10’s requirements will fully apply to its training, validation, and testing datasets.
The Blueprint and the Logbook: Documentation, Transparency, and Oversight (Articles 11-14)
With a risk management system in development and a clear data governance strategy taking shape, Anya from TrustFund AI turns the team’s attention to the comprehensive documentation, transparency, and human oversight mechanisms required for CreditWise.
“These aren’t just afterthoughts, they are integral to proving compliance and ensuring CreditWise is used responsibly. They are also what notified bodies will scrutinize during conformity assessment (a process detailed later in Article 43).”
The System’s Guide: Technical Documentation (Article 11)
“Article 11 mandates extensive technical documentation, which must be drawn up before CreditWise is placed on the market and kept up-to-date…”
This documentation must demonstrate compliance with all the high-risk requirements (Chapter III, Section 2) and be clear enough for national competent authorities and notified bodies to assess this compliance.
Now, let’s go to the bottom of the Act, to Annex IV, which details the minimum information required. For a more in-depth look (and worth checking), see the System Cards published by major AI providers; they will give you a good grasp of this. This also ties into the section below on Clarity for Users.
For the EU AI Act, Annex IV can be summarized as:
- General description: Intended purpose, provider details, system version, interaction with other hardware/software, forms in which it’s placed on the market, hardware requirements, and user interface descriptions.
For CreditWise, this means detailing its loan assessment purpose, its API for banks, and the dashboard for human reviewers.
- Elements of the AI system and development process: Methods, design specifications (algorithms, key choices, assumptions, what it optimizes for, expected output quality), system architecture, computational resources used, data requirements (datasheets on training, validation, testing data (provenance, scope, characteristics, selection, labelling, cleaning methodologies), human oversight measures planned, pre-determined changes, and validation/testing procedures (metrics for accuracy, robustness, bias, test logs, reports).
This is where TrustFund AI’s rigorous work under Article 10 on data governance will be meticulously documented.
- Monitoring, functioning, and control: Capabilities, limitations (including accuracy for specific groups), foreseeable unintended outcomes, risks to fundamental rights, human oversight needs (Article 14), and input data specifications.
- Risk management system details: As we discussed above and as per Article 9.
- Lifecycle changes, standards applied, EU declaration of conformity, and the post-market monitoring plan (Article 72).
Anya notes:
“While the Act offers a simplified documentation form for SMEs and microenterprises, given CreditWise’s high-risk nature and our ambitions, we’ll need the full, comprehensive version.”
This documentation must be kept for 10 years after market placement. For financial institutions like the banks that might deploy CreditWise, Article 18(3) notes that this technical documentation can be part of the documentation they already keep under relevant financial services law.
Keeping Track: Record-Keeping and Logs (Article 12)
“Beyond static documentation, CreditWise must technically allow for the automatic recording of events – logs – throughout its lifetime,” says Ben, citing Article 12(1).
These logs are crucial for traceability and must facilitate:
- Identifying situations that could result in CreditWise presenting a risk or undergoing a substantial modification (which would trigger a new conformity assessment under Article 43(4)).
- The provider’s post-market monitoring (Article 72).
- Monitoring of operations by deployers (a deployer obligation under Article 26(5)).
For certain biometric identification systems (listed in Annex III, point 1a), Article 12(3) specifies minimum logging details like period of use, reference database, input data leading to a match, and identification of persons involved in verification. While not directly CreditWise’s function, this illustrates the level of detail expected for some high-risk systems.
Logs must be kept by providers for an appropriate period, at least six months, unless other laws require longer. Deployers also have log-keeping obligations. So, when doing system design, your engineers should start looking at this early, not only for financial apps but in general, given how quickly AI systems change.
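A minimal sketch of what Article 12-style automatic event recording could look like: structured, timestamped records emitted for each assessment, which the provider and deployers can later query for traceability and post-market monitoring. The logger name, field names, and record shape are illustrative assumptions, not anything the Act prescribes.

```python
import json
import logging
from datetime import datetime, timezone

# Sketch of Article 12-style automatic event logging: structured,
# timestamped records per assessment. Logger and field names are
# illustrative assumptions for this sketch.

logger = logging.getLogger("creditwise.audit")
logger.setLevel(logging.INFO)

def log_event(event_type: str, **fields) -> str:
    """Emit one structured audit record and return it as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        **fields,
    }
    line = json.dumps(record, sort_keys=True)
    logger.info(line)  # in production: an append-only store with a retention policy
    return line

entry = log_event(
    "credit_assessment",
    model_version="1.4.2",
    score=0.72,
    human_review_required=True,
)
assert '"event": "credit_assessment"' in entry
```

Keeping the records as structured JSON lines (rather than free text) is what makes the Article 12(2) purposes practical: filtering for risk-relevant situations, substantial modifications, and deployer-side monitoring.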
Clarity for Users: Transparency and Provision of Information (Article 13)
In relation to Article 13, the Head of Product puts it this way:
“Our deployers (the banks and financial institutions using CreditWise) need to understand it inside out; the system must be designed for sufficient operational transparency to enable deployers to interpret its output and use it appropriately.”
This means CreditWise must be accompanied by instructions for use (digital or otherwise) that are “concise, complete, correct and clear…relevant, accessible and comprehensible“. These instructions must include:
- Provider identity and contact details.
- Characteristics, capabilities, and limitations of performance: Intended purpose; level of accuracy (including its metrics, as required by Article 15), robustness, and cybersecurity, and known circumstances affecting these; known or foreseeable circumstances leading to risks to health, safety, or fundamental rights; technical capabilities for explaining its output; performance for specific groups if relevant; and input data specifications or information on training/validation/testing datasets.
For CreditWise, this means clearly stating its accuracy in predicting creditworthiness, any known limitations (e.g., for individuals with thin credit files), and what data it was trained on.
- Pre-determined changes to the system and its performance.
- Human oversight measures (as per Article 14), including technical measures to help deployers interpret outputs.
- Computational/hardware resources needed, expected lifetime, and maintenance/care measures (including software updates).
- Description of mechanisms for deployers to use the logging capabilities (Article 12).
This information empowers deployers to meet their own obligations under Article 26, such as using the system according to instructions and conducting fundamental rights impact assessments (Article 27).
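The instructions-for-use content above lends itself to a machine-readable representation, in the spirit of the System Cards mentioned earlier. The schema, keys, and values below are entirely illustrative assumptions; the Act requires the content, not any particular format.

```python
# Sketch of representing Article 13(3) instructions-for-use content in a
# machine-readable form, akin to a model/system card. Every key and
# value below is an illustrative assumption, not a prescribed schema.

instructions_for_use = {
    "provider": {"name": "TrustFund AI", "contact": "compliance@example.com"},
    "intended_purpose": "Creditworthiness assessment for consumer loans",
    "accuracy": {"metric": "AUC", "value": 0.88, "evaluation_set": "EU holdout"},
    "known_limitations": ["reduced accuracy for thin-file applicants"],
    "human_oversight": "Denials below confidence 0.6 routed to human review",
    "input_specification": {"required_fields": ["income", "credit_history_months"]},
    "logging_access": "Deployers query audit logs via the provider dashboard",
}

# A deployer-facing completeness check might enforce an assumed set of
# mandatory sections before the system is put into service:
required = {"provider", "intended_purpose", "accuracy", "human_oversight"}
missing = required - instructions_for_use.keys()
assert not missing
```

A structured form like this also makes it easier to keep the instructions in sync with the technical documentation under Article 11, since both draw on the same underlying facts.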
Keeping Humans in the Loop: Human Oversight (Article 14)
“Crucially, CreditWise must be designed and developed so it can be effectively overseen by natural persons while it’s in use,” says Anya.
This is the core of Article 14.
Human oversight aims to prevent or minimize risks to health, safety, or fundamental rights, especially those persisting despite other measures.
The oversight measures must be commensurate with CreditWise’s risks, autonomy, and context of use, and can be:
- Built into the system by TrustFund AI before market placement (e.g., dashboards for human reviewers, alert systems for anomalous scores).
- Identified by TrustFund AI for implementation by the deployer (e.g., protocols for when a human must review an AI-generated loan denial).
To facilitate this, CreditWise must enable the assigned human overseers (e.g., bank loan officers) to:
- Understand its capacities and limitations and monitor its operation for anomalies or unexpected performance.
- Be aware of “automation bias” – the tendency to over-rely on the AI’s output, especially as CreditWise provides recommendations for decisions.
- Correctly interpret its output, using available tools. Explainable AI is a major subject in its own right and one for another post.
- Decide not to use the system or to disregard, override, or reverse its output in any particular situation. This is vital for CreditWise; a loan officer must be able to override an AI-recommended denial if other valid factors warrant it.
- Intervene or interrupt the system via a ‘stop’ button or similar safe halt procedure.
For certain biometric identification systems (Annex III, point 1a), Article 14(5) requires verification by at least two natural persons before action is taken, though this specific rule has exceptions for law enforcement/migration where disproportionate. While not directly about CreditWise, it underscores the Act’s emphasis on robust human control in sensitive high-risk areas.
Deployers of CreditWise will need to ensure their staff are adequately trained and have the authority to exercise this oversight effectively, supported by the AI literacy principles from Article 4.
Performance and Protection: Accuracy, Robustness, and Cybersecurity (Article 15)
Finally, the technical heart of CreditWise must be sound.
Article 15 mandates that high-risk AI systems like CreditWise:
- Achieve an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle.
- Have their levels of accuracy and relevant accuracy metrics declared in the instructions for use (linking back to Article 13(3)(b)(ii)).
TrustFund AI must be transparent about how accurate CreditWise is.
- Be as resilient as possible to errors, faults, or inconsistencies, especially those arising from interaction with people or other systems. This can be achieved through technical redundancies like fail-safe plans.
For systems that continue to learn post-deployment (which CreditWise might, to adapt to new financial data), they must be designed to reduce the risk of biased outputs influencing future operations (feedback loops).
- Be resilient against attempts by unauthorized third parties to alter their use, outputs, or performance by exploiting vulnerabilities.
This cybersecurity requirement is critical for CreditWise, which handles sensitive financial data. Technical solutions should address AI-specific vulnerabilities (one blog post coming about this soon) such as:
- Data poisoning (manipulating training data).
- Model poisoning (manipulating pre-trained components).
- Adversarial examples or model evasion (inputs designed to cause mistakes).
- Confidentiality attacks.
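One simple line of defence related to the resilience and evasion points above is validating every input against the declared input specification before scoring, so malformed or out-of-range inputs are rejected rather than fed to the model. The field names and ranges are illustrative assumptions; real AI-specific defences (against poisoning or adversarial examples) go well beyond this.

```python
# Sketch of one basic Article 15-style robustness measure: validate
# inputs against the declared input specification before scoring.
# Field names and ranges are illustrative assumptions; this is a first
# line of defence, not a full adversarial-robustness solution.

INPUT_SPEC = {
    "income": (0, 10_000_000),          # assumed annual income range, EUR
    "credit_history_months": (0, 960),  # up to 80 years of history
    "age": (18, 120),
}

def validate(applicant: dict) -> list[str]:
    """Return a list of violations; an empty list means the input is in-spec."""
    problems = []
    for field, (lo, hi) in INPUT_SPEC.items():
        if field not in applicant:
            problems.append(f"missing: {field}")
        elif not lo <= applicant[field] <= hi:
            problems.append(f"out of range: {field}={applicant[field]}")
    return problems

ok = {"income": 42_000, "credit_history_months": 60, "age": 34}
assert validate(ok) == []

bad = {"income": -5, "credit_history_months": 60, "age": 34}
assert validate(bad) == ["out of range: income=-5"]
```

Rejecting out-of-spec inputs also feeds the logging and transparency duties: each rejection is an event worth recording under Article 12, and the input specification itself belongs in the Article 13 instructions for use.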
High-risk AI systems might demonstrate compliance with these cybersecurity requirements by fulfilling essential requirements under forthcoming horizontal EU cybersecurity legislation for products with digital elements, or through certification under schemes like the Cybersecurity Act (Regulation (EU) 2019/881).
For InnovateAI Corp, even if ConverseAI aims for a lower-risk classification, these principles from Articles 11-15 represent best practices for building any reliable and trustworthy AI.
If any version of ConverseAI is ever deemed high-risk (e.g., if deployed in critical infrastructure management or for certain employment decisions, Annex III, points 2 and 4), these requirements would become fully mandatory.
This has been a very dense and in-depth look at critical components of the EU AI Act. In the following post, we will dive a bit deeper into technical aspects and considerations.
Disclaimer: The views and opinions expressed in this blog are solely my own and are intended for educational and informational purposes only. They do not constitute legal, financial, or business advice. Readers are encouraged to seek independent professional advice before applying any ideas or suggestions discussed herein. Any reliance you place on the information provided is strictly at your own risk.
