AI Act

The AI Act imposes broad new responsibilities for controlling risks from AI systems without, at the same time, laying down specific standards those systems are expected to meet. For instance:

  • Conformity assessment (Art. 43) – The proposed route for internal control relies too heavily on the self-reflective capacities of producers to assess AI quality management, risk management and bias, resulting in subjective best practices;
  • Risk- and quality management systems (Art. 9 and 17) – The requirements set out for risk management systems and quality management systems remain too generic. For example, they do not provide precise guidelines on how to identify and mitigate ethical issues such as algorithmic discrimination;
  • Normative standards – Technical standards alone, as requested by the European Commission from standardization body CEN-CENELEC, are not enough to realize AI harmonization across the EU. Publicly available technical and normative best practices for fair AI are urgently needed.

As a member of the Dutch standardization body NEN, Algorithm Audit contributes to the European debate on how fundamental rights should be co-regulated alongside product safety.

Presentation to European standardization body CEN-CENELEC on stakeholder panels

Our audits take into account the upcoming harmonized standards that will apply under the AI Act, excluding cybersecurity specifications. Each of our technical and normative audit reports elaborates on how it aligns with the current status of the AI Act harmonized standards.

Digital Services Act (DSA)

The Digital Services Act (DSA) lacks provisions requiring disclosure of the normative methodological choices that underlie the AI systems the DSA tries to regulate. For instance:

  • Risk definitions – Article 9 of the Delegated Regulation (DR) for independent third-party auditing (as mandated under DSA Art. 37) specifies that “audit risk analysis shall consider inherent risk, control risk and detection risk”. More specific guidance should be provided in Art. 2 of the DR on how risks relating to subjective concepts, such as “…the nature, the activity and the use of the audited service”, can be assessed;
  • Audit templates – Pursuant to Article 5(1)(a) of the DR, Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) shall transmit to third-party auditing organisations “benchmarks used […] to assert or monitor compliance […], as well as supporting documentation”. We argue that the normative considerations underlying the selection of these benchmarks should be probed more explicitly in this phase of the audit. We therefore asked the European Commission (EC) to add this dimension to Question 3(a) of Section D.1 Audit conclusion for obligation, Subsection II. Audit procedures and their results;
  • Insufficient knowledge of how to audit AI – Feedback submitted to the European Commission (EC) on the DSA Art. 37 DR reveals that:
    • Private auditors (like PwC and Deloitte) warn that the lack of guidance on criteria against which to audit poses a risk of subjective audits;
    • Tech companies (like Snap and Wikipedia) raise concerns about the industry’s lack of expertise to audit specific AI products, like company-tailored timeline recommender systems.

Read our feedback to the European Commission on the DSA Art. 37 Delegated Regulation

General Data Protection Regulation (GDPR)

The GDPR has strengths regarding participatory decision-making, but it also has weaknesses in regulating profiling algorithms and in its focus on fully automated decision-making.

  • Participatory DPIA (art. 35 sub 9) – This provision mandates that, in cases where a Data Protection Impact Assessment (DPIA) is obligatory, the opinions of data subjects regarding the planned data processing shall be sought. This is a powerful legal mechanism to foster collaborative algorithm development. Nevertheless, the inclusion of data subjects in this manner is scarcely observed in practice;
  • Profiling (recital 71) – Profiling is defined as: “to analyse or predict aspects concerning the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements”. However, the approval of profiling, particularly when “authorised by Union or Member State law to which the controller is subject, including fraud monitoring”, grants public and private entities significant flexibility to embed algorithmic decision-making derived from diverse types of profiling. This wide latitude raises concerns about the excessive consolidation of personal data and the consequences of algorithmic determinations, as illustrated by simple, rule-based but harmful profiling algorithms in The Netherlands;
  • Automated decision-making (art. 22 sub 2) – Allowing wide-ranging automated decision-making (ADM) and profiling under the sole condition of contract agreement opens the door to large-scale unethical algorithmic practices without accountability or public awareness.

Read Algorithm Audit’s technical audit of a risk profiling-based control process at a Dutch public sector organisation

Administrative law

Administrative law provides a normative framework for algorithm-driven decision-making processes, in The Netherlands for instance through the codification of the general principles of good administration (gpga). We argue that these principles are relevant to algorithmic practice, but require contextualisation, which is often lacking. Consider, for instance:

  • Principle of reasoning: Under the principle of reasoning, it must be sufficiently clear on what grounds and why an administrative body takes a decision. What can and cannot be categorised as ‘explainable’, the non-legal component of this legal norm, is still undergoing extensive development, and concrete norms are therefore lacking.
  • Principle of due diligence: This principle relates to the formation of a decision, and ML-driven risk profiling is used precisely in that phase of the decision-making process. The principle of due diligence can be jeopardized when ML-driven risk profiling is applied to incomplete or incorrect input data, or when the risk profile does not include all relevant facts. The principle currently suffers from a lack of interpretation, resulting in a lack of clear guidance.
  • Fair play principle: The principle of fair play, or proper treatment, which is partly codified as a prohibition of bias in Section 2:4 of the Dutch General Administrative Law Act, concerns the impartial execution of tasks by an administrative body. We argue that ‘contextualising’ the gpga for this principle should focus on new, digital manifestations of bias. A subsequent best-efforts obligation could then be applied to prevent bias and guarantee fairness in algorithmic applications.

Read Algorithm Audit’s article How ‘algoprudence’ can contribute to responsible use of ML-algorithms and its interplay with the Dutch General Administrative Law Act

Fundamental Rights Impact Assessment (FRIA)

The Impact Assessment Human Rights and Algorithms (IAMA) and the Handbook for Non-Discrimination, both developed by the Dutch government, assess discriminatory practice mainly by asking questions that are meant to stimulate self-reflection. They do not provide answers or concrete guidelines on how to realise ethical algorithms.

Registers

Unifying the principles of sound administration with (semi-)automated decision-making is challenging. For instance:

  • Obligation to state reasons – Governmental institutions must always provide clear explanations for their decisions. However, when machine learning is employed, such as for variable selection in risk profiling, this transparency may be obscured. This raises the question of how far arguments based on probability distributions are acceptable as explanations for why certain citizens are selected for a particular profile; a minimal sketch of such variable selection follows below.
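To make concrete what machine learning-based variable selection for risk profiling can look like, the sketch below shows how feature importances from a gradient-boosted tree model (here xgboost's scikit-learn API) can be used to pick profiling variables. It is an illustrative assumption, not the method reviewed in Algoprudence AA:2023:02: the dataset is synthetic and the variable names and top-5 cut-off are hypothetical. It illustrates why the resulting selection rests on statistical associations rather than on substantive reasons.

```python
# Minimal, illustrative sketch of ML-based variable selection for risk profiling.
# Assumptions: synthetic tabular data, xgboost's scikit-learn API, and a
# hypothetical "keep the top 5 variables" rule. Not an audited implementation.
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Hypothetical historical data: rows are citizens, columns are candidate
# variables, the label marks whether an earlier control found an irregularity.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=4,
                           random_state=0)
feature_names = [f"variable_{i}" for i in range(X.shape[1])]

# Fit a gradient-boosted tree model on the historical control outcomes.
model = XGBClassifier(n_estimators=200, max_depth=3)
model.fit(X, y)

# Rank candidate variables by the model's feature importances and keep the
# top 5. These importances are statistical associations, not legal reasons:
# relying on them to select citizens is exactly the explainability question
# raised under the obligation to state reasons.
ranking = sorted(zip(feature_names, model.feature_importances_),
                 key=lambda pair: pair[1], reverse=True)
selected_variables = [name for name, _ in ranking[:5]]
print(selected_variables)
```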

Read Algoprudence AA:2023:02 for a review of xgboost machine learning used for variable selection in risk profiling

Newsletter

Stay up to date about our work by signing up for our newsletter

Building public knowledge for ethical algorithms