Featured

Bias prevented – Qualitative and quantitative audit of college allowances control process DUO

Irregularities were identified in the college allowances control process as conducted by Dutch public sector organisation DUO in the period 2012-2022. Students living close to their parent(s) were significantly more often manually selected for a check than other students. The algorithm used to support the selection process performed as expected. Read more...

01-03-2024 technical audit

Recent activities

Presentation on Fundamental Rights Impact Assessments (FRIAs) and deliberative, inclusive stakeholder panels at the JTC21 CEN-CENELEC plenary, Dublin

13-02-2024 presentation

Panel discussion Dutch Data Protection Authority – Auditing algorithms

08-02-2024 presentation

University of Groningen (RUG) AI Act event

06-02-2024 presentation

Supported by

Building AI auditing capacity from a not-for-profit perspective


Distinctive in

Independence

By working on a non-profit basis and under explicit terms and conditions, we ensure the independence and quality of our audits and normative advice

Normative advice

Mindful of societal impact, our commissions provide normative advice on ethical issues that arise in algorithmic use cases

Public knowledge

All our audits and corresponding advice (algoprudence) are made publicly available, increasing collective knowledge of how to deploy and use algorithms in a responsible way

AI expertise

Profiling

Auditing rule-based and ML-driven profiling, e.g., differentiation policies, selection criteria, Z-testing, model validation and organisational aspects

FP-FN balancing

Context-dependent review of confusion matrix-based evaluation metrics for ML and DL models, such as false positive (FP) and false negative (FN) rates

Ranking

Recommender systems are everywhere. With the new Digital Services Act (DSA) having come into force last summer, auditing ranking systems is highly relevant
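To make the FP-FN terminology concrete, here is a minimal sketch of how false positive and false negative rates are computed from a confusion matrix. The counts are purely illustrative and not taken from any audit:

```python
def confusion_rates(tp, fp, fn, tn):
    """Compute false positive and false negative rates from raw confusion-matrix counts."""
    fpr = fp / (fp + tn)  # share of true negatives that were wrongly flagged
    fnr = fn / (fn + tp)  # share of true positives that were missed
    return fpr, fnr

# Hypothetical counts for a risk-flagging model
fpr, fnr = confusion_rates(tp=80, fp=40, fn=20, tn=860)
print(fpr, fnr)
```

Which rate matters more is context-dependent: in a welfare-fraud check, a false positive burdens an innocent citizen, while a false negative lets an error slip through, and the acceptable balance between the two is a normative choice, not a technical one.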

Recent audits

Risk Profiling Social Welfare Re-examination

The normative advice commission provides rationales for why specific variables are or are not eligible as profiling selection criteria for an xgboost algorithm

Technical audit indirect discrimination

Assessment of risk distributions through Z-tests and bias tests for various steps in an algorithm-driven decision-making process
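The Z-tests mentioned above compare selection rates between groups. A minimal sketch of a two-proportion Z-test, using made-up counts rather than figures from any audit:

```python
import math

def two_proportion_z_test(selected_a, total_a, selected_b, total_b):
    """Two-sided two-proportion Z-test: do groups A and B have the same
    selection rate? Returns the Z statistic."""
    p_a = selected_a / total_a
    p_b = selected_b / total_b
    # Pooled selection rate under the null hypothesis of equal rates
    p = (selected_a + selected_b) / (total_a + total_b)
    se = math.sqrt(p * (1 - p) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical counts: 300 of 1,000 in group A selected vs 200 of 1,000 in group B
z = two_proportion_z_test(300, 1000, 200, 1000)
print(round(z, 2))  # |z| > 1.96 suggests a significant difference at the 5% level
```

A statistically significant difference in selection rates does not by itself establish discrimination; it flags a disparity that then requires normative and legal assessment.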

Building algoprudence

Step 1

Identifying issue

Identifying a concrete ethical issue in a real algorithm or data-analysis tool

Step 2

Problem statement

Describing the ethical issue and legal aspects, and hearing stakeholders and affected groups

Step 3

Advice commission

Deliberative conversation on the ethical issue by a diverse and inclusive advice commission

Step 4

Public advice

The commission's advice is published together with the problem statement on our website. This publicly shared body of problem statements and normative advice is called algoprudence

Advantages of algoprudence

Learn & harmonize

> Ignites a collective learning process for deploying and auditing responsible AI

> Harmonizes the resolution of ethical questions and the interpretation of open legal norms

Question & criticize

> Fosters criticism of normative decision-making through transparency

> Informs public debate by surfacing important ethical issues for democratic oversight

Inclusion & participation

> Connects various stakeholders to design ethical algorithms together with technical experts

> Offers a European answer to deploying responsible AI systems

Jurisprudence for algorithms – The Movie

Newsletter

Stay up to date about our work by signing up for our newsletter
