Featured
Bias prevented – Qualitative and quantitative audit of college allowances control process DUO
Irregularities were identified in the college allowances control process as conducted by Dutch public sector organisation DUO in the period 2012-2022. Students living close to their parent(s) were significantly more often manually selected for a check than other students. The algorithm used to support the selection process performed as expected. Read more...
Recent activities
Presentation on Fundamental Rights Impact Assessments (FRIAs) and deliberative, inclusive stakeholder panels at the JTC21 CEN-CENELEC plenary, Dublin
Panel discussion Dutch Data Protection Authority – Auditing algorithms
University of Groningen (RUG) AI Act event
Building AI auditing capacity from a not-for-profit perspective
Distinctive in
Independence
By working on a nonprofit basis and under explicit terms and conditions, we ensure the independence and quality of our audits and normative advice
Normative advice
Mindful of societal impact, our commissions provide normative advice on ethical issues that arise in algorithmic use cases
Public knowledge
All our audits and corresponding advice (algoprudence) are made publicly available, increasing collective knowledge of how to deploy and use algorithms in a responsible way
AI expertise
Profiling
Auditing rule-based and ML-driven profiling, e.g., differentiation policies, selection criteria, Z-testing, model validation and organisational aspects
FP-FN balancing
Context-dependent review of ML and DL confusion matrix-based evaluation metrics, such as False Positives (FPs) and False Negatives (FNs)
Ranking
Recommender systems are everywhere. With the new Digital Services Act (DSA), which came into force last summer, auditing ranking systems is highly relevant
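The FP-FN balancing described above can be illustrated with a minimal sketch: counting false positives and false negatives from a confusion matrix at different decision thresholds. All data and thresholds below are hypothetical, for illustration only; they are not taken from any actual audit.

```python
import numpy as np

def confusion_counts(y_true, y_score, threshold):
    """Count TP, FP, TN, FN for a given decision threshold."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return tp, fp, tn, fn

# Toy labels and model scores (hypothetical)
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.4, 0.8, 0.3, 0.2, 0.6, 0.7, 0.1]

# Lowering the threshold trades false negatives for false positives
for t in (0.5, 0.25):
    tp, fp, tn, fn = confusion_counts(y_true, y_score, t)
    print(f"threshold={t}: FP={fp}, FN={fn}")
```

Which balance is acceptable is context-dependent: in a welfare-fraud check a false positive burdens an innocent citizen, while in medical screening a false negative may be the costlier error.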
Recent audits
Risk Profiling Social Welfare Re-examination
The normative advice commission provides rationales for why variables are or are not eligible as a profiling selection criterion for an XGBoost algorithm
Technical audit indirect discrimination
Assessment of risk distributions through Z-tests and bias tests for various steps in an algorithm-driven decision-making process
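One common form of the Z-testing mentioned above is a two-proportion Z-test, which checks whether the selection rates of two groups differ more than chance would allow. The sketch below is illustrative; the group counts are hypothetical and not figures from the audit.

```python
import math

def two_proportion_z(selected_a, total_a, selected_b, total_b):
    """Two-proportion Z-test on selection rates of groups A and B.

    Returns the Z statistic under the pooled null hypothesis of equal
    rates; |Z| > 1.96 indicates a difference at the 5% level.
    """
    p_a = selected_a / total_a
    p_b = selected_b / total_b
    p_pool = (selected_a + selected_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical counts: 120 of 1,000 in group A were selected for a
# manual check vs 80 of 1,000 in group B
z = two_proportion_z(120, 1000, 80, 1000)
print(round(z, 2))  # |Z| > 1.96, so the selection rates differ significantly
```

A significant Z statistic shows only that the rates differ, not why; whether the difference amounts to indirect discrimination requires the qualitative, legal and organisational review that accompanies the statistical test.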
Building algoprudence
Step 1
Identifying issue
Identifying a concrete ethical issue in a real algorithm or data-analysis tool
Step 2
Problem statement
Describe the ethical issue and legal aspects, and hear stakeholders and affected groups
Step 3
Advice commission
Deliberative conversation on the ethical issue by a diverse and inclusive advice commission
Step 4
Public advice
The commission's advice is published together with the problem statement on our website. Publicly sharing the problem statement and normative advice is called algoprudence