What we do

Auditing Algorithms

Our audit commissions conduct case-based ethical reviews of algorithmic methods, in a holistic and context-sensitive way that is mindful of societal impact.

Independent

By operating as a nonprofit and under explicit terms and conditions, we ensure the independence, academic quality and diversity of our audit commissions and of our ethical advice.

Ethics Beyond Compliance

We help organizations committed to ethical algorithms to make judgments about fairness and ethics beyond the requirements of legal compliance.

Public Knowledge

All our cases and the corresponding advice are made publicly available, increasing collective knowledge of how to devise and use algorithms in an ethical way.

Techno-Ethical Jurisprudence

From our case-based knowledge, data scientists can distil best practices for ethical algorithms. Over time, a helpful resource for techno-ethical issues will emerge.

Joint Effort

Let’s remove boundaries between public and private organizations that face similar ethical concerns. We believe in a collective approach to realise ethical algorithms. We offer a platform for collaboration between academics, developers and policy makers.


How we work

Cases we work on

Who we work with

We work together with international experts from various backgrounds, e.g. ethicists, legal professionals and data scientists. The composition of audit commissions varies per case; most of the experts are affiliated with academic institutions. The Algorithm Audit team gathers sufficient background information about the case, after which the experts conduct an in-depth study, first individually and then collectively. Our team drafts a report that condenses the varied views of the commission. The report published by Algorithm Audit has been agreed upon by the commission members.

Why we exist

Algorithm Audit was founded on the idea that ethics in algorithmic methods urgently needs case-based experience and a bottom-up approach. We believe that existing and proposed legislation does not and will not suffice to realize ethical algorithms. Why not?

  • The conditions set out in GDPR Article 22(2), under which automated decision-making (ADM) and profiling are allowed, are open to broad interpretation. Allowing wide-ranging ADM under the sole condition of contract agreement opens the door to large-scale unethical algorithmic practices without accountability or public awareness.
  • The newly proposed AI Act of the European Commission aims to regulate the use of high-risk algorithms, but remains too generic. For example, it does not provide precise guidelines on how to identify and mitigate ethical issues such as algorithmic discrimination. In addition, machine learning practice that falls outside the high-risk category is not free of major ethical concerns. The legal measures, which only take effect in several years, risk becoming a playground for legal experts and lawyers and will not directly offer concrete and extensive guidelines on ethical algorithms for organizations in industry and government. Hence, organizations will still need to make up their own minds about context-specific ethical guidelines and procedures for their use of algorithmic methods.
  • Perspective 3.1.1 of the Guidelines for Algorithms of the Dutch Court of Auditors argues that ethical algorithms are not allowed to “discriminate and that bias should be minimised”. Missing from this judgment is a discussion of what precisely constitutes bias in the context of algorithms and which methods are appropriate to ascertain and mitigate algorithmic discrimination (a minimal illustration of one such measurement follows this list). In the absence of a clear ethical framework, it is up to organizations to formulate context-sensitive approaches to combat discrimination.
  • The Impact Assessment Human Rights and Algorithms (IAMA) and the Handbook for Non-Discrimination, both developed by the Dutch government, assess discriminatory practice mainly by asking questions that are meant to stimulate self-reflection. They do not provide answers or concrete guidelines on how to realise ethical algorithms.
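To make concrete what “ascertaining” algorithmic discrimination can involve, the sketch below computes the demographic parity difference, one common fairness metric among many, on a small set of hypothetical decisions. The library choice (pandas), the column names and the figures are assumptions made purely for illustration; they do not come from any of the guidelines cited above.

# Illustrative sketch only: demographic parity difference on hypothetical data.
# Column names ("group", "approved") and the example figures are assumptions.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame, outcome: str, group: str) -> float:
    """Absolute gap in positive-outcome rates between the best- and worst-off groups."""
    rates = df.groupby(group)[outcome].mean()
    return float(rates.max() - rates.min())

# Hypothetical loan-approval decisions for two demographic groups.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

gap = demographic_parity_difference(decisions, outcome="approved", group="group")
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50

A single number like this never settles the ethical question: whether demographic parity, equalised odds or another criterion is the right yardstick, and what gap is tolerable, is exactly the kind of context-sensitive judgment our audit commissions address.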

We believe a case-based and context-sensitive approach is indispensable to develop ethical algorithms. One should not expect top-down regulation and legislation to solve all ethical problems in AI and machine learning. Taking all contested algorithmic cases to court is practically infeasible. More importantly, organizations will always carry their own responsibility for ethical algorithms within and beyond the obligation of legal compliance. Hence, new bottom-up initiatives like Algorithm Audit are necessary to support these organizations and to strengthen ethical practice in ADM and decision support.

We provide a platform where experts in AI ethics from various disciplines can interact with, learn from and steer actual algorithmic practice and surrounding ethical concerns. We increase public knowledge and stimulate an informed and open debate about what ethical algorithms we desire as a society in various contexts. Our audit commissions shape future public values through discussion and deliberation. As such, Algorithm Audit contributes in the digital realm to SDG16 – Peace, Justice and Strong Institutions.

Get in touch

Do you have an ethical issue for review? Or want to share ideas? Let us know!