Discriminated by an algorithm

Algorithms classify us: poor, rich, customer or not, gay, poorly educated, and so on. Decisions ranging from tax levies to personnel policy are based on those labels. That has consequences for our constitutional rights.
The media frequently report on the dark sides of algorithms, Big Data, the Internet of Things and Artificial Intelligence. Much of that attention goes to the usual suspects: Facebook, Google, Twitter and YouTube. The research project “Algorithms and Constitutional Rights” aims to look beyond them, focusing on the practices of smaller businesses, insurers, banks and, of course, the government. Business and government decisions based on algorithms may have consequences for our constitutional rights.
On to Parliament
Kajsa Ollongren, Dutch Minister of the Interior and Kingdom Relations, who commissioned the project, calls it “important” and “solid” in a letter to parliament. The project focuses on the impact of algorithms on privacy rights, freedom rights and the right to equal treatment, among others.
Currently involved in the project are professor Janneke Gerards (Fundamental Rights Law), professor Remco Nehmelman (Public Institutional Law) and Legal Research Master’s student Max Vetzo, who, according to Gerards, was responsible for an important part of the research. All three are based at Utrecht University and aim to map the bottlenecks.
Discrimination by an algorithm, how does that work?
Vetzo: “Algorithms are often used to distinguish between people or situations, for instance to determine who gets a mortgage and who doesn’t, or to vary insurance premiums between customers. The problem is the assumption that algorithms, as opposed to humans, produce ‘neutral’ outcomes when making such distinctions. That is not the case. The data an algorithm works with, as well as the algorithm itself, can contain bias and prejudice, leading to discrimination.”
People assume that algorithm outcomes are ‘neutral’. That is not the case.
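To make that mechanism concrete, here is a minimal, hypothetical sketch in plain Python, with invented data: even when the protected attribute is removed from the inputs, a rule learned from historically biased decisions can reproduce that bias through a correlated proxy such as a postal code.

```python
# Hypothetical illustration of proxy discrimination (invented data).
# Historical loan decisions were biased against group B; the protected
# attribute is never used, but postal code correlates with group.

history = [
    ("1011", "A", True), ("1011", "A", True), ("1011", "A", False),
    ("9741", "B", False), ("9741", "B", False), ("9741", "B", True),
]  # (postal_code, group, historically_approved)

def learned_rule(postal_code):
    """Naive 'model': approve if most past applicants with the same
    postal code were approved. The group label plays no direct role."""
    outcomes = [approved for code, _, approved in history if code == postal_code]
    return sum(outcomes) > len(outcomes) / 2

for code, group in [("1011", "A"), ("9741", "B")]:
    print(f"group {group} (postal code {code}): approved={learned_rule(code)}")
# Group A is approved, group B is rejected: the historical bias
# survives even though 'group' was excluded from the inputs.
```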
Deliberate discrimination?
Vetzo: “Discrimination can be an unintended side effect of algorithmic decision-making, but, as we argue in the report, every form of unintended discrimination can also be purposefully orchestrated. Because algorithms can be incredibly complex, especially in cases of ‘deep learning’, it is difficult to grasp how, and on which (possibly discriminatory) grounds, a certain algorithmic decision is made. That makes it hard to transparently justify decisions made with the help of algorithms. That, in turn, means that possible discrimination becomes hard to prove. And because algorithms are used in a multitude of fields, from crime control to education to taxation, these problems of discrimination occur more and more often.”
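One way such hidden discrimination can still be surfaced, offered here as an illustration rather than anything prescribed in the report, is to audit the outcomes of a black-box system statistically. The sketch below applies the ‘four-fifths rule’, a rule of thumb from US employment practice, to invented selection data.

```python
# Hypothetical audit sketch: even when an algorithm is a black box,
# its outcomes can still be tested. The "four-fifths rule" flags
# possible disparate impact when one group's selection rate falls
# below 80% of another group's.

def selection_rate(decisions):
    return sum(decisions) / len(decisions)

group_a = [True, True, True, False]    # selection rate 0.75
group_b = [True, False, False, False]  # selection rate 0.25

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"impact ratio: {ratio:.2f}")    # 0.33
if ratio < 0.8:
    print("possible disparate impact: warrants closer scrutiny")
```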

“Equality of arms” is being threatened, you write. That sounds serious.
Gerards: “It is! Say you are rejected for a job and you suspect that something went wrong in the procedure, in which an algorithm was used. Try and prove that! How are you going to fight that decision? You don’t have the algorithm; the opposing party does. The opposing party can refuse to give insight into the way the algorithm works, because the creator owns the rights to it. Or the opposing party can dodge responsibility by claiming to merely use the algorithm. In short: one party possesses the necessary information; the other doesn’t. Moreover, if there is no insight into the algorithm, it will prove difficult for a judge to pass judgment. This is also true for disputes concerning price discrimination in sales offers, for example, or in insurance premiums.”
Algorithms can support judges the way Amazon suggests products to customers: ‘With this category of violations under these specific circumstances, other judges usually impose fine X.’
Are judges already basing their judgments on algorithms?
Gerards: “In the Netherlands, not very often, but it can be a convenient tool, for instance for calculating the amount of a fine. Data about previous court cases can, in addition to the law, guidelines and experience, support the judge. Almost like Amazon providing a suggestion to the customer: ‘With this category of violations under these specific circumstances, other judges usually impose fine X.’ That can increase equality before the law. And it would also serve as an example of how to properly use an algorithm: as a tool. Technological knowledge in addition to human knowledge.”
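A toy sketch of the decision-support idea Gerards describes, with entirely invented case data and categories: look up comparable past cases and report the typical fine.

```python
# Toy sketch of the decision-support idea described above: given a
# violation category and circumstances, report the fine other judges
# typically imposed in comparable cases. All case data is invented.
from statistics import median

past_cases = [
    {"category": "speeding", "repeat_offender": False, "fine": 300},
    {"category": "speeding", "repeat_offender": False, "fine": 350},
    {"category": "speeding", "repeat_offender": True,  "fine": 600},
    {"category": "speeding", "repeat_offender": True,  "fine": 650},
]

def suggest_fine(category, repeat_offender):
    comparable = [c["fine"] for c in past_cases
                  if c["category"] == category
                  and c["repeat_offender"] == repeat_offender]
    return median(comparable)

# The suggestion supports, but never replaces, the judge's own judgment.
print(suggest_fine("speeding", repeat_offender=True))  # 625.0
```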
Is the complexity of algorithms comprehensible to a judge?
Gerards: “That can prove challenging. Some algorithms are self-learning. When they feed themselves with data, they can get smarter, but also more stupid, with outcomes drifting away from their original intention. As with every technological advancement, it’s a matter of trial and error. At a workshop on this subject I attended, it was suggested that you might be able to certify algorithms, as you would food or medicine. Has an algorithm been tested before it enters the market? What data does it contain? And what are the side effects? That could help, but with self-learning algorithms that is challenging, too. Either way, jurists and policy makers always have to make a translation: how does this decision influence someone’s life or a certain situation? Do we want it to? We also have to think about who is responsible for the decision: the builder of the algorithm, the one using it, or another party?”
We might be able to certify algorithms, as you would food or medicine. Has it been tested? What data does it contain? Are there side effects?
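The drift Gerards mentions can be illustrated with a stylized feedback loop, not real machine learning and not taken from the report: a system that re-fits its decision threshold on the cases it itself approved slides away from its original behaviour.

```python
# Stylized feedback loop (not real machine learning): a system that
# re-fits its approval threshold on the cases it itself approved keeps
# raising the bar, drifting away from its original behaviour.
scores = [0.2, 0.4, 0.5, 0.6, 0.8]
threshold = 0.5  # original, human-chosen cut-off

for round_no in range(4):
    approved = [s for s in scores if s >= threshold]
    # "Self-learning" step: the new threshold is the mean of the
    # approved cases only, so rejected cases never correct it.
    threshold = sum(approved) / len(approved)
    print(f"round {round_no}: threshold={threshold:.3f}, approved={len(approved)}")
# The threshold climbs from 0.5 to 0.8 and the approval pool shrinks,
# even though nothing about the underlying cases changed.
```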
What tasks do you see for jurists?
Vetzo: “The most important task for jurists, I think, is to collaborate with IT experts in order to tackle the problems described in our book. The human rights issues we address are urgent and require a combination of legal and technological solutions. These solutions can only be found when technicians and jurists collaborate and are willing and able to look past the confines of their own discipline.”
Gerards: “Strangely, technological development does invite ethicists to the table, but hardly ever jurists. Jurists, in turn, have to gain more technological knowledge, which is why I am very pleased with the minor at Utrecht University. It is, of course, often true that jurists only start regulating new forms of technology after it has been in use for a while (take the rules concerning drones, for example). I think jurists should collaborate with ethicists and technicians at an earlier stage in the process: what can be made, what are the consequences, and how do we want to deal with that?”