Télécom ParisTech publishes a white paper on Fairness and Bias in AI, with our support

Updated: April 1, 2019

Fondation Abeona and researchers from Télécom ParisTech met in the fall of 2018 to discuss bias, discrimination, and fairness in algorithms. They produced a white paper that reviews the existing body of work in the area and raises awareness of AI bias in the French and European context.


Artificial intelligence and algorithms are increasingly present in our daily lives, changing the way we learn, work, and receive medical care. AI algorithms learn and make decisions based on the data they analyze. They look for patterns and, in theory, should produce accurate and fair decisions and recommendations. However, the historical data they learn from can be biased, disadvantaging people according to their age, social or ethnic origin, or gender. These biases are not only acquired through machine learning (ML) but can even be amplified by it. Moreover, algorithms are developed by people with their own cognitive biases, whether conscious or not. So how can we use artificial intelligence in an equitable way?
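To make this mechanism concrete, here is a minimal sketch with entirely invented data (it is not drawn from the white paper): two groups have identical skill distributions, but the historical decisions used as training labels held one group to a stricter bar. A plain logistic regression then learns that penalty as if it were genuine signal. The variable names (`skill`, `group`) and the numbers are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Two demographic groups with identical skill distributions.
group = rng.integers(0, 2, n).astype(float)
skill = rng.normal(0.0, 1.0, n)

# Historical decisions held group 1 to a stricter bar, so the training
# labels encode a penalty that has nothing to do with skill.
label = (skill - 0.8 * group + rng.normal(0.0, 0.3, n) > 0).astype(float)

# Plain logistic regression fitted by gradient descent on [skill, group, 1].
X = np.column_stack([skill, group, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probability of a "yes"
    w -= 0.1 * X.T @ (p - label) / n   # gradient step on the log loss

print("learned weights [skill, group, intercept]:", w.round(2))

# Positive-decision rate per group: the model reproduces the historical
# disadvantage of group 1, because "group" looked like useful signal.
pred = X @ w > 0
for g in (0.0, 1.0):
    print(f"group {int(g)}: positive rate {pred[group == g].mean():.2%}")
```

The learned weight on `group` comes out clearly negative even though the two groups are equally skilled by construction: the model has simply internalized the historical bias, which is the core problem the white paper examines.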


Documented cases of discrimination and inequity in AI exist, but they are largely based on the work of North American researchers and concern algorithms deployed in the United States: Medicare, predictive justice, Amazon's recruiting tool, etc. Few European examples exist. Does this mean that in France and Europe we run little risk of bias and discrimination in AI algorithms? And do the recommendations for fair AI that emerged from these American examples apply equally in the European context?


Fondation Abeona sponsored this project in line with its mission to support multidisciplinary research that uses data science and to catalyse reflection on fairness and equity in artificial intelligence. The white paper, produced by Télécom ParisTech researchers in economics, machine learning, and statistics, surveys the field, presents promising technical approaches, and opens a discussion of the social issues involved. We hope it will be followed by other papers that discuss specific European examples and recommend ways to address bias in AI.


Click here to view this post in French.