Sarah Benichou, Director for the Promotion of Equality and Access to Rights, Défenseur des droits

[Portrait of Sarah Benichou]

Sarah Benichou studied law and English and became involved in various associations at an early age. She headed the legal department of SOS Racisme and then served as its secretary general between 2000 and 2005. She then became a trainer and consultant on anti-discrimination, gender equality and diversity issues, and has taught in various higher education institutions. She wrote her doctoral thesis, "Le droit à la non-discrimination 'raciale'", under the supervision of Danièle Lochak. A doctor of law since 2011, she was recruited by the Défenseur des droits in 2012. For several years, this specialist in discrimination has helped steer the institution's work on algorithms and AI. She has been Director for the Promotion of Equality and Access to Rights at the Défenseur des droits since September 2022.

Can you tell us about the Défenseur des droits (DDD) and its missions?

Sarah Benichou: The Défenseur des droits is an independent institution, created in 2011 and enshrined in the French Constitution. It has been entrusted with two missions: on the one hand, to defend people whose rights are not respected, and on the other, to ensure equal access to rights for all.

Any individual or any legal person (a company, an association, etc.) may refer a matter to it directly and free of charge when they:

  • Believe they have been discriminated against;
  • Observe that a representative of public law enforcement (police, gendarmerie, customs, etc.) or of a private security service (a security guard, for example) has not respected the rules of professional conduct;
  • Have difficulties in their dealings with a public service (Caisse d'Allocations Familiales, Pôle Emploi, pension funds, etc.);
  • Believe that a child's rights are not being respected;
  • Need protection and/or guidance as a whistleblower.

These missions of the DDD were defined by Organic Law No. 2011-333 of 29 March 2011 on the Défenseur des droits.

To ensure that everyone's rights and freedoms are respected, the DDD has two means of action: on the one hand, it deals with individual requests received directly at headquarters or through its territorial network of 550 delegates; on the other, it carries out actions to promote rights and equality.

What is the doctrine developed by the Défenseur des droits on digital technology and artificial intelligence?

Sarah Benichou: The DDD has invested in the field of rights in the digital world, which covers multiple issues. Beyond the dematerialisation of public services (their digitisation) and the need to keep them accessible, it takes a particular interest in minors, as shown for example by the Educadroit programme, which deals in particular with issues related to the rights of the child in a digital world. This programme aims to make young people aware of their rights: to make them understand that they are subjects of law and not merely objects of law. In addition, the 2022 annual report on the rights of the child, entitled "Privacy: a right for the child", opens with the right of children to privacy in the digital and media spheres.

As the issue of algorithms cuts across all its fields of competence, the DDD has deepened its work from the perspective of discrimination, both on public and private algorithms, in the form of opinions, decisions and reports in various fields.

As far as the private sector is concerned, as early as 2015 the DDD published a guide on "Recruiting with digital tools without discriminating".

With regard to public algorithms, the report "Fighting social benefit fraud: at what cost to users", published in September 2017, is the first document in which the DDD specifically addresses the issues of algorithms and the use of data mining. The report points out that infringements of users' rights, and of the principles intended to guarantee them, such as equality before public services, human dignity and the rights of the defence, affect every stage of the implementation of the policy to combat social benefit fraud (from the detection of fraud to its sanction, including the recovery of unduly paid sums). Aware of recent developments in this field, the DDD is planning to update this work.

In its Opinion 18-26 of 31 October 2018 on the 2018-2022 programming and reform bill for justice, the DDD drew the legislator's attention to the need to approach the use of algorithms with caution. It noted that the experiments already conducted in France, particularly by judges of the Douai and Rennes courts of appeal, had highlighted the limits of a system based more on a statistical and quantitative approach than on a qualitative one, which does not always make it possible to grasp the subtleties of the reasoning behind judicial decisions.

Furthermore, in the context of the bioethics bill, the DDD issued Opinion 19-11 of 5 September 2019. It recalled the need to establish a principle of human intervention in the algorithmic processing of massive health data, which raises major ethical and legal questions, and supported providing patients with proper information.

Then, with Decision No. 2018-323 of 21 December 2018, the DDD took up the issue of support for applicants with disabilities in the new Parcoursup procedure. It pointed out that the software was not accessible to people with disabilities. For example, a baccalaureate holder with a disability could not explain why there had been breaks in their school career, or significant absences linked to their state of health rather than to any unwillingness to attend school. This could obviously affect the assessment of their file and the consideration of their wishes. It was necessary to allow these candidates with disabilities to explain these points in order to ensure that they were not discriminated against. The DDD therefore recommended that the Minister of Higher Education, Research and Innovation take appropriate measures to ensure that persons with disabilities have access to higher education without discrimination and on an equal basis with others, in accordance with Article 24(5) of the International Convention on the Rights of Persons with Disabilities (ICRPD).

Secondly, Decision No. 2019-099 of 8 April 2019 pointed out the risks of discrimination based on place of residence on the Parcoursup platform. A risk of geographical discrimination was noted, as the criterion of place of residence was taken into account through sectorisation and the modulation of grade averages according to the high school attended. A risk of indirect discrimination on the basis of origin was also pointed out, given the school and spatial segregation that exists today. The illegality of certain legal provisions relating to local algorithmic processing, i.e. processing implemented by higher education institutions, was challenged before the courts and before the Constitutional Council via a question prioritaire de constitutionnalité. Students can only learn the details of the decision-making process once a refusal decision has been taken concerning them, and the information communicated at this stage concerns the general "criteria and procedures for examining their applications" as well as the pedagogical reasons for the decision taken. This usually leads to misunderstandings on the part of both students and their parents.

Based on this work and these decisions, the institution has reflected extensively on algorithmic biases, which has required developing a better understanding of machine learning and deep learning systems, which are very complex from a technological point of view.

This led in particular to the organisation, in May 2020, of a joint seminar with the CNIL on algorithmic discrimination, designed to improve understanding of the subject and to examine several approaches to the challenges it poses. The DDD then published, with the support of the CNIL, a statement presenting its main recommendations, "Algorithms: preventing the automation of discrimination" (May 2020), and met with numerous actors to raise awareness of these issues, which are still largely absent from the French public debate.

While algorithms can work with all types of data (personal or not, sensitive or not), algorithms processing biometric data (which are among sensitive data) present particular risks. Biometric technologies also carry risks of discrimination and infringements of specific rights, which were the subject of a new report: "Biometric technologies: the imperative respect of fundamental rights" (2021). The DDD warned of the significant risks posed by identification technologies, especially when they are deployed in public spaces. There is a significant risk to privacy, but also a risk of discrimination (due to the biases of facial recognition algorithms and the way these algorithms can be used in practice), as well as a risk of a chilling effect, i.e. the risk that individuals, knowing they are being monitored, will alter their behaviour and refrain from exercising their fundamental rights, foremost among which are freedom of expression and the freedom to demonstrate.

Following on from this report, a survey was conducted in October 2022 on the public's perception of the development of biometric technologies in France. It revealed a significant lack of information among the public on the subject, a degree of confidence that varies according to the entities deploying these systems, a growing awareness of the risks of infringement of rights and a strong desire to see the existing legal framework strengthened.

In response to the lack of information and training among stakeholders, the DDD is also developing awareness-raising tools. First, there is the "Artificial Intelligence and Discrimination" training course (2021 and 2022 editions), carried out in partnership with the Council of Europe. This multidisciplinary training aims to provide a better understanding of the various problems related to the risks of discrimination.

Secondly, seminars with Equinet, the European Network of Equality Bodies, were organised in 2021 and 2022 as part of a further training programme for Equinet members on AI and equality.

Does the term "artificial intelligence" seem clear enough to you?

Sarah Benichou: The Défenseur des droits (DDD) did not use this expression at the beginning of its work, preferring the term algorithms ("closed" or "learning"). The term "artificial intelligence", which is neither understandable nor educational, has the twofold effect of dumbing the subject down and of relieving humans of responsibility. Yet there is always a human decision at the root, even if its importance varies depending on the system in question.

The issues surrounding the definition of artificial intelligence and, above all, of who defines it, are clearly decisive today for determining how these systems are regulated and controlled. A definition of AI that is too broad poses a real problem for economic actors; but if the definition is too narrow, there is a risk that important technologies will not be sufficiently controlled. Moreover, any definition must remain sufficiently flexible, as the uses of AI and the capabilities of this technology are evolving rapidly.

There is no single definition of AI. One conception is taken from Frederik Zuiderveen Borgesius' report "Discrimination, Artificial Intelligence and Algorithmic Decisions", which states that artificial intelligence (AI) can be described, in simplified terms, as "the science of making machines intelligent"; more formally, it is "the study of designing intelligent agents", where an agent is "a thing that acts", such as a computer.

According to the CNIL, artificial intelligence is not a technology in the strict sense of the term but rather a scientific field in which tools can be classified when they meet certain criteria. AI is therefore based on algorithms that feed on data.

How to prevent the automation of discrimination?

Sarah Benichou: Digital tools, the use of which increased with the COVID-19 health crisis, are often based on algorithms, without the general public always being aware or informed of it. Now used in areas such as access to social benefits, policing, justice and recruitment, they are a source of progress but also carry risks for fundamental rights, as the DDD and the CNIL have already pointed out.

Behind the apparent neutrality of algorithms, research has revealed the extent of the biases that can occur in their design and deployment. Like the databases that feed them, they are designed and generated by humans, whose stereotypes, when replicated automatically, can lead to discrimination.
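To make this mechanism concrete, here is a minimal, purely illustrative sketch in Python, using hypothetical data and a deliberately simplistic stand-in for a "model": a system that learns from historical recruitment decisions in which one group was favoured simply reproduces, and automates, that disparity.

    import random

    random.seed(0)

    # Hypothetical historical recruitment data: (group, hired) pairs.
    # Group "B" was historically hired less often for comparable profiles.
    history = [("A", random.random() < 0.60) for _ in range(1000)] \
            + [("B", random.random() < 0.30) for _ in range(1000)]

    # A naive stand-in for a learning system: it simply learns the
    # historical hiring rate observed for each group.
    def learned_score(group):
        outcomes = [hired for g, hired in history if g == group]
        return sum(outcomes) / len(outcomes)

    for group in ("A", "B"):
        print(f"predicted hiring probability for group {group}: {learned_score(group):.2f}")

    # The learned scores mirror the historical disparity: without correction,
    # past discrimination is repeated automatically and at scale.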

Considering that this issue should not remain a blind spot in the public debate, the international seminar mentioned above, organised in 2020 by the DDD and the CNIL, brought together several stakeholders to discuss the transparency of algorithms and discriminatory biases. All the experts pointed out the considerable risks of discrimination that the exponential use of algorithms can create for each and every one of us, in all spheres of our lives.

In order to prevent these risks, correct them and make the actors responsible, the DDD calls for a collective awareness and urges the public authorities and the actors concerned to take measures to prevent discrimination from being reproduced and amplified by these technologies.

The DDD, in partnership with the CNIL, has proposed the following guidelines: 

  • Train and raise awareness among professionals in the technical and IT engineering professions of the risks that algorithms pose to fundamental rights;
  • Support research to develop studies on measuring and preventing bias, and to further the notion of 'fair learning' - i.e. designing algorithms to meet the objectives of equality and understanding, not just performance;
  • Strengthen legal obligations in terms of information, transparency and explainability of algorithms with regard to users and the persons concerned, but also third parties and professionals using these systems, in the name of the general interest, as illustrated by the questions raised by Parcoursup;
  • Conduct impact assessments to anticipate the discriminatory effects of algorithms and monitor their effects after deployment (a minimal illustration is sketched after this list).
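As a purely illustrative complement to the last guideline, the following Python sketch shows one possible form of ex-post monitoring: comparing selection rates across groups after deployment. The data, the group labels and the 0.8 threshold (the "four-fifths" convention) are assumptions made for the example, not a methodology prescribed by the DDD or the CNIL.

    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: list of (group, selected) pairs observed after deployment."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            selected[group] += int(ok)
        return {g: selected[g] / totals[g] for g in totals}

    def disparate_impact_ratios(decisions, reference_group):
        rates = selection_rates(decisions)
        ref = rates[reference_group]
        return {g: r / ref for g, r in rates.items()}

    # Hypothetical monitoring data collected after deployment.
    observed = [("A", True)] * 70 + [("A", False)] * 30 \
             + [("B", True)] * 40 + [("B", False)] * 60

    for group, ratio in disparate_impact_ratios(observed, reference_group="A").items():
        flag = "review needed" if ratio < 0.8 else "ok"
        print(f"group {group}: selection-rate ratio vs. reference = {ratio:.2f} ({flag})")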

The phenomenal development of algorithmic technologies and learning systems requires institutions to remain vigilant about the consequences of these technological developments, but also to anticipate them, so that democratic debate can take place in an informed manner, while devising a legal framework and regulation that protect rights and freedoms.

The DDD and the CNIL are continuing their reflections on this subject and will contribute to those of public decision-makers. In this perspective, their compass can only be the will to guarantee to everyone the respect of their fundamental rights, and in particular the right not to be discriminated against and the protection of their personal data.

What are the main points of the opinion of the Equinet network on the draft regulation of the European Commission, in which the Défenseur des droits participated?

The European Commission's draft regulation on artificial intelligence, unveiled in April 2021, aims to introduce binding rules for artificial intelligence (AI) systems for the first time. As we speak, this regulation is the subject of intense debate. Together with the Equinet network, the DDD published an opinion entitled "For a European AI that protects and guarantees the principle of non-discrimination" on 22 June 2022.

We wanted to recall a fundamental requirement: the right to non-discrimination must be respected in all circumstances and access to rights must remain guaranteed for all.

The recommendations made in this opinion are in line with the previous work of the DDD.

The recommendations stress the priority of combating algorithmic discrimination and emphasise the role that European equality bodies could play in this context. Several impact assessments of the draft regulation exist but do not address this non-discrimination requirement.

Among the safeguards to be provided by the Regulation, the opinion recommends the following:

  1. Make the principle of non-discrimination a central concern in any EU regulation on AI;
  2. Establish in all European countries accessible and effective complaint and redress mechanisms for data subjects in case of violation of the principles of equality and non-discrimination or other fundamental rights when such violation results from the use of AI systems;
  3. Apply a fundamental rights approach to the definition of 'harm' and 'risk', rather than an approach drawn from product safety regimes; 
  4. Require ex-ante and ex-post equality impact assessments at regular intervals throughout the life cycle of AI systems;
  5. Assign binding and enforceable "equality obligations" to all AI developers and users;
  6. Make risk differentiation possible only after a mandatory analysis of the impact on the principle of non-discrimination and other human rights;
  7. Make the enforcement of the provisions of the future AI Regulation effective by obliging the new national supervisory authorities to consult with equality bodies and other relevant fundamental rights institutions;
  8. Mandate the establishment and adequate funding of cooperation mechanisms that allow the different bodies involved in the implementation of the AI Regulation to coordinate at both European and national level.

→ Read also: The Défenseur des droits and the Equinet network call for the principle of non-discrimination to be put at the heart of the proposed regulation on AI.

The future challenges of artificial intelligence and fundamental rights and freedoms

First of all, there is the issue of combating the amplification of discrimination that could result from the growing use of AI systems. Discriminatory biases may be present from the very first phase, the coding of the algorithm, and may derive from the data used by a deterministic algorithm or fed to a learning algorithm during its training phase or later. Nevertheless, the discriminatory effects of algorithms most often rest on mechanisms that are less visible than the integration of a clearly identifiable prohibited criterion into the algorithm: code reflecting prejudices, unquestioned data reflecting social inequalities, phenomena of discriminatory correlations, or system models targeting groups that are already vulnerable.
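A minimal Python sketch, with invented data and group labels, illustrates one of these less visible mechanisms, the discriminatory correlation: even when the protected attribute is removed from the inputs, a seemingly neutral feature such as a postcode can act as a proxy for it and produce indirect discrimination.

    import random

    random.seed(1)

    def make_person():
        origin = random.choice(["X", "Y"])
        # Hypothetical residential segregation: postcode strongly correlated with origin.
        if origin == "X":
            postcode = "75001" if random.random() < 0.9 else "93200"
        else:
            postcode = "93200" if random.random() < 0.9 else "75001"
        return origin, postcode

    people = [make_person() for _ in range(10000)]

    # A "blind" scoring rule that never sees origin, only the postcode.
    def score(postcode):
        return 1.0 if postcode == "75001" else 0.2

    by_origin = {"X": [], "Y": []}
    for origin, postcode in people:
        by_origin[origin].append(score(postcode))

    for origin, scores in by_origin.items():
        print(f"average score for origin {origin}: {sum(scores) / len(scores):.2f}")

    # The gap between the two groups persists even though no prohibited criterion
    # appears in the code: the postcode serves as a proxy for origin.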

Secondly, there is an urgent need to combat the phenomenon of non-recourse, encouraged in particular by the lack of transparency of algorithms, which remains an obstacle to access to evidence for victims of discrimination (who are often unaware that they are victims). There must be a right of access to AI system (AIS) documentation, a requirement of fairness consisting in informing individuals that an AIS is being used with respect to them, auditability of the system by the competent authorities, and a guarantee of explainability. In this respect, Article 13(2)(f) of the General Data Protection Regulation (GDPR) provides that "the controller shall provide the data subject, at the time when the personal data are obtained, with the following additional information necessary to ensure fair and transparent processing (...): the existence of automated decision-making, including profiling, as referred to in Article 22(1) and (4), and, at least in such cases, relevant information concerning the underlying logic, as well as the significance and the expected consequences of such processing for the data subject".

Furthermore, discriminatory risks must be anticipated and prevented. Among the obligations of the GDPR is the data protection impact assessment (DPIA), which the controller is required to carry out in certain cases, prior to the implementation of the processing (Art. 35 GDPR). In this respect, the guidelines on DPIAs adopted in 2017 by the G29 (the Article 29 Working Party, which brought together the European data protection authorities) state that the "high risk to the rights and freedoms of natural persons" caused by the processing, which triggers the obligation to carry out a DPIA, refers in particular to "the prohibition of discrimination". However, the DPIAs currently carried out by operators do not address this issue.

Finally, the challenges of interdisciplinarity and of raising awareness among all audiences on these subjects are essential.