Socio-technical Requirements for Artificial Intelligence Models (Les Séminaires de l'Observatoire)
The Observatoire de l'Intelligence artificielle of Paris 1 is organizing a series of seminars to foster interdisciplinary dialogue among young researchers at Université Paris 1 Panthéon-Sorbonne working with or on AI systems. Anyone interested in this topic is invited to attend.
On Thursday, 30 November 2023, Dr. Claudia Negri Ribalta, who defended her doctorate in computer science at Paris 1 in July 2023 on "STAP: Socio-Technical dAta Protection framework for requirements with divergent mental models" under the supervision of Professor Camille Salinesi, will give a presentation as part of this series.
Artificial intelligence models have begun to integrate into our daily lives and impact our work. When we use AI models, end users expect them to be fair and work for their benefit, not to harm them. Similarly, various governments and international organizations are discussing regulations and limits for these technologies so that they help users and humanity, not harm them.
Socio-technical information systems, including AI models, are composed not only of "technical actors" but also of social actors, all of which interact within the system. In this sense, socio-technical requirements such as fairness, transparency, data protection and privacy, security, and safety play a crucial role in the responsible integration of AI models into our information systems. However, socio-technical systems are complex and require inter- and transdisciplinary knowledge.
In this talk, Dr. Claudia Negri Ribalta will discuss the challenge of socio-technical requirements for AI models. We will probably find more questions than solutions, but the idea is to reflect on this pressing issue.
For further information: observatoire-ia@univ-paris1.fr