Other Scientific Publication, Year: 2025

Joint memo from the Inserm Ethics Committee, the LORIER program, and the Inserm Scientific Advisory Board

Abstract

1- Disclosure and transparency:
1.1 Naive disclosure: Scientists must guard not only against the uncontrolled dissemination of data (loss of data confidentiality), but also against the dissemination of uncontrolled data (hallucinations, unsourced data), regardless of the digital tool used.
1.2 Scientific publication: Researchers must explicitly mention the use of artificial intelligence (AI) systems in their work (tools, algorithms, parameters) and distinguish the contributions obtained using AI systems from those derived from their own creative activity. In publications, a "Use of AI systems" section must detail the role of AI systems, just as the software used for statistical analyses is reported. A practical guide and references will be prepared and made available on the LORIER portal.
1.3 Research administration and support: Transparency must also apply to all other uses of AI systems at Inserm, particularly in the Human Resources sector (recruitment, careers, etc.).
2- Attribution and transparent models: Scientists must publish the details of the AI models they create or use (training data, versions) and ensure their long-term archiving for replication studies. They must also promote models that enable work in compliance with good ethical practices (particularly the citation of data sources) and encourage academic uses of open science.
3- Use of synthetic data: Inserm scientists are encouraged to develop the use of synthetic data that limits the risk of re-identifying research participants (anonymity within the meaning of the CNIL, the G29 and the EDPB), while ensuring the verisimilitude of the data, their suitability for a range of secondary uses and their diversity, avoiding biases that could compromise the research. Such high-quality anonymous synthetic data can also, in certain cases, enrich datasets, particularly in areas where real data are scarce or sensitive.
4- Verification of AI system findings: Responsibility for the accuracy of analyses generated using AI systems lies with their users, particularly the researchers, who must validate their reliability and identify potential biases. Staff are invited to thoroughly test the reproducibility and reliability of AI models by 1) comparing the findings obtained with different datasets and 2) testing the findings obtained with different AI algorithms (a minimal cross-checking sketch is given after this list). A national cross-cutting unit on the use of digital technologies for science for health at Inserm could centralize follow-up information (a collection of detected biases and of the solutions found) and help shape the training offer needed to support this approach. Given the rapid changes in AI systems and practices, continuous reflection is necessary, involving the Ethics Committee, the Scientific Advisory Board and the LORIER program.
5- Documentation of AI system data: Data generated using AI systems must be clearly identified to avoid any confusion with real observations. Researchers must guarantee the traceability of the AI-generated data used in their studies.
6- Integrity and equity: Researchers must seek to anticipate the social impacts of AI systems. They must be trained in the applicable legal rules, particularly regarding data protection and respect for property rights. Particular vigilance is required for under-represented or historically discriminated-against groups.
7- Monitoring, alternative solutions and public engagement: Continuous surveillance of the societal impact of AI system use is essential. The use of AI systems in research has an environmental impact; such systems must therefore be used sparingly and in contexts of established relevance. AI systems trained on a reduced dataset with fewer parameters, such as Small Language Models (SLMs), represent frugal and sober AI. Developing such responsible AI systems can also contribute to improving digital sovereignty. The national cross-cutting unit on the use of digital technologies for science for health at Inserm could gather feedback and develop a culture of transparency around the uses of AI systems.
8- Continuous training: Training all staff, both scientific and administrative, in digital technologies and the use of AI systems, their principles and their limits, is crucial for informed and responsible use.
9- Health data passport: A "data passport" documenting the origin, quality and biases of the data would enable their responsible reuse (a hypothetical schema sketch follows the list).
10- Transparency portal: A portal would give participants in research conducted at Inserm visibility over the use of their health data, thereby ensuring transparency, regulatory compliance and flexibility for research.
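As an illustration of recommendation 4, the sketch below cross-checks a finding by training on different resamples of the data and by using two unrelated model families. The dataset, the scikit-learn models and the agreement check are illustrative assumptions, not methods prescribed by the memo.

# Minimal sketch of recommendation 4 (verification of AI findings):
# cross-check a result by varying both the dataset and the algorithm.
# Dataset, model families and metric are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# 1) Different datasets: two disjoint resamples of the available data.
X_a, X_b, y_a, y_b = train_test_split(X, y, test_size=0.5, random_state=0)

# 2) Different algorithms: two unrelated model families.
models = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_a, y_a)  # train on one resample
    acc = accuracy_score(y_b, model.predict(X_b))  # evaluate on the other
    print(f"{name}: held-out accuracy = {acc:.3f}")

# A finding supported by only one model family, or only one resample,
# warrants extra scrutiny before it is reported.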
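Recommendation 9's "data passport" could take the form of structured metadata that travels with each dataset. The schema below is hypothetical (the memo prescribes no format); its field names are illustrative, and it also records whether the data are AI-generated, supporting the traceability asked for in recommendation 5.

# Hypothetical "data passport" schema (recommendation 9); field names
# are illustrative, the memo does not define a concrete format.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DataPassport:
    dataset_name: str
    origin: str                  # collection context: cohort, instrument, dates
    license: str                 # terms governing reuse
    quality_notes: str           # missingness, validation performed
    known_biases: list[str] = field(default_factory=list)
    ai_generated: bool = False   # flags synthetic/AI data (recommendation 5)
    generating_model: str | None = None  # model and version, if AI-generated

passport = DataPassport(
    dataset_name="cohort_X_synthetic_v1",      # hypothetical dataset
    origin="synthetic data derived from cohort X (anonymized)",
    license="CC-BY-4.0",
    quality_notes="verisimilitude checked against source marginals",
    known_biases=["patients over 80 under-represented"],
    ai_generated=True,
    generating_model="tabular-GAN v2.1",       # hypothetical model id
)

# Serialize the passport so provenance information travels with the data.
print(json.dumps(asdict(passport), indent=2))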

Main file
Best practice recommendations using AI in Inserm Research Feb2025_v2 EN corrected.pdf (1.82 MB)

Dates and versions

inserm-05003733, version 1 (24-03-2025)

Identifiers

  • HAL Id: inserm-05003733, version 1

Cite

Henri Atlan, Catherine Bourgain, Hervé Chneiweiss, François Eisinger, Catherine Vidal, et al.. Joint memo from the Inserm Ethics Committee, the LORIER program, and the Inserm Scientific Advisory Board. Best practice recommendations using AI in Inserm Research / February 2025, 2025. ⟨inserm-05003733⟩