Red teaming large language models (LLMs) for resilience to scientific disinformation


https://royalsociety.org/-/media/policy/publications/2024/science-x-ai-red-teaming-workshop-note.pdf
Country: United States of America

The red teaming event brought together 40 health and climate postgraduate students with the objective of scrutinising and drawing attention to potential vulnerabilities in large language models (LLMs).

Organization Type: Non-profit / charity / foundation
Status: N/A
Founded: 2023
Parent Organization: The Royal Society
Last Modified: 2/21/2025
Added on: 2/20/2025
