Red teaming

Red teaming is the practice of testing a computer system from an adversary's perspective in order to identify potential attack vectors.

Red teaming large language models (LLMs) for resilience to scientific disinformation

This red teaming event brought together 40 health and climate postgraduate students to scrutinise and draw attention to potential vulnerabilities in large language models (LLMs).

Humane Intelligence

United States of America

Humane Intelligence is a tech nonprofit building a community of practice around algorithmic evaluations.

Red Team Lab at Open Tech Fund

2025 M Street Northwest, Downtown, Washington, DC 20036, USA

The lab focuses on improving the software security of projects that advance OTF's Internet freedom goals, ensuring that the code, data, and people behind these tools have what they need to create a safer experience for users facing repressive information controls online.