Technologies that train models to autonomously support, or function independently of, direct human effort.
Suggested resources:
Talk to the City is an open-source LLM interface for improving collective deliberation and decision-making by analyzing detailed, qualitative data. It aggregates responses and arranges similar arguments into clusters.
Recursive Public is an experiment in identifying areas of consensus and disagreement among the international Artificial Intelligence (AI) community, policymakers, and the general public on key questions of governance.
A $30 million initial investment will bridge gaps between industry, civil society, and policymakers, broadening generative AI infrastructure in service of society.
A new $5+ million partnership aims to explore ways the development of artificial intelligence (AI) can support a thriving, innovative local news field, and ensure local news organizations shape the future of this emerging technology.
Here we introduce StraightLines, a proof-of-concept tool developed by AOI’s Lucid Lens team that automatically rewrites news headlines to reflect the content of articles in a more accurate, less sensational way.
The AI Objectives Institute (AOI) is a non-profit research lab working to ensure that both AI and future economic systems are built and deployed with genuine human objectives at their core, enabled by broad public input and scalable cooperation.
The [UK] Government is establishing an elite team of highly empowered technical experts at the heart of government. Their mission is to help departments harness the potential of AI to improve lives and the delivery of public services.
CiviClick is changing the way people interact with lawmakers through AI.
This list of civil society experts on AI contains profiles and contact information of policy experts, researchers, and lawyers who can speak to the media and other stakeholders on issues such as AI regulation, facial recognition, racial justice, AI in health, border surveillance, algorithmic welfare distribution, conditions for workers training ChatGPT, and other key issues of our time.
Facial recognition systems are dual-use technologies and can be used to exert pressure on society. Mass facial recognition should be banned until full transparency and safety of its use for citizens can be ensured. The campaign mobilizes citizens to defend their privacy.
Relevant section: Politics: Regulation of AI, its economic implications and the evolving geopolitics of AI.
Aquaya is a data-driven nonprofit organization that is dedicated to improving safe water and sanitation access in low-resource settings.
The “AI Cyber Challenge” (AIxCC) will challenge competitors across the United States to identify and fix software vulnerabilities using AI.
The First AI Legal Assistant, Made for Lawyers
Polimorphic is the first Constituent Relationship Management (CRM) software specifically built for local governments.
"The Center for AI and Digital Policy aims to promote a better society, more fair, more just — a world where technology promotes broad social inclusion based on fundamental rights, democratic institutions, and the rule of law."
Public Option for AI (PO4AI) is an immersive experience designed for elected officials and city staff to explore how residents’ voices might be centered in decision-making around public-interest technologies.
An AI startup's chatbots representing each of the 17 major US presidential candidates.
By making it easy for impact funders and fundees to define, measure, analyze, report and manage their impact projects, we help purpose-driven organizations such as social enterprises and nonprofits get trust-based, continuous funding.
A critical review of the accuracy of countries' submissions to the OECD's AI Policy Observatory site, tracking national government AI programs
One of multiple studies in the US using artificial intelligence to analyze the tone and word choice police officers use, to determine whether it leads to unnecessary escalations with the public.
The Civic AI Handbook is for digital leads of civic organisations who want to develop generative AI strategies for their organisations / departments / teams.
JUST AI is an independent network of researchers and practitioners committed to understanding the social and ethical value of data and AI
Bringing African voices, values and experiences to global debates on AI.
POPVOX Foundation is working with Demand Progress and the Foundation for American Innovation (formerly the Lincoln Network) to facilitate information-sharing and best practices for the testing, deployment and oversight of automated technologies including Large Language Models (LLMs) in the legislative branch.
Dive is not just an AI assistant; it's your partner in optimizing meetings for maximum efficiency.
In the project “reframe[Tech] – Algorithms for the Common Good”, we are committed to ensuring that efforts to develop and use algorithms and artificial intelligence are more closely aligned with the common good.
Nesta and Newspeak House announce the Civic AI Observatory (civicai.uk), an initiative to help civic organisations plan for and adapt to the rapidly evolving field of generative AI.
AI4D is a multidisciplinary research lab run by two public academic institutions in Tanzania, UDOM and NM-AIST. Our objective is to establish a multidisciplinary AI4D lab that fosters capacity development, research, and innovation in responsible AI and its application to societal and developmental problems in Africa.
At AI Impact Labs, we are dedicated to helping nonprofits and mission-driven companies stay ahead of the curve by providing comprehensive support and training on using generative AI in their workflows.
The HF Hub is the central place to explore, experiment, collaborate and build technology with Machine Learning.
Clara is the virtual assistant of the Decide Madrid portal. It uses technologies such as artificial intelligence and natural language processing.
The AI Accountability Fellowships seek to support journalists working on in-depth AI accountability stories that examine governments’ and corporations’ uses of predictive and surveillance technologies to guide decisions in policing, medicine, social welfare, the criminal justice system, hiring, and more.
Learn about AI's role in addressing big challenges, and build skills that combine human and machine intelligence for positive real-world impact.
A global collection of AI projects and proposals that impact UN Sustainable Development Goals, both positively and negatively.
Diagnosing perceived and actual risks impeding responsible AI acquisition in government
"Albus is an AI powered brainstorming tool that allows you to explore topics in depth. Helps prompt ideas so you can expand your knowledge with a mind map style visuals. Great for education and research." -futureailab.com
Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.
AI Forensics is a European non-profit that investigates influential and opaque algorithms. We hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms.
Our Massively Multilingual Speech AI research models can identify more than 4,000 spoken languages, 40 times more than any known previous technology. These models expand text-to-speech and speech-to-text technology from around 100 languages to more than 1,100.