Technologies that train models to autonomously support human effort or to function independently of it.
Suggested resources:
This interactive global map of Public Interest AI projects aims to foster research on such projects, demonstrate their self-understanding, and provide publicly accessible data about them to the broader public.
A project map of Public Interest AI and related research
Massachusetts Representative Jake Auchincloss used ChatGPT to help introduce legislation supporting AI research
An 8-week program for current undergraduates to learn how to use data science and AI for social impact
An article tracking the development of pro-social deepfakes, even though the vast majority of deepfakes are pornographic images of women
The CDEI is a UK government expert body enabling the trustworthy use of data and AI.
ParityBOT is a Twitter bot that spins the abuse and toxicity directed at women in politics into positive, uplifting and encouraging messages. The artificial intelligence technology that powers ParityBOT detects and classifies hateful, harmful and toxic tweets directed at women in leadership or public office. For every toxic tweet that passes a certain threshold, the bot posts a “positivitweet.” ParityBOT was deployed in Canada during the 2019 federal election and the 2019 Alberta election. During that time, more than 245,000 tweets were processed, 393 candidates were tracked, and more than 20,000 positive tweets were sent.
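The threshold logic described above is simple enough to sketch. The Python snippet below is a minimal illustration, not ParityBOT's actual code: score_toxicity and post_tweet are hypothetical placeholders for a toxicity classifier and a Twitter client, and the cutoff value and sample messages are invented for the example.

import random

POSITIVE_MESSAGES = [
    "Women in politics make our democracy stronger.",
    "Thank you to every woman who puts her name on a ballot.",
]
TOXICITY_THRESHOLD = 0.9  # illustrative cutoff only; ParityBOT's real threshold is not published here

def score_toxicity(tweet_text):
    # Placeholder for a toxicity classifier (e.g. a Perspective-style model).
    raise NotImplementedError

def post_tweet(text):
    # Placeholder for a Twitter API client call.
    raise NotImplementedError

def handle_tweet(tweet_text):
    # For every toxic tweet that passes the threshold, send one "positivitweet".
    if score_toxicity(tweet_text) >= TOXICITY_THRESHOLD:
        post_tweet(random.choice(POSITIVE_MESSAGES))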
Explore and analyze large collections of documents. By Google Journalist Studio.
Technology to strengthen citizens: transform your organization into an ally of the citizenry.
Massive collective intelligence is the capacity to mobilize communities on a large scale (hundreds or thousands of participants) around key stakes and challenges to co-create new solutions in a short space of time.
Our chat bots, voice bots and process automation tools are used by millions of people and by businesses in every sector.
Non-binding guidance from the White House
JusticeAI is an open source platform and suite of tools that strategically apply machine learning, computer vision, and metadata analysis to sort, identify, and analyze digital media.
LandOS facilitates project design, due diligence, finance and certification – all key to unlocking supply at scale.
Latin American fact-checking organization
No Minor Futures is a public education campaign amplifying children's hopes and fears about AI technology. It uses animations, podcasts, workshops, and social media.
A collection of essays, edited by Ana Brandusescu and Jess Reia, from participants of the AI in the City: Building Civic Engagement and Public Trust symposium that took place remotely on February 10, 2022.
The Dark Data Project helps organizations uncover, deobfuscate, semantify and analyze problematic datasets
SunTec.AI is an experienced data annotation company that offers premium data annotation services across diverse industry verticals. We create customized business-specific solutions for advanced domains, like AI & ML, robotics, healthcare, and technology.
CivicSignal is Code for Africa's AI/machine learning initiative, using large-scale media monitoring to map the content/creator landscape and underlying business models for disinformation profiteers. It uses MIT's Media Cloud software.
NCRI has built its own proprietary platform that has played a critical role in identifying and forecasting emerging threats to the economic, physical and social health of civil society.
Did you know that many problems in public administration can be solved with Artificial Intelligence?
Judgment Call is an award-winning game and team-based activity that puts Microsoft’s AI principles of fairness, privacy and security, reliability and safety, transparency, inclusion, and accountability into action.
Urban AI is a think tank that proposes ethical modes of governance and sustainable uses of urban AI.
Discover how deepfakes work and the visual clues you can use to identify them. We are a group of communication designers who created this project to demonstrate our research into making our own deepfake and to communicate the signs you can spot to identify one.
Everyone is talking about AI, but how and where is it actually being used? We've mapped out interesting examples where AI has been harmful and where it's been helpful.
DocumentCloud runs every document you upload through Thomson Reuters OpenCalais, giving you access to extensive information about the people, places and organizations mentioned in each.
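As a rough illustration of the entity-extraction idea behind that feature (not DocumentCloud's or OpenCalais's actual pipeline), the sketch below uses the open-source spaCy library as a stand-in to pull people, places and organizations out of a document's text; the file name is hypothetical.

import spacy
from collections import Counter

# spaCy's small English model, used here as a stand-in for a hosted entity-extraction service.
nlp = spacy.load("en_core_web_sm")

def extract_entities(text):
    # Count the people, places and organizations mentioned in the text.
    doc = nlp(text)
    wanted = {"PERSON", "GPE", "LOC", "ORG"}
    return Counter((ent.label_, ent.text) for ent in doc.ents if ent.label_ in wanted)

# Example: the ten most frequently mentioned entities in an uploaded document.
# print(extract_entities(open("document.txt").read()).most_common(10))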
For entrepreneurial MIT students looking to put their skills to work for a greater good, the Media Arts and Sciences class MAS.664 (AI for Impact) has been a destination.
A feminist review of AI, privacy and data protection to enhance digital rights
An AI-powered scientific paper search engine. It provides a one-sentence TL;DR (too long; didn't read) summary under every computer science paper (for now) when users use the search function or visit an author's page.
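As a sketch of the underlying idea, the snippet below uses the Hugging Face transformers summarization pipeline as a stand-in; the model name is a common public checkpoint rather than the search engine's own TLDR model, and the length limits are arbitrary.

from transformers import pipeline

# A generic abstractive summarizer as a stand-in for the search engine's own TLDR model.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def tldr(abstract_text):
    # Compress a paper abstract into one short summary sentence.
    result = summarizer(abstract_text, max_length=30, min_length=5, do_sample=False)
    return result[0]["summary_text"]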
GovLab NYU's collection of lectures on the ethical implications of data and artificial intelligence from different perspectives
Coded Bias follows M.I.T. Media Lab computer scientist Joy Buolamwini, along with data scientists, mathematicians, and watchdog groups from all over the world, as they fight to expose the discrimination within algorithms now prevalent across all spheres of daily life.
With fewer staff and resources than ever before and mounting pressure to enable contactless government, how can you continue to delight your citizens? With smart customer service automation.
Feminist.AI works to put technology into the hands of makers, researchers, thinkers and learners to amplify unheard voices and create more accessible AI for all.
Using integrated sensors and artificial intelligence, Livio AI enhances your listening experiences, proactively manages your health and provides access to information to simplify your life.
A project by AlgorithmWatch that maps frameworks that seek to set out principles of how systems for automated decision-making (ADM) can be developed and implemented ethically
An evolving directory of tools and resources that OPSI believes may be useful for civil servants interested in the use of AI in government
AlgorithmWatch is a non-profit research and advocacy organization committed to evaluating and shedding light on algorithmic decision-making processes
This popular education-inspired guide breaks down and contextualizes artificial intelligence technologies within structural racism — and provides hands-on exercises to help us imagine beyond, and dream up alternate and ideal futures with AI.