The Algorithmic Transparency Institute builds technology, gathers data, and generates insights to support research, journalism, and advocacy organizations.
In 2019, Executive Order 50 created the Algorithms Management and Policy Officer (AMPO) role. The role, unique in urban governance, is intended to establish protocols for, and provide information about, the systems and tools City agencies use to make decisions.
Algorithm Watch is a non-profit research and advocacy organization committed to evaluating and shedding light on algorithmic decision-making processes.
Resources and leads for investigating algorithms in society
In the project “reframe[Tech] – Algorithms for the Common Good”, we work to ensure that the development and use of algorithms and artificial intelligence are more closely aligned with the common good.
The European Centre for Algorithmic Transparency (ECAT) will contribute to a safer, more predictable and trusted online environment for people and businesses.
A Local Law to amend the administrative code of the city of New York, in relation to reporting on algorithmic tools used by city agencies
Philadelphia should think twice about its risk-assessment algorithm.
The CLIMATE INTELLIGENCE (CLINT) project (https://climateintelligence.eu/) is a European-funded project whose main objective is to develop an Artificial Intelligence framework of Machine Learning techniques and algorithms that processes big climate datasets to improve Climate Science in the detection, causation, and attribution of Extreme Events (EEs), namely tropical cyclones, heatwaves and warm nights, droughts, floods, and compound and concurrent events.
Art exhibition at MIT: critical explorations of AI and cities
The AI Democracy Projects examine the implications of AI systems and tools, including predictive algorithms, machine learning, and frontier models, for democratic society.
Its goal is to promote research and disseminate best practice in the ethical application of artificial intelligence in cities.
Algorithmic tool by the Italian government to fight tax evasion by cross-checking self-reported income with assets and other databases in a privacy-protecting approach
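The cross-checking idea described above can be sketched as follows. Everything here is a hypothetical illustration, not the Italian government's actual method: the asset yields, the flagging threshold, and the record format are all assumptions, and the pseudonymous IDs stand in for the privacy-protecting handling the blurb mentions.

```python
# Illustrative sketch of cross-checking self-reported income against an
# income estimate implied by registered assets. Yields, threshold, and
# field names are invented assumptions for this example.

def estimate_income_from_assets(assets: dict) -> float:
    """Rough asset-implied income: a fixed assumed yield per asset class."""
    assumed_yields = {"property": 0.04, "vehicles": 0.10, "investments": 0.05}
    return sum(value * assumed_yields.get(kind, 0.0)
               for kind, value in assets.items())

def flag_discrepancies(records: list, ratio_threshold: float = 2.0) -> list:
    """Return pseudonymous IDs where asset-implied income exceeds
    self-reported income by more than ratio_threshold."""
    flagged = []
    for rec in records:
        implied = estimate_income_from_assets(rec["assets"])
        if rec["reported_income"] > 0 and implied / rec["reported_income"] > ratio_threshold:
            flagged.append(rec["id"])
    return flagged

records = [
    {"id": "tx-001", "reported_income": 15000,
     "assets": {"property": 900000, "investments": 200000}},
    {"id": "tx-002", "reported_income": 60000,
     "assets": {"property": 300000}},
]
print(flag_discrepancies(records))  # -> ['tx-001']
```

Only aggregated flags, not raw records, would leave a step like this in a privacy-protecting deployment.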
This list of civil society experts on AI contains profiles and contact information of policy experts, researchers, and lawyers who can speak to the media and other stakeholders on issues such as AI regulation, facial recognition, racial justice, AI in health, border surveillance, algorithmic welfare distribution, conditions for workers training ChatGPT, and other key issues of our time.
The use of algorithms in public administration can make our public affairs more convenient, more accurate, and faster, but it is important to be aware of the limitations of these technologies. Report 2.0
Public Option for AI (PO4AI) is an immersive experience designed for elected officials and city staff to explore how residents’ voices might be centered in decision-making around public-interest technologies.
The Australian Ad Observatory project is enabling independent research into the role that algorithmically targeted advertising plays in society.
A series of projects to figure out how major platforms track users, including through data donations
AI Forensics is a European non-profit that investigates influential and opaque algorithms. We hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms.
A campaign to hack algorithmic music-discovery systems to promote an anti-Nazi message and pressure audio platforms to stop hosting Nazi content
News site that ranks news organizations by its algorithm's partisanship score and allows users to filter sources accordingly
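The filtering idea in the blurb above can be sketched in a few lines. This is an illustrative assumption, not the site's actual method: the scale (here -1 for left-leaning, +1 for right-leaning), the outlet names, and the scores are all made up.

```python
# Hypothetical partisanship scores on a -1 (left) to +1 (right) scale.
# Outlet names and values are invented for illustration only.
sources = {
    "Outlet A": -0.7,
    "Outlet B": -0.1,
    "Outlet C": 0.2,
    "Outlet D": 0.8,
}

def filter_sources(scores: dict, lo: float, hi: float) -> list:
    """Keep only sources whose partisanship score falls in the user-chosen band."""
    return sorted(name for name, score in scores.items() if lo <= score <= hi)

# A user requesting a roughly centrist band:
print(filter_sources(sources, -0.3, 0.3))  # -> ['Outlet B', 'Outlet C']
```

The same function serves any band the reader picks; widening the `(lo, hi)` interval admits more partisan sources.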
The Public Technology Leadership Collaborative (PTLC) is a new peer learning collective of scholars, researchers, and government leaders committed to addressing the social and cultural implications of data and technology.
This Coursera course aims to help learners understand how inequity and injustice can become embedded in technology, science, and associated policies, and how this can be addressed.
What is a more ambitious vision for data use and regulation that can deliver a positive shift in the digital ecosystem towards people and society?
Knowing without Seeing is a research project by Amber Sinha which explores meaningful transparency solutions for opaque algorithms, and privileges comprehension over mere access to information.
This interactive dashboard displays the themes in fact-checking articles we scraped from IFCN-certified websites in the selected week.
Non-binding guidance from the White House
Hawkfish was a 2020 Election effort, supported by Mike Bloomberg, that engaged with Democratic candidates, allies & causes to reach the right voters with the right message in the right place using robust technology, data, and digital-first storytelling.
Linking river health to wellbeing in the Thames basin
BotSlayer is an application that helps track and detect potential manipulation of information spreading on Twitter.
An open source platform aiming to engage communities and citizen journalists alongside newsroom and freelance journalists for collaborative, decentralised content verification, tracking, and debunking.
Since 2012, the Programme on Democracy & Technology has been investigating the use of algorithms, automation, and computational propaganda in public life.
The Markup, a nonprofit newsroom that investigates how the world’s most powerful institutions use technology to reshape society, announced the development of The Citizen Browser Project, an initiative designed to measure how disinformation travels across social media platforms over time.
TrustServista uses advanced Artificial Intelligence algorithms to provide media professionals, analysts, and content distributors with in-depth content analytics and verification capabilities.
Radar is a project which uses a combination of algorithms and manual research to find posts related to three themes over a wide variety of social media platforms in Brazil.
NewsQ seeks to elevate quality journalism when algorithms rank and recommend news articles online. We approach this problem by engaging in design thinking activities in collaboration with technology, journalism, academia and other communities.
In the age of social media, many of us ambiently consume news by reading headlines and descriptions that appear in our news feeds.
We are a joint team of engineers and investigators from CERTH-ITI and Deutsche Welle, trying to build a comprehensive tool for media verification on the Web.
The hidden costs of artificial intelligence, from natural resources and labor to privacy and freedom. What happens when artificial intelligence saturates political life and depletes the planet? How is AI shaping our understanding of ourselves and our societies? In this book Kate Crawford reveals how this planetary network is fueling a shift toward undemocratic governance and increased inequality. Drawing on more than a decade of research, award-winning scholar Kate Crawford reveals how AI is a technology of extraction: from the energy and minerals needed to build and sustain its infrastructure, to the exploited workers behind “automated” services, to the data AI collects from us. Rather than taking a narrow focus on code and algorithms, Crawford offers us a political and a material perspective on what it takes to make artificial intelligence and where it goes wrong. While technical systems present a veneer of objectivity, they are always systems of power. This is an urgent account of what is at stake as technology companies use artificial intelligence to reshape the world.

Kate Crawford is a senior principal researcher at Microsoft Research, the inaugural visiting chair of AI and Justice at the École Normale Supérieure, and the Miegunyah distinguished visiting fellow at the University of Melbourne. She co-founded the AI Now Institute at New York University and leads the Foundations of Machine Learning international working group. She lives in New York City. By Kate Crawford
The Sage project will design and build a new kind of national-scale reusable cyberinfrastructure to enable AI at the edge.
Coded Bias follows M.I.T. Media Lab computer scientist Joy Buolamwini, along with data scientists, mathematicians, and watchdog groups from all over the world, as they fight to expose the discrimination within algorithms now prevalent across all spheres of daily life.
This 7-point framework will help government departments with the safe, sustainable, and ethical use of automated or algorithmic decision-making systems.
In 2017, Media Democracy Fund launched a pilot for PhDX, a fellowship program designed to pair graduate- or PhD-level university students with a background in technology with DC-based public-interest technology policy organizations for an immersive fellowship experience over two consecutive summers.
A prototype to display fact-checks in real time on live TV