While the first generations of tech-for-good work took a solutionist approach, addressing existing problems with new technology, scholars and activists are driving growing awareness of the problems with technology itself. By exposing the negative consequences of tech, intended or otherwise, these communities draw attention to the shortcomings of tech-centric approaches. Not all of the projects listed here adopt an ethics lens in their work, but we use that framing for simplicity's sake.
ParityBOT is a Twitter bot that spins the abuse and toxicity directed at women in politics into positive, uplifting and encouraging messages.
The artificial intelligence technology that powers ParityBOT detects and classifies hateful, harmful and toxic tweets directed at women in leadership or public office.
Knowing without Seeing is a research project by Amber Sinha which explores meaningful transparency solutions for opaque algorithms, and privileges comprehension over mere access to information.
The mission of the Coalition for Independent Tech Research is to advance, defend, and sustain the right to ethically study the impact of technology on society.
The Digital Freedom Fund and its partner European Digital Rights (EDRi) are in the second phase of an initiative that emerged to decolonise the digital rights field.
We set ourselves the task of creating a short, simple, and practical resource to help teams designing projects identify potential risks and harms.
Coda Story reports on major currents shaping our world from disinformation to authoritarian technologies to the war on science. Coda stays on these stories to reveal why they matter, how they are connected and where they are heading next.
The Tech We Want is a four-year, $8 million portfolio of work focused on connecting and empowering a new wave of leaders, companies, and technologies that are built on inclusivity, mutualism, sustainability, accountability, and responsible innovation.
"In only a decade, the labour market has changed beyond all recognition - from zero-hour contracts to platform monopolies - meanwhile union memberships are at historic lows."
Policy decisions on and resource allocations for IEM tend to be made with inadequate data. The focus of the IDS programme is to coordinate a transboundary effort to develop indigenous data with indigenous stewardship.
The Civics of Technology Project shares research and curriculum and offers professional development that encourages teachers and students to critically inquire into the effects of technology on our individual and collective lives.
What the Future Wants is an interactive youth focused exhibition that presents different perspectives on technology from the personal, to the political, to the planetary.
Prior to the 2019 elections, the Election Commission of India convened representatives from top social media companies for a two-day brainstorming session on approaches to problematic social media content in elections.
The Santa Clara Principles on Transparency and Accountability in Content Moderation cover various aspects of content moderation. Developed by legal scholars and technologists based mostly in the United States, they target social media companies with large user bases.
The Facebook Digital Literacy Library is hosted by Facebook and currently includes learning resources made available by Youth and Media at the Berkman Klein Center for Internet & Society at Harvard University under a Creative Commons Attribution-ShareAlike 4.0 International license.
"Pledge signed by political parties and technology/social network companies committing to avoiding fake news and mechanisms of disinformation that may affect upcoming elections." - Countering Disinfo
Mnemonic is an NGO dedicated to archiving, investigating and memorializing digital information documenting human rights violations and international crimes.
Institute for the Future, Hamilton Avenue, Palo Alto, CA, USA
Most tech is designed with the best intentions. But once a product is released and reaches scale, all bets are off. The Risk Mitigation Manual presents eight risk zones where we believe hard-to-anticipate and unwelcome consequences are most likely to emerge.
Judgment Call is an award-winning game and team-based activity that puts Microsoft’s AI principles of fairness, privacy and security, reliability and safety, transparency, inclusion, and accountability into action.
We explore societal perspectives surrounding the development and application of digital technology, focusing on ethics, policy, politics, and quality of life.
Coded Bias follows M.I.T. Media Lab computer scientist Joy Buolamwini, along with data scientists, mathematicians, and watchdog groups from all over the world, as they fight to expose the discrimination within algorithms now prevalent across all spheres of daily life.
Report a dark pattern today. It will help us fight back against companies using manipulative dark patterns to take our private information, money, and time. You deserve respect, online and off.
The unregulated attention economy driving social media is fraying our democracy, threatening our mental and physical health, and exposing our children to violent and disturbing content.
Everyone is talking about AI, but how and where is it actually being used? We've mapped out interesting examples where AI has been harmful and where it's been helpful.
Access Now defends and extends the digital rights of people and communities at risk. By combining direct technical support, strategic advocacy, grassroots grantmaking, and convenings such as RightsCon, we fight for human rights in the digital age.
Stanford University, Serra Mall, Stanford, CA, USA
The PIT Lab is building a thoughtful community around public interest technology at Stanford. Themes include systemic inequities, democratic values, bridging the divide, career pathways.
Mechanism Design for Social Good (MD4SG) is a multi-institutional initiative using techniques from algorithms, optimization, and mechanism design, along with insights from other disciplines, to improve access to opportunity for historically underserved and disadvantaged communities.
Feminist.AI works to put technology into the hands of makers, researchers, thinkers and learners to amplify unheard voices and create more accessible AI for all.
A project by AlgorithmWatch that maps frameworks seeking to set out principles for how systems for automated decision-making (ADM) can be developed and implemented ethically.