While the first generations of tech-for-good work took a solutionist approach to addressing existing problems with new technology, scholars and activists are driving growing awareness of the problems with technology itself. By exposing the negative consequences, intended or otherwise, of tech, these communities draw attention to issues with tech-centric approaches. Not all of the projects here adopt an ethics lens in their work, but we use it here for simplicity's sake.
Suggested reading:
Supports research, implementation and education projects involving multi-sector teams that focus on the responsible design, development or deployment of technologies.
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Fatima's mission is for every person in the world to have their voice heard, while ensuring a safe, secure, and ethical research experience.
Open Terms Archive publicly records every version of the terms of digital services to enable democratic oversight.
We exist to protect democracy and human rights from digital threats.
Research & Data Investigation Lab - Where indie data punk meets media theory pop to investigate digital rights blues
Its goal is to promote research and disseminate best practice in the ethical application of artificial intelligence in cities.
FreedomBox is a private server system that empowers regular people to host their own internet services, like a VPN, a personal website, file sharing, encrypted messengers, a VoIP server, a metasearch engine, and much more.
A $30 million initial investment will bridge gaps between industry, civil society, and policymakers, broadening generative AI infrastructure in service of society.
"The California Privacy Protection Agency (CPPA) is a California state government agency created by the California Privacy Rights Act of 2020 (CPRA). As the first dedicated privacy regulator in the United States, the agency implements and enforces the CPRA and the California Consumer Privacy Act." - Wikipedia
The Freedom Online Coalition is a partnership of 38 governments working to advance Internet freedom.
The publication aims to contribute to building protection strategies by fostering collective workshops with human rights defenders (HRDs), social movements, and civil society organizations.
In the project “reframe[Tech] – Algorithms for the Common Good”, we are committed to ensuring that efforts to develop and use algorithms and artificial intelligence are more closely aligned with the common good.
A commitment to make digital technologies work for, not against, democracy and human rights. It underlines the joint responsibility of governments, multilateral organizations, civil society, and technology companies to develop and use digital technologies to the benefit of democracy and human rights.
AI Forensics is a European non-profit that investigates influential and opaque algorithms. We hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms.
This [Canadian] report includes the following: Importance of ethics in AI and government, Understanding the impacts of AI on society, Balancing AI-driven efficiency with privacy and security concerns, and Cyber-security and digital trust.
The Center for Information Technology Policy is a nexus of expertise in technology, engineering, public policy, and the social sciences.
The Future of Privacy Forum (FPF) Training Program provides an in-depth understanding of today’s most pressing privacy and data protection topics.
Code and other resources to help businesses create more inclusive web forms and collect less data in general
A startup — and community — building trustworthy and open-source AI.
What builders in Germany, India, Kenya, and the United States need to know when experimenting with new approaches to data governance
The Responsible Computing Challenge - supported by the Mellon Foundation, Omidyar Network, Schmidt Futures, Craig Newmark Philanthropies, USAID, and Mozilla - funds academic teams that combine faculty and practitioners from Computing, Humanities, Library and Information Science, and Social Science fields in order to reimagine how the next generation of technologists will be educated.
The World Ethical Data Forum is the only event in the world that embraces the full range of interrelated issues around the use and future of data.
Our mission is to advance, defend, and sustain the right to ethically study the impact of technology on society.
The European Centre for Algorithmic Transparency (ECAT) will contribute to a safer, more predictable and trusted online environment for people and businesses.
To address ethics and social responsibility in technology, we believe it is important to honor the expertise of many disciplines: anthropology, computer science, critical race and gender studies, data science, design, history, human rights, law, philosophy, political science, science technology & society studies, sociology, and so much more.
Superbloom leverages design as a transformative practice to shift power in the tech ecosystem, because everyone deserves technology they can trust.
While each organization is responsible for its own data, humanitarians under the Inter-Agency Standing Committee (IASC) – which brings together United Nations (UN) entities, Non-Governmental Organization (NGO) consortia and the International Red Cross and Red Crescent Movement – need common normative, system-wide guidance to inform individual and collective action and to uphold a high standard for data responsibility in different operating environments.
From biometrics to surveillance — when people in power abuse technology, the rest of us suffer. Written by Ellery Biddle.
The All Tech Is Human Library Podcast is a special 16-part series of rapid-fire, intimate conversations with academics, AI ethicists, activists, entrepreneurs, public interest technologists, and integrity workers, who help us answer: How do we build a responsible tech future?
Short for “human experience,” HX is an approach to talking about, engaging with, and designing technology in a way that is aligned with our needs as humans — not users.
All Tech Is Human has developed this free program in order to help build the Responsible Tech pipeline – by facilitating connections and career development among talented students, career changers, and practitioners.
ParityBOT is a Twitter bot that spins the abuse and toxicity directed at women in politics into positive, uplifting and encouraging messages. The artificial intelligence technology that powers ParityBOT detects and classifies hateful, harmful and toxic tweets directed at women in leadership or public office. For every toxic tweet that passes a certain threshold, the bot posts a "positivitweet." ParityBOT was deployed during Canada's 2019 federal election and the 2019 Alberta election. During this time, more than 245,000 tweets were processed, 393 candidates were tracked, and over 20,000 positive tweets were sent.
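The detect-classify-respond loop described above can be sketched as follows. This is a minimal illustration, not ParityBOT's actual implementation: the keyword-based scorer, the threshold value, and the sample messages are all placeholder assumptions standing in for the real machine-learning toxicity classifier.

```python
import random
from typing import Optional

# Placeholder vocabulary; the real bot used an ML toxicity model, not keywords.
TOXIC_TERMS = {"hate", "stupid", "awful", "quit"}

# Hypothetical examples of "positivitweets" the bot might send.
POSITIVE_TWEETS = [
    "Women in politics make our democracy stronger.",
    "Thank you to every woman who puts her name on a ballot.",
]

def toxicity_score(tweet: str) -> float:
    """Crude keyword-density score in [0, 1]; a stand-in for a real classifier."""
    words = [w.strip(".,!?").lower() for w in tweet.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in TOXIC_TERMS)
    return min(1.0, 5 * hits / len(words))

def respond(tweet: str, threshold: float = 0.5) -> Optional[str]:
    """If a tweet's toxicity passes the threshold, return a positive message."""
    if toxicity_score(tweet) >= threshold:
        return random.choice(POSITIVE_TWEETS)
    return None  # below threshold: the bot stays silent
```

In the real deployment the threshold would be tuned so that only clearly abusive tweets trigger a response, keeping the bot's output proportionate to the abuse observed.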
Knowing without Seeing is a research project by Amber Sinha which explores meaningful transparency solutions for opaque algorithms, and privileges comprehension over mere access to information.
This paper identifies and addresses persistent gaps in the consideration of ethical practice in ‘technology for good’ development contexts.
The Digital Freedom Fund and its partner European Digital Rights (EDRi) are in the second phase of an initiative that emerged to decolonise the digital rights field.
We set ourselves a task to create a short, simple, and practical resource to help teams who are designing projects to identify potential risks and harms.
An interactive scorecard where people can see if they would be flagged for social welfare fraud under the Netherlands' broken system
Policy decisions on and resource allocations for IEM tend to be made with inadequate data. The focus of the IDS programme is to coordinate a transboundary effort to develop indigenous data with indigenous stewardship.
Take Back The Tech! is a call to everyone, especially women and girls, to take control of technology to end violence against women.
The Santa Clara Principles On Transparency and Accountability in Content Moderation cover various aspects of content moderation. They were developed by legal scholars and technologists based mostly in the United States, targeting social media companies with large user bases.
Judgment Call is an award-winning game and team-based activity that puts Microsoft’s AI principles of fairness, privacy and security, reliability and safety, transparency, inclusion, and accountability into action.