Adjacent Fields > Ethical tech and responsible tech (85)

While the first generations of tech-for-good work took a solutionist approach to addressing existing problems with new technology, scholars and activists are driving growing awareness of the problems with technology itself. By exposing the negative consequences, intended or otherwise, of tech, these communities draw attention to issues with tech-centric approaches. Not all of the projects here adopt an ethics lens in their work, but we use it here for simplicity's sake.


Responsible Design, Development, and Deployment of Technologies (ReDDDoT)

Supports research, implementation and education projects involving multi-sector teams that focus on the responsible design, development or deployment of technologies.

Anthropic

Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

Fatima

United Kingdom

Fatima's mission is for every person in the world to have their voice heard, while ensuring a safe, secure, and ethical research experience.

Open Terms Archive

Open Terms Archive publicly records every version of the terms of digital services to enable democratic oversight.

Digital Action

United Kingdom

We exist to protect democracy and human rights from digital threats.

SHARE LAB

Research & Data Investigation Lab - Where indie data punk meets media theory pop to investigate digital rights blues

Global Observatory of Urban Artificial Intelligence

Its goal is to promote research and disseminate best practice in the ethical application of artificial intelligence in cities.

FreedomBox Foundation

FreedomBox is a private server system that empowers regular people to host their own internet services, like a VPN, a personal website, file sharing, encrypted messengers, a VoIP server, a metasearch engine, and much more.

Omidyar Responsible AI funding

A $30 million initial investment will bridge gaps between industry, civil society, and policymakers, broadening generative AI infrastructure in service of society.

California Privacy Protection Agency (CPPA)

"The California Privacy Protection Agency (CPPA) is a California state government agency created by the California Privacy Rights Act of 2020 (CPRA). As the first dedicated privacy regulator in the United States, the agency implements and enforces the CPRA and the California Consumer Privacy Act." - Wikipedia

Freedom Online Coalition

The Freedom Online Coalition is a partnership of 38 governments, working to advance Internet freedom.

Guia de Proteção para Defensoras e Defensores de Direitos Humanos

The publication aims to support the development of protection strategies by fostering collective workshops with human rights defenders (HRDs), social movements, and civil society organizations.

reframe[Tech] – Algorithms for the Common Good

In the project “reframe[Tech] – Algorithms for the Common Good”, we are committed to ensuring that efforts to develop and use algorithms and artificial intelligence are more closely aligned with the common good.

The Copenhagen Pledge on Tech for Democracy

A commitment to make digital technologies work for, not against, democracy and human rights. It underlines the joint responsibility of governments, multilateral organizations, civil society, and technology companies to develop and use digital technologies to the benefit of democracy and human rights.

AI Forensics

AI Forensics is a European non-profit that investigates influential and opaque algorithms. We hold major technology platforms accountable by conducting independent and high-profile technical investigations to uncover and expose the harms caused by their algorithms.

Ethics in Artificial Intelligence and Government report

This [Canadian] report covers the importance of ethics in AI and government, understanding the impacts of AI on society, balancing AI-driven efficiency with privacy and security concerns, and cyber-security and digital trust.

Princeton Center for Information Technology Policy (CITP)

Princeton University, Princeton, New Jersey, USA

The Center for Information Technology Policy is a nexus of expertise in technology, engineering, public policy, and the social sciences.

Future of Privacy Forum Training Program

The Future of Privacy Forum (FPF) Training Program provides an in-depth understanding of today’s most pressing privacy and data protection topics.

Tech Policy Design Lab

A space for policymakers, tech companies and web users to shape a safer, more empowering web.

Mozilla AI

Mountain View, CA, USA

A startup — and community — building trustworthy and open-source AI.

Responsible Computing Challenge

Mountain View, CA, USA

The Responsible Computing Challenge (supported by the Mellon Foundation, Omidyar Network, Schmidt Futures, Craig Newmark Philanthropies, USAID, and Mozilla) funds academic teams that combine faculty and practitioners from Computing, Humanities, Library and Information Science, and Social Science fields in order to reimagine how the next generation of technologists will be educated.

The World Ethical Data Forum

The World Ethical Data Forum is the only event in the world that embraces the full range of interrelated issues around the use and future of data.

European Centre for Algorithmic Transparency

The European Centre for Algorithmic Transparency (ECAT) will contribute to a safer, more predictable and trusted online environment for people and businesses.

Ethical Tech Collective

To address ethics and social responsibility in technology, we believe it is important to honor the expertise of many disciplines: anthropology, computer science, critical race and gender studies, data science, design, history, human rights, law, philosophy, political science, science, technology & society studies, sociology, and so much more.

Superbloom (formerly Simply Secure)

Superbloom leverages design as a transformative practice to shift power in the tech ecosystem, because everyone deserves technology they can trust.

IASC Operational Guidance on Data Responsibility in Humanitarian Action

While each organization is responsible for its own data, humanitarians under the Inter-Agency Standing Committee (IASC) – which brings together United Nations (UN) entities, Non-Governmental Organization (NGO) consortia and the International Red Cross and Red Crescent Movement – need common normative, system-wide guidance to inform individual and collective action and to uphold a high standard for data responsibility in different operating environments.

Authoritarian Tech Newsletter

From biometrics to surveillance — when people in power abuse technology, the rest of us suffer. Written by Ellery Biddle.

All Tech Is Human Library Podcast Series

The All Tech Is Human Library Podcast is a special 16-part series of rapid-fire, intimate conversations with academics, AI ethicists, activists, entrepreneurs, public interest technologists, and integrity workers who help us answer: How do we build a responsible tech future?

HX Project

Short for “human experience,” HX is an approach to talking about, engaging with, and designing technology in a way that is aligned with our needs as humans — not users.

Responsible Tech Mentorship Program

All Tech Is Human has developed this free program to help build the Responsible Tech pipeline by facilitating connections and career development among talented students, career changers, and practitioners.

Areto Analyzer (formerly ParityBOT)

ParityBOT is a Twitter bot that spins the abuse and toxicity directed at women in politics into positive, uplifting, and encouraging messages. The artificial intelligence technology that powers ParityBOT detects and classifies hateful, harmful, and toxic tweets directed at women in leadership or public office. For every toxic tweet that passes a certain threshold, the bot posts a “positivitweet.” ParityBOT was deployed in Canada during the 2019 federal election and the 2019 Alberta election. During that time, more than 245,000 tweets were processed, 393 candidates were tracked, and more than 20,000 positive tweets were sent.

Knowing without Seeing

Knowing without Seeing is a research project by Amber Sinha which explores meaningful transparency solutions for opaque algorithms, and privileges comprehension over mere access to information.

Addressing ethical gaps in ‘Technology for Good’: Foregrounding care and capabilities

This paper identifies and addresses persistent gaps in the consideration of ethical practice in ‘technology for good’ development contexts.

Decolonising Digital Rights

The Digital Freedom Fund and its partner European Digital Rights (EDRi) are in the second phase of an initiative that emerged to decolonise the digital rights field.

Identifying Potential Data Risks and Harms

We set ourselves a task to create a short, simple, and practical resource to help teams who are designing projects to identify potential risks and harms.

Fraudescorekaart

Netherlands

An interactive scorecard where people can see whether they would be flagged for social welfare fraud under the Netherlands' broken system.

Indigenous data sovereignty

Policy decisions on and resource allocations for IEM tend to be made with inadequate data. The focus of the IDS programme is to coordinate a transboundary effort to develop indigenous data with indigenous stewardship.

Take Back the Tech

Take Back The Tech! is a call to everyone, especially women and girls, to take control of technology to end violence against women.

The Santa Clara Principles On Transparency and Accountability in Content Moderation

Developed by legal scholars and technologists based mostly in the United States, the Santa Clara Principles on Transparency and Accountability in Content Moderation cover various aspects of content moderation and target social media companies with large user bases.

Judgment Call

Judgment Call is an award-winning game and team-based activity that puts Microsoft’s AI principles of fairness, privacy and security, reliability and safety, transparency, inclusion, and accountability into action.

Dark Patterns Tipline

Report a dark pattern today. It will help us fight back against companies using manipulative dark patterns to take our private information, money, and time. You deserve respect, online and off.
