An AI-powered scientific paper search engine that provides a one-sentence TL;DR (too long; didn't read) summary under every computer science paper (for now) when users search or visit an author's page.
For entrepreneurial MIT students looking to put their skills to work for a greater good, the Media Arts and Sciences class MAS.664 (AI for Impact) has been a destination.
Discover how deepfakes work and the visual clues you can use to identify them. We are a group of communication designers who created this project to demonstrate our research into making our own deepfake, and to communicate the signs you can spot to identify one.
Judgment Call is an award-winning game and team-based activity that puts Microsoft’s AI principles of fairness, privacy and security, reliability and safety, transparency, inclusion, and accountability into action.
A platform built for accessible and consensus-driven public consultation, harnessing machine learning to derive quality insights. A Cornell Tech Studio project.
Coded Bias follows M.I.T. Media Lab computer scientist Joy Buolamwini, along with data scientists, mathematicians, and watchdog groups from all over the world, as they fight to expose the discrimination within algorithms now prevalent across all spheres of daily life.
What if we could use rules, tests, and parameters to isolate hate speech? Can we identify and analyze elements like speaker intent, context, identity, tone, audience, or any number of indicators that transform words into meanings and change an innocuous statement into a verbal assault?
Combating the proliferation of online hate speech and understanding its mechanics is a complex undertaking. We believe, however, that it can be done. And one way we are working to do so is by teaching machines to recognize hate.
Synthetic Messenger is a botnet that artificially inflates the value of climate news. Every day it searches the internet for news articles covering climate change. Then 100 bots visit each article and click on every ad they can find.
Everyone is talking about AI, but how and where is it actually being used? We've mapped out interesting examples where AI has been harmful and where it's been helpful.
UnBias aims to provide policy recommendations, ethical guidelines, and a ‘fairness toolkit’ co-produced with young people and other stakeholders. The toolkit will include educational materials and resources to support young people’s understanding of online environments, and to raise awareness among online providers about the concerns and rights of young internet users.
DocumentCloud runs every document you upload through Thomson Reuters OpenCalais, giving you access to extensive information about the people, places and organizations mentioned in each.
This seven-point framework will help government departments with the safe, sustainable, and ethical use of automated or algorithmic decision-making systems.
With fewer staff and resources than ever before, and mounting pressure to enable contactless government, how can you continue to delight your citizens? With smart customer-service automation.
Feminist.AI works to put technology into the hands of makers, researchers, thinkers and learners to amplify unheard voices and create more accessible AI for all.
Using integrated sensors and artificial intelligence, Livio AI enhances your listening experiences, proactively manages your health and provides access to information to simplify your life.
A Feminist AI Research network that gathers a cohort of social scientists, economists, and activists, side by side with data, machine learning and computer scientists to discuss how to fix the system and leverage AI for women’s rights.
With every genuine advance in the field of ‘artificial intelligence,’ we see a parallel increase in hype, myths, misconceptions and inaccuracies. These misunderstandings contribute to the opacity of AI systems, rendering them magical, inscrutable and inaccessible in the eyes of the public.
This popular education-inspired guide breaks down and contextualizes artificial intelligence technologies within structural racism — and provides hands-on exercises to help us imagine beyond, and dream up alternate and ideal futures with AI.
Driven by the rapid progress in Artificial Intelligence (AI) research, intelligent machines are gaining the ability to learn, improve and make calculated decisions in ways that will enable them to perform tasks previously thought to rely solely on human experience, creativity, and ingenuity. As a result, we will in the near future see large parts of our lives influenced by AI.
The AI Now Institute at New York University is an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence.
A project by AlgorithmWatch that maps frameworks seeking to set out principles for how systems for automated decision-making (ADM) can be developed and implemented ethically.
The AI Initiative is dedicated to shaping the global policy framework governing the rise of Artificial Intelligence, holistically addressing short-, mid-, and long-term governance challenges.
In partnership with researcher Dr Jun-E Tan, EngageMedia has produced research that aims to provide an understanding of AI and its governance from the perspective of civil society in Southeast Asia.