Last updated: 12 March 2026

An overview of the key organizations, programs, and projects operating in the AI safety space.

PauseAI

Campaign group aiming to convince governments to pause AI development – through public outreach, engaging with decision-makers, and organizing protests.

Category

Advocacy

Collective Action for Existential Safety (CAES)

Aiming to catalyze collective effort towards reducing existential risk, including through an extensive action list for individuals, organizations, and nations.

Category

Advocacy

Existential Risk Observatory (ERO)

Informing the public debate on existential risks, on the basis that awareness is the first step towards reducing those risks.

Category

Advocacy

AI Safety Awareness Project (AISAP)

Raising awareness about modern AI, highlighting its benefits and risks, and letting the public know how they can help – mainly through workshops.

Category

Advocacy

Evitable

Nonprofit founded by David Krueger aiming to inform and organize the public around societal-scale risks and harms of AI.

Category

Advocacy

Explainable

Empowering organizations, researchers, and creators to translate the future of AI into stories the world understands.

Category

Advocacy

Global AI Moratorium (GAIM)

Calling on policymakers to implement a global moratorium on large AI training runs until alignment is solved.

Category

Advocacy

Humans in Control

New advocacy org demanding transparency, accountability, and real safeguards to protect humanity from the risks of unchecked AI.

Category

Advocacy

Legal Advocates for Safe Science and Technology (LASST)

Researching ways to use legal advocacy to make science and technology safer, informing legal professionals about how to help, and advocating in the courts and policy-setting institutions.

Category

Advocacy

Legal Safety Lab

Using legal advocacy in Europe to address risks from frontier technologies, working to foster responsible development and implementation practices.

Category

Advocacy

Stop AI

Non-violent civil resistance organization working to permanently ban the development of smarter-than-human AI.

Category

Advocacy

The Alliance for Secure AI Action

Communications nonprofit focused on shifting the narrative around AI safety among policymakers in Washington, D.C.

Category

Advocacy

The Midas Project (TMP)

Watchdog nonprofit monitoring tech companies, countering corporate propaganda, raising awareness about corner-cutting, and advocating for the responsible development of AI.

Category

Advocacy

Future of Life Institute (FLI)

Steering transformative technology towards benefitting life and away from extreme large-scale risks through outreach, policy advocacy, grantmaking, and event organization.

Category

Advocacy, Governance, Funding

Buddhism & AI Initiative

A collaborative effort to bring together Buddhist communities, technologists, and contemplative researchers worldwide to help shape the future of AI.

Category

Advocacy, Governance, Strategy

AI Safety Foundation

Conducting educational initiatives, research support, and public awareness campaigns. Project of Geoffrey Hinton.

Category

Advocacy, Research support

Astral Codex Ten (ACX)

Blog covering many topics, including reasoning, science, psychiatry, medicine, ethics, genetics, economics, politics, and AI. Often features book summaries and commentary on AI safety.

Category

Blog

Don't Worry about the Vase

Blog by Zvi Mowshowitz on various topics, including AI, offering detailed analysis, personal insights, and a rationalist perspective. Posts very frequently.

Category

Blog

AI Frontiers

Platform from the Center for AI Safety (CAIS) posting articles written by experts from a wide range of fields discussing the impacts of AI.

Category

Blog

AI Safety Takes

Blog by AI safety researcher Daniel Paleka curating and succinctly analyzing the latest research and news in AI safety. Posts about every two months.

Category

Blog

Planned Obsolescence

Substack by Ajeya Cotra, researcher at METR, covering AI capabilities forecasting, AI agent benchmarks, timelines for automating AI R&D, and implications for AI safety.

Category

Blog

AI Policy Bulletin

Publishes policy-relevant perspectives on frontier AI governance, including research summaries, opinions, interviews, and explainers.

Category

Blog

AI Prospects

Blog by Eric Drexler on AI prospects and their surprising implications for technology, economics, environmental concerns, and military affairs.

Category

Blog

Bounded Regret

Blog on AI safety by Jacob Steinhardt, a UC Berkeley statistics professor, analysing risks, forecasting future breakthroughs, and discussing alignment strategies.

Category

Blog

ChinaTalk

Deep, thoughtful coverage on China, technology, and US-China relations – written by Jordan Schneider.

Category

Blog

Cold Takes

Blog about transformative AI, futurism, research, ethics, philanthropy, and more by Holden Karnofsky. Includes the "Most Important Century" post series. Last posted in 2024.

Category

Blog

DeepMind Safety Research

Blog from the Google DeepMind safety team discussing research ideas about building AI safely.

Category

Blog

Miles's Substack

Blog from ex-OpenAI (now independent) AI policy researcher Miles Brundage on the rapid evolution of AI and the urgent need for thoughtful governance.

Category

Blog

Obsolete

Reporting and analysis on capitalism, great power competition, and the race to build superintelligence by freelance journalist Garrison Lovely.

Category

Blog

Paul Christiano's Blog

Blog on aligning prosaic AI by one of the leading AI safety researchers. No longer active but the archive is high quality.

Category

Blog

Rising Tide

Blog by Helen Toner (director of CSET and former OpenAI board member) offering analysis on navigating the transition to advanced AI systems.

Category

Blog

The Building Capacity Blog

Blog about building the fields of AI safety and Effective Altruism, discussing big-picture strategy and sharing personal experience from working in the space.

Category

Blog

The Power Law

Top forecaster Peter Wildeford forecasts the future and discusses AI, national security, innovation, emerging technology, and the powers – real and metaphorical – that shape the world.

Category

Blog

Threading the Needle

Publication by Anton Leicht on the political economy of AI progress, featuring weekly posts on how institutions and political incentives interface with fast technological change.

Category

Blog

Victoria Krakovna's Blog

Blog by an AI safety researcher at Google DeepMind, covering alignment research, rationality, and personal productivity insights. Last AI post was 2023.

Category

Blog

AISafety.com: Media channels

The AI safety space is changing rapidly. This directory of key information sources can help you keep up to date with the latest developments.

Category

Blog, Newsletter, Podcast, Video

xAI

Capabilities lab led by Elon Musk with the mission of advancing our collective understanding of the universe. Created Grok.

Category

Capabilities research

DeepSeek

Chinese capabilities lab developing and releasing open-weights large language models. Created DeepSeek-R1.

Category

Capabilities research

Astera Neuro & AGI

Astera Institute's research program exploring neuroscience-informed approaches to AGI, with teams focused on neuroscience, neuro-AI, and AI safety.

Category

Capabilities research, Conceptual research

Anthropic

Capabilities company focusing on LLM alignment, particularly interpretability. Notable figures include Chris Olah, Jack Clark, and Dario Amodei. Created Claude.

Category

Capabilities research, Empirical research

Google DeepMind

London-based capabilities company with a strong safety team, led by Demis Hassabis. Created AlphaGo, AlphaFold, and Gemini.

Category

Capabilities research, Empirical research

OpenAI

Capabilities company that created ChatGPT, led by Sam Altman. Over the course of 2024, roughly half of its then-employed AI safety researchers left the company.

Category

Capabilities research, Empirical research

Safe Superintelligence Inc. (SSI)

Research lab founded by Ilya Sutskever, comprising a small team of engineers and researchers working towards building a safe superintelligence.

Category

Capabilities research, Empirical research

Cyborgism

A strategy for accelerating alignment research by using human-in-the-loop systems which empower human agency rather than outsource it.

Category

Capabilities research, Empirical research

Gray Swan

For-profit company developing tools that automatically assess the risks of AI models and developing its own AI models aiming to provide best-in-class safety and security.

Category

Capabilities research, Empirical research

How to pursue a career in technical AI alignment

A guide written for people who are familiar with the arguments for the importance of AI alignment and are considering pursuing a career working on it.

Category

Career support

80,000 Hours Job Board

Curated list of job posts around the world tackling pressing problems, including AI safety. Also has a newsletter.

Category

Career support

80,000 Hours Problem Profile

Regularly-updated article with motivation and advice around pursuing a career in AI safety.

Category

Career support

AI Safety Quest

Grassroots volunteer organization helping people contribute to reducing catastrophic risk from AI by directing them to the most relevant resources and communities.

Category

Career support

AISafety.com: Advisors

Directory of advisors offering free guidance calls to help you discover how best to contribute to AI safety, tailored to your skills and interests.

Category

Career support

Successif

Helping professionals transition to high-impact work by performing market research on impactful jobs and providing career mentoring, opportunity matching, and professional training.

Category

Career support

Effective Thesis

Empowering students to use their theses as a pathway to impact. Lists research topic ideas and runs an accelerator and fellowship coaching people working on them.

Category

Career support

Heron

Working to bridge the gap between frontier AI models and the level of cybersecurity they need by connecting professionals to high-leverage opportunities in AI security.

Category

Career support

High Impact Professionals (HIP)

Supporting working professionals to maximize their positive impact through their talent directory and Impact Accelerator Program.

Category

Career support

Probably Good

Helps those who want to have a meaningful impact with their careers brainstorm career paths, evaluate options, and plan next steps.

Category

Career support

Upgradable

Nonprofit helping existential safety advocates to systematically optimize their lives and work.

Category

Career support

Arcadia Impact

Runs various projects aimed at education, skill development, and creating pathways into impactful careers.

Category

Career support, Training and education

Alignment Research Center (ARC)

Research organization trying to understand how to formalize mechanistic explanations of neural network behavior.

Category

Conceptual research

Arbital

Wiki on AI alignment theory, mostly written by Eliezer Yudkowsky. Includes foundational concepts, open problems, and proposed solutions.

Category

Conceptual research

Center for Human-Compatible AI (CHAI)

Developing the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems. Led by Stuart Russell at UC Berkeley.

Category

Conceptual research

John Wentworth

Independent alignment researcher working on selection theorems, abstraction, and agency.

Category

Conceptual research

Modeling Cooperation

Conducting long-term-future research on improving cooperation amid competition to develop transformative AI.

Category

Conceptual research

Orthogonal

Formal alignment organization led by Tamsin Leake, focused on agent foundations. Also has a public Discord server.

Category

Conceptual research

Alignment of Complex Systems Research Group (ACS)

Studying questions about multi-agent systems composed of humans and advanced AI. Based at Charles University, Prague.

Category

Conceptual research

Computational Rational Agents Laboratory (CORAL)

Research group studying agent foundations in order to create the mathematical tools to align the objectives of an AI with human values.

Category

Conceptual research

Dovetail

Research group working on foundational mathematics that provides an understanding of the nature of AI agents.

Category

Conceptual research

Dr. Roman Yampolskiy

Professor at University of Louisville with a background in cybersecurity, and author of over 100 publications – including 2 books on AI safety.

Category

Conceptual research

Dylan Hadfield-Menell

Associate professor at MIT working on agent alignment. Runs the Algorithmic Alignment Group.

Category

Conceptual research

Steve Byrnes's Brain-Like AGI Safety

Brain-inspired framework using insights from neuroscience and model-based reinforcement learning to guide the design of aligned AGI systems.

Category

Conceptual research

Team Shard

Small group of independent researchers trying to find reward functions which reliably instill certain values in agents.

Category

Conceptual research

Softmax

Research organization developing a theory of "organic alignment" to foster adaptive, non-hierarchical cooperation between humans and digital agents.

Category

Conceptual research, Empirical research

AE Studio

Large team pursuing a 'Neglected Approaches' strategy to alignment, tackling the problem from multiple, often overlooked angles in both technical and policy domains.

Category

Conceptual research, Empirical research, Governance

Formation Research

Aiming to reduce lock-in risks by researching fundamental lock-in dynamics and implementing high-impact interventions.

Category

Conceptual research, Empirical research, Governance

Frontier AI Research (FAIR)

Argentine nonprofit conducting both theoretical and empirical research to advance frontier AI safety as a sociotechnical challenge.

Category

Conceptual research, Empirical research, Governance

MIT Algorithmic Alignment Group

Working towards better conceptual understanding, algorithmic techniques, and policies to make AI safer and more socially beneficial.

Category

Conceptual research, Empirical research, Governance

AI Alignment Forum

Hub for researchers to discuss all ideas related to AI safety. Discussion ranges from technical models of agency to the strategic landscape, and everything in between.

Category

Conceptual research, Empirical research, Governance, Strategy

Association for Long Term Existence and Resilience (ALTER)

Israeli research and advocacy nonprofit working to investigate, demonstrate, and foster useful ways to safeguard and improve the future of humanity.

Category

Conceptual research, Governance

Equilibria Network

Using category theory and agent-based simulations to predict where multi-agent AI systems break down and identify control levers to prevent collective AI behavior from being harmful.

Category

Conceptual research, Governance

Center on Long-Term Risk (CLR)

Research, grants, and community-building around AI safety, focused on conflict scenarios as well as technical and philosophical aspects of cooperation.

Category

Conceptual research, Strategy, Funding

Iliad

Applied mathematics research nonprofit dedicated to advancing foundational alignment research. Organizes events and a research residency.

Category

Conceptual research, Training and education

Conjecture

Alignment startup born out of EleutherAI, following a Cognitive Emulation approach to build controllable LLMs and tackle core AI safety challenges.

Category

Empirical research

EquiStamp

Engineering company providing evaluation implementation, data annotation, and project operations so AI safety researchers can focus on research.

Category

Empirical research

Model Evaluation & Threat Research (METR)

Researching, developing, and running evaluations of AI capabilities, including broad autonomous capabilities and the ability of AI systems to conduct AI R&D.

Category

Empirical research

Redwood Research

Nonprofit researching AI control and alignment faking. Also consults governments and AI companies on AI safety practices.

Category

Empirical research

Apollo Research

Aiming to detect deception by designing AI model evaluations and conducting interpretability research to better understand frontier models. Also provides guidance to policymakers.

Category

Empirical research

FAR.AI

Conducting research, hosting events, and running programs – including the FAR.Lab coworking space.

Category

Empirical research

Meaning Alignment Institute (MAI)

Researching how to align AI, markets, and democracies with what people value – from theory to practice.

Category

Empirical research

Ought

Product-driven research lab developing mechanisms for delegating high-quality reasoning to ML systems. Built Elicit, an AI assistant for researchers and academics.

Category

Empirical research

TruthfulAI

Nonprofit led by Owain Evans, researching situational awareness, deception, and hidden reasoning in LLMs.

Category

Empirical research

Aether

Aiming to conduct technical research that yields valuable insights into the risks and opportunities that LLM agents present for AI safety.

Category

Empirical research

Aligned AI

Oxford-based startup attempting to use mathematical and theoretical techniques to achieve safe off-distribution generalization.

Category

Empirical research

Beneficial AI Foundation (BAIF)

Founded by Max Tegmark, BAIF supports a range of technical AI safety research – including mechanistic interpretability, red-teaming, and guaranteed safe AI.

Category

Empirical research

Cadenza Labs

Researching, benchmarking, and developing lie detectors for LLMs, including using cluster normalization techniques.

Category

Empirical research

Cavendish Labs

AI safety (and pandemic prevention) research community based in a small town in Vermont, USA.

Category

Empirical research

Compassion in Machine Learning (CaML)

Working on research to make transformative AI more compassionate towards all sentient beings through new data generation methods.

Category

Empirical research

Computational and Biological Learning Lab (CBL)

Research group using engineering approaches to understand the brain and to develop artificial learning systems.

Category

Empirical research

Contramont Research

Working on LM backdoors, real-world evals, and scalable oversight. Published a paper demonstrating cryptographic backdoors that evade detection even with full white-box access.

Category

Empirical research

Coordinal Research

Developing tools that accelerate the rate at which human researchers can make progress on alignment, and building automated research systems that can assist with alignment work today.

Category

Empirical research

Geodesic

Research nonprofit focused on leading projects with the shortest path to impact for AI safety – currently working on chain-of-thought health and monitoring.

Category

Empirical research

Goodfire

Mechanistic interpretability lab building infrastructure to allow researchers to decode neural networks in order to make them more understandable, editable, and safer.

Category

Empirical research

Krueger AI Safety Lab (KASL)

AI safety research group at Mila, led by David Krueger. Previously based at the University of Cambridge.

Category

Empirical research

LawZero

Advancing research and developing technical solutions for safe-by-design AI systems based on Scientist AI, a research direction led by Yoshua Bengio.

Category

Empirical research

Luthien

Developing AI Control into production-ready solutions that can be deployed in real-world AI systems.

Category

Empirical research

NYU Alignment Research Group (ARG)

Group of researchers at New York University doing empirical work with LLMs in order to address risks posed by advanced AI.

Category

Empirical research

Palisade Research

Investigating cyber offensive AI capabilities and the controllability of frontier AI models in order to advise policymakers and the public on AI risks.

Category

Empirical research

Simplex

Small team of researchers and engineers aiming to bring the best of physics and computational neuroscience together in order to understand and control AGI.

Category

Empirical research

Transluce

Building open source, scalable, AI-driven tools to understand and analyze AI systems and steer them in the public interest.

Category

Empirical research

AI Objectives Institute (AOI)

Creating tools and programs to ensure that AI and economic systems of the future are built with genuine human objectives.

Category

Empirical research, Capabilities research

EleutherAI

Open-source research lab focused on interpretability and alignment. Operates primarily through a public Discord server, where research is discussed and projects are coordinated.

Category

Empirical research, Capabilities research

Workshop Labs

For-profit working on mitigating gradual disempowerment and the intelligence curse by creating personalized models.

Category

Empirical research, Capabilities research

Timaeus

Using singular learning theory to develop the science of understanding how training data shapes AI model behavior.

Category

Empirical research, Conceptual research

Center for AI Safety (CAIS)

Conducting safety research, building the field of AI safety researchers, and advocating for safety standards.

Category

Empirical research, Conceptual research, Advocacy

Center for Long-Term Cybersecurity (CLTC)

UC Berkeley research center bridging academic research and practical policy needs in order to anticipate and address emerging cybersecurity challenges.

Category

Empirical research, Governance, Strategy

Harmony Intelligence

For-profit creating defensive cybersecurity products that mitigate AI x cyber risk. Also supports the Australian government with AI safety evals and policymaking.

Category

Empirical research, Strategy

AI Futures Project (AIFP)

Small research group forecasting the future of AI. Created 'AI 2027', a detailed forecast scenario projecting the development of artificial superintelligence.

Category

Forecasting

Forecasting Research Institute (FRI)

Advancing the science of forecasting for the public good by working with policymakers and nonprofits to design practical forecasting tools, and test them in large experiments.

Category

Forecasting

Manifold Markets

Prediction market platform covering many topics, using play money called "mana". Features markets on AI and AI safety.

Category

Forecasting

Metaculus

Well-calibrated forecasting platform covering a wide range of topics, including AI and AI safety.

Category

Forecasting

Quantified Uncertainty Research Institute (QURI)

Advancing forecasting and epistemics to improve the long-term future of humanity. Conducts research and builds software.

Category

Forecasting

Transformative Futures Institute (TFI)

Research nonprofit utilizing foresight to mitigate societal-scale risks from advanced AI and other potential global catastrophic risks.

Category

Forecasting

MIT FutureTech

Interdisciplinary group aiming to identify and understand trends in computing that create opportunities for (or pose risks to) our ability to sustain economic growth.

Category

Forecasting

Epoch AI

Research institute examining the driving forces behind AI and forecasting its economic and societal impact.

Category

Forecasting, Strategy

AI Safety Tactical Opportunities Fund (AISTOF)

Pooled multi-donor fund structured to be fast and rapidly capture emerging opportunities, including in governance, technical alignment, and evaluations. Managed by JueYan Zhang.

Category

Funding

Coefficient Giving (CG)

The largest funder in the existential risk space, backed primarily by Dustin Moskovitz and Cari Tuna. Previously called Open Philanthropy.

Category

Funding

Future of Life Institute (FLI): Fellowships

Includes PhD and postdoctoral fellowships in technical AI safety, and a PhD fellowship in US-China AI governance.

Category

Funding

Long-Term Future Fund (LTFF)

Making grants addressing global catastrophic risks, promoting longtermism, and otherwise increasing the likelihood that future generations will flourish.

Category

Funding

Longview Philanthropy

Devises and executes bespoke giving strategies for major donors, working with them at all stages of their giving journey.

Category

Funding

Manifund

Marketplace for funding new charities, including in AI safety. Users can find impactful projects, buy impact certificates, and weigh in on what gets funded.

Category

Funding

Survival and Flourishing Fund (SFF)

The second largest funder in AI safety, using an algorithm and meeting procedure called 'The S-process' to allocate grants.

Category

Funding

AE Studio Research

Empowering innovators and scientists to increase human agency by creating the next generation of responsible AI. Providing support, resources, and open-source software.

Category

Funding

AI Risk Mitigation (ARM) Fund

Aiming to reduce catastrophic risks from advanced AI through grants towards technical research, policy, and training programs for new researchers.

Category

Funding

AISafety.com: Funding

Comprehensive and up-to-date directory of sources of financial support for AI safety projects, ranging from grant programs to venture capitalists.

Category

Funding

Astralis Foundation

Funding initiative with $25M annual giving, backing people and ideas with the funding, strategic guidance, and networks they need to steer transformative AI toward beneficial outcomes.

Category

Funding

Center on Long-Term Risk (CLR): Fund

Supports projects and individuals aiming to address worst-case suffering risks from the development and deployment of advanced AI systems.

Category

Funding

Cooperative AI Foundation (CAIF)

Charity foundation backed by a large philanthropic commitment supporting research into improving cooperative intelligence of advanced AI.

Category

Funding

EA Infrastructure Fund (EAIF)

Aiming to increase the impact of effective altruism projects (including AI safety) by increasing their access to talent, capital, and knowledge.

Category

Funding

Ergo Impact

Helping philanthropists find, fund, and scale the most promising people and solutions to the world’s most pressing problems.

Category

Funding

Foresight Institute: Funding

Foresight's 'AI for Science & Safety Nodes' program offers funding, a community hub, and local compute in either San Francisco or Berlin.

Category

Funding

Future of Life Foundation (FLF)

Accelerator aiming to steer transformative technology towards benefiting life and away from extreme large-scale risks.

Category

Funding

Giving What We Can (GWWC)

Community of donors who have pledged to donate a significant portion of their income to highly effective charities, including those in AI safety.

Category

Funding

Science of Trustworthy AI

Funder housing the science-focused philanthropic efforts of Eric and Wendy Schmidt, moving large amounts of funding towards AI safety via its AI institute.

Category

Funding

The Navigation Fund

Jed McCaleb's fund, making grants to organizations and projects in various cause areas – including AI safety.

Category

Funding

Advanced Research + Invention Agency (ARIA)

UK government R&D funding agency aiming to unlock scientific and technological breakthroughs that benefit everyone. Similar to DARPA in the US.

Category

Funding

AI2050

Philanthropic initiative supporting researchers working on key opportunities and hard problems that are critical to get right for society to benefit from AI.

Category

Funding

AISafety.com: Donation Guide

Regularly-updated guide on how to donate most effectively to the AI safety field given the funding and time you have available.

Category

Funding

An Overview of the AI Safety Funding Situation

An analysis of the main funding sources in AI safety over time, useful for gaining a better understanding of what opportunities exist in the space.

Category

Funding

Lionheart Ventures

VC firm investing in ethical founders developing transformative technologies that have the potential to impact humanity on a meaningful scale.

Category

Funding

Macroscopic Ventures

Swiss VC focused on reducing suffering risks, including that posed by catastrophic AI misuse and AI conflict.

Category

Funding

Meta Charity Funders

Network of donors funding charitable projects that work one level removed from direct impact, often cross-cutting between cause areas.

Category

Funding

Mythos Ventures

VC aiming to empower founders building a radically better world with safe AI systems by investing in ambitious teams with defensible strategies that can scale to post-AGI.

Category

Funding

Nonlinear AI Safety Advocacy Grants

Grant program providing funding to those raising awareness about AI risks or advocating for a pause in AI development.

Category

Funding

Saving Humanity from Homo Sapiens (SHfHS)

Small organization with a long history of finding the people doing the best work to prevent human-created existential risks and financially supporting them.

Category

Funding

Halcyon Futures

Identifying leaders from business, policy, and academia, and helping them take on ambitious projects in AI safety.

Category

Funding, Career support

Juniper Ventures

VC firm investing in AI safety startups. Run by exited founders and backed by Reid Hoffman, Eric Ries, and Geoff Ralston.

Category

Funding, Career support

Alignment Foundation

Funding neglected approaches to AI alignment through grants and fiscal sponsorship, and conducting in-house technical safety research.

Category

Funding, Research support

UK AI Security Institute (UK AISI)

UK government organization conducting research and building infrastructure to test the safety of advanced AI and measure its impacts. Also working to shape global policy.

Category

Governance

AI Policy Institute (AIPI)

Channeling public concern around AI into effective regulation through engaging with policymakers, media, and the public.

Category

Governance

Center for Security and Emerging Technology (CSET)

Georgetown University think tank providing decision-makers with data-driven analysis on the security implications of emerging technologies.

Category

Governance

Centre for Future Generations (CFG)

Brussels think tank focused on helping governments anticipate and responsibly govern the societal impacts of rapid technological change.

Category

Governance

Centre for the Governance of AI (GovAI)

AI governance research group at Oxford, producing research tailored towards decision-makers and running career development programmes.

Category

Governance

European AI Office

Established within the European Commission as the centre of AI expertise in the EU, playing a key role in implementing the AI Act.

Category

Governance

Institute for Law & AI (LawAI)

Think tank researching and advising on the legal challenges posed by AI, premised on the idea that sound legal analysis will promote security, welfare, and the rule of law.

Category

Governance

SaferAI

French nonprofit developing quantitative risk models, evaluating company practices, and leading standards development that shapes AI regulation worldwide.

Category

Governance

The Future Society (TFS)

Drawing on their analyses and networks to provide decision-makers worldwide with pragmatic guidance on navigating technological, catastrophic, and political risks from AI.

Category

Governance

Oxford Martin AI Governance Initiative (AIGI)

Research center housed in the Martin School of the University of Oxford researching AI governance from both technical and policy perspectives.

Category

Governance

AI Governance & Safety Canada (AIGS Canada)

Nonpartisan nonprofit and community of people across Canada, producing white papers, legislative recommendations, and government submissions.

Category

Governance

AI Standards Lab

Politically and geographically neutral nonprofit converting insights from the existing literature into ready-made text for AI safety standards.

Category

Governance

Americans for Responsible Innovation (ARI)

Bipartisan nonprofit seeking to address a broad range of policy issues raised by AI, including current harms, national security concerns, and emerging risks.

Category

Governance

Beijing Institute of AI Safety and Governance (Beijing-AISI)

Developing AI safety and governance frameworks in order to provide a safe foundation for AI innovation and applications.

Category

Governance

Center for AI Standards and Innovation (CAISI)

US government organization developing voluntary AI standards and conducting security evaluations of AI systems. Formerly the USAISI.

Category

Governance

Center for Law & AI Risk (CLAIR)

Supporting research at the intersection of law and AI safety, aiming to build a field of legal scholars working to understand how law can reduce catastrophic risks from advanced AI.

Category

Governance

China AI Safety & Development Association (CnAISDA)

China’s self-described counterpart to the AI safety institutes of other countries. Its primary function is to represent China in international AI conversations.

Category

Governance

General Purpose AI Policy Lab

Producing research that helps French institutional actors address the security and international coordination requirements posed by the development of general purpose AI.

Category

Governance

Good Ancestors

Australian lobbying organization focused on AI safety policy and other AI-related issues, including cybersecurity and biosecurity. Also runs Australians for AI Safety.

Category

Governance

RAND Global and Emerging Risks

Delivering rigorous and objective public policy research on the most consequential challenges to civilization and global security.

Category

Governance

Secure AI Project

Nonprofit developing and advocating for AI safety principles to be put into practice in US state and federal legislatures.

Category

Governance

Simon Institute for Longterm Governance

Conducting research on international AI governance, facilitating exchange between technical and policy communities, and educating diplomats and civil servants about frontier AI.

Category

Governance

The AI Policy Network (AIPN)

Building bipartisan support for federal policies that prepare the US for the emergence of AI systems on the path to AGI and beyond.

Category

Governance

The AI Whistleblower Initiative (AIWI)

Helps AI insiders raise concerns about potential risks and misbehavior in AI development by providing whistleblowing services, expert guidance, and secure communication tools.

Category

Governance

Vista Institute for AI Policy

Promoting informed policymaking to navigate emerging challenges from AI through research, knowledge-sharing, and skill building.

Category

Governance

ControlAI

Nonprofit fighting to keep humanity in control of AI by developing policy and conducting public outreach.

Category

Governance, Advocacy

International AI Governance Alliance (IAIGA)

Nonprofit dedicated to establishing an independent global organization capable of effectively mitigating extinction risks from AI and fairly distributing its economic benefits to all.

Category

Governance, Advocacy

Machine Intelligence Research Institute (MIRI)

The original AI safety technical research organization, co-founded by Eliezer Yudkowsky. Now focusing on policy and public outreach.

Category

Governance, Advocacy, Conceptual research

Effective Institutions Project (EIP)

Research and advisory organization focused on improving the way institutions make decisions on critical global challenges.

Category

Governance, Strategy

Institute for AI Policy and Strategy (IAPS)

Research and field-building organization focusing on policy and standards, compute governance, and international governance (including China).

Category

Governance, Strategy

International Association for Safe & Ethical AI (IASEAI)

Connecting experts from academia, policy groups, civil society, industry, and beyond to promote research, shape policy, and build understanding around AI safety.

Category

Governance, Strategy

Centre pour la Sécurité de l'IA (CeSIA)

French nonprofit dedicated to research, education, and awareness-raising about the most extreme risks of advanced AI.

Category

Governance, Strategy, Advocacy

Import AI

Weekly updates on the latest developments in AI research (including governance) written by Jack Clark, co-founder of Anthropic.

Category

Newsletter

Transformer

Aiming to help decision-makers understand what’s happening in AI and why it matters – through news roundups, explainers, features, and opinion pieces.

Category

Newsletter

AI Safety Events & Training

Weekly newsletter listing newly-announced AI safety events and training programs, both online and in-person.

Category

Newsletter

AI Safety Newsletter

Newsletter published every few weeks discussing recent developments in AI and AI safety. No technical background required.

Category

Newsletter

AI Safety at the Frontier

Johannes Gasteiger, an alignment researcher at Anthropic, selects and summarizes the most interesting AI safety papers each month.

Category

Newsletter

AI Safety Funding

Newsletter listing newly announced funding opportunities for individuals and organizations working to reduce AI existential risk.

Category

Newsletter

AI Safety in China

Newsletter from Concordia AI, a Beijing-based social enterprise, providing updates on AI safety developments in China.

Category

Newsletter

Can We Secure AI With Formal Methods?

Current-events newsletter for keeping up to date with FMxAI (formal methods and AI), geared toward safety, offering a mix of shallow technical paper reviews and movement updates.

Category

Newsletter

ML Safety Newsletter

Infrequent newsletter aiming to inform readers about recent ML safety research, focussing on areas like adversarial robustness, interpretability, and control.

Category

Newsletter

80,000 Hours Podcast

In-depth conversations about the world’s most pressing problems – including AI safety – and what you can do to help solve them.

Category

Podcast

AI X-risk Research Podcast (AXRP)

Interviews with (mostly technical) AI safety researchers about their research, aiming to get a sense of why it was written and how it might reduce existential risk from AI.

Category

Podcast

Future of Life Institute (FLI) Podcast

Interviews with existential risk researchers, policy experts, philosophers, and a range of other influential thinkers.

Category

Podcast

The Cognitive Revolution

Biweekly podcast where host Nathan Labenz interviews AI innovators and thinkers, discussing the transformative impact AI will likely have in the near future.

Category

Podcast

Dwarkesh Podcast

Well-researched interviews with influential intellectuals going in-depth on AI and technology, and their broader societal implications.

Category

Podcast

The AI Policy Podcast

Podcast from the Center for Strategic & International Studies (CSIS) discussing AI regulation, innovation, national security, and geopolitics.

Category

Podcast

Alignment Ecosystem Development (AED)

Building and maintaining key online resources for the AI safety community, including AISafety.com and AISafety.info. Volunteers welcome.

Category

Research support

Centre for Enabling EA Learning & Research (CEEALAR aka EA Hotel)

Free or subsidised accommodation and board in Blackpool, England, for people working on/transitioning to working on global catastrophic risks.

Category

Research support

European Network for AI Safety (ENAIS)

Community of researchers and policymakers from over 13 countries across Europe, united in their efforts to advance AI safety.

Category

Research support

Ashgro

Providing fiscal sponsorship to AI safety projects, saving them time and allowing them to access more funding.

Category

Research support

Berkeley Existential Risk Initiative (BERI)

Providing flexible funding and operations support to university research groups working on existential risk, enabling projects otherwise hindered by university administration.

Category

Research support

Catalyze Impact

Incubating early-stage AI safety research organizations. The program involves co-founder matching, mentorship, and seed funding, culminating in an in-person building phase.

Category

Research support

Constellation

Center for collaborative research in AI safety, supporting promising work through fellowships, an incubator, and hosting individuals and teams.

Category

Research support

Future Matters

Conducts research and provides strategy consulting services to clients trying to advance AI safety – and other causes – through policy, politics, coalitions or social movements.

Category

Research support

Lightcone Infrastructure

Nonprofit maintaining LessWrong, the Alignment Forum, and Lighthaven (an event space in Berkeley, USA).

Category

Research support

London Initiative for Safe AI (LISA)

Coworking space hosting organizations (including BlueDot Impact, Apollo Research), acceleration programs (including MATS, ARENA), and independent researchers.

Category

Research support

5050

12- to 14-week program run by Fifty Years, helping scientists and engineers build startups working on AI safety.

Category

Research support

Atlas Computing

Creating new AI safety orgs by identifying unowned problems, mapping stakeholders, drafting milestones, sourcing early funders, and recruiting an expert leader to take ownership.

Category

Research support

Good Impressions

Marketing firm helping socially impactful projects grow, both through consulting and running full-scale campaigns.

Category

Research support

Impact Ops

Providing consultancy and hands-on support to help high-impact organizations upgrade their operations.

Category

Research support

Mox

Incubator and coworking space in San Francisco aimed at various groups, including those working on AI safety.

Category

Research support

Nonlinear

Nonprofit aiming to prevent extinction and reduce suffering risks by providing funding, 1-1 coaching, and career advice.

Category

Research support

Pause House

Free accommodation and board in Blackpool, England, for individuals doing work related to pushing for a pause to AI development.

Category

Research support

PEAKS

Coworking space in Zurich, Switzerland, for people working on AI safety or effective altruism. Regularly hosts events – ranging from relaxed gatherings to lightning talks and discussions.

Category

Research support

RAISEimpact

Program designed to help AI safety organizations strengthen their management and leadership practices in order to amplify their effectiveness.

Category

Research support

Rethink Wellbeing

Nonprofit running programs to give altruists the tools they need to improve their mental health, so they can have a greater impact on the world.

Category

Research support

Safe AI Forum (SAIF)

Fostering responsible governance of AI to reduce catastrophic risks through shared understanding and collaboration among key global actors.

Category

Research support

Singapore AI Safety Hub (SASH)

Coworking, events, and community space for people working on AI safety. Runs regular talks, hackathons, and networking events.

Category

Research support

Trajectory Labs

AI safety coworking and events space in Toronto, Canada, providing a physical space, hosting events, and maintaining a collaborative community.

Category

Research support

Apart Research

AI safety research lab hosting open-to-all research sprints, publishing papers, and incubating talented researchers.

Category

Research support, Career support

Meridian

Coworking space and field-building nonprofit based in Cambridge, UK. Houses projects like CAISH and the ERA:AI Fellowship, as well as visiting AI safety researchers.

Category

Research support, Training and education

AISafety.com

Hub for key resources for the AI safety community, including directories of courses, jobs, upcoming events, and training programs – as well as this map!

Category

Resource

Effective Altruism Forum

Forum on doing good as effectively as possible, including AI safety. Also has a podcast featuring text-to-speech narrations of top posts.

Category

Resource

LessWrong

Online forum dedicated to improving human reasoning, containing a lot of AI safety content. Also has a podcast featuring text-to-speech narrations of top posts.

Category

Resource

AI Digest

Concise visual explainers of important trends in AI, grounded in concrete examples of what AI models can do right now.

Category

Resource

AI Watch

Database tracking people, organizations, and “products” in the AI safety community, serving as a reference for positions, affiliations, and related data.

Category

Resource

AI Alignment Slack

The biggest real-time online community of people interested in AI safety, with channels ranging from general topics to specific fields to local groups.

Category

Resource

AI Risk Explorer (AIRE)

Online platform monitoring the emergence of large-scale AI risks, featuring curated information on evaluations, incidents, and policies.

Category

Resource

AI Risk: Why Care?

AI safety chatbot that excels at addressing hard questions and counterarguments about existential risk.

Category

Resource

AI Safety for Fleshy Humans

Accessible, comic-illustrated series describing the history of AI capabilities, the alignment problem, and potential solutions.

Category

Resource

AI Safety Map Anki Deck

Flashcards for helping to learn and memorize the main organizations, projects, and programs currently operating in the AI safety space.

Category

Resource

AI Safety Support (AISS)

Field-building organization now chiefly serving as the home for an extensive resources list called Lots of Links.

Category

Resource

Effective Altruism Domains

Directory of domains freely available to be used for high-impact projects, including those contributing to AI safety.

Category

Resource

MIT AI Risk Repository

Comprehensive living database of over 1600 AI risks, categorized by their cause and risk domain.

Category

Resource

RiesgosIA.org

Spanish-language website featuring AI safety educational tools, including a daily-updated monitor of AI safety research papers.

Category

Resource

Neuronpedia

Open source interpretability platform including infrastructure, data, and tools for circuit tracing, activation steering, and semantic search.

Category

Resource, Empirical research

AI-Plans

Ranked and scored contributable compendium of alignment plans and their problems. Runs regular hackathons.

Category

Strategy

Centre for the Study of Existential Risk (CSER)

Interdisciplinary research center at the University of Cambridge dedicated to the study and mitigation of existential and global catastrophic risks.

Category

Strategy

Global Partnership on AI (GPAI)

International initiative with 44 member countries working to implement human-centric, safe, secure, and trustworthy AI embodied in the principles of the OECD Recommendation on AI.

Category

Strategy

Leverhulme Centre for the Future of Intelligence (CFI)

Interdisciplinary research center at the University of Cambridge exploring the nature, ethics, and impact of AI.

Category

Strategy

Median Group

Research nonprofit working on models of past and future progress in AI, intelligence enhancement, and sociology related to existential risks.

Category

Strategy

Partnership on AI (PAI)

Convening academic, civil society, industry, and media organizations to create solutions so that AI advances positive outcomes for people and society.

Category

Strategy

AI Lab Watch

Collects actions frontier AI companies can take to improve safety, gathers public information on what they are doing, and evaluates them accordingly.

Category

Strategy

Center for AI Risk Management & Alignment (CARMA)

Conducting interdisciplinary research supporting global AI risk management. Also produces policy and technical research.

Category

Strategy

Forethought

Small research nonprofit focused on how to navigate the transition to a world with superintelligent AI systems.

Category

Strategy

Global Catastrophic Risk Institute (GCRI)

Small think tank developing solutions for reducing existential risk by leveraging both scholarship and the demands of real-world decision-making.

Category

Strategy

Narrow Path

A series of proposals developed by ControlAI intended for action by policymakers in order for humanity to survive artificial superintelligence.

Category

Strategy

Rethink Priorities

Researching solutions and strategies and mobilizing resources across various cause areas – including AI safety – in order to safeguard a flourishing present and future.

Category

Strategy

The Compendium

Living document aiming to present a coherent worldview explaining the race to AGI and extinction risks and what to do about them – in a way that is accessible to non-technical readers.

Category

Strategy

Convergence Analysis

Building a foundational series of sociotechnical reports on key AI scenarios and governance recommendations, and conducting awareness campaigns aimed at the general public.

Category

Strategy, Advocacy

AI Governance and Safety Institute (AIGSI)

Aiming to improve institutional response to existential risk from AI by conducting research and outreach, and developing educational materials.

Category

Strategy, Advocacy

AI Impacts

Answering decision-relevant questions about the future of AI, including through research, a wiki, and expert surveys. Run by MIRI.

Category

Strategy, Forecasting

Centre for Long-Term Resilience (CLTR)

Think tank aiming to transform global resilience to extreme risks by improving relevant governance, processes, and decision-making.

Category

Strategy, Governance

Odyssean Institute

Pushing academic and policy discourse forward through research and engagement on horizon scanning, decision making under deep uncertainty, and citizens' assemblies.

Category

Strategy, Governance

Intelligence Rising

Workshop letting decision-makers experience the tensions and risks that can emerge in the highly competitive environment of AI development through an educational roleplay game.

Category

Strategy, Governance, Training and education

AISafety.info

Accessible guide to AI safety for those new to the space, in the form of a comprehensive FAQ and AI safety chatbot. Project of Rob Miles.

Category

Training and education

BlueDot Impact

Runs the standard introductory courses for people new to AI safety. Two main streams: Technical AI Safety and Frontier AI Governance.

Category

Training and education

ML Alignment & Theory Scholars (MATS)

Research program connecting talented scholars with top mentors in AI safety. Involves 12 weeks onsite mentored research in Berkeley, and, if selected, 6–12 months extended research.

Category

Training and education

ML4Good

8-day intensive in-person bootcamps upskilling participants in technical AI safety research, held in various locations around the world.

Category

Training and education

Talos Fellowship

7-month program for graduates to launch EU policy careers reducing risks from AI. Comprises an online reading group, a Brussels policymaking summit, and an optional paid placement.

Category

Training and education

AI Safety Camp (AISC)

3-month part-time online research program with mentorship, aimed at helping people who want to work on AI safety team up together on concrete projects.

Category

Training and education

AI Safety Initiative at Georgia Tech (AISI)

Georgia Tech community conducting training programs and research projects investigating open problems in AI safety.

Category

Training and education

AI Safety, Ethics and Society (AISES)

Course from the Center for AI Safety (CAIS) covering a wide range of risks while leveraging concepts and frameworks from existing research fields to analyze AI safety.

Category

Training and education

AISafety.com: Self-study

Comprehensive, up-to-date directory of AI safety curricula and reading lists for self-led learning at all levels.

Category

Training and education

Alignment Research Engineer Accelerator (ARENA)

4–5 week ML engineering upskilling program, focusing on alignment. Aims to provide individuals with the skills, community, and confidence to contribute directly to technical AI safety.

Category

Training and education

Apart Sprints

Short hackathons and challenges, both online and in-person around the world, focused on important questions in AI safety.

Category

Training and education

Cambridge Boston Alignment Initiative (CBAI)

Helping students get into AI safety research via upskilling programs and fellowships. Supports AISST and MAIA.

Category

Training and education

Cambridge ERA:AI Fellowship

In-person, paid, 8-week summer research fellowship at the University of Cambridge for aspiring AI safety and governance researchers.

Category

Training and education

Center for Human-Compatible AI (CHAI): Internship

Designed for students and professionals who are interested in research in human-compatible AI. Interns work on a research project supervised by a mentor.

Category

Training and education

Center on Long-Term Risk (CLR): Fellowship

2-3 month summer research fellowship in London working on challenging research questions relevant to reducing suffering in the long-term future.

Category

Training and education

Global Challenges Project (GCP)

Intensive 3-day workshops for students to explore the foundational arguments around risks from advanced AI (and biotechnology). Run by Kairos.

Category

Training and education

GovAI Fellowship

3-month program by the Centre for the Governance of AI, designed to help professionals transition to working on AI governance.

Category

Training and education

London AI Safety Research (LASR) Labs

3-month program where participants work in teams of 3–4, supervised by an experienced AI safety researcher, to write an academic-style paper.

Category

Training and education

Mentorship for Alignment Research Students (MARS)

Part-time research program run by Cambridge AI Safety Hub (CAISH), connecting aspiring researchers with experienced mentors to conduct AI safety research for 2–3 months.

Category

Training and education

PIBBSS Fellowship

3-month interdisciplinary program connecting researchers from diverse fields with AI safety mentors, in order to help them transition their career to AI safety.

Category

Training and education

Pivotal Research Fellowship

Annual 9-week program designed to enable promising researchers to produce impactful research and accelerate their careers in AI safety.

Category

Training and education

AI Safety Asia (AISA)

Convening stakeholders, training civil servants and civil society leaders, and conducting regional AI policy research in Asia.

Category

Training and education

Athena Mentorship Program for Women

10-week remote mentorship program for women looking to strengthen their research skills and network in technical AI alignment research.

Category

Training and education

Cooperative AI Summer School

Annual program providing students and early-career professionals in AI, computer science, and related disciplines with a firm grounding in the emerging field of cooperative AI.

Category

Training and education

Foresight Fellowship

1-year program catalyzing collaboration among young scientists, engineers, and innovators working to advance technologies for the benefit of life.

Category

Training and education

Future Impact Group (FIG) Fellowship

Remote, part-time research opportunities in AI policy and philosophy for safe AI. Provides ongoing support, including coworking sessions, issue troubleshooting, and career guidance.

Category

Training and education

Human-aligned AI Summer School

Annual 4-day program held in Prague, Czech Republic, teaching alignment research methodology through talks, workshops, and discussions.

Category

Training and education

IAPS AI Policy Fellowship

Fully-funded, 3-month program by the Institute for AI Policy and Strategy (IAPS) for professionals seeking to strengthen practical policy skills for managing the challenges of advanced AI.

Category

Training and education

Impact Academy: Global AI Safety Fellowship

Fully-funded research program connecting exceptional STEM researchers with full-time placement opportunities at AI safety organizations for up to 6 months.

Category

Training and education

Impact Research Groups (IRG)

8-week mentored research program for London students, where teams explore a research question tackling global challenges – including AI safety.

Category

Training and education

Leaf: Dilemmas and Dangers in AI

Interdisciplinary online fellowship helping teenagers explore how to steer cutting-edge AI technology toward benefitting humanity.

Category

Training and education

Non-Trivial

Fellowship helping people ages 14–20 launch research projects tackling the world’s most pressing problems – including AI safety.

Category

Training and education

Orion AI Governance Initiative

London-based talent development scheme designed to equip outstanding students with the knowledge and skills to shape the future of AI governance.

Category

Training and education

Pathfinder Fellowship

Fellowship run by Kairos supporting university organizers working on AI safety groups by offering funding, 1-on-1 mentorship, resources, and ecosystem connections.

Category

Training and education

Supervised Program for Alignment Research (SPAR)

3-month, part-time research program run by Kairos where mentees collaborate with researchers on impactful projects, with funding provided for compute and project expenses.

Category

Training and education

Technical Alignment Research Accelerator (TARA)

14-week part-time program building technical AI safety research skills through the ARENA curriculum. Consists of weekly in-person learning sessions followed by a 3-week project.

Category

Training and education

UChicago Existential Risk Laboratory (XLab) Fellowship

10-week, in-person summer research fellowship giving students the opportunity to produce high-impact research on various emerging threats, including from AI.

Category

Training and education

WhiteBox Research

Filipino nonprofit aiming to develop more AI interpretability and safety researchers in and around Southeast Asia.

Category

Training and education

Tarbell Center for AI Journalism

Supporting journalism that helps society navigate the emergence of increasingly advanced AI. Runs fellowships, grants, and residencies.

Category

Training and education, Funding

ILINA Program

African-led research program dedicated to developing talent, generating research, and shaping policy to advance AI safety.

Category

Training and education, Governance

Rational Animations

Animated videos aiming to foster good thinking, promote altruistic causes, and help ensure humanity's future goes well – particularly regarding AI safety.

Category

Video

Robert Miles

The most popular AI safety education channel, explaining technical alignment concepts to general audiences through accessible explainer videos.

Category

Video

Doom Debates

Channel hosted by Liron Shapira featuring in-depth debates, explainers, and live Q&A sessions, focused on AI existential risk and other implications of superintelligence.

Category

Video

AI Explained

YouTuber discussing the latest AI developments as they happen, offering explanations and analysis of important research and events.

Category

Video

AI In Context

Channel produced by 80,000 Hours presenting thoroughly researched, cinematic stories about what's happening in AI and where the trends are taking us.

Category

Video

AI Safety Videos

Comprehensive directory of AI safety video content, from beginner-friendly introductions to in-depth expert talks.

Category

Video

Dr Waku

Computer science PhD and AI research scientist talking about how AI will likely affect the world, along with expert interviews.

Category

Video

FAR.AI YouTube channel

Talks given by AI safety experts at various events, covering topics ranging from mechanistic interpretability to evals and governance.

Category

Video

Lethal Intelligence

YouTube channel raising awareness about the lethal dangers of AGI through explainer videos and podcasts.

Category

Video

Siliconversations

YouTube channel explaining (mostly) AI safety concepts through entertaining stickman videos.

Category

Video

Species

Channel run by Drew Spartz educating a general audience about AI risk through high-effort mini-documentaries.

Category

Video

The AI Risk Network (ARN)

YouTube channel run by John Sherman, a Peabody and Emmy Award-winning former investigative journalist, aiming to make AI risk a kitchen table conversation.

Category

Video
