Media channels

The AI safety space is changing rapidly. These information sources can help you learn more and stay up to date.

AI Alignment Forum (AF)

Hub for researchers to discuss ideas related to ensuring that transformative AI is aligned with human values.

Type: Forum

LessWrong (LW)

Forum dedicated to improving human reasoning, containing a lot of AI safety content. AI Alignment Forum posts are cross-posted here so anyone, not just approved researchers, can engage.

Type: Forum

Robert Miles AI Safety

The most popular AI safety education channel, explaining technical alignment concepts to general audiences through accessible explainer videos.

Type: YouTube channel

Rational Animations

Animated videos aiming to foster good thinking, promote altruistic causes, and help ensure humanity's future goes well – particularly regarding AI safety.

Type: YouTube channel

AISafety.com Reading Group

Fortnightly meetings beginning with a presentation summarizing a recent AI safety paper or article, followed by group discussion. The presentations are posted to YouTube.

Type: YouTube channel

AI In Context

Channel produced by 80,000 Hours presenting thoroughly researched, cinematic stories about what's happening in AI and where the trends are taking us.

Type: YouTube channel

AI Policy Weekly

Weekly newsletter from the Center for AI Policy (CAIP). Each issue explores three key AI policy developments for professionals in the field.

Type: Newsletter

Siliconversations

Entertaining stick-figure explainers on (mostly) AI safety – covering alignment, corporate negligence, regulations, and societal impacts.

Type: YouTube channel

AI Explained

YouTuber discussing the latest AI developments as they happen, offering explanations and analysis of important research and events.

Type: YouTube channel

The 80,000 Hours Podcast

Interviews with people working on the world’s most pressing problems (especially AI safety), including advice on how you can use your career to help solve them.

Type: Podcast

AI X-risk Research Podcast (AXRP)

Interviews with technical AI safety researchers about their research, exploring why they pursued it and how it might reduce existential risk from AI.

Type: Podcast

AI Safety Events & Training

Weekly summaries of newly announced events and training programs in the AI safety space – both online and in-person.

Type: Newsletter

AI Safety Funding

Newsletter listing newly announced funding opportunities for individuals and organizations working to reduce AI existential risk.

Type: Newsletter

AGI Safety Core

A followable list of accounts on Twitter/X belonging to AI thinkers and organizations that discuss AI safety.

Type: Twitter/X list

AI Policy

A followable Twitter/X list of thoughtful people working on policy to reduce risks and secure benefits from AI.

Type: Twitter/X list

Effective Altruism Forum (EA Forum)

Online hub for people dedicated to doing good in the world to share research, debate ideas, and develop strategies for addressing the world’s most pressing problems – including AI safety.

Type: Forum

Deadly by Default

Primer on existential risk from AI on the Homo Sabiens blog. Lays out the case for concern from start to finish in language that a smart high-schooler could follow.

Type: Article

The Compendium

Living document aiming to present a coherent worldview explaining the race to AGI, the extinction risks it creates, and what to do about them – in a way that is accessible to non-technical readers.

Type: Article

Astral Codex Ten (ACX)

Blog covering many topics, including AI, reasoning, science, psychiatry, and politics. Often features book summaries and commentary on AI safety.

Type: Blog

Import AI

Weekly summaries of developments in AI research (including governance) written by Jack Clark, co-founder of Anthropic.

Type: Newsletter

AI Safety Newsletter

Newsletter from the Center for AI Safety (CAIS) published every few weeks, discussing developments in AI and AI safety. No technical background required.

Type: Newsletter

policy.ai

Monthly newsletter by the Center for Security and Emerging Technology (CSET) covering AI, emerging technology generally, and security policy.

Type: Newsletter

Transformer

Aims to help decision-makers understand what’s happening in AI and why it matters – through news roundups, explainers, features, and opinion pieces.

Type: Newsletter

ChinAI

The people with the most insight into AI development in China may well be Chinese people themselves. Jeff Ding provides weekly translations of their writings.

Type: Newsletter

The EU AI Act Newsletter

Fortnightly newsletter by the Future of Life Institute (FLI) summarizing and analyzing the latest developments surrounding the EU AI Act.

Type: Newsletter

The AI Evaluation Substack

A monthly digest of the latest developments, research trends, and key initiatives in the realm of AI evaluation.

Type: Newsletter

ML Safety Newsletter

Roughly bimonthly newsletter rounding up the most important research developments in ML safety, co-written by Dan Hendrycks.

Type: Newsletter

AI Safety in China

Newsletter from Concordia AI, a Beijing-based social enterprise, providing updates on AI safety developments in China.

Type: Newsletter

Can We Secure AI With Formal Methods?

Newsletter for keeping up to date with FMxAI (formal methods and AI), geared toward safety, offering a mix of brief technical paper reviews and movement updates.

Type: Newsletter

AI Safety at the Frontier

Johannes Gasteiger, an alignment researcher at Anthropic, selects and summarizes the most interesting AI safety papers each month.

Type: Newsletter

Cold Takes

Blog about transformative AI and other things by Holden Karnofsky. Includes the "most important century" series, which argues that the 21st century could be the most important ever for humanity.

Type: Blog

Bounded Regret

Blog on AI safety by Jacob Steinhardt, a UC Berkeley statistics professor, analyzing risks, forecasting future breakthroughs, and discussing alignment strategies.

Type: Blog

The AI Revolution

Pair of articles by Tim Urban on the website Wait But Why, using accessible explanations and illustrations to explore AI's evolution toward superintelligence and the existential risks it poses to humanity.

Type: Article

If Anyone Builds It, Everyone Dies – Eliezer Yudkowsky & Nate Soares (2025)

Grounded, no-nonsense primer on why building artificial superintelligence using current techniques will predictably lead to human extinction.

Type: Book

Superintelligence – Nick Bostrom (2014)

Explores how superintelligence could be created and what it might be like, arguing that it would be difficult to control and could take over the world in order to accomplish its goals.

Type: Book

Uncontrollable – Darren McKee (2023)

Layman’s introduction to AI existential risk, covering how powerful AI systems might become, why superhuman AI might be dangerous, and what we can do about it.

Type: Book

The AI Does Not Hate You / The Rationalist's Guide to the Galaxy – Tom Chivers (2019)

Entertaining and accessible outline of the core ideas around AI existential risk, along with an exploration of the community and culture of AI safety researchers.

Type: Book

The Precipice – Toby Ord (2020)

Argues that humanity's future is imperiled by modern existential threats – including from AI – and that coordinated global action is essential to ensure our survival.

Type: Book

Life 3.0 – Max Tegmark (2017)

Explores how advanced AI could fundamentally reshape human civilization and argues that we must carefully guide its development.

Type: Book

AI: Unexplainable, Unpredictable, Uncontrollable – Roman Yampolskiy (2024)

Argues that since advanced AI is intrinsically unexplainable, unpredictable, and uncontrollable, we currently lack any means to guarantee its alignment – making it a profound risk for civilization.

Type: Book

Introduction to AI Safety, Ethics and Society – Dan Hendrycks (2024)

Accessible textbook exploring AI safety risks, machine ethics, and governance frameworks. Written by the director of the Center for AI Safety (CAIS).

Type: Book

Human Compatible – Stuart Russell (2019)

Explains the problem of making powerful AI compatible with humans. The book discusses potential solutions, with an emphasis on the approaches of the Center for Human-Compatible AI (CHAI).

Type: Book

The Alignment Problem – Brian Christian (2020)

Comprehensive overview of the challenges that come with attempting to align AI systems, from the perspective of a machine learning researcher.

Type: Book

Chip War – Chris Miller (2022)

An account of the decades-long battle to control what has emerged as the world's most critical resource – microchip technology – with the United States and China increasingly in conflict.

Type: Book

Doom Debates

Channel hosted by Liron Shapira featuring in-depth debates, explainers, and live Q&A sessions, focused on AI existential risk and other implications of superintelligence.

Type: YouTube channel

The Inside View

Interviews with AI safety researchers, explainers, fictional stories of concrete threat models, and paper walk-throughs.

Type: YouTube channel

Dr Waku

Computer science PhD and AI research scientist discussing how AI is likely to affect each of us and society as a whole, plus interviews with experts.

Type: YouTube channel

FAR.AI

FAR.AI's channel posts talks given by AI safety experts at various events, covering topics ranging from mechanistic interpretability to evals and governance.

Type: YouTube channel

The AI Risk Network (ARN)

YouTube channel run by John Sherman, a Peabody and Emmy Award-winning former investigative journalist, aiming to make AI risk a kitchen table conversation.

Type: YouTube channel

Species

Channel run by Drew Spartz educating a general audience about AI risk through high-effort mini-documentaries.

Type: YouTube channel

Lethal Intelligence

YouTube channel raising awareness about the lethal dangers of AGI through explainer videos and podcasts.

Type: YouTube channel

The Power Law

Top forecaster Peter Wildeford forecasts the future and discusses AI, national security, innovation, emerging technology, and the powers – real and metaphorical – that shape the world.

Type: Blog

AI Futures Project

Blog from the AI Futures Project (AIFP), which created 'AI 2027'. Many posts explain the thinking behind their models of the AI future, while others are advocacy pieces.

Type: Blog

ChinaTalk

Deep coverage of technology, China, and US policy, featuring original analysis alongside interviews with thinkers and policymakers. Also has a podcast.

Type: Blog

AI Frontiers

Platform from the Center for AI Safety (CAIS) posting articles written by experts from a wide range of fields discussing the impacts of AI.

Type: Blog

Daniel Paleka's Newsletter

Blog by AI safety researcher Daniel Paleka curating and succinctly analyzing recent research and news in AI safety.

Type: Blog

Planned Obsolescence

Substack by Ajeya Cotra, researcher at METR, covering AI capabilities forecasting, AI agent benchmarks, timelines for automating AI R&D, and implications for AI safety.

Type: Blog

Miles's Substack

Blog from independent AI policy researcher Miles Brundage on the rapid evolution of AI and the urgent need for thoughtful governance.

Type: Blog

AI Policy Bulletin

Publishes policy-relevant perspectives on frontier AI governance, including research summaries, opinions, interviews, and explainers.

Type: Blog

Obsolete

Publication by freelance journalist Garrison Lovely about the intersection of capitalism, geopolitics, and AI. Posts sporadically.

Type: Blog

Threading the Needle

Roughly fortnightly blog by Anton Leicht on charting a course through the politics of rapid AI progress.

Type: Blog

AI Prospects

Blog by Eric Drexler on AI prospects and their surprising implications for technology, economics, environmental concerns, and military affairs.

Type: Blog

Rising Tide

Blog by Helen Toner (director of CSET and former OpenAI board member) offering analysis on navigating the transition to advanced AI systems.

Type: Blog

Forecasting AI Futures

Forecasting blog on AI-safety-relevant predictions, written by Alvin Ånestrand. Explores potential scenarios and makes forecasts to inform strategies for reducing AI risks.

Type: Blog

Dwarkesh Podcast

Well-researched interviews with influential intellectuals discussing AI, technology, and their broader societal implications. Hosted by Dwarkesh Patel.

Type: Podcast

Future of Life Institute Podcast

In-depth interviews with existential risk researchers, policy experts, philosophers, and a range of other influential thinkers.

Type: Podcast

The Cognitive Revolution

Biweekly podcast where host Nathan Labenz interviews people involved with building AI, exploring the transformative impact AI will likely have in the near future.

Type: Podcast

The AI Policy Podcast

Podcast from the Center for Strategic & International Studies (CSIS) discussing AI regulation, innovation, national security, and geopolitics.

Type: Podcast

LessWrong podcast

Audio narrations of top LessWrong posts, covering all curated posts and posts with at least 125 karma.

Type: Podcast

EA Forum podcast

Audio narrations from the Effective Altruism Forum, covering all curated posts and posts with at least 125 karma.

Type: Podcast

Existential risk from artificial intelligence

Wikipedia article providing a broad overview of the idea that advanced AI poses an existential threat to humanity.

Type: Article

The Problem

Article explaining why MIRI believes that if researchers build superintelligent AI with anything like the field’s current technical understanding or methods, the expected outcome is human extinction.

Type: Article

AI Safety for Fleshy Humans (aisafety.dance)

Accessible, comic-illustrated series describing the history of AI capabilities, the alignment problem, and potential solutions.

Type: Article

AI 2027

Forecast article predicting that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.

Type: Article

Risks from power-seeking AI systems

Regularly updated article from 80,000 Hours making the case for working on risks from power-seeking AI and offering advice on pursuing a career in AI safety.

Type: Article

Don't Worry About the Vase

Blog by Zvi Mowshowitz on various topics, including AI, offering detailed analysis and personal insights from a rationalist perspective. Posts very often.

Type: Blog

AI Safety Playlist

A carefully curated and regularly updated YouTube playlist to help people gain an understanding of what's going on with AI.

Type: YouTube playlist