Top blog recommendation
Zvi Mowshowitz's blog covering AI developments, policy, and safety with detailed analysis and commentary.
Type
Blog
Top recommended videos
A carefully curated and regularly updated YouTube playlist to help people gain an understanding of what's going on with AI.
Type
YouTube playlist
Hub for researchers to discuss ideas related to ensuring that transformative AI is aligned with human values.
Type
Forum
Forum dedicated to improving human reasoning, containing a lot of AI safety content. AI Alignment Forum posts are cross-posted here so anyone, not just approved researchers, can engage.
Type
Forum
The most popular AI safety education channel, explaining technical alignment concepts to general audiences through accessible explainer videos.
Type
YouTube channel
Animated videos aiming to foster good thinking, promote altruistic causes, and help ensure humanity's future goes well – particularly regarding AI safety.
Type
YouTube channel
Fortnightly meetings beginning with a presentation summarizing a recent AI safety paper or article, followed by group discussion. The presentations are posted to YouTube.
Type
YouTube channel
Channel produced by 80,000 Hours presenting thoroughly researched, cinematic stories about what's happening in AI and where the trends are taking us.
Type
YouTube channel
Weekly newsletter from the Center for AI Policy (CAIP). Each issue explores three key AI policy developments for professionals in the field.
Type
Newsletter
Entertaining stick figure explainers on (mostly) AI safety – covering alignment, corporate negligence, regulations, and societal impacts.
Type
YouTube channel
YouTuber discussing the latest AI developments as they happen, offering explanations and analysis of important research and events.
Type
YouTube channel
Interviews with people working on the world’s most pressing problems (especially AI safety), including advice on how you can use your career to help solve them.
Type
Podcast
Interviews with technical AI safety researchers about their research, aiming to get a sense of why it was written and how it might reduce existential risk from AI.
Type
Podcast
Weekly summaries of newly announced events and training programs in the AI safety space – both online and in-person.
Type
Newsletter
Newsletter listing newly announced funding opportunities for individuals and organizations working to reduce AI existential risk.
Type
Newsletter
A followable list of accounts on Twitter/X belonging to AI thinkers and organizations that discuss AI safety.
Type
Twitter/X list
A followable Twitter/X list of thoughtful people working on policy to reduce risks and secure benefits from AI.
Type
Twitter/X list
Online hub for people dedicated to doing good in the world to share research, debate ideas, and develop strategies for addressing the world’s most pressing problems – including AI safety.
Type
Forum
Primer on existential risk from AI on the Homo Sabiens blog. Lays out the case for concern from start to finish in language that a smart high-schooler could follow.
Type
Article
Living document aiming to present a coherent worldview explaining the race to AGI and extinction risks and what to do about them – in a way that is accessible to non-technical readers.
Type
Article
Blog covering many topics, including AI, reasoning, science, psychiatry, and politics. Often features book summaries and commentary on AI safety.
Type
Blog
Weekly summaries of developments in AI research (including governance) written by Jack Clark, co-founder of Anthropic.
Type
Newsletter
Newsletter from the Center for AI Safety (CAIS) published every few weeks, discussing developments in AI and AI safety. No technical background required.
Type
Newsletter
Monthly newsletter by the Center for Security and Emerging Technology (CSET) covering AI, emerging technology generally, and security policy.
Type
Newsletter
Aims to help decision-makers understand what’s happening in AI and why it matters – through news roundups, explainers, features, and opinion pieces.
Type
Newsletter
The people with the most insight into AI development in China may well be Chinese people themselves. Jeff Ding provides weekly translations of their writings.
Type
Newsletter
Fortnightly newsletter by the Future of Life Institute (FLI) summarizing and analyzing the latest developments surrounding the EU AI Act.
Type
Newsletter
A monthly digest of the latest developments, research trends, and key initiatives in the realm of AI evaluation.
Type
Newsletter
Bimonthly-ish newsletter providing a roundup of the most important research developments in ML safety. Co-written by Dan Hendrycks.
Type
Newsletter
Newsletter from Concordia AI, a Beijing-based social enterprise, providing updates on AI safety developments in China.
Type
Newsletter
Newsletter for keeping up to date with FMxAI (formal methods and AI), geared toward safety, featuring a mix of shallow technical reviews of papers and movement updates.
Type
Newsletter
Johannes Gasteiger, an alignment researcher at Anthropic, selects and summarizes the most interesting AI safety papers each month.
Type
Newsletter
Blog about transformative AI and other things by Holden Karnofsky. Includes the "most important century" series, which argues that the 21st century could be the most important ever for humanity.
Type
Blog
Blog on AI safety by Jacob Steinhardt, a UC Berkeley statistics professor, analyzing risks, forecasting future breakthroughs, and discussing alignment strategies.
Type
Blog
Pair of articles by Tim Urban on the website Wait But Why using accessible explanations and illustrations to explore AI’s evolution toward superintelligence and its existential risks for humanity.
Type
Article
Grounded, no-nonsense primer on why building artificial superintelligence using current techniques will predictably lead to human extinction.
Type
Book
Explores how superintelligence could be created and what it might be like, arguing that it would be difficult to control and could take over the world in order to accomplish its goals.
Type
Book
Layman’s introduction to AI existential risk, covering how powerful AI systems might become, why superhuman AI might be dangerous, and what we can do about it.
Type
Book
Entertaining and accessible outline of the core ideas around AI existential risk, along with an exploration of the community and culture of AI safety researchers.
Type
Book
Argues that humanity's future is imperiled by modern existential threats – including from AI – and that coordinated global action is essential to ensure our survival.
Type
Book
Explores how advanced AI could fundamentally reshape human civilization and argues that we must carefully guide its development.
Type
Book
Argues that since advanced AI is intrinsically unexplainable, unpredictable, and uncontrollable, we currently lack any means to guarantee its alignment – making it a profound risk for civilization.
Type
Book
Accessible textbook exploring AI safety risks, machine ethics, and governance frameworks. Written by the director of the Center for AI Safety (CAIS).
Type
Book
Explains the problem of making powerful AI compatible with humans. The book discusses potential solutions, with an emphasis on the approaches of the Center for Human-Compatible AI (CHAI).
Type
Book
Comprehensive overview of the challenges that come with attempting to align AI systems, from the perspective of a machine learning researcher.
Type
Book
An account of the decades-long battle to control what has emerged as the world's most critical resource – microchip technology – with the United States and China increasingly in conflict.
Type
Book
Channel hosted by Liron Shapira featuring in-depth debates, explainers, and live Q&A sessions, focused on AI existential risk and other implications of superintelligence.
Type
YouTube channel
Interviews with AI safety researchers, explainers, fictional stories of concrete threat models, and paper walk-throughs.
Type
YouTube channel
Computer science PhD and AI research scientist talking about how AI will likely affect all of us and society as a whole, plus interviews with experts.
Type
YouTube channel
FAR.AI's channel posts talks given by AI safety experts at various events, covering topics ranging from mechanistic interpretability to evals and governance.
Type
YouTube channel
YouTube channel run by John Sherman, a Peabody and Emmy Award-winning former investigative journalist, aiming to make AI risk a kitchen table conversation.
Type
YouTube channel
Channel run by Drew Spartz educating a general audience about AI risk through high-effort mini-documentaries.
Type
YouTube channel
YouTube channel raising awareness about the lethal dangers of AGI through explainer videos and podcasts.
Type
YouTube channel
Top forecaster Peter Wildeford forecasts the future and discusses AI, national security, innovation, emerging technology, and the powers – real and metaphorical – that shape the world.
Type
Blog
Blog from AI Futures Project (AIFP), which created 'AI 2027'. Many posts explain the thinking behind how they model the AI future, while others include advocacy.
Type
Blog
Deep coverage of technology, China, and US policy, featuring original analysis alongside interviews with thinkers and policymakers. Also has a podcast.
Type
Blog
Platform from the Center for AI Safety (CAIS) posting articles written by experts from a wide range of fields discussing the impacts of AI.
Type
Blog
Blog by AI safety researcher Daniel Paleka curating and succinctly analyzing recent research and news in AI safety.
Type
Blog
Substack by Ajeya Cotra, researcher at METR, covering AI capabilities forecasting, AI agent benchmarks, timelines for automating AI R&D, and implications for AI safety.
Type
Blog
Blog from independent AI policy researcher Miles Brundage on the rapid evolution of AI and the urgent need for thoughtful governance.
Type
Blog
Publishes policy-relevant perspectives on frontier AI governance, including research summaries, opinions, interviews, and explainers.
Type
Blog
Publication by freelance journalist Garrison Lovely about the intersection of capitalism, geopolitics, and AI. Posts sporadically.
Type
Blog
Approximately fortnightly blog by Anton Leicht about charting a course through the politics of rapid AI progress.
Type
Blog
Blog by Eric Drexler on AI prospects and their surprising implications for technology, economics, environmental concerns, and military affairs.
Type
Blog
Blog by Helen Toner (director of CSET and former OpenAI board member) offering analysis on navigating the transition to advanced AI systems.
Type
Blog
Forecasting blog for AI safety relevant predictions, written by Alvin Ånestrand. Explores potential scenarios and makes predictions to inform strategies for reducing AI risks.
Type
Blog
Well-researched interviews with influential intellectuals discussing AI, technology, and their broader societal implications. Hosted by Dwarkesh Patel.
Type
Podcast
In-depth interviews with existential risk researchers, policy experts, philosophers, and a range of other influential thinkers.
Type
Podcast
Biweekly podcast where host Nathan Labenz interviews people involved with building AI, exploring the transformative impact AI will likely have in the near future.
Type
Podcast
Podcast from the Center for Strategic & International Studies (CSIS) discussing AI regulation, innovation, national security, and geopolitics.
Type
Podcast
Audio narrations of top LessWrong posts, covering all curated posts and posts with at least 125 karma.
Type
Podcast
Audio narrations from the Effective Altruism Forum, covering all curated posts and posts with at least 125 karma.
Type
Podcast
Wikipedia article providing a broad overview of the idea that advanced AI poses an existential threat to humanity.
Type
Article
Article explaining why MIRI believes that if researchers build superintelligent AI with anything like the field’s current technical understanding or methods, the expected outcome is human extinction.
Type
Article
Accessible, comic-illustrated series describing the history of AI capabilities, the alignment problem, and potential solutions.
Type
Article
Forecast article predicting that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.
Type
Article
Regularly-updated article from 80,000 Hours with motivation and advice around pursuing a career in AI safety.
Type
Article
Blog by Zvi Mowshowitz on various topics, including AI, offering detailed analysis and personal insights from a rationalist perspective. Posts very often.
Type
Blog
A carefully curated and regularly updated YouTube playlist to help people gain an understanding of what's going on with AI.
Type
YouTube playlist