Come along to the monthly call run by Alignment Ecosystem Development (AED) to find field-building volunteer projects you can contribute to or to pitch a project to other volunteers.
Contact
Bryce Robertson
Status
Paused
Regularly scrapes all major sources of alignment data for use by the AI Safety Chatbot and other projects. Currently needs someone to maintain it. Search "alignment research dataset" on Hugging Face for details.
Contact
Olivier Coutu
Status
Active
aisafetyfeed.com is a curated stream for AI safety content. It gathers posts and research from key sources: LessWrong, Alignment Forum, Substacks, etc. AI helps summarize, tag, and rate content for novelty, letting users quickly find what's important and relevant to them.
Contact
Matt Brooks
Status
Active
People who control democratic outcomes in Germany, France, Japan, and Korea tend to read in their own language. The idea is a WhatsApp-shareable translated version of AI 2027 or The Compendium, for example.
Contact
Leonard Bereska
Status
Seeking owner
Create a website database of all known existential risk, existential safety, AI risk, and AI safety memes, rank them by public vote, then publish broadly. Coordinate with AI Notkilleveryoneism Memes.
Contact
Center for Existential Safety
Status
Seeking owner
Nonprofit seeking volunteers to help run its social media, ideally with a background in AI journalism, a strong grasp of AI concepts, and an interest in discussions involving AI professionals addressing critical public concerns. Website: third-opinion.org
Contact
OAISIS
Status
Active
Members of the AI safety community located in low- and middle-income countries are often somewhat isolated. A fund would help them travel to connect with core members of the community.
Contact
HP
Status
Seeking owner
AI safety knowledge is scattered across private docs and forums, impairing coordination and funding decisions. A tiered community wiki (open editing, curated review, restricted access for sensitive info) would centralize important information.
Contact
Zoé Roy-Stang
Status
Seeking owner
Volunteer organization helping new people navigate the AI safety ecosystem, connect with like-minded people, and find projects that are a good fit for their skills. Seeking a webmaster, social media manager, and volunteer recruiter.
Contact
AI Safety Quest
Status
Active
An open source website aimed at highlighting the key risks and challenges associated with AI development, and to share potential solutions. Contributions and suggestions welcome.
Contact
Duncan Rickelton
Status
Active
Create a comprehensive test of a person's understanding of AI risks. Include brief (30-second), moderate (3–10-minute), and thorough (11–30-minute) versions. See Clearer Thinking Artificial Intelligence Quiz and Clearer Thinking Long-Term Future Quiz.
Contact
Center for Existential Safety
Status
Seeking owner
Create multiple off-grid coliving and coworking spaces in reasonably secure, low-cost places around the world that are relatively resilient to global catastrophes. Coordinate with CEEALAR and Bunker in Paradise.
Contact
Center for Existential Safety
Status
Seeking owner
Website (this one!) serving as a hub for resources for the AI safety community. Seeking assistance with user research and testing, promotion, SEO, and non-CSS development.
Contact
Bryce Robertson
Status
Active
Intended to be a single point of access for learning about AI safety, created by Rob Miles's volunteer team. Currently seeking volunteers to suggest improvements to articles via the Google Doc link on each article.
Contact
Algon
Status
Active
A directory to map current AI safety research, identify potential gaps in the landscape, and create opportunities for AI safety researchers to share work, find collaborators, and inspire others to get involved in mitigating existential AI risk.
Contact
Olamide Florence Adeoye
Status
Active
Create a research-based website that ranks the most likely ways an average person living today will die. Rogue AI and AI-developed bioweapons would likely be at the top, with more well-known causes lower. Coordinate with How We Die.
Contact
Center for Existential Safety
Status
Seeking owner
Create a “Lifetime Contribution” metric defining the number of hours someone has invested in existential/catastrophic risk reduction. This would serve as a rough objective proxy for the person’s foresight, agency, and altruism.
Contact
Center for Existential Safety
Status
Seeking owner
Our epistemics and mutual understanding could be improved with regular debates/adversarial collaborations between alignment researchers who disagree on particular topics. We need a platform to facilitate this.
Contact
Matthew Baggins
Status
Seeking owner
Research the stages of understanding of existential risk awareness, then publish broadly. Coordinate with CFAR and Upgradable for rationality development insights.
Contact
Center for Existential Safety
Status
Seeking owner
An initiative aiming to catalyze collective action to ensure humanity survives this decade, serving all existential safety advocates globally. Looking for: co-founders, dedicated volunteers, and feedback on the main and associated sites.
Contact
Collective Action for Existential Safety
Status
Active
Stampede is an Elixir chatbot framework that can serve multiple servers and services simultaneously. It's now time to start building the plugins that members of Rob Miles's Discord will use daily, and the project needs a new dev to take over.
Contact
Matt Wilson
Status
Active
Develop existing quantitative models of existential risks in a professional, public-facing website so they are easy enough for the general public and policymakers to use, then publish broadly. Coordinate with Existential Risks from AI.
Contact
Center for Existential Safety
Status
Seeking owner
Many people are seeking PhD opportunities or supervisors to guide their work, and there are presumably also supervisors seeking PhD students. A dedicated PhD-pairing platform could help them find each other and have a cascading impact.
Contact
Madhusudhan Pathak
Status
Seeking owner
A feed for AI safety content, personalized and optimized for intellectual growth. Seeking volunteers with expertise in web frontends or recommender engines, or designers with expertise in UX design and user reviews, to help build a better feed, sooner.
Contact
John Beshir
Status
Active
A nonpartisan website tracking political candidates’ AI policy stances across three categories: mundane, geopolitical, and existential risks. Seeking help with both web development and research.
Contact
Liam Robins
Status
Paused
A ranked, scored, and contributable compendium of alignment plans and their problems. Seeking volunteers to submit alignment plans and participate in critique-athons, as well as a designer to improve the site.
Contact
Kabir Kumar
Status
Active
Crowdsourced charity evaluator, helping people find or promote the best AI safety projects (as well as projects in other cause areas). Currently seeking volunteers to help market and grow the platform.
Contact
Givewiki
Status
Active
Some approaches to solving alignment involve teaching ML systems about alignment and getting assistance from them. It may be useful to capture conversations between researchers and use them to expand our dataset of alignment content to train models on.
Contact
Bryce Robertson
Status
Paused
Conversational agent informed by the Alignment Research Dataset. This project served as a foundation for the Stampy chatbot, which is built and maintained by Rob Miles's volunteer team.
Contact
AI Alignment McGill
Status
Active
A project to raise awareness of existential risk from AI and push governments to address it by implementing a global moratorium on large AI training runs until alignment is solved.
Contact
Mikhail Samin
Status
Active
Create a list of quotes by notable people supporting AI safety, easily searchable and for use in various situations. This has been suggested by multiple AI safety communications people. See pauseai.info/quotes for something similar.
Contact
Agustín Covarrubias
Status
Seeking owner
Found an organization similar to ENAIS that tracks and connects people and organizations doing valuable work in AI safety. Could share a database with ENAIS but use more international branding. Might connect many national organizations.
Contact
Bryce Robertson
Status
Seeking owner
The more frugally people live, the less dependent they are on a day job. In addition, the same amount of grantmaker money could support a larger number of individuals, enabling more research per dollar.
Contact
Beatricz Emmanuel
Status
Active
Increasing the ratio of AI safety to AI capabilities researchers seems like a core way to reduce existential risk from AI. Given their experience, helping capabilities researchers switch to AI safety could be especially effective.
Contact
Bryce Robertson
Status
Seeking owner
An always-active virtual coworking environment and social space – especially valuable for networking and collaboration. Seeking volunteers to be present on the platform and introduce new people to the space as they arrive.
Contact
Sasha Cooper
Status
Active
Collect and connect everyone interested in AI safety who is applying for a PhD each year. Put them in a Slack/Discord together and facilitate the effort to look into different programs and share information with each other.
Contact
Bryce Robertson
Status
Seeking owner
Interactive tree-like maps of AI safety research designed to make the research landscape more navigable. Looking for someone to continue the idea independently of the founder, who is now bandwidth-constrained.
Contact
Myles Heller
Status
Paused
Collects public materials (mostly books and longform meta papers) on various topics related to AI safety and makes it easy for individuals to read through them as part of a challenge. Seeking volunteers to improve the project.
Contact
Esben Kran
Status
Active