Founder Toolkit

Resources for starting and growing an AI safety organization – including incubators, fiscal sponsors, venture capital, and practical guides.

5050

12–14 week program run by Fifty Years, helping scientists and engineers build startups working on AI safety.

Type: Incubator

AIS Founders Discord

Alignment Foundation

Type: Fiscal sponsor

Anthology Fund

$100M fund by Menlo Ventures and Anthropic backing AI startups from seed to Series A, including those working on "trust and safety tooling".

Type: Venture capitalist

Anti Entropy Resource Portal: Fiscal Sponsorship

An overview of what fiscal sponsorship is, its benefits and disadvantages, and best practices.

Type: Article/tool

Anti Entropy: SparkWell

Provides a temporary home for nonprofit projects so they can test ideas, develop operational capabilities, and launch as independent entities.

Type: Fiscal sponsor

Ashgro

Providing fiscal sponsorship to AI safety projects, saving them time and allowing them to access more funding.

Type: Fiscal sponsor

Astera Foundation

Private foundation providing grants to projects working to make the future with advanced AI go well. Also invests in for-profits in order to redeploy capital to AI safety nonprofits.

Type: Venture capitalist

Auspicious Fund

VC investing in startups shaping the future of AI and humanity, committed to delivering financial returns while building a resilient and beneficial future.

Type: Venture capitalist

Babuschkin Ventures

New VC fund by Igor Babuschkin (xAI co-founder). Backs AI safety research and startups building agentic systems.

Type: Venture capitalist

Beacon

Fiscal sponsorship, administrative support, and shielding from bureaucracy for researchers safeguarding humanity from catastrophic risk.

Type: Fiscal sponsor

Berkeley Existential Risk Initiative (BERI)

Providing operations support (including fiscal sponsorship) to research groups working on existential risk, especially those based at a university.

Type: Fiscal sponsor

Catalyze Impact

Incubating early-stage AI safety research organizations. The program involves co-founder matching, mentorship, and seed funding, culminating in an in-person building phase.

Type: Incubator

d/acc: one year later

Blog post by Vitalik Buterin (co-founder of Ethereum) revisiting his philosophy that we should accelerate technology that is defensive and distributes power.

Type: Article/tool

Def/acc at Entrepreneur First

Incubation program focusing on "defensive" tech – the idea that the most powerful solution to technological risk is often more technology.

Type: Incubator

Ergo Impact

Helping philanthropists find, fund, and scale the most promising people and solutions to the world’s most pressing problems – including AI safety.

Type: Incubator

Future of Life Foundation (FLF)

Incubator helping start new organizations that steer transformative technology towards benefiting life and away from extreme large-scale risks.

Type: Incubator

Halcyon Futures

Identifying leaders from business, policy, and academia, and helping them take on ambitious projects in AI safety.

Type: Incubator

Juniper Ventures

VC firm investing in AI safety startups. Run by exited founders and backed by Reid Hoffman, Eric Ries, and Geoff Ralston.

Type: Venture capitalist

Lionheart Ventures

VC firm investing in ethical founders developing transformative technologies that have the potential to impact humanity on a meaningful scale.

Type: Venture capitalist

Macroscopic Ventures

Swiss VC focused on reducing suffering risks, including those posed by catastrophic AI misuse and AI conflict. Makes both grants to nonprofits and investments in for-profit companies.

Type: Venture capitalist

Metaplanet

Evergreen VC and fund by Jaan Tallinn backing tech that supports humanity's long-term survival. Profits from the VC go towards funding AI safety nonprofits.

Type: Venture capitalist

My current impressions on career choice for longtermists

Post from 2021 by Holden Karnofsky (ex-CEO of Coefficient Giving) listing the aptitudes he thought would be useful for jobs in fields like AI safety.

Type: Article/tool

Mythos Ventures

VC aiming to empower founders building a radically better world with safe AI systems, investing in ambitious teams with defensible strategies that can scale into a post-AGI world.

Type: Venture capitalist

Players Philanthropy Fund (PPF)

Large, generalist fiscal sponsor with over 600 charitable projects worldwide. Streamlined onboarding process.

Type: Fiscal sponsor

Recursive Middle Manager Hell

Post describing how adding layers of middle management tends to harm organizations. Answers the question "why not just scale existing orgs?"

Type: Article/tool

Rethink Priorities: Special Projects

Accelerating initiatives by providing comprehensive support services, enabling organizations to focus on their core mission and scale efficiently.

Type: Fiscal sponsor

Safe AI Fund (SAIF)

Early-stage venture fund supporting startups developing tools to enhance AI safety. Provides both financial investment and mentorship.

Type: Venture capitalist

Seldon Lab

12-week program in San Francisco, USA, for pre-seed companies aiming to make an impact with AI safety and assurance technologies.

Type: Incubator

Some for-profit AI alignment org ideas

List of ideas by Eric Ho, along with context for why he believes a for-profit org can make a big contribution to the field.

Type: Article/tool

Ten AI safety projects I'd like people to work on

A list of projects based on the intuitions of Julian Hazell, a grantmaker on Coefficient Giving's AI governance and policy team.

Type: Article/tool

The Audacious Project

Funding initiative housed at TED helping entrepreneurs shape impactful ideas into viable multi-year plans and launch them to the world alongside visionary philanthropists.

Type: Incubator

Theory of Change Builder

Free tool for creating a theory of change: drop in relevant documents and it generates a draft ToC, which you can then adjust and export.

Type: Article/tool

There should be more AI safety orgs

Post by Marius Hobbhahn (co-founder and CEO of Apollo Research) explaining why he thinks there should be more AI safety organizations.

Type: Article/tool

There Should Be More Alignment-Driven Startups

Post arguing that having more AI safety-focused startups would increase the probability of solving alignment.

Type: Article/tool

Why does the AI safety community need help founding projects?

LessWrong shortform post by Ryan Kidd (co-executive director of MATS) arguing that AI safety has a bottleneck in organization-building.

Type: Article/tool