AI Funding · Sovereignty

The Funding Paradox: Why AI Safety Research Struggles to Compete

3 min read

AI capabilities research attracts billions while safety research scrapes by on grants. This funding imbalance isn't just unfair — it's dangerous. Here's why, and what we can do about it.

There's a troubling asymmetry in AI funding. Companies raise billions to build more powerful models. Meanwhile, researchers working on AI safety, alignment, and interpretability struggle to secure grants.

This isn't just an academic concern. It's a structural problem that could define humanity's relationship with artificial intelligence.

The Capabilities-Safety Gap

Consider the numbers:

OpenAI, Anthropic, Google DeepMind, and other AI labs collectively spend tens of billions annually on capabilities research — making models larger, faster, more powerful.

Global spending on AI safety research? Probably under $1 billion annually, and that's being generous.

This creates a predictable dynamic: capabilities advance faster than our understanding of how to control or align them.

Why the Imbalance?

The funding gap isn't arbitrary. It reflects several structural realities:

Market Incentives: Capabilities research has clear commercial applications. Better models mean better products, and better products mean more revenue.

Safety research has indirect value. It's insurance against catastrophic risks that may never materialize. That's a tough sell to VCs.

Timeline Mismatch: Investors operate on 5-10 year horizons. Capabilities improvements show quarterly progress. Safety research addresses risks that might play out over decades.

Measurement Challenges: Capabilities are easy to benchmark. We can measure accuracy, speed, cost-per-token. Safety improvements are harder to quantify. How do you measure "alignment" or "interpretability"?

The Sovereignty Angle

This funding imbalance concentrates power. A handful of well-funded AI labs make architectural choices that affect everyone. They decide:

  • What safety measures to implement
  • What risks are acceptable
  • How transparent to be about capabilities and limitations

This is a sovereignty issue. These decisions affect nations, economies, and individuals who have no seat at the table.

Restructuring Incentives

What would better funding structures look like?

Mandatory Safety Budgets: Companies building frontier AI systems could be required to spend X% of their capabilities budgets on safety research.

Public Funding: Governments could fund safety research the way they fund public health — as a public good with positive externalities.

Long-term Capital: We need funding vehicles that match the timeline of AI safety challenges. Patient capital with 20-30 year horizons.

Open Challenges: Prize-based funding for safety breakthroughs could attract diverse approaches and democratize the field.

Making Safety Competitive

Ultimately, we need to make safety research as attractive as capabilities research. That means:

  • Competitive salaries and resources
  • Clear career paths
  • Recognition and prestige
  • Infrastructure and compute access

The current funding model treats AI safety as an afterthought. But safety isn't something you bolt on after building powerful systems. It needs to be baked in from the start.

And that requires funding structures that reflect its importance.

We're building systems that could reshape society. We should fund safety research accordingly.
