Cairn Wiki
An AI safety knowledge base with 1653 entities covering risks, approaches, projects, organizations, and key people.
Top entities
AI Alignment
Comprehensive review of AI alignment approaches finding current methods (RLHF, Constitutional AI)...
Intervention Portfolio (approach)
Provides a strategic framework for AI safety resource allocation by mapping 13+ interventions aga...
Scheming & Deception Detection (approach)
Reviews empirical evidence that frontier models (o1, Claude 3.5, Gemini 1.5) exhibit in-context s...
OpenAI Foundation (organization)
The OpenAI Foundation holds 26% equity (~$130B) in OpenAI Group PBC with governance control, but...
Capability Elicitation (approach)
Capability elicitation—systematically discovering what AI models can actually do through scaffold...
AI Safety Cases (approach)
Safety cases are structured arguments adapted from nuclear/aviation to justify AI system safety, ...
Bioweapons Risk (risk)
Comprehensive synthesis of AI-bioweapons evidence through early 2026, including the FRI expert su...
Multipolar Trap (risk)
Analysis of coordination failures in AI development using game theory, documenting how competitiv...