Your contribution helps build a comprehensive resource for understanding the AI policy landscape.
Review: All submissions are manually reviewed by our team before appearing on the map.
Enrichment: We use AI-assisted research to verify and expand entries with additional context, affiliations, and public statements.
Belief Scores: Stance, timeline, and risk assessments are weighted averages of all the assessments we receive for an entry. Self-submissions carry more weight than external observations (see the sketch after this list).
Privacy: We only include publicly available information. You can request changes or removal anytime.
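For the technically curious, here is a minimal sketch of how that weighting could work. The specific weights (2.0 for self-submissions, 1.0 for external observations) and the 0-1 risk scale are illustrative assumptions, not our exact parameters.

    # Illustrative sketch of the belief-score weighting described above.
    # The weights and the 0-1 scale are assumptions, not exact parameters.
    SELF_WEIGHT = 2.0      # assumed: a self-submission counts double
    EXTERNAL_WEIGHT = 1.0  # assumed: baseline weight for external observations

    def belief_score(assessments):
        """Weighted average of numeric assessments (e.g., risk on a 0-1 scale).

        Each assessment is a (value, is_self) pair, where is_self marks
        whether the subject of the entry submitted it themselves.
        """
        total = sum((SELF_WEIGHT if is_self else EXTERNAL_WEIGHT) * value
                    for value, is_self in assessments)
        weights = sum(SELF_WEIGHT if is_self else EXTERNAL_WEIGHT
                      for _, is_self in assessments)
        return total / weights if weights else None

    # Two external observers rate risk 0.6 and 0.8; the subject rates 0.4:
    # (0.6 + 0.8 + 2 * 0.4) / (1 + 1 + 2) = 0.55
    print(belief_score([(0.6, False), (0.8, False), (0.4, True)]))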
This tool is in a pre-launch beta. We are actively fixing data issues, improving enrichment, adding new features, and refining the UX. Please email us at info@mapping-ai.org if you'd like to contribute or share feedback.
Help us build a comprehensive map of the AI policy landscape. Submit new entries or update existing ones with corrections, additional context, or new perspectives.
Anyone with knowledge of the U.S. AI policy landscape can contribute. You can add a completely new person, organization, or resource, or search for an existing entry and submit updates, corrections, or additional information. All submissions are reviewed before publication.
Example submission (person):
Name: Dario Amodei
Role: Executive
Title: CEO, Anthropic
Primary org: Anthropic
Location: San Francisco, CA
Regulatory stance: Moderate (mandatory safety evals + transparency)
How publicly stated: Explicitly stated (speeches, testimony, writing)
AGI timeline: Within 2–3 years
AI risk level: Potentially catastrophic
Key concerns: Concentration of power, Weapons proliferation, Loss of human control
Influence type: Decision-maker, Builder, Narrator
Twitter/X: @DarioAmodei
Notes: Co-founded @Anthropic after leaving @OpenAI over safety disagreements. Published @Machines of Loving Grace (Oct 2024). Close collaborator with @Daniela Amodei. Advocates for "responsible scaling" rather than pausing.
Example submission (organization):
Name: Anthropic
Category: Frontier Lab
Website: https://anthropic.com
Location: San Francisco, CA
Funding model: Mixed (commercial + philanthropic)
Regulatory stance: Moderate (mandatory safety evals + transparency)
How publicly stated: Explicitly stated (speeches, testimony, writing)
AGI timeline: Within 2–3 years
AI risk level: Potentially catastrophic
Key concerns: Weapons proliferation, Loss of human control, Cybersecurity threats
Influence type: Builder, Researcher/analyst, Advisor/strategist
Twitter/X: @AnthropicAI
Bluesky: @anthropic.ai
Last verified: 2026
Notes: Public benefit corporation founded by @Dario Amodei and @Daniela Amodei. Pioneered "responsible scaling policy" framework. Competes directly with @OpenAI and @Google DeepMind.
Example submission (resource):
Title: Situational Awareness
Author(s): Leopold Aschenbrenner
Type: Essay
URL: https://situational-awareness.ai
Year: 2024
Category: AI Capabilities
Key argument: AGI is likely by 2027, superintelligence by end of decade. The US needs to treat frontier AI as a national security priority.
Notes: Written by former @OpenAI researcher @Leopold Aschenbrenner. Widely circulated in Silicon Valley and DC policy circles. Shifted discourse toward framing AI as geopolitical competition. Influenced thinking of @Dario Amodei and others.
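The three examples above share a common field layout. As a rough sketch of the person entry's structure, the field names below mirror the form labels; the class name and Python-style types are our inference from the examples, not a published schema.

    # Hypothetical data model inferred from the example submissions above.
    # Field names mirror the form labels; the types are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class PersonEntry:
        name: str                  # e.g., "Dario Amodei"
        role: str                  # e.g., "Executive"
        title: str                 # e.g., "CEO, Anthropic"
        primary_org: str
        location: str
        regulatory_stance: str
        how_publicly_stated: str
        agi_timeline: str
        ai_risk_level: str
        key_concerns: list[str] = field(default_factory=list)
        influence_type: list[str] = field(default_factory=list)
        twitter: str = ""
        notes: str = ""            # free text; @Name cross-links other entries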
Thank you for your contribution. We'll review your submission.
This tool is in a pre-launch beta. We are actively improving data and adding new features. Email info@mapping-ai.org to contribute or provide feedback.