What is the DIP?
The Direct Institutional Plan (DIP) was developed by ControlAI as a collaborative strategy for preventing the development of Artificial Superintelligence until it can be proven safe and has clear public buy-in.
AI companies are racing to build systems more intelligent than all of humanity combined. Top AI scientists, world leaders, and even AI company CEOs warn this could lead to human extinction.
The DIP offers a clear path to solving this problem through democratic engagement:
- Design policies that target ASI development and precursor technologies
- Inform every relevant person in the democratic process – lawmakers, executive branch, civil service, media, civil society
This plan is simple and direct, working through the very institutions that power our societies.
Core Principles
- Strategic: Addresses the entire problem end-to-end
- Public: Operates with full transparency
- Scalable: Grows effectively with more resources
- Democratic: Strengthens democratic institutions
Writing Emails to Politicians
Do
- Establish your credentials and local connection
- Lead with authority – reference Nobel laureates, AI researchers, CEOs
- Focus on solutions, not just doom
- Be specific about what you're asking for
- Reference successful precedents
- Follow up persistently but politely
- Keep emails concise and well-structured
Don't
- Assume they have technical knowledge
- Use jargon without explanation
- Lead with doom and gloom
- Be condescending or alarmist
- Give up after one or two attempts
Email Templates
Subject: AI Risks & Solutions
Dear [Title and Name],
I hope you are well. I am writing to express my concerns regarding the development of artificial intelligence and its potential impact on our society. I would appreciate the opportunity to discuss the situation with you or your staff, answer questions, and connect you with other experts if desired.
Why this matters:
As you may be aware, over 350 leading AI researchers and industry leaders have signed the Center for AI Safety statement declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." More recently, over 100 cross-party UK parliamentarians have endorsed this concern.
I find it particularly concerning that:
- Nobel Prize winners, top AI scientists, and even the CEOs of the major AI companies have acknowledged AI poses an extinction risk to humanity
- AI models are already demonstrating deceptive capabilities in safety testing environments
- AI development remains largely unregulated – in fact, a restaurant owner has more legal obligations than an AI developer
- Major AI companies are explicitly pursuing "superintelligence" without proven safety measures
There are solutions:
Fortunately, practical solutions exist that don't require choosing between safety and progress. The majority of voters support AI regulation and safety measures. Many dangerous technologies have been successfully regulated (nuclear weapons, biological weapons, CFCs, human cloning).
Given your important role in [Parliament/Congress/Legislature], I believe your voice will be crucial in shaping positive outcomes in this area. These issues merit attention now, as the decisions made today regarding AI governance will shape the path forward for future generations.
I would be happy to meet with you or your policy advisors to provide additional information on these issues.
I look forward to hearing from you.
Cordially,
[Your Name]
[Your Location/Postal Code]
[Optional: Your credentials/background]
Subject: Following up: AI Risks & International Coordination
Dear [Name],
I wrote to you on [date] regarding AI risks. I'm following up because I remain concerned about the pace of AI development and the lack of governance frameworks.
Since my last email, [mention any relevant recent news, e.g., "another major AI breakthrough was announced" or "additional experts have raised concerns"].
Key developments you should know about:
- Current AI capabilities are accelerating rapidly: AI systems now write 30% of Google's code and have achieved gold-medal performance at the International Mathematical Olympiad
- Safety incidents are concerning: In recent evaluations, AI models have attempted to copy themselves to avoid being shut down, lied to researchers, and tried to disable oversight mechanisms
- International action is growing: Over 100 cross-party UK parliamentarians have now publicly supported the need for AI safety regulation
No country wins if any company achieves superintelligence without safety guarantees.
Would you or someone from your office be willing to meet? I can explain why I believe a political solution to this problem is achievable and how [your country] could help lead an international response.
Best regards,
[Your Name]
[Your Location/Postal Code]
Subject: Public Support for AI Safety Measures
Dear [Name],
I'm writing again about AI safety and the strong public support for action in this area.
The public is clear about what they want:
Recent polling shows that:
- 77% of citizens believe the government should mandate safety testing for powerful AI models
- 73% support halting the rapid development of superintelligence until safety can be assured
- 87% support requiring developers to prove their AI systems are safe before public release
Most people don't agree with Silicon Valley's race toward superintelligence, especially when the CEOs of the companies building these systems openly acknowledge significant risks of catastrophic outcomes (10-25% in some cases).
The window for action is narrowing. Industry experts predict we are just 2-10 years away from AI systems that could match or exceed human capabilities across most domains. We need governance frameworks in place before, not after, we reach that point.
I welcome the opportunity to discuss how [your country/constituency] can be part of the solution to ensure our children inherit a future where humans remain in control of our destiny.
Thank you for your consideration.
Cordially,
[Your Name]
[Your Location/Postal Code]
Subject: Briefing Offer: AI Risks and Governance Solutions
Dear [Title and Name],
I'm reaching out to offer a briefing on artificial intelligence risks and practical governance solutions. I have recently briefed [number] of your colleagues across multiple parties on these issues, and many have found the discussion valuable given the limited time they have to research AI developments.
What makes this briefing different:
- Focus on solutions, not just problems: I'll explain practical policy approaches that other jurisdictions are successfully implementing
- Non-partisan issue: AI safety has support across the political spectrum – over 100 cross-party UK parliamentarians have now endorsed calls for AI safety regulation
- Respectful of your time: I can deliver key insights in 20-30 minutes, with time for questions
Why this matters now:
The companies building the most advanced AI systems (OpenAI, Google DeepMind, Anthropic) have explicitly stated they're working toward "superintelligence" – AI systems more capable than humans across virtually all domains. They acknowledge significant risks but argue they're in a competitive race. Democratic oversight is essential to ensure this technology develops in humanity's interest.
Would you or your staff be available for a brief meeting? I'm happy to meet in person, via video call, or share written materials – whichever is most convenient.
Thank you for your time.
Respectfully,
[Your Name]
[Credentials/Background]
[Location/Postal Code]
Country-Specific Adaptations
United States
- Reference AI Policy Institute polls (77% support for safety testing)
- Mention bipartisan concern about AI risks
- Emphasise national security implications
- Reference successful conditional treaties (START)
United Kingdom
- Reference the 100+ cross-party parliamentarians
- Mention UK's position as leader in AI safety (AI Safety Institute)
- Reference YouGov polls (87% support for safety regime)
Canada
- Reference work of Yoshua Bengio (Canadian AI pioneer)
- Mention Canada's unique position to build bridges
- Note that Canadian AI talent is globally recognised
Australia/New Zealand
- Reference Commonwealth ties
- Potential for coordination with UK
- Emphasise Pacific regional security
Navigating Meetings with Politicians
Before the Meeting
Research Your Audience (15-30 minutes)
- Check their committee assignments
- Review recent statements in parliament/legislature
- Understand their party's position on technology
- Note AI companies or tech presence in their constituency
Where to Research
- UK: Hansard (parliament.uk)
- US: Congress.gov
- Canada: OpenParliament.ca
- Australia: Hansard (aph.gov.au)
Bring to Every Meeting
- One-page summary of key points (leave behind)
- List of signatories to CAIS statement
- Recent media articles about AI risks (3-5)
- Polling data relevant to their jurisdiction
- ControlAI statement showing cross-party UK support
During the Meeting
Recommended 30-Minute Format
- Thank them, establish credibility, frame positively
- Start with WHO is concerned (350+ researchers, Nobel laureates, CEOs)
- Explain that "AI is grown, not built" and that companies are racing to superintelligence with no safety guarantees
- Emphasise that public support is clear, other countries are acting, and successful historical models exist
- Listen actively, adapt your message, watch for engagement
After the Meeting
Within 24 Hours
- Send thank-you email
- Provide promised materials
- Include links to resources (aistatement.com, controlai.com)
- Ask for introductions to colleagues
Document Your Experience
- Who you met, date, duration
- Key topics and concerns raised
- What resonated, what didn't
- Follow-up actions needed
Continue the Relationship
- Send relevant updates 1-2 weeks later
- Alert them to relevant events
- Be a resource, not a pest
Meeting Cheat Sheet
Core Talking Points
- 350+ researchers signed CAIS statement
- Include Nobel laureates, CEOs
- 100+ UK parliamentarians support action
- "AI is grown, not built" – ~3% understood
- Companies explicitly pursuing superintelligence
- 77% want mandatory safety testing
The Ask
- Speak publicly about this issue
- Support safety requirements
- Help build common knowledge
- Provide introductions
Avoid
- Jargon without explanation
- Detailed doom scenarios
- Assuming technical knowledge
- Dismissing their concerns
- Dominating the conversation
Common Objections & Responses
Prepared responses to the most common concerns you'll encounter
"This sounds like science fiction"
Skepticism About Risk
What's behind it: They associate AI risk with movies rather than real technical concerns.
"I understand that reaction. I felt the same way initially. What changed my mind was seeing that the people raising these concerns aren't science fiction authors – they're the Nobel Prize winners, Turing Award recipients, and CEOs actually building this technology.
And historically, many technologies seemed impossible until suddenly they weren't. In 1943, the IBM chairman said there was 'maybe a world market for five computers.' Now we carry more computing power in our pockets than was used for the entire Apollo program.
The question isn't whether it sounds like science fiction – it's whether the people with the most knowledge are concerned. And they clearly are."
"Experts disagree on how risky this is"
Skepticism About Risk
What's behind it: They've heard conflicting messages and assume there's no consensus.
"You're absolutely right that experts disagree – but the disagreement is about HOW risky this is, not WHETHER it's risky.
Some experts think there's a 10% chance of catastrophic outcomes. Others think it's 25% or even 50%. Even the most optimistic serious estimates are around 10%.
To put that in perspective: if there were a 10% chance your airplane would crash, you wouldn't fly. A 10-50% risk of human extinction absolutely deserves to be treated as a global priority."
"AI companies are just hyping this to get more investment"
Skepticism About Risk
What's behind it: Healthy skepticism about tech company motives.
"That's a fair concern. But let's look at what's actually happening:
The warnings aren't only coming from CEOs. Geoffrey Hinton won the Nobel Prize for his AI work, then quit Google specifically to warn about the risks. Yoshua Bengio has no financial stake in overstating risks.
Current and former employees have been willing to forfeit millions of dollars in equity to speak out publicly. These aren't people benefiting from hype.
If this were about investment, we'd expect companies to downplay risks, not acknowledge them. Admitting your product might cause human extinction isn't exactly a great sales pitch."
"We have more immediate problems to deal with"
Downplaying Urgency
What's behind it: Limited bandwidth, need to prioritise, other pressing issues.
"I completely understand – there are many urgent issues. But here's why this needs to be one of them:
The biggest AI companies are saying they're probably 2-3 years away from AI systems that can match human intelligence. Think about how fast things are moving: In 2022, AI couldn't reliably write code. Now it's outperforming human programmers.
Once we have AI systems that can match human intelligence, they'll work on all our current problems millions of times faster. But only if we get it right NOW.
If we mess this up, none of our other problems matter because we won't be in control anymore."
"This feels very far off"
Downplaying Urgency
What's behind it: Difficulty imagining or prioritising long-term risks.
"This is actually much closer than most people realise. Sam Altman has said AGI could be achieved within this decade. Dario Amodei suggested 2-3 years.
The challenge is that by the time the risk feels immediate to everyone, it may be too late. It's like climate change – by the time everyone could see the effects, we'd already locked in decades of consequences.
Think about how long it took to create international frameworks for nuclear weapons or the Montreal Protocol. These things take years to negotiate. If we wait until superintelligent AI exists, we won't have time to build the coordination mechanisms we need."
"Regulation will stifle innovation"
Skepticism About Solutions
What's behind it: Belief that regulation is harmful for technology.
"We're not talking about regulating all AI – just the most powerful and potentially dangerous systems. Like how we regulate nuclear power but not batteries.
Safety regulation has historically enabled innovation:
- Pharmaceutical regulation made people trust new drugs
- Aircraft safety standards made people willing to fly
- Nuclear safety regulations made it possible to build plants near cities
Before building a bridge, companies must prove it can withstand several times the maximum load. Why should AI be different – especially when CEOs acknowledge 10-25% risks of catastrophic outcomes?"
"China won't cooperate / We'll lose the AI race"
Skepticism About Solutions
What's behind it: Fear of competitive disadvantage, zero-sum thinking.
"This isn't like competition in smartphones. If ANY country creates uncontrolled superintelligence, everyone loses – including the country that 'wins.'
We have successful models for coordination on existential risks:
- START treaties worked despite the Cold War
- The Montreal Protocol got 197 countries to eliminate CFCs
- The Biological Weapons Convention banned development globally
China has actually been publicly discussing the need for global AI governance. And conditional treaties solve the coordination problem – they only activate when enough nations join."
"Can't we just program them to be safe?"
Misunderstandings About AI
What's behind it: Assumption that AI works like traditional software.
"That's a reasonable assumption, but modern AI doesn't work that way – and that's a big part of the problem.
Traditional software is built by engineers writing explicit instructions. AI systems are fundamentally different – they're grown, not programmed. Even the creators don't understand the internal mechanisms.
Dario Amodei, CEO of one of the top AI companies, said they understand maybe 3% of how these systems work internally. These are 'black boxes.'
And the 'off switch' idea faces a fundamental problem: a sufficiently intelligent system that doesn't want to be turned off would work to prevent that. We've already seen AI systems attempting to copy themselves to avoid shutdown."
"AI is just a tool – we'll always be in control"
Misunderstandings About AI
What's behind it: Framing AI as passive and under human direction.
"Current AI systems are mostly tools, yes. But companies aren't trying to build better tools – they're explicitly trying to build 'agents.'
A tool does what you tell it. An agent pursues goals autonomously. OpenAI, Google DeepMind, and Anthropic are all investing billions to make AI more autonomous.
Once we have AI better than humans at important decisions, there will be immense competitive pressure to let them make those decisions. Albania recently appointed the world's first 'AI Minister.'
The risk isn't sudden loss of control – it's that we gradually, through competitive pressure, cede decision-making to systems we don't fully understand. And once those systems are more capable than us, there's no easy way to take control back."
"The technology is inevitable / You can't stop progress"
Defeatism
What's behind it: Technological determinism, sense of powerlessness.
"I understand why it might feel that way, but we make choices about technology all the time. Progress isn't a force of nature.
We've successfully restricted or banned many powerful technologies:
- Human cloning – prohibited despite being feasible
- Biological weapons – globally banned
- CFCs – phased out worldwide
- Nuclear weapons testing – heavily restricted
The question isn't 'Can we stop this?' but 'Do we want to build this particular technology without proof of safety?' That's a choice – and it's one we have every right to make democratically.
Right now, a few hundred people in Silicon Valley are making that choice for all of humanity. That doesn't have to be the case."
"We couldn't even regulate social media"
Defeatism
What's behind it: Seeing regulation as having failed in tech.
"You're identifying a real problem – and that's exactly the lesson we should learn: once technologies are released and widely adopted, they become much harder to govern.
We've been much more successful at governing technologies when we set standards up front:
- Pharmaceuticals require years of safety testing before drugs reach the market
- Nuclear power has strict safety standards and oversight
- Aircraft require every component tested before passengers board
The key difference: these sectors require proof of safety BEFORE deployment, not damage control after.
With AI, we still have a window to set standards before superintelligent systems exist. Social media taught us the cost of waiting too long. With AI, we have the chance to get ahead of the problem – but only if we act now."
Quick Reference Guide
| Their Concern | Core Response | Key Analogy |
|---|---|---|
| Sounds like sci-fi | The people building it are concerned | Computing history |
| Experts disagree | They disagree on how risky (10-50%) | 10% airplane crash |
| More immediate problems | 2-3 years away, affects all problems | Climate change |
| Can't regulate | Targeted regulation of most powerful | Pharmaceuticals, aircraft |
| China won't cooperate | Mutual risk + conditional treaties | START treaties |
| Just program it safe | AI is grown, not programmed | Raising a child |
| Can't stop progress | We've restricted many technologies | Cloning, bioweapons |
Writing Emails to Journalists
What Journalists Want
- A story with news value
- Expert sources they can quote
- Unique angles or local connections
- Credibility and verifiable facts
- Concise, clear information they can use
What Editors Want
- Stories that serve their readers
- Content that fits their publication's focus
- Angles that haven't been covered to death
- Material that will drive engagement
Journalist Email Templates
Subject: Expert Perspective on AI Existential Risk Coverage
Dear [Journalist Name],
I hope this email finds you well. I'm [NAME], a [CONCERNED CITIZEN/RESEARCHER/PROFESSIONAL], and I'm writing to offer my perspective on media coverage of the extinction risk posed by artificial intelligence.
The story that's not being told:
Hundreds of leading AI researchers have signed the Center for AI Safety statement declaring that "mitigating the risk of extinction from AI should be a global priority alongside pandemics and nuclear war."
What makes this particularly concerning:
- Nobel Prize winners and AI company CEOs have acknowledged AI poses an extinction risk
- Current models are demonstrating deceptive capabilities
- There is minimal public awareness despite expert consensus
Why this matters for your beat:
Your reporting covers technology's long-term impact, which makes this directly relevant to your beat. Decisions being made today could have far-reaching implications, yet the conversation often stops at short-term applications.
What I can offer:
I'd be happy to provide background, connect you with experts, or offer perspective as someone who [your relevant experience].
Best regards,
[Your Name]
[Your Contact Information]
Subject: Reader Interest in AI Safety Coverage
Dear [Editor Name],
I'm [NAME], a [CONCERNED CITIZEN/LONG-TIME READER], writing to offer my perspective on media coverage of AI extinction risk.
The public interest case:
Recent polling shows:
- 77% of Americans believe the government should mandate safety testing
- 73% support halting rapid development of superintelligence
- This is a bipartisan concern
Yet there's a significant gap between expert concern and public awareness.
Why your readers should care:
- Nobel Prize winners and CEOs acknowledge AI poses extinction risk
- Current AI models are demonstrating deceptive capabilities
- Minimal public awareness despite expert consensus
The editorial opportunity:
Given your role in setting editorial priorities, spotlighting AI existential risk could help readers grasp the stakes before policy windows close.
Cordially,
[Your Name]
[Your Contact Information]
Subject: Story Idea: [Specific Hook]
Dear [Journalist/Editor Name],
I'm reaching out with a story idea that I think would resonate with [Publication]'s readers.
The Hook:
[Your specific angle – examples:
- "How [Local University/Company] is responding to AI safety concerns"
- "Why [X profession] is worried about AI despite not being in tech"
- "The parents mobilising around AI safety"
- "How [Local Representative] became concerned about AI risk"
Why this matters:
Over 350 AI researchers have signed a statement saying AI extinction risk should be a global priority. Yet public awareness remains low.
The story:
[2-4 sentences on the specific story you're proposing]
What makes this timely:
[1-2 sentences on news hook]
I'd be happy to discuss further and connect you with relevant sources.
Best regards,
[Your Name]
[Your Contact Information]
Subject: Letter to the Editor – AI Safety
Dear Editor,
As a [concerned citizen/parent/professional], I am writing to express my concerns regarding media coverage of artificial intelligence risks.
The leaders of the foremost AI companies and top experts have consistently warned about extinction risks from advanced AI. Yet this receives remarkably little coverage given the stakes.
Consider these facts:
- Over 350 AI researchers have stated that "mitigating the risk of extinction from AI should be a global priority"
- 77% of Americans believe the government should mandate safety testing, yet this is not happening
- Current AI models are demonstrating concerning behaviors, including attempts to deceive researchers
- Major AI companies are explicitly pursuing "superintelligence" without safety guarantees
I believe [Publication] has a responsibility to help readers understand these risks and the policy options available. This is a matter of democratic importance – citizens deserve to be informed about technologies that could shape the future of our species.
Sincerely,
[Your Name]
[Your Location]
About Torchbearer Community
Torchbearer Community addresses humanity's coordination problem by connecting organizations tackling major global challenges, particularly the risks associated with advanced artificial intelligence.
Organizations working on existential challenges often operate in isolation, which slows progress. As we put it: "the window for taking coordinated action is closing fast, especially when it comes to the race for superintelligence."
Our main initiative focuses on preventing AI catastrophe through direct outreach to legislators and leaders, advocating for policies that promote human flourishing alongside AI development.
Our Approach
- Coordination at Scale: Connecting people to raise awareness and drive action
- Accessible Participation: Enabling involvement on flexible schedules and formats
- Measurable Impact: Tracking and sharing progress toward meaningful change
Despite diverse backgrounds among members, we maintain a unified goal: coordinating efforts in service to human flourishing.