Last week, we hosted our first WonkComms Breakfast Club in London in a while — and one thing became clear: AI isn’t coming. It’s already here, and politicians and policymakers are using it.
Think Tanks and AI: Seven lessons you can use this week
In the past few months, we’ve seen clear evidence of MPs leaning on AI to brief, analyse and summarise. And why wouldn’t they? AI offers fast, conversational access to information that was previously slow, siloed, or difficult to find.
So if your organisation targets political or policy audiences, the question is no longer “Should we use AI?” — it’s “How do we make sure AI sees and uses our work?”
That means understanding what’s under the hood of chatbots, rethinking how we produce and share content, and exploring how AI might change the way our teams work internally.
Here are seven lessons from the session, with practical takeaways for think tank communicators navigating this new terrain.
1. AI is already shaping comms — treat it like a new distribution channel
AI is changing how policymakers and public audiences access information. Think of it as another gatekeeper. If your research isn’t visible to AI systems, it risks being invisible to the humans who rely on them.
The implication is simple but profound: design content so it’s discoverable by LLMs as well as people.
Action: Audit your key reports and posts. Are they indexable (HTML rather than PDF), clearly structured, and complete with metadata, citations, and summaries? If not, start there. Accessibility — human and machine — now sits at the heart of comms strategy.
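As a concrete illustration of what “complete with metadata” can mean in practice, here is a minimal sketch of schema.org JSON-LD markup for a report page, built and printed in Python. The report name, organisation, and field values are placeholders, not a prescribed standard:

```python
import json

# Hypothetical report details -- swap in your own publication's metadata.
report_metadata = {
    "@context": "https://schema.org",
    "@type": "Report",
    "name": "Example Policy Report",
    "author": {"@type": "Organization", "name": "Example Think Tank"},
    "datePublished": "2025-01-15",
    "abstract": "A one-paragraph, plain-language summary of the key findings.",
    "encodingFormat": "text/html",
}

# Embedding this inside the report page's HTML <head> lets crawlers
# (and the pipelines that feed LLMs) read the essentials without
# having to parse a PDF.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(report_metadata, indent=2)
    + "\n</script>"
)
print(snippet)
```

Even this small amount of structure — title, author, date, abstract — is more than most PDF-only publications expose to machines.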
2. Influence AI outputs — map the data landscape and lean in
Large language models aren’t trained on a single dataset; they draw from many. Mapping the most relevant ones helps you understand where you can meaningfully shape what AI systems “see” about your work.
High-leverage sources include:
- Wikipedia: One of the largest open datasets used to train LLMs. Contributing accurate, well-sourced information here — alongside citations from varied, credible sources — helps ensure your work is reflected responsibly. (Just don’t “game” Wikipedia: neutrality matters. Work with an experienced editor, since editing pages about your own organisation creates a conflict of interest, and there are PR risks if you do it badly.)
- Open-access academic platforms such as SSRN, CORE, and arXiv, whose content is commonly drawn into model training corpora.
- Academic publisher platforms may be paywalled, but being cited by them still boosts your visibility through indirect influence.
- Public web content and forums like Reddit, which help shape how models interpret debates, tone, and language.
Action: Map where your organisation’s work appears across these ecosystems. Then prioritise platforms where your expertise is underrepresented — and contribute where you can do so ethically and transparently.
3. Making your work more discoverable and useful
Influence isn’t just about visibility; it’s about utility. AI is already curating and rephrasing information for decision-makers. To stay relevant, think tanks need to make their outputs usable in new contexts.
That means shorter, more structured summaries, better use of visuals and content designed for reuse. It also means publishing in open formats with clear, machine-readable takeaways — think “policy brief meets structured data.”
Action: Revisit your publication templates. Are your insights easy for AI (and humans) to extract? A 200-page PDF that lives on your website and nowhere else won’t cut it in the age of conversational search.
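To make “policy brief meets structured data” concrete, here is one possible shape for a machine-readable takeaways file published alongside the HTML version of a report. The field names and URL below are illustrative assumptions, not an established schema:

```python
import json

# A hypothetical "machine-readable takeaways" block -- the fields are
# examples of what an AI system (or a busy human) can extract directly.
takeaways = {
    "title": "Example Policy Report",
    "one_line_summary": "The headline finding, stated in a single sentence.",
    "key_findings": [
        "Finding one, stated plainly.",
        "Finding two, with the supporting number inline.",
    ],
    "recommendations": [
        "Recommendation one.",
        "Recommendation two.",
    ],
    "source_url": "https://example.org/reports/example-policy-report",
}

# Publishing this as a small JSON file next to the full report gives
# conversational search something quotable and attributable.
print(json.dumps(takeaways, indent=2))
```

The exact format matters less than the habit: every major output ships with a short, structured, openly accessible summary that points back to the full work.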
4. Making your own content more engaging and human
What can AI do for communicators themselves?
AI tools are now routinely used for summarising, drafting and repackaging content — but their real power lies in making your outputs more engaging and accessible.
At Sociopúblico, the team helps think tanks and NGOs integrate AI in creative and ethical ways. They shared examples of:
- Augmented CMS, which assists journalists in writing, titling, and editing stories — freeing them to focus on storytelling.
- The Inter-American Development Bank’s knowledge platform, which uses AI to explore and connect ideas across projects — surfacing insights humans might miss.
These examples show that AI isn’t replacing human creativity; it’s giving communicators new tools to explore, connect, and tell better stories.
Action: Test AI for creative assistance — ideation, tone refinement, audience adaptation — and measure what actually improves engagement.
5. Improving internal workflows — and tackling the culture shift
While individuals are quickly adopting AI tools, organisations often struggle to embed them; cultural resistance is a common sticking point.
The problem isn’t usually technical — it’s organisational. AI raises new questions about authorship, trust and ways of working. Teams need shared norms before they need new tools.
Action: Start small and visible. Pick one repetitive task (say, first drafts of event summaries or social posts), measure time saved, gather feedback and share the learning internally. Building trust through lived experience is the fastest route to adoption.
And remember: AI integration is change management, not IT support.
6. Guardrails, transparency and reputation — non-negotiables
With any new technology, guardrails matter. Data security, authorship, and transparency are now key dimensions of organisational reputation.
The most credible organisations are already listing which AI tools they’ve used in reports — a simple act of disclosure that builds trust.
Use paid APIs and secure systems so that internal data isn’t swept into model training. And consider simple transparency lines:
“This summary was produced with the assistance of [tool], reviewed by [author].”
Action: Draft an internal AI policy covering permitted tools, data handling, and approval processes. The goal isn’t bureaucracy — it’s confidence.
7. Quality over quantity: AI should create time for better work, not more noise
More content doesn’t mean more impact.
AI should help teams redirect effort toward strategic thinking, reflection, and deeper engagement — not just faster output.
If a workflow saves three hours, decide consciously how to use that time: for collaboration, creativity, or evaluation. Otherwise, you risk producing more of what already exists — just faster.
Action: Treat efficiency gains as an opportunity to invest in creativity, not capacity.
A simple framework to get started this month
Before you jump into tools or optimisation, take a step back.
- Identify your needs: What do you want AI to achieve — visibility, accessibility, workflow efficiency, better storytelling?
- Assess your capacity: What can you realistically do internally, and where do you need support or new skills?
- Clarify governance: Do you have leadership buy-in, IT infrastructure and agreed ethical guidelines?
- Prioritise experiments: Once you’re clear on purpose, test ideas in short sprints and share results openly.
Where agency support helps
This is where agencies like Soapbox, Sociopúblico and Daimon Communications can make a difference — helping you bridge the strategy–execution gap.
We support with:
- Discovery & mapping – identifying which open datasets and platforms shape your field.
- Content transformation – turning research into machine-readable, visually engaging outputs.
- Cultural adoption – designing internal pilots, training, and communications strategies.
- Governance frameworks – helping teams communicate transparently about their AI use.
Final thought: be deliberate, not reactionary
AI is now a legitimate part of policy communications — but impact still depends on craft, ethics and culture.
The fastest wins come from honest, well-designed work: making outputs accessible, experimenting safely and giving your teams the confidence to learn by doing.
If you’d like to explore how to bring these ideas into your organisation — from discovery audits to pilot design — we’d love to help.
Let’s make AI work for think tank communications in a way that’s creative, transparent and deeply human. Contact us: [email protected]