Papers / Reports
The Open Dividend: Building an AI openness strategy to unlock the UK’s AI potential
Demos/Mozilla Report by Elizabeth Seger and Jamie Hancock (2025)
Epistemic Security 2029: Fortifying the UK’s information supply chain to tackle the democratic emergency
Demos Report by Elizabeth Seger, Jamie Hancock and Hannah Perry (2025)
International AI Safety Report
Yoshua Bengio et al. (2025)
Open Horizons: Nuanced Technical and Policy Approaches to Openness in AI
Demos/Mozilla Report by Elizabeth Seger and Bessie O’Dell (2024)
Crowdsourcing the Mitigation of Disinformation and Misinformation: The case of spontaneous community-based moderation on Reddit
Paper by Giulio Corsi and Elizabeth Seger in Online Social Networks and Media (2024)
AI – Trustworthy By Design: How to build trust in AI systems, the institutions that create them and the communities that use them
Demos/PwC Report by Elizabeth Seger and Maria L. Axente (2024)
Generative AI and Democracy: Impacts and Interventions
Demos Report by Elizabeth Seger (2024)
Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives
Report by Elizabeth Seger, Noemi Dreksler, Richard Moulange, Emily Dardaman, Jonas Schuett, K. Wei, et al. (2023) for the Centre for the Governance of AI
Democratizing AI: Multiple Meanings, Methods, and Goals
AIES 2023 conference paper by Elizabeth Seger, Aviv Ovadya, Ben Garfinkel, Divya Siddarth, and Allan Dafoe (2023)
Should Epistemic Security Be a Priority GCR Cause Area?
Paper by Elizabeth Seger in Intersections, Reinforcements, Cascades: Proceedings of the 2023 Stanford Existential Risk Conference (2023)
In Defence of Principlism in AI Ethics and Governance
Paper by Elizabeth Seger in Philosophy & Technology (2022)
Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world
Report for The Alan Turing Institute by Elizabeth Seger, Shahar Avin, Gavin Pearson, Mark Briers, Seán Ó hÉigeartaigh, Helena Bacon (2020)
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
Report by Miles Brundage et al., incl. Allan Dafoe, Markus Anderljung, Jade Leung and Elizabeth Seger (2020)
Posts / Op-eds
What do we mean when we talk about “AI Democratisation”?
Research blog post by Elizabeth Seger for the Centre for the Governance of AI (Feb 2023)
Exploring epistemic security: The catastrophic risk of epistemic insecurity in a technologically advanced world
Article by Elizabeth Seger in the International Security Journal (2022)
The greatest security threat of the post-truth age
Article by Elizabeth Seger in BBC Future (2021)
Podcasts
Elizabeth Seger on Open Source AI
Hear This Idea – Ep. 77 (July 24, 2024)
Epistemic Security
Futurized (March 26, 2024)
Two Core Issues in the Governance of AI, with Elizabeth Seger
Carnegie Council for Ethics and International Affairs with Wendell Wallach (March 22, 2024)
AI Governance with Elizabeth Seger
AXRP – Ep. 26 (November 26, 2023)
The Promise and Peril of Open Source AI with Elizabeth Seger and Jeffrey Ladish
Your Undivided Attention with Tristan Harris and Aza Raskin (November 21, 2023)
Contact
LinkedIn
Twitter: @ea_seger
