My mission is to help increase the odds that the long-term future of humanity goes well.1

I expect that how we handle the development of advanced AI (basically, systems that far surpass humans at all cognitive tasks) will massively influence how the future unfolds.2

Even if we "solve alignment"3, we'll still need to navigate AI's effects on the most complex parts of our world: institutions, geopolitics, culture, morality, and so on.4 I think AI-driven feedback loops are likely to bring about such effects far faster than the world is prepared for.

Because of this, I'm currently exploring where I can contribute most within AI governance and strategy.5

Contract Researcher · Safe AI Forum
Aug 2025 —
  • Researching opportunities for international coordination on frontier AI risks.
  • Managing a publication on AI geopolitics and cooperation.
Research Fellow · Future Impact Group
Dec 2025 —
  • Working with Rose Hadshar (Researcher at Forethought) on AI futures and macrostrategy.
Co-Founder and Editor · SCHEME Magazine
Oct 2025 —
  • Building a home for AI stories that are too weird, too premature, or too awkward for the timeline.
  • We'd love to read your pitch! Submit here. We're looking for fiction, analysis, journalism, and pieces that transcend categorisation.
Sep 2025 —
  • Contributing to an international AI red lines project led by the French Centre for AI Safety.
  • Mapping AI threat models to pre-existing obligations relating to bioweapons.

For previous experience, see here.