My mission is to help increase the odds that the long-term future of humanity goes well.1

I expect that how we handle the development of very advanced AI (e.g. systems that far surpass humans at all cognitive tasks) will massively influence how the future unfolds.2

Even if we "solve alignment"3, we'll still need to navigate AI's effects on the most complex parts of our world: institutions, geopolitics, culture, morality, and so on.4

I think AI-driven feedback loops are likely to bring about such effects far faster than the world is prepared for.

Because of this, I'm currently exploring where I can contribute most in frontier AI governance and strategy.5

Visiting Researcher · Safe AI Forum
Aug 2025 —
  • Researching opportunities for international coordination on frontier AI risks.
  • Managing a publication on AI geopolitics and cooperation.
Research Fellow · Future Impact Group
Dec 2025 —
  • Working with Rose Hadshar (Researcher at Forethought) on AI futures and macrostrategy.
Co-Founder and Editor · SCHEME Magazine
Oct 2025 —
  • Building a home for AI stories that are too weird, too premature, or too awkward for the timeline.
Sep 2025 —
  • Researching existing international obligations to identify precedents for red lines on AI-enabled biological weapons development.

For previous experience, see here.