Alexander K. Saeri

Shaping the governance of advanced AI

I lead the AI Risk Index at MIT FutureTech and The University of Queensland, building the evidence base that decision-makers need to manage high-priority AI risks.

I also help grow Australia's AI safety ecosystem through policy collaborations, community building in Melbourne, and national convenings.

📅 Request a briefing 🗄️ Explore the AI Risk Repository

About

I'm a researcher and policy analyst focused on understanding and managing risks from advanced AI systems. My work combines systematic evidence synthesis, expert consultation methods, and practical policy engagement to inform decision-makers about AI governance.

Current focus

Two streams of work:

  1. AI Risk Index (MIT FutureTech × UQ) — research leadership, collaboration, and delivery to assess which AI risks matter most, which mitigations work, and how key actors are responding.
  2. Australia's AI safety ecosystem — collaborating with Good Ancestors Policy, convening the Melbourne AI Safety community, designing and facilitating the 2024 AI Safety Forum, and maintaining AISafety.org.au.

AI Risk Index

The MIT AI Risk Initiative reviews and communicates evidence on AI risks and mitigations. The AI Risk Index turns this into a continuously updated public resource to help people and institutions understand risks, identify effective mitigations, and track organizational responses over time.

Current activities include a systematic review of AI risk mitigations, expert Delphi studies, and a review of organizational risk responses (for public benchmarking in the Index).

My role: program strategy and execution — research design, multi-institution collaboration, methods & tooling (systematic reviews, Delphi), data infrastructure, and communications.

Research questions guiding the Index:

  1. What are the risks from AI, which are most important, and what are the critical gaps in response?
  2. What are the mitigations for AI risks, and which are the highest priority to implement?
  3. Which AI risks and mitigations are relevant to which actors and sectors?
  4. Which mitigations are being implemented, and which are neglected?
  5. How is the above changing over time?

What we've built on

The AI Risk Repository, a living systematic review of AI risks, distils 60+ risk frameworks. It has attracted 135k+ visits, is referenced by 650+ sites (including Amazon, IBM, and Trend Micro), is cited in the International AI Safety Report 2025, and is integrated with the AI Incident Database.

Building Australia's AI safety ecosystem

Other areas

See all other projects

Selected publications & reports

Peer-reviewed

Preprints / reports

Methods & datasets

Full list: Google Scholar

Contact

Email: alexander@aksaeri.com
Phone: +61 405 519 733

Schedule a meeting