Alexander Saeri

Ready Research

I use applied behaviour science and social science methods to understand and address complex challenges, including climate change, pandemics, and advanced artificial intelligence.

I am especially interested in how to improve decision-making and behaviour about the development, deployment and use of artificial intelligence to address catastrophic risks and safely transition to futures where advanced AI is widely used.

Interests
  • AI policy and governance
  • Behaviour science
  • Implementation science
  • Scale up
  • Effective altruism
Education
  • PhD in Social Psychology

    The University of Queensland

Projects

Safe and Responsible AI in Australia

Working with Good Ancestors Policy, I supported three streams of work responding to the Australian Government's consultation on safe and responsible AI. This work included:

  1. Contributing to a detailed submission from Good Ancestors Policy to the consultation,
  2. Organising and delivering community workshops to help interested people write their own submissions, and
  3. Coordinating a co-signed statement and open letter from Australian AI experts.

Read more on the Good Ancestors Policy website

Scale up toolkit

When we know that a behaviour change intervention has worked in a pilot or trial, how can we scale it up to achieve greater impact and reach?

I led this work at BehaviourWorks Australia, in collaboration with the Victorian Government's Behavioural Insights Unit, to develop an evidence-informed toolkit that helps behavioural insights researchers and practitioners scale up their behaviour change interventions.

Read more on the BehaviourWorks Australia website

SCRUB COVID-19 survey

The SCRUB COVID-19 survey aimed to provide current and future policy makers with actionable insights into public attitudes and behaviours relating to the COVID-19 pandemic.

I led this project, which was incubated at Ready Research, funded initially by Monash University, and then funded and conducted collaboratively with the Victorian Government.

Over 2020-2021 we conducted 21 waves of the survey, roughly one every three weeks, collecting rich behavioural and attitudinal data from more than 40,000 people.

Read more about the project on the BehaviourWorks Australia website

Climate Adaptation Mission

The BehaviourWorks Australia Climate Adaptation Mission explored how systems thinking, knowledge co-production, and behavioural public policy experiments could help Australian communities reduce harms from climate change. I co-led this project with collaborators Stefan Kaufman and Kien Nguyen.

Read more about this project on the BehaviourWorks Australia website

Ready Research

I co-founded Ready Research in 2019 with Peter Slattery, Michael Noetel, and Emily Grundy. Aligned with the principles of effective altruism, we provide research, training, and communication services to help address the world's most pressing problems.

Read more at readyresearch.org

Survey Assessing Risks from AI (SARA)

With Michael Noetel at The University of Queensland, I conducted a representative survey of ~1,000 Australian adults in February 2024 to understand public perceptions of AI risks and support for AI governance actions in Australia.

We found that:

  • Australians are most concerned about AI risks where AI acts unsafely (e.g., acting in conflict with human values, failure of critical infrastructure), is misused (e.g., cyber attacks, biological weapons), or displaces the jobs of humans; they are least concerned about AI-assisted surveillance, or bias and discrimination in AI decision-making.
  • Australians judge “preventing dangerous and catastrophic outcomes from AI” the #1 priority for the Australian Government in AI; 9 in 10 Australians support creating a new regulatory body for AI.
  • The majority of Australians (8 in 10) support the statement that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

More on this project

Contact me if you’d like a personal briefing on the findings.
