Review of the 7th Annual SYP Conference: AI, Peace and Security

Contents:

 

1. Summary review of the 7th Annual SYP Conference: ‘AI: Implications for peace and security’

 

2. Conference materials: video / pictures / slides / presenter contact details

 

1. Conference Review

 

“Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity”.

As highlighted in the Bletchley Declaration – the outcome of the AI Safety Summit, hosted by the UK Government in November 2023 and held at the home of the World War II codebreakers – AI, peace and security are inextricably linked. But in what ways? For example, how can this technology help us create a more peaceful world? What are the military and security implications of AI? How does AI interact with nuclear weapons? And how should this technology be regulated?

These were the questions asked of participants at Student and Young Pugwash’s ‘Artificial Intelligence, Peace and Security Conference’. The conference’s keynote panel boasted four expert speakers who provided an overview of the political, legal, ethical and technical issues AI raises for society. Speakers included: Professor Elena Simperl (King’s College London), Rachel Coldicutt (Careful Industries), Dr Matt Mahmoudi (Amnesty International), and Dr Peter Burt (Drone Wars UK). This panel was followed by a range of presentations from students and young professionals, both from the UK and several other countries.

For some of the participants, the focus was on how AI could actively promote peace. Sarah Weiler examined how AI has supported many of the UN’s peacebuilding efforts, from satellite imagery in humanitarian situations to natural language processing in diplomatic negotiations. However, the majority of presentations focused on the threat this technology poses to peace. For Dekai Liu, the domestic surveillance threat is a primary concern. Looking at the issue through a Foucauldian lens, Liu argued that Large Language Models (LLMs) have heightened the risk of Orwell’s ‘Big Brother’ becoming a reality, as illustrated by the use of Instant Messenger to bolster states’ surveillance capabilities. Jan Quosdorf and Vincent Tadday’s presentation took a more international approach. Using US defense contractor Anduril as a case study, they argued that the adoption of AI systems in conflict has far outpaced Europe’s ability to regulate it, and that this void must be filled immediately.

The interplay between AI and nuclear weapons was at the core of many of the submissions. Whilst AI can improve remote sensing for arms control and treaty verification, Jingjie He made the case that it also threatens to undermine these very same systems. ‘Counter-AI tactics’ come in various forms, He claimed, from poisoning training data to inferring the architecture of LLMs in an effort to steal the models. Economics PhD researcher Joel Christoph examined the possibilities for AI to reduce nuclear risk, from improving verification mechanisms to enabling better diplomacy through predictive analytics. Also recommending the uptake of AI in this field – so long as it includes concomitant transparency measures – Syeda Saba Batool’s ‘AI for Peaceful Use of Nuclear Energy’ explored how AI’s ability to analyse vast swathes of diverse data can support the work of nuclear inspectors.

Beyond the nuclear realm, much of the work revolved around regulation of AI writ large. For some, successful regulation requires a look to the past. Veerle Moyson argued that the answer to regulating Autonomous Weapon Systems (AWS) lies in the nuclear regime, particularly if principles such as humanitarianism, equality of states, and long-term sustainability are drawn on. Arian Ng suggested establishing a committee of powerful nation-states that is undergirded by shared ethical standards and transparent communication. Multilateralism was a key theme in the presentations, with Mahmoud Javadi emphasising its importance. Javadi concluded with a bold trifecta of recommendations for global military AI governance: ‘legal empathy’, an ‘ambitious-cum-humble mindset’ and ‘differentiation’. Writing from a more legalistic standpoint, PhD student Marco Sanchi homed in on the ‘crisis of causality’ that would arise if an AI system committed a war crime. Sanchi’s answer: allocating liability to those responsible.

Offering a more cautionary view, Océane Van Geluwe warned against overstating the capabilities of these systems. Notwithstanding the risks posed by AI, it is crucial to separate myth from reality, Van Geluwe claimed, and to focus on establishing agile regulatory frameworks that can keep up with the pace of change.

We hope the event will encourage further academic work into the intersection of AI, peace and security, with attention to both the opportunities and risks that this technology presents.

*Thanks to Max Murphy for his summary review of the conference

 

2. Conference materials

 

NB all videos are available on our YouTube channel

 

i) Keynote panel: 10.00 – 11.30

  • Professor Elena Simperl (King’s College London)

‘An introduction to AI and its societal challenges’

 

  • Rachel Coldicutt (Careful Industries)

‘AI, the tech community and ethics’

Slides

   

  • Dr Matt Mahmoudi (Researcher / Adviser, Amnesty International, Tech Big Data, Artificial Intelligence & Human Rights)

‘AI, human rights, conflict and armed forces’

LinkedIn: https://uk.linkedin.com/in/mattmoudi

X: @DocMattMoudi

   

  • Dr Peter Burt (Drone Wars UK)

‘AI and the Campaign to Stop Killer Robots’

   

Chair: Dr Tim Street (Coordinator, Student / Young Pugwash)

 

Email: syp@britishpugwash.org

X: @SYPugwash_uk

ii) First presenter panel: 11.45 – 13.00

  • Marco Sanchi (PhD Student, AI & Society, University of Bologna – University of Pisa)

‘Artificial Intelligence War Crimes: Regulating Accountability’

Abstract 

Slides

LinkedIn

UniBo academic profile

 

  • Jan Quosdorf & Vincent Tadday (MA Candidate International Affairs, King’s College London / Peace and Security Studies, Hamburg University; Sciences Po, MPP Candidate Politics and Public Policy / Hertie School, MPP Candidate Public Policy)

‘How AI is revolutionising the Western Defense Industry – The case of Anduril and Implications for Europe’

Abstract

Slides

VT: X: @vincenttadday; LinkedIn; Website: www.vincenttadday.com

JQ: X: @GermanSYP; LinkedIn

   

  • Dr Jingjie He (Postdoctoral Fellow, The Hebrew University of Jerusalem)

‘An Expanding Counter-AI Matrix: Whither the Satellite Remote Sensing Revolution?’ (Online)

X: @Jingjie_He

LinkedIn: jingjieh

  • Veerle Moyson (Consultant, United Nations Office for Disarmament Affairs in Vienna)

‘What a future regulation on AWS can learn from the nuclear regime’

Abstract

Slides

LinkedIn: https://www.linkedin.com/in/veerle-moyson/

 

Chair: Dr Peter Burt, Drone Wars UK

iii) Break out session: 14.00 – 15.00

This facilitated session included a mapping exercise and group discussions on topics related to AI, peace and security.

Notes for each group are here: 1. Matteo / 2. Richard / 3. Orlanda A; Orlanda B / 4. James A;  James B 


iv) Second presenter panel: 15.00 – 16.00

  • Soeren Taylor (Student of methods in historical and scientific inquiry / Disarmament Intern, Pax Christi International)

‘An argument against shallow risk assessment in human and automated systems, from mental health to international security’

LinkedIn: linkedin.com/in/soeren-a-taylor/

Instagram: instagram.com/sev.of.nine

Facebook: facebook.com/profile.php?id=100053218350864

Email: soeren.a.taylor@gmail.com

  • Dekai Liu (BA Student, International Studies, University of Nottingham, Ningbo Campus)

‘AI Surveillance and its Impact on Peace and Security: A Case Study of Instant Messenger’ (Online)

Abstract

Slides

  • Sarah Weiler (Research Fellow, Global Policy Research Group, AI Governance Program)

‘The use of AI-powered technologies in the UN’s efforts to promote peace and security globally’

Abstract

Slides

LinkedIn: linkedin.com/in/sarah-weiler-2b2b17208/

 

  • Syeda Saba Batool (Board Chair, Emerging Voices Network, BASIC / MPhil International Relations, Quaid e Azam University, Islamabad)

‘AI for Peaceful Use of Nuclear Energy: Future Prospects’ (Online)

Abstract

Slides

Email: sababatool72@gmail.com

X: @TheSabaShahh


Chair: James Brady, King’s College London

v) Third presenter panel: 16.15 – 17.15 

  • Océane Van Geluwe (EI&C Nuclear Safety Qualification, Business France, AKKODIS Belgium)

‘Decoding AI Hype: Unveiling the Overrated Problem for a Race Against Time’

Abstract

Slides

 

  • Joel Christoph (PhD Researcher, European University Institute)

‘AI-Driven Nuclear Risk Reduction Strategies’

Abstract

Slides

   

  • Ng Arian Man Lok (MA, China in Comparative Perspective, London School of Economics and Political Science)

‘AI and Global Security Dynamics: Navigating the Evolving Military and Geopolitical Landscape’

Abstract

Slides

LinkedIn: https://www.linkedin.com/in/arian-ng-41793a120

   

  • Mahmoud Javadi (AI Governance Researcher, REMIT Horizon Europe Project, School of Social and Behavioral Sciences, Erasmus University Rotterdam)

‘European Democratic Multilateralism in Shaping Global Military AI Governance’ (Online)

Abstract

Slides

Email: javadi@essb.eur.nl

LinkedIn: https://www.linkedin.com/in/mahmoud-javadi/

X: https://twitter.com/MahmoudJavadi2

Chair: Orlanda Gill, Student / Young Pugwash Board Member

X: @orlanda_gill

Other photos:

*Thanks to Ellie Smith and Orlanda Gill for taking photos, and to Marcia Clough of KCL for her support on the day.