GK Senior Adviser Robert Blackmore assesses how world leaders will increasingly need to consider the implications of incorporating AI as part of their defensive capabilities, and looks ahead to this week’s summit on the governance of artificial intelligence.

This week Bletchley Park will host a global summit on the safety of Artificial Intelligence. For the first time, world leaders, AI companies and researchers will come together to discuss the opportunities and risks surrounding the development of the most revolutionary technology since the discovery of nuclear fission. The summit aims to initiate the development of a global consensus on AI governance.

Whereas much of the initial focus on the purpose of nuclear technology had military connotations, experts today highlight the impact AI may have on every aspect of our lives. Bletchley Park will therefore host discussions on a wide range of potential over-arching implications of AI. However, with an historic rise in global deaths from conflict reported by the Institute for Economics and Peace, not least following Russia’s second invasion of Ukraine in 2022, and the latest outbreak of violence in the Middle East, it is important to consider the role of AI in defence in its own right.

Earlier this year, Lord Sedwill, the former National Security Adviser, suggested to the House of Lords Select Committee on AI in Weapons Systems that AI was “the future of defence capability”. While he assured Peers that such a development would not equate to “Terminator” or “Matrix” style killer robots – a view that is not universal – the implication was clear: developments in AI could have dramatic ramifications for the defence sector and, in turn, the future of warfare.

This has not been lost on world leaders. In 2018, rather ominously, the United States warned: ‘Our adversaries and competitors are aggressively working to define the future of these powerful technologies according to their interests, values, and societal models. Their investments threaten to erode U.S. military advantage, destabilize the free and open international order, and challenge our values and traditions with respect to human rights and individual liberties.’

In response, the Pentagon published an AI Strategy that sought to transform the ‘speed and agility’ of US military operations and identified several potential benefits of AI, including improving situational awareness and reducing collateral damage. Meanwhile, in 2022, the UK Ministry of Defence outlined its ‘Defence Artificial Intelligence Strategy’, highlighting its ambition for the UK Armed Forces to be the world’s most ‘effective, efficient, trusted and influential Defence organisation’ for its size.

Nor has the potential of AI been lost on the defence industry. This year, defence contractor Leidos formed a strategic partnership with tech giant Microsoft, while Lockheed Martin has invested millions in AI technology and established an ‘AI factory’, as reported by Forbes, to ‘streamline access of its software writers to the resources they need for generating AI applications’.

AI already plays a crucial role in military operations. For example, it is being used by the Ukrainian military to rapidly review satellite images and video feeds from drones. Crucially, in this scenario AI acts as a vital supporting tool for military operators to make decisions. Ultimately, the final decision remains the preserve of human, as opposed to artificial, intelligence.

This is the crux of the matter. As AI’s capabilities increase, how much agency should it be given to act on the instructions established by its human handler? Can we trust a weapon acting autonomously not to make catastrophic errors in life and death scenarios? Once you remove human oversight from weaponry, you are left with Autonomous Weapons Systems (AWS), loosely defined as systems that can select and attack a target without human intervention.

The closest such weapon in existence is believed to be STM’s Kargu-2 drone, which has been deployed by the Turkish and, reportedly, Azerbaijani armed forces. Unsurprisingly, there are significant ethical concerns about the further development of such weapons. The International Committee of the Red Cross (ICRC) has argued that AWS risk escalating conflicts in an unpredictable manner, potentially aggravating humanitarian needs, while Amnesty International argues that such machines would automate killing, reduce it to a technical undertaking, and make life-and-death decisions without empathy or compassion.

As it stands, there are no formal international rules governing the use of AI in a military context. United Nations Secretary-General António Guterres has called for a legally binding instrument to prohibit lethal autonomous weapon systems without human oversight by 2026, while in February 33 Latin American and Caribbean States called for the urgent negotiation of a legally binding international treaty on autonomy in weapons systems that would guarantee human oversight. As recently as October, Pakistan’s representative to the United Nations’ First Committee on Disarmament and International Security stated that “We are at the verge of a monumental step in human technological history, heralded by the advent of artificial intelligence”, warning that there were insufficient guardrails governing the ‘design, development and deployment’ of AWS.

However, the obstacles facing the creation of an international agreement are formidable. Professor Sir Lawrence Freedman, during a hearing of the House of Lords AI in Weapons Systems Select Committee in May, told Peers that the speed at which AWS technology is developing would inevitably outpace the negotiation of such a treaty. Furthermore, for any treaty to have purpose, it would require agreement in good faith between adversaries. Negotiations over nuclear disarmament during the Cold War between the United States and the Soviet Union, culminating in the much-heralded 1987 Intermediate-Range Nuclear Forces Treaty (INF Treaty), have shown that this is possible. However, the necessary geopolitical conditions must be in place, as they were in the late 1980s between the two global superpowers.

It is for this reason that this week’s AI Safety Summit could be so crucial, even if it does not deal directly with the use of artificial intelligence in a combat setting. China, despite its soaring tensions with several countries present, not least the United States, will reportedly play a key role at the summit. A delegation from the Chinese Ministry of Science and Technology will be in attendance, while the Telegraph suggests that Professor Yi Zeng, a leading academic at the CCP-controlled Chinese Academy of Sciences, will chair a behind-closed-doors session focusing on the risk that AI mechanisms may “unexpectedly develop dangerous capabilities”. It would be a great surprise if AI in the context of warfare went unmentioned.

Therefore, at a time when the geopolitical stakes are so high, the AI Safety Summit has the potential to establish common cause on the governance of AI. This could perhaps lay the foundations for a pathway, however embryonic, towards international guardrails for Autonomous Weapons Systems.