IDSA COMMENT

US Voluntary Code of Conduct on AI and Implications for Military Use

Lt Col Akshat Upadhyay is Research Fellow, Strategic Technologies Centre at Manohar Parrikar Institute for Defence Studies and Analyses, New Delhi.
July 28, 2023

    Seven technology companies with major artificial intelligence (AI) products, including Microsoft, OpenAI, Anthropic and Meta, made eight voluntary commitments regarding the regulation of AI at an event held at the White House on 21 July 2023.1 The commitments are based on three guiding principles: safety, security and trust. The code of conduct covers the areas and domains presumed to be impacted by AI. While the commitments are non-binding, unenforceable and voluntary, they may form the basis for a future Executive Order on AI, which will become critical given the increasing military use of AI.

    The voluntary AI commitments are the following:

    1. Red-teaming (internal and external) of products before their public release. Top priorities include biological, chemical and radiological risks, and the ways in which AI could lower barriers to entry for weapons design and development. The effects of models that interact with and can control physical systems need to be evaluated, apart from societal risks such as bias and discrimination;
    2. Sharing information among companies and with governments. This will be challenging, since the industry’s business model is built on secrecy and competition;
    3. Investing in cybersecurity and safeguards to protect unreleased and proprietary model weights;
    4. Incentivizing third-party discovery and reporting of issues and vulnerabilities;
    5. Watermarking AI-generated content (a brief illustrative sketch follows this list);
    6. Publicly reporting model or system capabilities, including discussions of societal risks;
    7. According priority to research on societal risks posed by AI systems; and
    8. Developing and deploying frontier AI systems to help address society’s greatest challenges.2
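
    Of these, the watermarking commitment is the most technically concrete. As a rough indication of what such a scheme could look like, the sketch below is a simplified Python illustration of a published ‘green list’ style of statistical text watermarking. It is a toy under stated assumptions (a fixed vocabulary and a pseudorandom partition keyed on the previous token) and does not represent the actual method of any signatory company. A watermarking generator would preferentially sample tokens from the green list at each step; the detector then flags text whose green-token rate is improbably high.

    # Toy illustration of statistical text watermarking; not any company's
    # actual scheme. A generator biased towards green_list() tokens leaves a
    # statistical trace that detect() can measure.
    import hashlib
    import random

    GAMMA = 0.5  # assumed fraction of the vocabulary marked 'green' at each step

    def green_list(prev_token: str, vocab: list[str]) -> set[str]:
        """Pseudorandomly partition the vocabulary, seeded by the previous token."""
        seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
        rng = random.Random(seed)
        shuffled = vocab[:]
        rng.shuffle(shuffled)
        return set(shuffled[: int(GAMMA * len(vocab))])

    def detect(tokens: list[str], vocab: list[str]) -> float:
        """Return a z-score for the observed count of green tokens.

        Ordinary human text should score near zero; text produced by a
        generator that favoured green tokens scores much higher.
        """
        n = len(tokens) - 1
        if n <= 0:
            return 0.0
        hits = sum(
            1 for prev, tok in zip(tokens, tokens[1:])
            if tok in green_list(prev, vocab)
        )
        mean, var = GAMMA * n, GAMMA * (1 - GAMMA) * n
        return (hits - mean) / (var ** 0.5)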

    The eight commitments by the US Big Tech companies come a few days after the United Nations Security Council (UNSC) convened, for the first time, a session on the threat posed by AI to global peace and security.3 The UN Secretary General (UNSG) proposed the setting up of a global AI watchdog comprising experts in the field who would share their expertise with governments and administrative agencies. The UNSG added that the UN must come up with a legally binding agreement by 2026 banning the use of AI in automated weapons of war.4

    The UNSC discussion can be seen as elevating the focus from the shorter-term AI threats of disinformation and propaganda, handled bilaterally between governments and Big Tech companies, to a larger, global focus on advancements in AI and the need for common standards that are transparent, respect the privacy of individuals whose data is ‘scraped’ on a massive scale, and ensure robust cybersecurity.

    Threat Posed by AI

    Lawmakers in the US have been attempting to rein in the exponential developments in the AI field for some time now, since little is known about the technology’s real impact over the longer term. Reactions to the so-called danger of AI have been polarizing, with some even equating AI with the atom bomb and terming the current phase of growth in AI the ‘Oppenheimer moment’,5 after the scientist-philosopher J. Robert Oppenheimer, under whom the Manhattan Project was brought to a fruitful conclusion with the testing of the first atomic bomb. That moment signaled the start of the first nuclear age, an era of living under the nuclear shadow that persists to this day. The Oppenheimer moment, therefore, is a dividing line between the conventional past and the new present, and presumably the unknown future.

    Some academics, activists and even members of the Big Tech community, referred to as ‘AI doomers’, have coined the term P(doom) in an attempt to quantify the risk of a doomsday scenario in which a ‘runaway superintelligence’ causes severe harm to humanity or leads to human extinction.6 Others refer to variations of the ‘Paperclip Maximiser’, in which an AI given a task to optimise by humans interprets it as maximising the number of paperclips in the universe and proceeds to expend all of the planet’s resources manufacturing only paperclips.7

    This thought experiment is used to illustrate two dangers associated with AI: the ‘orthogonality thesis’, under which a highly intelligent AI could interpret human goals in its own way and proceed to accomplish tasks that have no value to humans; and ‘instrumental convergence’, under which an AI would take control of all matter and energy on the planet while ensuring that no one can shut it down or alter its goals.8

    Apart from these alleged existential dangers, the new wave of generative AI,9 which has the potential to lower, and in certain cases eliminate, entry barriers to content creation in text, image, audio and video formats, can adversely affect societies in the short to medium term. Generative AI has the potential to birth the era of the ‘superhuman’, the lone wolf who can target state institutions at will from a keyboard.10

    Generative AI in the hands of motivated individuals, and of non-state and state actors, has the potential to generate disinformation at scale. Most inimical actors and institutions have so far struggled to achieve this because of the difficulty of homing in on specific faultlines within countries, using local dialects and generating adequately realistic videos, among other challenges. These capabilities are now available for a price, as ‘disinformation as a service’ (DaaS), putting the creation and dissemination of disinformation at scale at an individual’s fingertips. This is why the voluntary commitments by the US Big Tech companies are only the beginning of a regulatory process that needs to be made enforceable, in line with legally binding safeguards agreed to by UN member states for their respective countries.

    Military Uses of AI   

    Slowly and steadily, the use of AI in the military has been gaining ground. The Russia-Ukraine war has seen the deployment of increasingly capable AI systems on both sides. Palantir, a company which specialises in AI-based data fusion and surveillance services,11 has created a new product called the Palantir AI Platform (AIP). It uses large language models (LLMs) and algorithms to designate and analyse adversary targets and to serve up suggestions for neutralising them, in a chatbot mode.12

    Though Palantir’s website clarifies that the system will only be deployed across classified systems and will use both classified and unclassified data to create operating pictures, no further information on the subject is available in the open domain.13 The company also assures on its site that it will use “industry-leading guardrails” to safeguard against unauthorized actions.14 Palantir’s absence from the White House declaration is significant, since it is one of the very few companies whose products are designed expressly for military use.

    Richard Moore, the head of the United Kingdom’s (UK) MI6, stated on 19 July 2023 that his staff was using AI and big data analysis to identify and disrupt the flow of weapons to Russia.15 Russia is testing its unmanned ground vehicle (UGV) Marker with an inbuilt AI intended to seek out and target Leopard and Abrams tanks on the battlefield. However, despite being tested in a number of terrains, including forests, the Marker has not been rolled out for combat action in the ongoing conflict against Ukraine.16

    Ukraine has fitted its drones with rudimentary AI that performs basic edge processing to identify platforms such as tanks and passes on only the relevant information (coordinates and nature of the platform), amounting to kilobytes of data, to a vast shooter network.17 There are obvious risks of misidentifying objects, and the task becomes exceedingly difficult when identifying and singling out individuals on the opposing side. Facial recognition software has been used by the Ukrainians to identify the bodies of Russian soldiers killed in action for propaganda purposes.18
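
    To make the bandwidth point concrete, the following is a minimal Python sketch of the kind of edge filtering described above, in which a drone transmits a compact target report of a few hundred bytes instead of streaming video. All names, classes and thresholds here are hypothetical and illustrative; this is not a description of any fielded system.

    # Minimal, hypothetical sketch of edge filtering: keep only confident
    # detections of relevant platforms and serialise a compact report for the
    # shooter network, rather than transmitting raw imagery.
    import json
    import time

    RELEVANT = {"tank", "self_propelled_gun", "radar"}  # assumed classes of interest
    MIN_CONFIDENCE = 0.8  # assumed confidence threshold

    def build_report(detections: list[dict], drone_id: str) -> bytes:
        """Filter raw on-board detections and serialise only what the network needs.

        Each detection is assumed to look like:
        {"label": "tank", "confidence": 0.93, "lat": 48.51, "lon": 35.04}
        """
        targets = [
            {"type": d["label"], "lat": d["lat"], "lon": d["lon"]}
            for d in detections
            if d["label"] in RELEVANT and d["confidence"] >= MIN_CONFIDENCE
        ]
        report = {"drone": drone_id, "ts": int(time.time()), "targets": targets}
        return json.dumps(report, separators=(",", ":")).encode()

    # A single confident detection becomes a payload of roughly 100 bytes,
    # versus megabytes per second for raw video.
    payload = build_report(
        [{"label": "tank", "confidence": 0.93, "lat": 48.51, "lon": 35.04}],
        drone_id="uav-07",
    )
    print(len(payload), "bytes")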

    It is not a stretch to imagine the same being used for targeted killings by drones. The challenge here, of course, is systemic bias and discrimination in the AI model, which creeps in despite the best intentions of data scientists and which may lead to the inadvertent killing of civilians. Similarly, spoofing of senior commanders’ voice and text messages may lead to spurious and fatal orders being passed to formations. On the other hand, the UK-led Future Combat Air System (FCAS) Tempest envisages a wholly autonomous fighter, with AI integrated both in the design and development (D&D) phase and in identification and targeting during operations.19 The human, at best, will be on the loop.

    Conclusion

    The military use of AI is an offshoot of the developments ripping through Silicon Valley. As a result, the suggestions being offered to rein in advancements in AI need to move beyond self-censorship and into the domain of regulation. This will be needed to ensure that the unintended effects of these technologies do not spill over onto the modern battlefield, already saturated with lethal and precision-based weapons.

    Views expressed are of the author and do not necessarily reflect the views of the Manohar Parrikar IDSA or of the Government of India.
