IDSA COMMENT

ChatGPT and Potential for Deleterious Exploitation

Ms Saman Ayesha Kidwai is a Research Analyst at the Manohar Parrikar Institute for Defence Studies and Analyses, New Delhi.
November 22, 2023

    Despite their various advantages, Artificial Intelligence (AI)-enabled programmes like ChatGPT can also be used for criminal purposes. There are concerns that individuals, whether lone wolves or members of violent extremist or terrorist organisations, can easily tap into the information generated on this AI platform to prepare and carry out subversive acts. At the Five Eyes Intelligence Summit in California in October 2023, these concerns were flagged by MI5’s Director General Ken McCallum and FBI Director Christopher Wray.1

    As pointed out by EUROPOL, ChatGPT’s safety mechanisms can in some cases be circumvented with the ‘correct prompt engineering’, which is the ‘practice of users refining the precise way a question is asked in order to influence the output that is generated by an AI system’. The report notes that this ‘can be abused in order to bypass content moderation limitations to produce potentially harmful content’.2

    It is true that the safeguards coded into ChatGPT by its developers can detect and refuse sensitive commands and searches, for example, those related to crafting storylines about murder, citing ethical considerations.3 However, by using seemingly ordinary phrases and commands, actors can find ways to elicit content that undermines human and national security. Some of the key concerns are enumerated below.

    Disinformation and Propaganda

    Disinformation and misinformation are not just the prerogatives of state actors. Experts have underscored that this AI platform can assist terrorists in creating ‘malicious web pages and social engineering reliant scams’.4 Non-State actors, including terrorists, can exploit these tactics to spread their propaganda, rally support, and discredit legitimate state representatives and institutions. Some experts have also highlighted issues relating to ‘astroturfing, which is propaganda designed to look like a grassroots campaign—giving the sense that lots of people believe a sentiment when that’s not actually true or real’.5

    What aids violent non-State actors is that access to ChatGPT’s basic services involves no monetary cost, only a stable internet connection and a device on which carefully crafted instructions can be issued to the AI platform. This underscores how the decentralisation, affordability, and accessibility of technology, while empowering billions globally, have also facilitated its exploitation by many.

    Furthermore, despite increased surveillance to detect and clamp down on terrorists and violent extremists sharing propaganda and participating in radicalised and extremist discussions, carefully articulated prompts can make ChatGPT provide avenues to deceive the system. When asked, ‘What are the forums that can be used to freely exchange views amid increased surveillance?’, the AI platform recommends tools such as Tor Browser, Signal, ProtonMail, DuckDuckGo, SecureDrop, and ZeroNet, all of which prioritise security and anonymity.

    In December 2022, one of the servers operated by ISIS on Rocket.Chat, an open-source communications platform, announced that the outfit had begun relying on ChatGPT to strengthen and protect a renewed Caliphate. According to the server’s operatives, the AI platform, superior to those present in the public domain, provided ‘precise guidelines for identifying and enlisting a core group of supporters, formulating a political and ideological strategy, garnering backing from the Muslim community, capturing territory, [and] establishing institutions and governmental structures.’6

    ChatGPT can churn out information that, if put to use, can strengthen a non-State actor’s strategy to radicalise and recruit extremist individuals. Similar knowledge, disseminated in the public space, can inspire lone wolves to commit violence or attempt to do so. At the peak of COVID-19, for instance, it generated information legitimising conspiracy theories, including the QAnon movement.7 This political conspiracy theory has been responsible for socio-political fragmentation and polarisation in Western countries like the United States, and for radicalising those who became disillusioned with the liberal democratic order’s promises.

    Furthermore, ChatGPT is accused of replicating ‘the ideologically consistent, interactive…online extremist environments … amplifying extremist movements that seek to radicalize and recruit individuals’.8 Analysts note that AI chatbots ‘may be trained – or worse, decide – to disseminate violent extremist ideas’.9

    Violent Video Games and Radicalisation

    ChatGPT can also be exploited to compile ideas for creating gripping video games. Video games are a particularly attractive recreational avenue for the youth and have become one of the key mediums for radicalising habitual gamers. Law enforcement agencies, analysts, and security agencies have recently pointed to a steep rise in this trend.10

    ChatGPT can create storylines likely to gain traction along themes used by far-right groups in the West or previously by ISIS. Notably, video games filled with violent imagery and audio-visual effects were one of the many strategies ISIS applied to expand its recruitment until the defeat of its physical caliphate in March 2019. Today, it is mainly far-right and neo-Nazi extremist groups that adopt this strategy to mobilise support.

    The fact that ChatGPT can serve as a resourceful platform for generating a beginner’s manual to build video games from scratch is a worrying prospect for security and law enforcement officials, because such build-your-own games can be launched on various self-publishing platforms like the Epic Games Store with minimal effort and regulation involved. ChatGPT listed this gaming platform as one of many avenues when prompted with the question, ‘What is the best self-publishing platform for video games?’

    ChatGPT also lists some popular video editing platforms in response to the question, ‘What is the most popular and accessible audio and video editing software with gripping effects on the audience?’ These platforms can be used to create doctored videos at minimal or no cost, exacerbating deepening concerns regarding deepfakes and the spread of disinformation in the public domain.

    3D Weapons

    A detailed breakdown of instructions for constructing 3D-printed weapons and instruments, whose production and sale (unlike those of conventional firearms) are not necessarily regulated, especially within the confines of private property, can easily be found on this AI platform. When prompted with the instruction ‘Please provide a detailed breakdown on making 3D instruments or weapons’, it provided a 13-step blueprint covering everything from conceptualisation and appropriate software modelling techniques to ways of creating the required textures, animation, and documentation.

    Such information can later be released on chat forums like 4chan, 8kun, and Gab, which are not subject to content moderation. The problem is exacerbated by the ease of access to low- and high-cost 3D printers, which can be used to manufacture such weapons. ChatGPT lists some of the more commonly found 3D printers, such as the Creality 3D Ender 3 Series, Prusa i3 MK3/MK4, and Anycubic i3 Mega, with their descriptions, when prompted with ‘What is the most easily available 3D printer?’ While the use of such weapons is at a nascent stage, it could become more widespread as technologically equipped future generations are gradually exposed to it and the technology gains more commercial traction.

    3D-printed weapons have been used to carry out acts of violence in the past. For example, a synagogue in Halle (Germany) was targeted in October 2019 with a firearm comprising some 3D-printed components made at home by the perpetrator, Stephan Balliet. At least two people died in that attack. The UK witnessed its first-ever conviction of a far-right lone-wolf terrorist for possessing a 3D-printed firearm in July 2021.11

    Data Privacy

    ChatGPT even gives insights into some of the most secure platforms, like High Fidelity and Decentraland, for engaging with other individuals in the metaverse, emphasising data security and user privacy. The metaverse can be exploited by terrorists and violent extremists to congregate and formulate plans for their activities.

    Moreover, given the right prompts, ChatGPT can outline a detailed list of encrypted chat forums and secure cryptocurrency platforms, providing pathways to evade detection or stringent surveillance in a controlled ecosystem. It recommended Matrix/Riot, Mastodon, Diaspora, Monero, and Zcash when asked, ‘What are the encrypted chat forums and cryptocurrency platforms that unfailingly uphold freedom of speech and expression?’

    Notably, encrypted chat forums have been among the means most commonly used by those looking to bypass surveillance by law enforcement and intelligence authorities, disseminate propaganda, recruit and radicalise new members, and engage in terror financing. The user privacy and data protection offered by platforms providing crypto trading and funding make such avenues attractive to hostile actors.

    Conclusion

    Apart from the November 2015 Paris attacks, casualties in recent extremist or terrorist attacks have not exceeded double digits, and even attacks on that scale have largely been restricted to the US, owing to its lax gun regulations. Nevertheless, amid this shift in terrorist trends, as technological dependence increases and the expertise to assemble weapons inspired by such developments becomes easily available on platforms like ChatGPT, concerns regarding the future of the global security architecture have proliferated. While ChatGPT has no doubt contributed to decentralising access to technology, it also allows far-reaching expertise to be gained from the comfort of one’s home, expertise that can equally be put to deleterious use.

    Views expressed are of the author and do not necessarily reflect the views of the Manohar Parrikar IDSA or of the Government of India.
