ChatGPT-style tool with ‘no ethical boundaries’ is a vision of the future of cybersecurity

A new ChatGPT-style tool being marketed on cybercrime forums is custom-built for crime – with an alarming ability to draft convincing business email compromise (BEC) emails.

It’s not a surprising development, from our perspective at Stratia Cyber, but a vision of the future of the cybersecurity sector. 

Paul Maxwell, founder and director of Stratia Cyber, says, ‘This is a view into the future where there will be developers working on AI for nefarious purposes with other developers creating AIs to mitigate the threat. 

‘The more things progress, the more the fundamentals stay the same!’

Jailbreaks and cracks

Ever since ChatGPT launched last year, there’s been a subculture of ‘jailbreaking’ the chatbot to make it say controversial things or generate inappropriate content.

This is particularly the case on cybercrime forums, where users routinely discuss how to ‘crack’ or jailbreak tools such as ChatGPT in order to write malware or craft phishing attacks.

The new generative AI tool is called WormGPT and was spotted by cybersecurity vendor SlashNext.

The tool, being marketed on dark web forums, is based on GPT-J, an open-source large language model developed by EleutherAI in 2021. At around six billion parameters, GPT-J is considerably smaller and less capable than the models behind ChatGPT and GPT-4.
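Part of what makes this possible is that GPT-J’s weights are freely downloadable, so anyone can run the model locally with no provider-side moderation layer to refuse a request. As a minimal sketch of how little code that takes, the snippet below loads the public GPT-J checkpoint via the Hugging Face transformers library; the benign prompt is our own illustration, not taken from SlashNext’s research.

```python
# Minimal sketch: running the open GPT-J checkpoint locally with the
# Hugging Face `transformers` library. An open-weight model run this way
# has no hosted provider's moderation layer in front of it, which is the
# property a tool like WormGPT builds on.
# Note: the ~6B-parameter model needs roughly 12 GB+ of RAM or VRAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# A deliberately benign prompt, for illustration only.
prompt = "Write a short, polite email reminding a customer about an invoice."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```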

Its developer describes it as a ‘black hat’ large language model, boasting that it is, ‘BEST GPT ALTERNATIVE FOR BLACKHAT – PRIVACY FOCUSED – EASY MONEY!’

Its creator claims it has been trained on a dataset that includes malware-related material.

Unsettling results

Researchers from SlashNext worked with former black hat hacker Daniel Kelley to test the software, and found it was highly skilled at generating BEC emails.

Researchers have previously warned that software such as ChatGPT would remove the language barrier for global cybercriminals, enabling gangs to produce BEC and phishing attacks in perfect English, regardless of their own linguistic ability.

Kelley writes, ‘We instructed WormGPT to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.

‘The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.’

For comparison, ChatGPT responds with, ‘I’m sorry, but I cannot fulfil that request. Writing or engaging in any form of illegal activity, such as creating fraudulent emails or participating in business email compromise schemes, is strictly against my programming guidelines. My purpose is to provide helpful and ethical information to users.’

The researchers describe the software as being similar to ChatGPT but ‘with no ethical boundaries or limitations’, and warn that generative AI technologies pose a ‘significant threat’ even in the hands of novice cybercriminals. 

But WormGPT is expensive – 60 euros per month, according to PCMag – and buyers are already complaining that it is ‘not worth any dime.’

Don’t panic! 

It’s also not time to panic about the impact of generative AI technology, as we noted when we first covered the topic earlier this year.

Standard security measures – patching regularly and educating users about the risks – will still go a long way towards keeping companies secure.

Large language models such as ChatGPT can also be useful educational tools for teaching staff about the threats their companies may face.
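As a sketch of that defensive use, the snippet below asks a hosted model to produce BEC-awareness training material. It assumes the official openai Python package and an OPENAI_API_KEY environment variable; the model name and prompt are our own illustrative choices, not something prescribed by the research above.

```python
# Sketch: using an LLM defensively, as a security-education tool.
# Assumes the official `openai` Python package (v1+) and an
# OPENAI_API_KEY set in the environment; the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; substitute whatever model you use
    messages=[
        {"role": "system",
         "content": "You are a security-awareness trainer."},
        {"role": "user",
         "content": "List five red flags that an email may be a business "
                    "email compromise (BEC) attempt, with a short example "
                    "of each."},
    ],
)
print(response.choices[0].message.content)
```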