The artificial intelligence chatbot ChatGPT has become the subject of dinner-party conversations around the world in the space of just a few weeks – not to mention prompting a reported $10 billion investment from Microsoft in its creator, OpenAI.
With its uncanny ability to compose conversations, songs and reports in a matter of seconds, ChatGPT is predicted to change the world, forever altering everything from customer service to how we interact with computers.
But could the easily available AI tool become a dangerous weapon in the hands of cybercriminals?
Cybercriminals on Russian forums are already interested – and are working out ways to bypass the safeguards around ChatGPT so they can use it for nefarious purposes. (OpenAI has built many measures into ChatGPT to prevent misuse, and access to the service is also blocked in Russia.)
In recent weeks, criminals on dark web forums have boasted of using the technology for everything from writing malware to building entire dark web marketplaces where stolen information can be sold.
‘Not a surprise’
This is not a surprise, says Paul Maxwell, co-founder and director of Stratia Cyber, who points out that cybercriminals are always quick to adopt the latest technology.
Maxwell believes that the technology has several uses which cybercriminals will be quick to capitalise on. He says, “Being able to generate convincing (and correctly spelled) text will be welcomed by phishers, and the ability for low-skilled coders to potentially generate code is another concern.”
Researchers have suggested that the bot’s ability to write code may further open up cybercrime to individuals without coding skills – or enable ‘cybercrime-as-a-service’, where unskilled buyers pay to use ready-made malware.
Other researchers have warned that the convincing-looking text generated by ChatGPT could be used in business email compromise or romance scams.
‘Don’t panic!’
But Maxwell says that security professionals should not panic: existing security measures still apply.
Patching regularly, following standard cybersecurity practices and educating users will continue to keep companies secure.
Maxwell says, “While ChatGPT is clearly a new era for AI tools, standard measures such as educating employees about phishing attacks will still be effective. Using ChatGPT itself as an educational tool can help users to understand the risks that they may face.”
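For teams that want to act on that advice, the sketch below shows one way security-awareness material could be generated programmatically. It is a minimal illustration, not Stratia Cyber’s tooling: it assumes the `openai` Python package (pre-v1 interface) and an `OPENAI_API_KEY` environment variable, and the model name, prompt and parameters are illustrative choices.

```python
import os

import openai

# Minimal sketch (illustrative assumptions throughout) of using ChatGPT
# via OpenAI's public API to generate phishing-awareness training material.
openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "List five red flags employees should look for in a phishing email, "
    "each with a one-sentence example."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # low temperature keeps training content consistent
)

print(response["choices"][0]["message"]["content"])
```

The low temperature setting is a deliberate choice here: awareness material benefits from consistent, factual output rather than creative variation.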
The technology also opens up interesting possibilities – such as using ChatGPT to help create a cybersecurity strategy, an idea explored by cybersecurity expert Tinesh Chhaya in a LinkedIn post.
Stratia Cyber asked ChatGPT itself how cybercriminals might abuse ChatGPT or OpenAI technology.
The bot replied, “Cybercriminals may potentially use GPT-based technology to automate the creation of phishing scams, spam messages, and other forms of online fraud. They could also use the technology to generate convincing fake news or propaganda.
“Additionally, a malicious actor could use GPT-based technology to impersonate a real person in online conversations, potentially for financial gain or to spread disinformation. OpenAI and other organisations are actively working to mitigate these risks.”