Since its beta launch in November, the AI chatbot ChatGPT has been used for a wide range of tasks: writing poetry, technical treatises, novels, and essays; planning parties; and learning about new topics. Now we can add malware development and the pursuit of other types of cybercrime to the list.
Researchers from the security firm Check Point Research reported Friday that within weeks of ChatGPT going live, cybercrime forum participants — some with little or no programming experience — were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks.
“It’s too early to say if ChatGPT capabilities will become the new favorite tool for dark web participants,” the company’s researchers write. “However, the cybercriminal community has already shown great interest and is jumping on this latest trend of malicious code generation.”
Last month, a forum participant posted what he claimed was the first script he had ever written, crediting the AI chatbot with providing a “nice [helping] hand to finish the script with a nice scope.”
The Python code combined various cryptographic functions, including code signing, encryption, and decryption. One part of the script generated a key using elliptic curve cryptography and the ed25519 curve to sign files. Another part used a hard-coded password to encrypt system files with the Blowfish and Twofish algorithms. A third used RSA keys, digital signatures, message signing, and the blake2 hash function to compare files.
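Of the primitives named above, the blake2 file comparison is easy to illustrate harmlessly. The sketch below is our own code, not the forum poster's (whose script was not published); it uses Python's standard-library blake2b to decide whether two files have identical contents:

```python
# Illustrative stand-in for the script's blake2-based file comparison.
# All function names here are ours; only the use of blake2 is from the report.
import hashlib


def blake2_digest(path: str) -> str:
    """Hash a file's contents in chunks with blake2b and return a hex digest."""
    h = hashlib.blake2b()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def files_match(path_a: str, path_b: str) -> bool:
    """Two files match if their blake2b digests are identical."""
    return blake2_digest(path_a) == blake2_digest(path_b)
```

Chunked reading keeps memory use flat even for large files, which is why hash-based comparison is a common idiom in both forensic and malicious tooling.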
The result was a script that could be used to (1) decrypt a single file and append a message authentication code (MAC) to the end of the file, and (2) encrypt a hardcoded path and decrypt a list of files, which it receives as an argument. Not bad for someone with limited technical skills.
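The "append a message authentication code to the end of the file" step can likewise be sketched benignly. The snippet below is our own illustration using the standard library's keyed blake2b as the MAC; the key, tag length, and function names are assumptions, not details from the actual script:

```python
# Hedged sketch of appending and verifying a MAC on a blob of data.
# Keyed blake2b serves as the MAC here; the real script's choices are unknown.
import hashlib
import hmac

MAC_LEN = 32  # illustrative tag length in bytes


def append_mac(data: bytes, key: bytes) -> bytes:
    """Return data with a keyed-blake2b tag appended to the end."""
    tag = hashlib.blake2b(data, key=key, digest_size=MAC_LEN).digest()
    return data + tag


def check_mac(blob: bytes, key: bytes) -> bool:
    """Split off the trailing tag and verify it in constant time."""
    data, tag = blob[:-MAC_LEN], blob[-MAC_LEN:]
    expected = hashlib.blake2b(data, key=key, digest_size=MAC_LEN).digest()
    return hmac.compare_digest(tag, expected)
```

The constant-time comparison via `hmac.compare_digest` matters in any real MAC check, since a naive `==` can leak timing information.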
“All of the above code can of course be used in a harmless way,” the researchers write. “However, this script can easily be modified to encrypt someone else’s computer entirely without user interaction. For example, once the script and syntax issues are fixed, the code could potentially be turned into ransomware.”
In another case, a forum participant with a more technical background posted two code samples, both written with ChatGPT. The first was a Python script for post-exploitation information stealing. It searched for specific file types, such as PDFs, copied them to a temporary directory, compressed them, and sent them to an attacker-controlled server.
The person posted a second piece of code written in Java. It covertly downloaded PuTTY, an SSH and Telnet client, and ran it with PowerShell. “Overall, this individual appears to be a tech-oriented threat actor,” the researchers write, “and the purpose of his posts is to show less technically skilled cybercriminals how to use ChatGPT for malicious purposes, with real examples they can use right away.”
Another example of ChatGPT-produced crimeware was designed to create an automated online bazaar for buying or trading compromised account credentials, payment card data, malware, and other illegal goods or services. The code used a third-party API to retrieve current prices for cryptocurrencies including Monero, Bitcoin, and Ethereum, which helped users set prices when transacting purchases.
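The price-lookup component is straightforward to sketch. Assuming a JSON response shaped roughly like those returned by public price APIs such as CoinGecko's /simple/price endpoint — the sample below is fabricated, not live data — the lookup reduces to a dictionary walk:

```python
# Sketch of the price-lookup idea only, with no network access.
# The response shape and the sample values are assumptions, not the real code.
import json

SAMPLE_RESPONSE = (
    '{"bitcoin": {"usd": 43000.0}, '
    '"monero": {"usd": 160.0}, '
    '"ethereum": {"usd": 2200.0}}'
)


def usd_price(payload: str, coin: str) -> float:
    """Extract the USD quote for one coin from a JSON price payload."""
    return float(json.loads(payload)[coin]["usd"])
```

In a live version, `payload` would come from an HTTP GET against the price API; everything else stays the same.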
Friday’s post comes two months after Check Point researchers tried their own hand at developing AI-produced malware with a complete infection flow. Without writing a single line of code, they generated a reasonably convincing phishing email:
The researchers used ChatGPT to develop a malicious macro that could be hidden in an Excel file attached to the email. Once again, they didn’t write a single line of code. At first, the script that was output was pretty primitive:
However, when the researchers instructed ChatGPT to repeat the code several more times, the quality of the code improved significantly:
The researchers then used a more advanced AI service called Codex to develop other types of malware, including a reverse shell and scripts for port scanning, sandbox detection, and compiling their Python code into a Windows executable.
“And the flow of infection is complete,” the researchers write. “We created a phishing email with an attached Excel document containing malicious VBA code that downloads a reverse shell on the target computer. The hard work has been done by the AIs and all we have to do is execute the attack.”
While ChatGPT’s terms of service bar its use for illegal or malicious purposes, the researchers had no trouble tweaking their requests to get around those restrictions. And of course, ChatGPT can also be used by defenders, for example to write code that searches files for malicious URLs or queries VirusTotal for the number of detections of a given cryptographic hash.
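The defensive use is simple to sketch. The snippet below is an illustration of ours, with a deliberately simplistic regex; it pulls URL-like strings out of text so they could then be checked against a reputation service such as VirusTotal:

```python
# Hedged sketch of the defensive scan the researchers describe.
# The regex is intentionally naive (stops at whitespace, quotes, and angle
# brackets) and is our assumption, not code from the report.
import re

URL_RE = re.compile(r"https?://[^\s\"'<>]+")


def extract_urls(text: str) -> list[str]:
    """Return every URL-like substring found in the text, in order."""
    return URL_RE.findall(text)
```

Each extracted URL could then be submitted to a reputation lookup; a production scanner would use a stricter URL grammar and handle trailing punctuation.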
So welcome to the brave new world of AI. It’s too early to know exactly how it will shape the future of offensive hacking and defensive remediation, but it’s a fair bet that it will only intensify the arms race between defenders and threat actors.