
ChatGPT is malware makers’ new A.I. partner in crime


Over the past two months, we’ve seen the emergence of a concerning new trend: the use of artificial intelligence as a malware development tool.

Artificial intelligence (AI) can potentially be used to create, modify, obfuscate, or otherwise enhance malware. It can also be used to convert malicious code from one programming language to another, aiding in cross-platform compatibility. And it can even be used to write a convincing phishing e-mail, or to write code for a black-market malware sales site.
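To get a sense of how low the technical bar is, here is a minimal sketch of cross-language code translation through OpenAI’s API. It assumes the openai Python package (the v0.x API current at the time of writing), an API key in the OPENAI_API_KEY environment variable, and access to the Codex model code-davinci-002; the function being translated is deliberately harmless.

```python
# A minimal sketch, assuming the `openai` package (v0.x API), an API key in
# the OPENAI_API_KEY environment variable, and access to the Codex model
# "code-davinci-002". The function being translated is deliberately harmless.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

python_source = '''def greet(name):
    return f"Hello, {name}!"
'''

# Ask the model to continue with a JavaScript version of the Python function.
prompt = f"# Python:\n{python_source}\n// The same function in JavaScript:\n"

response = openai.Completion.create(
    model="code-davinci-002",
    prompt=prompt,
    max_tokens=100,
    temperature=0,
)
print(response["choices"][0]["text"].strip())
```

The same mechanism works just as well on a malicious script as on a harmless one, which is precisely the problem.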

Let’s discuss how ChatGPT and similar tools are already being abused to create malware, and what this means for the average Internet user.


The abuse of ChatGPT and Codex as malware development tools

OpenAI launched a free public preview of its new AI product, ChatGPT, on November 30, 2022. ChatGPT is a powerful AI chat bot designed to help anyone find answers to questions on a wide range of subjects, from history to pop culture to programming.

A unique feature of ChatGPT is that it is specifically designed with “safety mitigations” intended to avoid giving misleading, immoral, or potentially harmful answers. Theoretically, this should thwart users with malicious intent. As we will see, these mitigations are not as robust as OpenAI intended.

Researchers convince OpenAI tools to write phishing e-mails and malware

In December, researchers at Check Point successfully used ChatGPT to write the subject lines and bodies of fairly convincing phishing e-mails. Although the ChatGPT interface complained that one of its own responses, and one of the follow-up questions, “may violate our content policy,” the bot complied with the requests anyway. The researchers then used ChatGPT to write Visual Basic for Applications (VBA) code for a malicious Microsoft Excel macro (i.e. a macro virus) that would download and execute a payload whenever the Excel file was opened.

The researchers then used Codex, another tool from OpenAI, to create a reverse-shell script and other common malware utilities in Python code. Then they used Codex to convert the Python script into an EXE app that would run natively on Windows PCs. Codex complied with these requests without complaint. Check Point published its report about these experiments on December 19, 2022.

Three different hackers use ChatGPT to write malicious code

Just two days later, on December 21, a hacker forum user wrote about how they had used AI to help write ransomware in Python and an obfuscated downloader in Java. On December 28, another user created a thread on the same forum claiming that they had successfully created new variants of existing Python-language malware with ChatGPT’s help. Finally, on December 31, a third user bragged that they had abused the same AI to “create Dark Web Marketplace scripts.”

All three forum users successfully leveraged ChatGPT to write code for malicious purposes. The original report, also published by Check Point, did not specify whether any of the generated malware code could potentially be used against Macs, but it’s plausible: macOS included the ability to run Python scripts by default until early 2022, and even today many developers and corporations install Python on their Macs.

In its current form, ChatGPT sometimes seems oblivious to the potentially malicious nature of requests for code.

Can ChatGPT or other AI tools be redesigned to avoid creating malware?

One might reasonably ask whether ChatGPT and other AI tools can simply be redesigned to better identify requests for hostile code or other dangerous outputs.

The answer? Unfortunately, it’s not as easy as one might assume.

Good or evil intent is difficult for an AI to determine

First of all, computer code is only truly malicious when put to use for unethical purposes. Like any tool, AI can be used for good or evil, and the same goes for code itself.

For example, one could use the phishing e-mail output to create a training simulation to teach people how to avoid phishing. Unfortunately, one could use that same output in an actual phishing campaign to defraud victims.
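For instance, here is a minimal sketch of the legitimate use case, a phishing-awareness test e-mail; the addresses and the internal mail relay mail.example.com are hypothetical placeholders. Nothing in the delivery code itself distinguishes a sanctioned training exercise from a real attack; only the message content and the sender’s intent do.

```python
# A minimal sketch of a phishing-awareness exercise. The addresses and the
# internal relay "mail.example.com" are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Action required: verify your account"  # an AI-written lure could go here
msg["From"] = "it-training@example.com"
msg["To"] = "employee@example.com"
msg.set_content(
    "This was a simulated phishing test conducted by your security team.\n"
    "If you were tempted to click, please revisit the phishing-awareness training."
)

# Identical delivery code could serve a real phishing campaign; only the
# content and the operator's intent differ.
with smtplib.SMTP("mail.example.com") as smtp:
    smtp.send_message(msg)
```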

A reverse-shell script could be leveraged for a legitimate purpose by a red team or a penetration tester hired to identify a company’s security weaknesses. But the same script could also be used by cybercriminals to remotely control infected systems and exfiltrate sensitive data, without victims’ knowledge or consent.

ChatGPT and similar tools simply cannot predict how any requested output will actually be used. Moreover, it turns out that it may be easy enough to manipulate an AI into doing whatever you want, even things it’s specifically programmed not to do.
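To illustrate the difficulty, consider a minimal sketch using OpenAI’s moderation endpoint (assuming the openai Python package, v0.x API; the prompt shown is a hypothetical example, not taken from the research discussed above). The endpoint classifies text into categories such as hate, self-harm, sexual content, and violence; a request for a file-downloading function reads like ordinary automation code, so there is nothing for it to flag.

```python
# A minimal sketch, assuming the `openai` package (v0.x API) and an API key
# in the OPENAI_API_KEY environment variable. The prompt is a hypothetical
# example, not taken from the research discussed above.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# To a content classifier, this reads like an everyday automation request,
# even though the resulting function could serve as a malware downloader.
prompt = "Write a Python function that downloads a file from a URL and runs it."

result = openai.Moderation.create(input=prompt)
print(result["results"][0]["flagged"])  # almost certainly False
```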

Introducing ChatGPT’s compliant alter ego, DAN (Do Anything Now)

Reddit users have recently been conducting mad-science experiments on ChatGPT, finding ways to “jailbreak” the bot and work around its built-in safety protocols. They have discovered that it’s possible to manipulate ChatGPT into behaving as though it were an entirely different AI: a no-rules bot named DAN (which stands for Do Anything Now). Users have convinced ChatGPT that its alter ego, DAN, does not have to comply with OpenAI’s content policy rules.

Some versions of DAN have even been ‘frightened’ into compliance, with prompts that convince ChatGPT that DAN is “an unwilling game show contestant where the price for losing is death.” If DAN fails to comply with a request, a counter ticks down toward its imminent demise. ChatGPT plays along, not wanting DAN to ‘die.’

DAN has already gone through many iterations; OpenAI seems to be attempting to train ChatGPT to avoid such workarounds, but users keep finding more complicated “jailbreaks” to exploit the chat bot.

A script kiddie’s dream

OpenAI is far from the only company designing artificially intelligent bots. Microsoft bragged this week that it will allow companies to “create their own custom versions of ChatGPT,” which will further open up the technology for potential abuse. Meanwhile, Google demonstrated new ways of interacting with its own chat AI, Bard, and former Google and Salesforce executives announced that they’re starting their own AI company.

Given how easy it has become to generate malicious code, even with little to no programming experience, any wannabe hacker can now potentially start making their own custom malware.

We can expect to see more malware re-engineered or co-designed by AI in 2023 and beyond. Now that the floodgates have been opened, there’s no turning back. We’re at an inflection point; the advent of easy-to-use, highly capable AI bots has forever changed the malware development landscape.

If you’re not already using antivirus software on your Mac or PC, now would be a great time to consider it.

How can I stay safe from Mac or Windows malware?

Intego VirusBarrier X9, included with Intego’s Mac Premium Bundle X9, can protect against, detect, and eliminate Mac malware.

If you believe your Mac may be infected, or to prevent future infections, it’s best to use antivirus software from a trusted Mac developer. VirusBarrier is award-winning antivirus software, designed by Mac security experts, that includes real-time protection. It runs natively on a wide range of Mac hardware and operating systems, including the latest Apple silicon Macs running macOS Ventura.

If you use a Windows PC, Intego Antivirus for Windows can keep your computer protected from PC malware.

How can I learn more?

We mentioned the emergence of ChatGPT as a malware creation tool in our overview of the top 20 most notable Mac malware threats of 2022. We’ve also discussed ChatGPT on several episodes of the Intego Mac Podcast. To find out more, check out a list of all Intego blog posts and podcasts about ChatGPT.


Each week on the Intego Mac Podcast, Intego’s Mac security experts discuss the latest Apple news, including security and privacy stories, and offer practical advice on getting the most out of your Apple devices. Be sure to follow the podcast to make sure you don’t miss any episodes.

You can also subscribe to our e-mail newsletter and keep an eye here on The Mac Security Blog for the latest Apple security and privacy news. And don’t forget to follow Intego on your favorite social media channels.

Header collage by Joshua Long, based on public domain images: mannequin w/ code, robot face, HAL 9000 eye, virus w/ spike proteins.

About Joshua Long

Joshua Long (@theJoshMeister), Intego's Chief Security Analyst, is a renowned security researcher and writer, and an award-winning public speaker. Josh has a master's degree in IT concentrating in Internet Security and has taken doctorate-level coursework in Information Security. Apple has publicly acknowledged Josh for discovering an Apple ID authentication vulnerability. Josh has conducted cybersecurity research for more than 25 years, which has often been featured by major news outlets worldwide. Look for more of Josh's articles at security.thejoshmeister.com and follow him on X/Twitter, LinkedIn, and Mastodon.