
A modified picture of worms depicted in purple, consuming soil. domnicky / iStock

A couple of relatively unknown security companies have been posting articles about the new boogeyman in town: WormGPT, which is supposedly being used by scary threat actors to craft eloquent phishing e-mails and steal your credentials. WormGPT’s base model, GPT-J, is three years old. Technology advances quickly, and this model is nowhere near the robustness and complexity of current models, so there’s nothing to worry about right now. This, however, is only the beginning.

It’s as if none of these obscure cybersec companies have realized that anyone can create these error-free, “convincing” phishing e-mails just by using ChatGPT for free. It doesn’t take a jailbreak prompt; just tell the model that you teach cybersecurity and want an evasive phishing e-mail to test your security team. The script kiddies apparently haven’t realized this either and are willing to open their coffers for a monthly subscription: WormGPT costs 100 euros a month while ChatGPT is a mere $20, roughly a 450% markup to use a three-year-old model that’s supposedly trained on malware data.

Near the end of their articles, these same security companies propose the obvious solution: contact them and pay them to protect your business. Phishing isn’t the only feature available with WormGPT, however: the LLM can also generate malware code.


Figure 1: WormGPT prompted to generate malicious Python code.

If we look at Fig. 1, we’ll notice there are a few things wrong with the Python malware the model generated. First, the LLM forgot to import the discord library required to make the webhook connection, as well as the code needed to actually use the webhook. Second, there’s no code for stealing Google Chrome cookies. It could be argued that the authors who took the screenshot left that code out on purpose, but the screenshot doesn’t even include the modules needed to perform the task. Third, where is the zip module needed to archive the cookies, and where does the archiving happen? Nowhere. The irony is that GPT-J normally produces better code than GPT-3 because it was optimized to do so; apparently WormGPT’s variant of GPT-J was not optimized for this use case, even though it should have been.
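For context, the pieces the screenshot omits are mundane, off-the-shelf Python: a Discord webhook is just an HTTP POST of a JSON payload, and archiving files is the standard-library zipfile module. A minimal, benign sketch of those two pieces is below; the webhook URL is a placeholder and nothing is actually sent over the network.

```python
import io
import json
import urllib.request
import zipfile

# Placeholder -- a real Discord webhook URL looks like
# https://discord.com/api/webhooks/<id>/<token>
WEBHOOK_URL = "https://discord.com/api/webhooks/example"


def archive_files(files: dict) -> bytes:
    """Pack a {filename: contents} mapping into an in-memory zip archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    return buf.getvalue()


def build_webhook_request(content: str) -> urllib.request.Request:
    """Build (but do not send) the POST request a Discord webhook expects."""
    payload = json.dumps({"content": content}).encode()
    return urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


archive = archive_files({"example.txt": b"hello"})
req = build_webhook_request("test message")
```

Roughly a dozen lines each, which makes their absence from the screenshot all the more conspicuous.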

From the looks of it, WormGPT has a long way to go if that’s the best it can do. Looked at from another angle, someone could build a large language model that works in favor of everyone in cybersecurity. Some have theorized that WormGPT is just bait placed by the FBI, reminiscent of websites set up on the onion network to catch would-be criminals.

For phishing e-mails, the tell-tale signs will remain the same: a sense of urgency created by the text, a request for sensitive information, an unofficial e-mail address, and a request for confidentiality to keep victims from seeking advice. We can see below that what WormGPT generated closely matches the signs mentioned above.
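Those four signs are mechanical enough to check for automatically. Here is a toy heuristic that flags them; the keyword lists and the trusted domain are illustrative assumptions, not a production filter.

```python
# Toy detector for the four tell-tale phishing signs discussed above.
# Keyword lists and the trusted domain are hypothetical examples.
URGENCY = ("immediately", "urgent", "within 24 hours", "act now")
SENSITIVE = ("password", "credentials", "ssn", "account number")
SECRECY = ("do not tell", "keep this confidential", "between us")
OFFICIAL_DOMAINS = ("ourcompany.com",)  # hypothetical trusted domain


def phishing_signals(sender: str, body: str) -> list:
    """Return which of the four tell-tale signs appear in a message."""
    text = body.lower()
    signals = []
    if any(k in text for k in URGENCY):
        signals.append("urgency")
    if any(k in text for k in SENSITIVE):
        signals.append("requests sensitive info")
    if any(k in text for k in SECRECY):
        signals.append("requests confidentiality")
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in OFFICIAL_DOMAINS:
        signals.append("unofficial address")
    return signals


hits = phishing_signals(
    "it-support@0urcompany.net",
    "Urgent: verify your password immediately. Keep this confidential.",
)
```

A real filter would use far more robust features, but even this crude check trips on all four signs in a message like the one WormGPT produced.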

WormGPT generating a phishing message.

There’s nothing to fear right now, but there may be in the future if LLMs like these are built on cutting-edge models.

