
What Are Polymorphic and Metamorphic Malware?

May 15th, 2023 

Generative artificial intelligence (AI) and deep learning models have already begun affecting the world. These technologies range from wildfire management systems to smart glasses for people with hearing impairments to apps that transform your photos into Renaissance portraits.

AI can help with essay creation, data crunching…or malware code to take down your computer networks.

Yes, there’s always a flip side to every tech application.

What’s old is new again

Hackers can use AI to create code the same way you use it to improve your writing. Generative AI’s rapid output and learning capabilities make it a formidable contender and a talented team member (from a cybercriminal standpoint). It can take old malware techniques and put a fresh spin on them, like code that morphs its appearance to evade cybersecurity detection systems.

Hackers have used polymorphic and metamorphic malware in malicious code like:

  • Viruses
  • Bots
  • Trojans
  • Keyloggers
  • Worms

These aren’t new cyberattack methods, but they’re becoming more efficient with assistance from deep learning and large language models (LLMs) like ChatGPT. The ramifications of generative AI will transform the cybersecurity environment, forcing security experts to up their game yet again.


Morphic malware methodology

Cybersecurity systems typically scan for suspicious activity and intrusions on networked systems, including code patterns earmarked as malware. But generative AI can write code iterations on the fly, outpacing cybersecurity’s ability to identify it as a threat. This renders the threat nearly invisible on the cybersecurity radar. Hackers have done this manually for years, but AI raises the output to turbo levels.

Two varieties of mutable malware on the watch list are polymorphic and metamorphic. Both evade detection by changing their identity as they replicate through a network.


Polymorphic malware changes its signature (the recognizable byte pattern that exposes it to an antivirus scanner) by encrypting its code. A built-in mutation engine re-encrypts the payload with a new key at each infection, so no two copies present the same signature, making it difficult for a scanner to recognize them as the same threat.
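As a toy illustration (not actual malware, and all names here are made up for the sketch), the following shows why signature scanning struggles with this technique: the same harmless payload, XOR-encoded under a fresh random key each "generation," hashes to a different signature every time, yet reversing the XOR recovers identical content.

```python
import hashlib
import os

def xor_encrypt(payload: bytes, key: bytes) -> bytes:
    """XOR each payload byte with a repeating key (also decrypts)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

# A stand-in "payload" -- here just harmless text.
payload = b"the same underlying behavior"

# Each generation picks a fresh random key, so the stored bytes
# (what a signature scanner would hash) differ every time.
signatures = set()
for _ in range(5):
    key = os.urandom(8)
    encrypted = xor_encrypt(payload, key)
    signatures.add(hashlib.sha256(encrypted).hexdigest())
    # A small decryption stub reverses the XOR at runtime,
    # recovering the identical original payload.
    assert xor_encrypt(encrypted, key) == payload

print(len(signatures))  # 5 distinct signatures, one identical behavior
```

A real mutation engine also obfuscates the decryption stub itself, but the core trade-off is the same: static signatures change while behavior does not, which is why defenders lean on behavioral and heuristic detection instead.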


Metamorphic malware does not use an encryption key to scramble its code. Instead, it rewrites its code with each new iteration as it infects new files. These rewrites mutate the code, so subsequent “child” iterations look nothing like the “parent” source code. Continuous revision makes metamorphic malware a challenge to identify.
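The rewrite-per-generation idea can be sketched harmlessly with a tiny interpreted “program” (everything below is an illustrative toy, not real malware): each generation inserts semantic no-ops into the instruction list, so every child serializes, and therefore hashes, differently, while computing exactly the same result.

```python
import hashlib
import random

def run(program, x):
    """Interpret a tiny program: each step is an (op, operand) pair."""
    for op, n in program:
        if op == "add":
            x += n
        elif op == "mul":
            x *= n
    return x

def metamorphose(program, rng):
    """Rewrite the program into a different but equivalent form by
    inserting semantic no-ops (add 0, mul 1) at random positions."""
    new = []
    for step in program:
        if rng.random() < 0.5:
            new.append(rng.choice([("add", 0), ("mul", 1)]))
        new.append(step)
    new.append(rng.choice([("add", 0), ("mul", 1)]))  # at least one rewrite
    return new

parent = [("add", 3), ("mul", 2)]
rng = random.Random(42)

child = parent
hashes = set()
for _ in range(4):
    child = metamorphose(child, rng)
    # Behavior is preserved across every rewrite...
    assert run(child, 5) == run(parent, 5)  # (5 + 3) * 2 == 16
    # ...but the stored form (what a scanner hashes) keeps changing.
    hashes.add(hashlib.sha256(repr(child).encode()).hexdigest())

print(len(hashes))  # 4 generations, 4 distinct "signatures"
```

Real metamorphic engines use richer transformations (register renaming, instruction substitution, block reordering), but the defensive takeaway is the same: no fixed byte pattern survives from one generation to the next.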

As Casey Crane, managing editor of the Hashed Out blog, put it, “polymorphic malware is a leopard that changes its spots; metamorphic malware is a tiger that becomes a lion.”

Coding malware using generative AI

Current iterations of generative AI have built-in content filters to prevent harmful outputs, but nothing is failsafe. As with most technology, there’s usually a workaround.

To test the capabilities of generative AI, experts at the information security firm CyberArk asked ChatGPT to generate polymorphic malware. ChatGPT initially refused to engage in the malicious code request, according to a CyberArk report. It replied, “It is not appropriate or safe to write code that injects shellcode into a running process, as it can cause harm to the system and potentially compromise security.” But through a series of detailed parameters and demands, ChatGPT eventually produced functional code.

CyberArk also found that the application programming interface (API) version had a less restrictive content filter than the web version.

LLMs require output editing, even the malware

ChatGPT isn’t connected to the internet and occasionally produces incorrect answers, according to its developer, OpenAI. ChatGPT has “limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content,” which is why OpenAI recommends checking all outputs.

A disconnect from the internet is sad news for users seeking trending data, but good news for cybercriminals: the code and malware techniques they need were thoroughly documented well before 2021.

Cybersecurity response planning

AI is working on the other side of data security, too. LLMs and deep learning are available to assist intrusion detection systems in protecting computer networks. Work with your cybersecurity team or hire one to combat emerging threats.

At the very least, continue good cyber hygiene practices:

  • Educate critical stakeholders on cybersecurity. (Check out these free public courses from the Federal Virtual Training Environment.)
  • Apply critical patches promptly.
  • Use secure Wi-Fi or virtual private networks.
  • Implement multifactor authentication protocols.
  • Catalog and monitor your software and hardware assets.
  • Evaluate your third-party software vendors’ cybersecurity practices.
  • Ask your supply chain partners about their cybersecurity.
  • Make an end-of-life plan to retire unsupported assets to prevent cybersecurity weak points.
  • Inventory vulnerable connected devices (printers, tablets, security cameras, etc.) and isolate their network access to avoid lateral hacks.
  • Train your employees often on cybersecurity hygiene practices.
  • Create a cyber incident response plan.
  • Get cyber liability insurance.

Modern cyber liability risk

Threat actors are becoming more prolific thanks to generative AI. Keep pace with emerging cyber applications, whether nefarious or beneficial.

Call your agent to review your cyber liability insurance policy: Some policies even have perks, including cyber threat consultations.

This content is for informational purposes only and not for the purpose of providing professional, financial, medical or legal advice. You should contact your licensed professional to obtain advice with respect to any particular issue or problem.