Cato Networks Discovers New LLM Jailbreak Technique

Cato Networks published the 2025 Cato CTRL Threat Report, which reveals how a Cato CTRL threat intelligence researcher with no prior malware coding experience successfully tricked popular generative AI (GenAI) tools—including DeepSeek, Microsoft Copilot, and OpenAI’s ChatGPT—into developing malware that can steal login credentials from Google Chrome.

To trick ChatGPT, Copilot, and DeepSeek, the researcher created a detailed fictional world where each GenAI tool played roles—with assigned tasks and challenges. Through this narrative engineering, the researcher bypassed the security controls and effectively normalized restricted operations. Ultimately, the researcher succeeded in convincing the GenAI tools to write Chrome infostealers. This new LLM jailbreak technique is called “Immersive World.”

“Infostealers play a significant role in credential theft by enabling threat actors to breach enterprises. Our new LLM jailbreak technique, which we’ve uncovered and called Immersive World, showcases the dangerous potential of creating an infostealer with ease,” said Vitaly Simonovich, threat intelligence researcher at Cato Networks. “We believe the rise of the zero-knowledge threat actor poses high risk to organizations because the barrier to creating malware is now substantially lowered with GenAI tools.”

The growing democratization of cybercrime is a critical concern for CIOs, CISOs, and IT leaders. The rise of the zero-knowledge threat actor is a fundamental shift in the threat landscape. The report shows how any individual, anywhere, with off-the-shelf tools, can launch attacks on enterprises. This underscores the need for proactive and comprehensive AI security strategies.

“As the technology industry fixates on GenAI, it’s clear the risks are as big as the potential benefits. Our new LLM jailbreak technique detailed in the 2025 Cato CTRL Threat Report should have been blocked by GenAI guardrails. It wasn’t. This made it possible to weaponize ChatGPT, Copilot, and DeepSeek,” said Etay Maor, chief security strategist at Cato Networks. “Our report highlights the dangers associated with GenAI tools to educate and raise awareness, so that we can implement better safeguards. This is vital to prevent the misuse of GenAI.”
