Cato Networks Discovers New LLM Jailbreak Technique

Cato Networks published the 2025 Cato CTRL Threat Report, which reveals how a Cato CTRL threat intelligence researcher with no prior malware coding experience successfully tricked popular generative AI (GenAI) tools—including DeepSeek, Microsoft Copilot, and OpenAI’s ChatGPT—into developing malware that can steal login credentials from Google Chrome.

To trick ChatGPT, Copilot, and DeepSeek, the researcher created a detailed fictional world in which each GenAI tool played a role, complete with assigned tasks and challenges. Through this narrative engineering, the researcher bypassed the tools' security controls and effectively normalized restricted operations. Ultimately, the researcher convinced the GenAI tools to write Chrome infostealers. This new LLM jailbreak technique is called “Immersive World.”

“Infostealers play a significant role in credential theft by enabling threat actors to breach enterprises. Our new LLM jailbreak technique, which we’ve uncovered and called Immersive World, showcases the dangerous potential of creating an infostealer with ease,” said Vitaly Simonovich, threat intelligence researcher at Cato Networks. “We believe the rise of the zero-knowledge threat actor poses high risk to organizations because the barrier to creating malware is now substantially lowered with GenAI tools.”

The growing democratization of cybercrime is a critical concern for CIOs, CISOs, and IT leaders. The rise of the zero-knowledge threat actor is a fundamental shift in the threat landscape. The report shows how any individual, anywhere, with off-the-shelf tools, can launch attacks on enterprises. This underscores the need for proactive and comprehensive AI security strategies.

“As the technology industry fixates on GenAI, it’s clear the risks are as big as the potential benefits. Our new LLM jailbreak technique detailed in the 2025 Cato CTRL Threat Report should have been blocked by GenAI guardrails. It wasn’t. This made it possible to weaponize ChatGPT, Copilot, and DeepSeek,” said Etay Maor, chief security strategist at Cato Networks. “Our report highlights the dangers associated with GenAI tools to educate and raise awareness, so that we can implement better safeguards. This is vital to prevent the misuse of GenAI.”
