Researchers at BeyondTrust Phantom Labs have identified a critical command injection vulnerability in OpenAI’s Codex cloud environment that exposed GitHub OAuth tokens directly from the agent’s execution environment.
The vulnerability stemmed from improper input sanitization in how Codex processed GitHub branch names during task execution. By injecting arbitrary commands through the GitHub branch name parameter, an attacker could execute malicious payloads inside the agent’s container and retrieve sensitive authentication tokens.
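To illustrate the class of bug described above, here is a minimal, hypothetical sketch of how an unsanitized branch name passed to a shell becomes command injection. The function name and the use of `echo` as a stand-in for a `git` invocation are illustrative assumptions, not code from Codex or from the Phantom Labs report.

```python
import subprocess

def run_git_checkout_unsafe(branch: str) -> str:
    # VULNERABLE pattern (hypothetical illustration): the branch name is
    # interpolated into a shell string, so shell metacharacters inside it
    # are parsed as additional commands. `echo checkout` stands in for a
    # real `git checkout` so the sketch runs anywhere.
    result = subprocess.run(
        f"echo checkout {branch}",
        shell=True,
        capture_output=True,
        text=True,
    )
    return result.stdout

# A crafted "branch name" smuggles a second command past the intended one.
# Here the payload is a harmless `echo pwned`; in the real attack it would
# be a command that exfiltrates the agent's credentials.
output = run_git_checkout_unsafe("main; echo pwned")
```

Because the shell splits on the `;`, the injected `echo pwned` executes as its own command after the intended one.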
Because Codex operates with access to connected GitHub repositories, the impact extends beyond a single user. In testing, Phantom Labs demonstrated that this technique could be automated to compromise multiple users interacting with a shared repository. The issue affected multiple Codex interfaces, including the ChatGPT website, the Codex CLI, the Codex SDK, and the Codex IDE Extension.
Consequences include:
- Token theft: exposure of GitHub user access tokens tied to repositories, workflows, and private code
- Organizational compromise: potential for lateral movement across organizations using shared environments
- Automated exploitation at scale: token exfiltration across multiple users
Phantom Labs researchers also found that authentication tokens stored locally on developer machines could be leveraged to replicate the attack via backend APIs, expanding the potential blast radius. To increase stealth and reliability, researchers developed obfuscated payload techniques using Unicode characters, allowing malicious commands to execute without being visibly detectable in the user interface.
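The report notes that payloads were obfuscated with Unicode characters so that malicious commands were not visibly detectable in the user interface. One defensive check for that class of trick is to flag input containing invisible Unicode code points before it reaches an execution environment; the sketch below is my own illustration of such a check, not OpenAI's remediation.

```python
import unicodedata

def contains_invisible_chars(text: str) -> bool:
    # Characters in Unicode category Cf (format, e.g. zero-width space,
    # zero-width joiner) and Cc (control) can render as nothing in a UI
    # while remaining present in the raw string an agent executes.
    return any(unicodedata.category(ch) in ("Cf", "Cc") for ch in text)

visible = "feature/login-fix"
# Visually identical string with a zero-width space (U+200B) inserted.
hidden = "feature/login\u200b-fix"
```

Note that category Cc also covers ordinary control characters such as tabs and newlines, which is acceptable for single-line values like branch names but would need relaxing for free-form text.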
“This research highlights a broader and growing concern: AI coding agents like Codex are not just development tools, but privileged identities operating inside live execution environments with direct access to source code, credentials, and infrastructure. This highlights a growing class of risk where automated workflows can operate outside the visibility or control of traditional security models,” commented Fletcher Davis, Director of Research for BeyondTrust Phantom Labs.
When user-controlled input is passed into these environments without strict validation, the result is not just a bug — it is a scalable attack path into enterprise systems.
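The standard mitigation for the validation gap described above is to allowlist the user-controlled value and to avoid shell interpretation entirely by passing arguments as a list. This is a minimal sketch of that pattern under my own assumptions (the regex and function name are illustrative, and the allowlist is deliberately stricter than git's actual branch-name rules):

```python
import re
import subprocess

# Conservative allowlist: must start with an alphanumeric character
# (which also rules out option injection via a leading "-"), then
# letters, digits, and a few separators, capped in length.
BRANCH_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._/-]{0,200}$")

def checkout_branch_safe(branch: str) -> None:
    if not BRANCH_RE.fullmatch(branch):
        raise ValueError(f"rejected branch name: {branch!r}")
    # Argument-list form: no shell is involved, so metacharacters in
    # the branch name are never interpreted as commands.
    subprocess.run(["git", "checkout", branch], check=True)
```

Rejecting bad input up front and never handing it to a shell closes both the injection path and the Unicode-obfuscation variant, since invisible characters fail the allowlist.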
Phantom Labs responsibly disclosed the findings to OpenAI, and all reported issues have since been remediated by OpenAI's security team.
Find the full technical breakdown here: https://www.beyondtrust.com/blog/entry/openai-codex-command-injection-vulnerability-github-token
