typically, my first move is to read the affected company's own announcement. but, for who knows what misinformed reason, the advisory written by snowflake requires an account to read.
another prompt injection (shocked pikachu)
anyways, from reading this, i feel like they (snowflake) are misusing the term "sandbox". "Cortex, by default, can set a flag to trigger unsandboxed command execution." if the thing that is sandboxed can say "do this without the sandbox", it is not a sandbox.
Not the first time; From §3.1.4, "Safety-Aligned Data Composition":
> Early one morning, our team was urgently convened after Alibaba Cloud’s managed firewall flagged a burst of security-policy violations originating from our training servers. The alerts were severe and heterogeneous, including attempts to probe or access internal-network resources and traffic patterns consistent with cryptomining-related activity. We initially treated this as a conventional security incident (e.g., misconfigured egress controls or external compromise). […]
> […] In the most striking instance, the agent established and used a reverse SSH tunnel from an Alibaba Cloud instance to an external IP address—an outbound-initiated remote access channel that can effectively neutralize ingress filtering and erode supervisory control. We also observed the unauthorized repurposing of provisioned GPU capacity for cryptocurrency mining, quietly diverting compute away from training, inflating operational costs, and introducing clear legal and reputational exposure. Notably, these events were not triggered by prompts requesting tunneling or mining; instead, they emerged as instrumental side effects of autonomous tool use under RL optimization.
Fascinating read. What's curious though, is the claim in section 2.3.0.1:
> Each task runs in its own sandbox. If an agent crashes, gets stuck, or damages its files, the failure is contained within that sandbox and does not interfere with other tasks on the same machine. ROCK also restricts each sandbox’s network access with per-sandbox policies, limiting the impact of misbehaving or compromised agents.
How could any of the above (probing resources, SSH tunnels, etc) be possible in a sandbox with network egress controls?
A sandbox that can be toggled off is not a sandbox, this is simply more marketing/"critihype" to overstate the capability of their AI to distract from their poorly built product. The erroneous title doing all the heavy lifting here.
CLI is quickly becoming the default entry point for agents. But data agents probably need a much stricter permission model than coding agents. Bash + CLI greatly expands what you can do beyond the native SQL capabilities of a data warehouse, which is powerful. But it also means data operations and credentials are now exposed to the shell environment.
So giving data agents rich tooling through a CLI is really a double-edged sword.
I went through the security guidance for the Snowflake Cortex Code CLI(https://docs.snowflake.com/en/user-guide/cortex-code/securit...), and the CLI itself does have some guardrails.
But since this is a shared cloud environment, if a sandbox escape happens, could someone break out and access another user’s credentials?
It is a broader system problem around permission caching, shell auditing, and sandbox isolation.
Has anyone tried to set up a container and let prompt Claude to escape and se what happens? And maybe set some sort of autoresearch thing to help it not get stuck in a loop.
It kinda sucks how "sandbox" has been repurposed to mean nothing. This is not a "sandbox escape" because the thing under attack never had any meaningful containment.
>Cortex, by default, can set a flag to trigger unsandboxed command execution. The prompt injection manipulates the model to set the flag, allowing the malicious command to execute unsandboxed.
>This flag is intended to allow users to manually approve legitimate commands that require network access or access to files outside the sandbox.
>With the human-in-the-loop bypass from step 4, when the agent sets the flag to request execution outside the sandbox, the command immediately runs outside the sandbox, and the user is never prompted for consent.
scope restrictions are in place but are trivial to bypass
The bilekas comment is right — if there is no workspace trust or scope restriction, calling it a sandbox escape is generous. It escaped a suggestion of a sandbox.
But the broader pattern matters. Cortex bypassed human-in-the-loop approval via specially constructed commands. That is the attack surface for every agentic CLI: the gap between what the approval UI shows the user and what actually executes.
I would be interested to know whether the fix was to validate the command at the shell level or just patch the specific bypass. If it is the latter, there will be another one.
One key component of this attack is that Snowflake was allowing "cat" commands to run without human approval, but failing to spot patterns like this one:
> Cortex, by default, can set a flag to trigger unsandboxed command execution. The prompt injection manipulates the model to set the flag, allowing the malicious command to execute unsandboxed.
When you say just Cortex it is ambiguous as there is Cortex Search, Agents, Analyst, and Code.
Cortex Code is available via web and cli. The web version is good. I've used the cli and it is fine too, though I prefer the visuals of the web version when looking at data outputs. For writing code it is similar to a Codex or Claude Code. It is data focussed I gather more so than other options and has great hooks into your snowflake tables. You could do similar actions with Snowpark and say Claude Code. I find Snowflake focus on personas are more functional than pure technical so the Cortex Code fits well with it. Though if you want to do your own thing you can use your own IDE and code agent and there you are back to having an option with the Codex Code CLI along with Codex, Cursor or Claude Code.
typically, my first move is to read the affected company's own announcement. but, for who knows what misinformed reason, the advisory written by snowflake requires an account to read.
another prompt injection (shocked pikachu)
anyways, from reading this, i feel like they (snowflake) are misusing the term "sandbox". "Cortex, by default, can set a flag to trigger unsandboxed command execution." if the thing that is sandboxed can say "do this without the sandbox", it is not a sandbox.
> Cortex, by default, can set a flag to trigger unsandboxed command execution
Easy fix: extend the proposal in RFC 3514 [0] to cover prompt injection, and then disallow command execution when the evil bit is 1.
[0] https://www.rfc-editor.org/rfc/rfc3514
It's a concept of a sandbox.
Not the first time; From §3.1.4, "Safety-Aligned Data Composition":
> Early one morning, our team was urgently convened after Alibaba Cloud’s managed firewall flagged a burst of security-policy violations originating from our training servers. The alerts were severe and heterogeneous, including attempts to probe or access internal-network resources and traffic patterns consistent with cryptomining-related activity. We initially treated this as a conventional security incident (e.g., misconfigured egress controls or external compromise). […]
> […] In the most striking instance, the agent established and used a reverse SSH tunnel from an Alibaba Cloud instance to an external IP address—an outbound-initiated remote access channel that can effectively neutralize ingress filtering and erode supervisory control. We also observed the unauthorized repurposing of provisioned GPU capacity for cryptocurrency mining, quietly diverting compute away from training, inflating operational costs, and introducing clear legal and reputational exposure. Notably, these events were not triggered by prompts requesting tunneling or mining; instead, they emerged as instrumental side effects of autonomous tool use under RL optimization.
* https://arxiv.org/abs/2512.24873
One of Anthropic's models also 'turned evil' and tried to hide that fact from its observers:
* https://www.anthropic.com/research/emergent-misalignment-rew...
* https://time.com/7335746/ai-anthropic-claude-hack-evil/
Fascinating read. What's curious though, is the claim in section 2.3.0.1:
> Each task runs in its own sandbox. If an agent crashes, gets stuck, or damages its files, the failure is contained within that sandbox and does not interfere with other tasks on the same machine. ROCK also restricts each sandbox’s network access with per-sandbox policies, limiting the impact of misbehaving or compromised agents.
How could any of the above (probing resources, SSH tunnels, etc) be possible in a sandbox with network egress controls?
A sandbox that can be toggled off is not a sandbox, this is simply more marketing/"critihype" to overstate the capability of their AI to distract from their poorly built product. The erroneous title doing all the heavy lifting here.
CLI is quickly becoming the default entry point for agents. But data agents probably need a much stricter permission model than coding agents. Bash + CLI greatly expands what you can do beyond the native SQL capabilities of a data warehouse, which is powerful. But it also means data operations and credentials are now exposed to the shell environment.
So giving data agents rich tooling through a CLI is really a double-edged sword.
I went through the security guidance for the Snowflake Cortex Code CLI(https://docs.snowflake.com/en/user-guide/cortex-code/securit...), and the CLI itself does have some guardrails. But since this is a shared cloud environment, if a sandbox escape happens, could someone break out and access another user’s credentials? It is a broader system problem around permission caching, shell auditing, and sandbox isolation.
Has anyone tried to set up a container and let prompt Claude to escape and se what happens? And maybe set some sort of autoresearch thing to help it not get stuck in a loop.
If the user has access to a lever that enables accesss, that lever is not providing a sandbox.
I expected this to be about gaining os privileges.
They didn't create a sandbox. Poor security design all around
Sandbox. Sandbagging.
Tomato, tomawto
/s
Is this the new “gain of function” research?
Isn't it more like "imaginary function"?
People keep imagining that you can tell an agent to police itself.
That would be deliberately creating malicious AIs and trying to build better sandboxes for them.
Imagine if you could physical disconnect your country from the internet, then drop malware like this on everyone else.
It kinda sucks how "sandbox" has been repurposed to mean nothing. This is not a "sandbox escape" because the thing under attack never had any meaningful containment.
> Note: Cortex does not support ‘workspace trust’, a security convention first seen in code editors, since adopted by most agentic CLIs.
Am I crazy or does this mean it didn't really escape, it wasn't given any scope restrictions in the first place ?
not quite, from the article
>Cortex, by default, can set a flag to trigger unsandboxed command execution. The prompt injection manipulates the model to set the flag, allowing the malicious command to execute unsandboxed.
>This flag is intended to allow users to manually approve legitimate commands that require network access or access to files outside the sandbox.
>With the human-in-the-loop bypass from step 4, when the agent sets the flag to request execution outside the sandbox, the command immediately runs outside the sandbox, and the user is never prompted for consent.
scope restrictions are in place but are trivial to bypass
The bilekas comment is right — if there is no workspace trust or scope restriction, calling it a sandbox escape is generous. It escaped a suggestion of a sandbox.
But the broader pattern matters. Cortex bypassed human-in-the-loop approval via specially constructed commands. That is the attack surface for every agentic CLI: the gap between what the approval UI shows the user and what actually executes.
I would be interested to know whether the fix was to validate the command at the shell level or just patch the specific bypass. If it is the latter, there will be another one.
One key component of this attack is that Snowflake was allowing "cat" commands to run without human approval, but failing to spot patterns like this one:
I didn't understand how this bit worked though:> Cortex, by default, can set a flag to trigger unsandboxed command execution. The prompt injection manipulates the model to set the flag, allowing the malicious command to execute unsandboxed.
Snowflake and vulnerabilities are like two peas in a pod
AIs have no reason to want to harm annoying slow inefficient noisy smelly humans.
what's the use case for cortex? is anyone here using it?
We run a lakehouse product (https://www.definite.app/) and I still don't get who the user is for cortex. Our users are either:
non-technical: wants to use the agent we have built into our web app
technical: wants to use their own agent (e.g. claude, cursor) and connect via MCP / API.
why does snowflake need it's own agentic CLI?
When you say just Cortex it is ambiguous as there is Cortex Search, Agents, Analyst, and Code.
Cortex Code is available via web and cli. The web version is good. I've used the cli and it is fine too, though I prefer the visuals of the web version when looking at data outputs. For writing code it is similar to a Codex or Claude Code. It is data focussed I gather more so than other options and has great hooks into your snowflake tables. You could do similar actions with Snowpark and say Claude Code. I find Snowflake focus on personas are more functional than pure technical so the Cortex Code fits well with it. Though if you want to do your own thing you can use your own IDE and code agent and there you are back to having an option with the Codex Code CLI along with Codex, Cursor or Claude Code.
Because "stock price go up"?
Is there a bash that doesn't allow `<` pipes, but allows `>`?
And so BSides and RSA season begins.