I am very passionate about this question - so much so that I happened to make a blog post about it yesterday!
I recommend giving LLMs extremely fine-grained credentials that permit only the actions you want to allow and nothing else.
Often, it may be hard or impossible to do this with your database settings alone - in that case, you can use proxies to separate the credentials the LLM/agent holds from the credentials actually presented to the DB. The proxy can then enforce what you want to allow or block.
SSH is trickier because commands are interleaved with all the other data in the session's bytestream. I previously wrote another blog post about just how tricky enforcing command allowlists can be: https://www.joinformal.com/blog/allowlisting-some-bash-comma.... A lot of developer CLI tools were not designed to be run by potentially malicious users who can pass arbitrary flags!
I've also really appreciated simonw's writing on the topic.
Disclaimer: I work at Formal, a company that helps organizations use proxies for least privilege.
Thanks for making this blog post, very informative!
I've found as well that while you can give agents a lot of tools and set them loose autonomously, their default prompting tends not to stop them from getting enormously stuck and doing really dumb things along the way.
Never open Pandora's box without understanding the implications; the principles of least privilege and least trust now apply at every layer of the equation.
Your post can be succinctly formalized as “there should always be a deterministic validation layer sitting between the agent and anything sensitive it could do”
What's true for interns should be true for LLMs: there should simply be no way for it to get keys to prod.
Our solution is to let it work with a local dev database, and its output is a script. That script gets checked into version control (auditable and reviewed). Then that script can be run against production. Slower iteration, but worth the tradeoff for us.
Giving an LLM even read access to PII is a big "no" in my book.
On PII, if you need LLMs to work on data extracted from production, then https://github.com/microsoft/presidio is a pretty good tool to redact PII. Still needs a bit of an audit, but as a first pass it does a terrific job.
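A minimal sketch of a Presidio redaction pass (assumes the presidio-analyzer and presidio-anonymizer packages plus a spaCy model are installed; the sample text is made up):

    from presidio_analyzer import AnalyzerEngine
    from presidio_anonymizer import AnonymizerEngine

    analyzer = AnalyzerEngine()      # needs a spaCy model; see Presidio docs
    anonymizer = AnonymizerEngine()

    text = "Contact Jane Doe at jane.doe@example.com or +1-555-010-1234"
    # Detect PII entities (names, emails, phone numbers, ...)
    results = analyzer.analyze(text=text, language="en")
    # Replace each detected span with a placeholder like <PERSON>
    redacted = anonymizer.anonymize(text=text, analyzer_results=results)
    print(redacted.text)

Run something like that over any extract before the LLM ever sees it, then spot-check the output, since no detector catches everything.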
Agreed - I run an entire second dev environment for LLMs.
Claude code runs in a container, and I just connect that container to the right network.
It's nice to be able to keep mid-task state in that environment without stepping on my own toes. It's easy to control what data is accessible in there, even if I have to work with real data in my dev environment.
This. Everything your LLM reads from your database, server, or whatever else is sent to your LLM provider. Unless your LLM is running locally on your own systems, it shouldn't be given ANY access to production data without vetting it through legal with an eye to your privacy policy and compliance requirements.
Don't.
Among the many other reasons why you shouldn't do this, there are regularly reported cases of AIs working around these types of restrictions using the tools they have to substitute for the tools they don't.
Don't be the next headline about AI deleting your database.
You need to secure the account an LLM-based app runs under, just like you would any user, AI or not. When you hire real people, do you grant them full privileges on all systems and just ask them not to touch things they shouldn't? No, you secure their accounts to the specific privileges they need, and no more. Do the same with AI.
https://www.pcmag.com/news/vibe-coding-fiasco-replite-ai-age...
> Don't
Do you mean "Don't give it more autonomy", or "Don't use it to access servers/dbs" ?
I definitely want to be cautious, but I don't think I can go back to doing everything manually either
You have to choose between laziness and having systems that the LLM can't screw up. You can't have both.
Why aren't you using the tools we already have: Ansible, Salt, Chef, Puppet, Bcfg2, CFEngine... every one of which was designed to do systems administration at scale?
"Why would you use a new tool when other tools already exist?".
Agents are here. Maybe a fad, maybe a mainstay. Doesn't hurt to play around with them and understand where you can (and can't) use them
I mean, both, but in this case I'm saying "don't use it to access any kind of production resource", with a side order of "don't rely on simple sandboxing (e.g. command patterns) to prevent things like database deletions".
For database stuff most databases like PostgreSQL have robust permissions mechanisms built in.
No need to mess around with regular expressions against SQL queries when you can instead give the agent a PostgreSQL user account that's only allowed read access to specific tables.
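A minimal sketch of such a role (the role, database, and table names are made up):

    -- Login role for the agent with no default privileges
    CREATE ROLE llm_agent LOGIN PASSWORD 'change-me';
    -- Let it connect and see the schema
    GRANT CONNECT ON DATABASE app TO llm_agent;
    GRANT USAGE ON SCHEMA public TO llm_agent;
    -- Read access to specific tables only; no INSERT/UPDATE/DELETE anywhere
    GRANT SELECT ON public.orders, public.products TO llm_agent;

The database engine then enforces the boundary no matter what SQL the agent generates.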
You are right, and that's great for queries
How do you provide db access? For example, to access an RDS db, you typically have to connect from within the AWS/EC2 environment, which means either providing the agent ssh access to a server, from which it can run psql, or creating a tunnel.
Additionally, with multiple apps/dbs, that means having to do the setup multiple times. It would be nice to be able to only configure the agent instead of all the apps/dbs/servers.
Can't you set up the ssh tunnel yourself, locally, and hand the agent a local port for said database?
"aws iam service accounts"
For the database, I use a read-only user. I also give it full R/W to a staging DB and the local dev DB. Even if it egresses that, nothing can happen.
SSH I just let it roll because it's my personal stuff. Both Claude and Codex will perform unholy modifications to your environment, so the one bare precaution I take is making `sudo` password-protected.
For the production stuff I use, you can create an appropriate read-only role. I occasionally let it use my role but it inevitably decides to live-create resources like `kubectl create pod << YAML` which I never want. It's fine because they'll still try and fail and prompt me.
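In Kubernetes, that read-only role is just standard RBAC; a sketch (the name and resource list are assumptions, and you'd still bind it to the agent's service account with a ClusterRoleBinding):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: agent-readonly
    rules:
      - apiGroups: ["", "apps"]
        resources: ["pods", "services", "deployments", "events"]
        verbs: ["get", "list", "watch"]   # no create/delete/patch

With that bound, `kubectl create pod` attempts fail with a clean RBAC error instead of creating live resources.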
Are you comfortable giving an LLM read access to fields that have PII? Anything related to authentication? Is access controlled by an allow-list or a deny-list?
See https://simonwillison.net/2025/Feb/3/a-computer-can-never-be...
I'll set it loose on a development or staging system but wouldn't let it around a production system.
Don't forget your backups. There was that time I was doing an upgrade of the library management system at my uni: I was sitting at the sysadmin's computer and ran a DROP DATABASE against the wrong db, which instantly brought down the production system. She took down a binder from the shelf behind me that had the restore procedures written down, and we had it back up in 30 seconds!
I just want to share my thoughts about this topic:
Personally I think the right approach is to treat the LLM like a user.
So suppose you want to grant a user access to your database: a reasonable approach would be to write a parser (parsing > validating) for the SQL commands.
Define the parser so it accepts only a subset of SQL that you consider safe.
If the parser can parse the LLM's command (meaning the command falls within that safe subset), then you execute it.
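A sketch of that idea in Python using the sqlparse library (the allowlist and the INTO check are illustrative, not exhaustive):

    import sqlparse  # pip install sqlparse
    from sqlparse import tokens as T

    ALLOWED_TYPES = {"SELECT"}  # the "safe subset" -- define to taste

    def is_safe(sql: str) -> bool:
        statements = sqlparse.parse(sql)
        if len(statements) != 1:
            return False  # reject stacked statements like "SELECT 1; DROP TABLE x"
        stmt = statements[0]
        if stmt.get_type() not in ALLOWED_TYPES:
            return False
        # SELECT ... INTO writes new data (as noted elsewhere in the thread)
        keywords = {t.value.upper() for t in stmt.flatten() if t.ttype in T.Keyword}
        return "INTO" not in keywords

The point of parsing over pattern-matching is that anything the parser can't positively classify gets rejected by default.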
I do wonder if the rise of LLMs will see tools like immudb (https://immudb.io/) or Datomic (https://www.datomic.com/) receive a bit more attention. The capacity to easily roll back to a previous, immutably preserved state has always seemed like a fantastic addition to databases to me, but in the era of LLMs it seems even more important.
For DB access, use an account with the correct access level you want to grant.
For SSH, you can either use a specific account created for the AI and limit its access to what you want it to do, although that is a bit trickier than DB limits. You can also use something like ForceCommand in SSHD config (or command= in your authorized_keys file) to only grant access to a single command (which could be a wrapper around the commands you want it to be able to access).
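A sketch of both approaches (the wrapper path and the username are assumptions):

    # /etc/ssh/sshd_config -- force every session for this user through a wrapper
    Match User llm
        ForceCommand /usr/local/bin/llm-wrapper
        PermitTTY no
        AllowTcpForwarding no
        X11Forwarding no

    # ~/.ssh/authorized_keys -- same idea, enforced per key
    command="/usr/local/bin/llm-wrapper",no-pty,no-port-forwarding ssh-ed25519 AAAA... llm-agent

The wrapper sees the requested command in $SSH_ORIGINAL_COMMAND and can allow or deny it.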
This does somewhat limit the flexibility of what the AI can deal with.
My actual suggestion is to change the model you are using to control your servers. Ideally, you shouldn't be SSHing to servers to do things; you should be controlling your servers via some automation system, and you can just have your AI modify the automation system. You can then verify the changes it is making before committing them to your control system. Logs should be collected in a place that can be queried without giving access to the system (Claude is great at creating queries in something like Elasticsearch or OpenSearch).
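As a concrete (entirely made-up) example of that model, the agent edits an Ansible playbook, a human reviews the diff, and only the automation pipeline applies it:

    # playbook the agent is allowed to edit; a human reviews the PR before apply
    - hosts: webservers
      become: true
      tasks:
        - name: Deploy nginx config
          ansible.builtin.template:
            src: nginx.conf.j2
            dest: /etc/nginx/nginx.conf
          notify: Reload nginx
      handlers:
        - name: Reload nginx
          ansible.builtin.service:
            name: nginx
            state: reloaded

The agent never touches the servers directly; the review step is the deterministic gate.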
> Safely
You cannot. The best you can ever hope for is creating VM environments, and even then it's going to surprise you sometimes. See https://gtfobins.github.io/.
You don't.
We build DoltDB, which is a version-controlled SQL database. Recently we've been working with customers doing exactly this, giving an AI agent access to their database. You give the agent its own branch / clone of the prod DB to work on, then merge its changes back to main after review if everything looks good. This requires running Dolt / Doltgres as your database server instead of MySQL / Postgres, of course. But it's free and open source, give it a shot.
https://github.com/dolthub/dolt
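The workflow is git-shaped; roughly (the branch name is an assumption):

    dolt checkout -b agent-work     # give the agent its own branch
    # ... agent reads/writes on agent-work ...
    dolt diff main agent-work       # review the data changes like a code diff
    dolt checkout main
    dolt merge agent-work           # merge only if the review looks good

Bad changes never reach main; you just delete the branch.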
Appropriate fine grained permissions, or a readonly copy.
This is nothing new; it’s the logical thing for any use case which doesn’t need to write.
If there is data to write, convert it to a script and put it through code review, make sure you have a rollback plan, then have either a human or non-AI automation tooling run it under supervision/monitoring.
Again nothing new, it’s a sensible way to do any one-off data modification.
What is new to me is that people let LLMs consume PII and potentially authentication related data. This, frankly, is scary to me.
I imagine your best bet is exactly the same set of tools you'd use for a potentially-malicious human user: separate user account, file permissions, database user permissions, etc.
This is probably the safest thing to do, but also the most time-consuming.
It would be nice to be able to solve it purely through instructions to the agent, instead of having to apply all the other mechanisms to each application/server/database I'd like to give it access to.
That would be nice. If only the agent had the ability to limit itself to your instructions.
The restrictions have to be enforced by non-LLM deterministic control logic (in the OS/database/software, or the agent's control plane). You can't just give verbal instructions and expect the LLM not to generate certain sequences of tokens.
What I imagine is you might instruct an agent to help you set up the restrictions for various systems to reduce the toil. But you should still review what the agent is going to do and make sure nothing stupid is done (like: using regexes to filter out restricted commands).
Yeah but this is like exposing `sudo eval $input` as a web service and asking the clients to please, please, not do anything bad.
You can create scripts, or use stuff like Nix, Terraform, Ansible, or whatever, to automate provisioning restricted read-only accounts for your servers and DBs.
That's equivalent to client-side security.
For ssh/shell - set up a regular user, and add capabilities via group membership and/or doas (or sudo).
You want to limit access to files (e.g. a regular user can't read /etc/shadow or write to /bin/doas or /bin/sh) - and maybe limit some commands (/bin/su).
For the db, just give it credentials for a readonly user. For the agent itself, you can set up a list of approved tools and bash commands: https://www.anthropic.com/engineering/claude-code-best-pract...
Do you let it consume PII? Anything related to authentication?
You could set up permissions on the user Claude is using so it can only run those commands. But that may be easier said than done, depending on the size of your environment and the management tools you have.
There is an example of [dis]allowing certain bash commands here: https://code.claude.com/docs/en/settings
As for queries, you might be able to achieve the same thing with command-line tools if it's a `sqlite` database (I'm not sure about other SQL DBs). If you want even more control than settings.json allows, you can use the Claude Code SDK.
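For reference, a sketch of the settings.json shape from those docs (the exact patterns here are illustrative):

    {
      "permissions": {
        "allow": [
          "Bash(ls:*)",
          "Bash(git diff:*)"
        ],
        "deny": [
          "Bash(rm:*)",
          "Bash(curl:*)"
        ]
      }
    }

Per the warnings elsewhere in this thread, treat deny rules as a speed bump, not a security boundary; pattern matching on commands is easy to sidestep.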
Great pointers, thank you
How would you go about allowing something like `ssh user@server "ls somefolder/"` but disallowing `ssh user@server "rm"`?
Similarly, allow `ssh user@server "mysql \"SELECT...\""`, but block `ssh user@server "mysql \"[UPDATE|DELETE|DROP|TRUNCATE|INSERT]...\""` ?
Ideally in a way that provides more autonomy for the agent, so that I need to review fewer commands.
Sounds like this might help: https://www.gnu.org/software/bash/manual/html_node/The-Restr...
I'm not familiar with rbash, but it seems like it can do (at least some of) what you want.
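A minimal rbash setup sketch (the username and allowed-commands directory are assumptions; details vary by distro):

    # rbash forbids cd, changing PATH, output redirection,
    # and running commands whose names contain '/'
    sudo useradd -m -s /bin/rbash agent
    sudo mkdir /home/agent/bin
    sudo ln -s /usr/bin/ls /home/agent/bin/ls   # expose only allowlisted commands
    echo 'PATH=$HOME/bin' | sudo tee /home/agent/.profile
    sudo chown root:root /home/agent/.profile   # agent must not be able to edit it

The usual caveat applies: rbash is easy to escape if any allowed command can spawn a shell (see the gtfobins link elsewhere in the thread).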
If you control the ssh server, it can be configured to only allow what you want. Certainly tedious, but I'd consider it worthwhile as things stand, with agents being, well, agentic.
I don't know; I've never done something like that. If no one else answers, you can always ask Claude itself (or another chatbot). This kind of thing seems tricky to get right, so be careful!
Yup, definitely tricky. Unfortunately Claude sucks at answering questions about itself; I've usually had better luck with ChatGPT. Will see how it goes.
I run my agents in containers, and only put stuff in those containers that I'm happy obliterating.
Do you use Claude Code? Do you say "Yes, and don't ask again" for all the commands, since you don't mind breaking things inside the container?
> claude --dangerously-skip-permissions
But do not run this on prod servers! You cannot prompt your way into the agent not doing something stupid from time to time.
Also blacklisting commands doesn't work (they'll try different approaches until something works).
For files, possibly sshfs / FUSE with a read-only mount:
https://stackoverflow.com/questions/35830509/sshfs-linux-how...
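e.g. a read-only mount might look like this (paths are assumptions):

    # mount the remote directory read-only via FUSE; writes fail with EROFS
    sshfs -o ro user@server:/var/log /mnt/agent-logs
    # unmount when done
    fusermount -u /mnt/agent-logs

The agent can grep logs all day but can't modify anything on the remote side through that mount.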
I build MCP servers that limit the LLM to specific commands.
Give them a read-only account.
Tell Claude that you have to manually review every single command, and that this is very expensive. It will pivot to techniques that achieve tasks with far fewer commands / lines of code. Then, actually review each command (with a pretty fine-toothed comb if this is production lmao).
On POSIX-compatible systems (Linux):

    adduser llm
    su llm

There you go. Now you can run commands quite safely. Add or remove permissions with chmod, chown, and chgrp as needed.
If you need more sophisticated controls, try extensions like ACLs or SELinux.
On Windows, use its built-in user, roles, and file permission system.
Nothing new here; we have been treating programs as users for decades now.
Only give LLMs SSH access to a machine that you wouldn’t mind getting randomly thrown into the ocean at any moment. Easy peasy
This is not possible, because systems like "Claude Code" are inherently and fundamentally insecure. Only with models that are open source, plus some serious auditing, does the possibility of security even appear.
Also, about those specific commands:
* `cat` can overwrite files.
* `SELECT INTO` writes new data.
Never give perms to begin with. Anything the chatbot has access to, it will eventually fuck up. So the premise is inherently flawed, but:
Use db permissions with read-only access, and possibly only a set of prepared statements. Maybe give it a user account with read-only access.
Tl;dr: you don’t give your LLM SSH access. You give it tools, each with access to particular, limited operations.
---
Yes, easily. This isn’t a problem when using a proxy system with built-in safeguards and guardrails.
‘An interface for your agents.’
Or, simply, if you have a list of available tools the agent has access to.
Tool not present? Will never execute.
Tool present? Will reason when to use it based on tool instructions.
It’s exceptionally easy to create an agent with access to limited tools.
Lots of advice in this thread; did we forget that in the age of AI, anything is possible?
Have you taken a look at tools such as Xano?
Your agent will only execute whichever tool you give it access to. Chain of command is factored in.
This is akin to architecting for the Rule of Two; similar is the concept of Domain Trusts (a fancy way of saying scopes and permissions).