This is sort of a whole product, but it’s hardly managing the whole company. Financials? HR? Contracts? Pictures of the last team meeting?
It just looks like a normal frontend+backend product monorepo, with the only somewhat unusual inclusion of the marketing folder.
Yes but AI! AI!
I am actually eagerly waiting for someone to show the real deal: actually everything in a GitHub repo, including 'artifacts', or at least those artifacts which can't be reconstructed from the repo itself.
Maybe they could be encrypted, and you could say "well, it's everything but the encryption key, which is owned in physical form by the CEO."
There's a lot of power, I think, in having everything in one place. Maybe GitHub could add the notion of private folders? But now that's ACLs... probably pushing the tool way too far.
People talk about "one change, everywhere, all at once." That is a great way to break production on any API change. If you have a DB and more than one node, you will have the old system using the old schema and the new system using the new schema unless you design for forwards-backwards-compatible changes. While more obvious with a DB schema, it is true for any networked API.
At some point, you will have many teams. And one of them _will not_ be able to validate and accept some upgrade. Maybe a regression causes something only they use to break. Now the entire org is held hostage by the version needs of one team. Yes, this happens at slightly larger orgs. I've seen it many times.
And since you have to design your changes to be backwards compatible already, why not leverage a gradual roll out?
Do you update your app lock-step when AWS updates something? Or when your email service provider expands their API? No, of course not. And you don't have to lock yourself to other teams in your org for the same reason.
Monorepos are hotbeds of cross contamination and reaching beyond API boundaries. Having all the context for AI in one place is hard to beat though.
100%, this is all true and something you have to tackle eventually. Companies like this one (Kasava) can get away with it because, well, they likely don't have very many customers and it doesn't really matter. But when you're operating at a scale where you have international customers relying on your SaaS product 24/7, suddenly deploys having a few minutes of downtime matters.
This isn't to say monorepos are bad, but they're clearly naive about some things:
> No sync issues. No "wait, which repo has the current pricing?" No deploy coordination across three teams. Just one change, everywhere, instantly.
It's literally impossible to deploy "one change" simultaneously, even with the simplest n-tier architecture. As you mention, a DB schema is a great example. You physically cannot change a database schema and application code at the exact same time. You either have to ensure backwards compatibility or accept that there will be an outage while old application code runs against a new database, or vice-versa. And the latter works exactly up until an incident where your automated DB migration fails due to unexpected data in production, breaking the deployed code and causing a panic as on-call engineers try to determine whether to fix the migration or roll back the application code to fix the site.
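For what it's worth, the standard way out is the expand/contract (parallel change) pattern: split the schema change across several deploys so old and new application code can coexist at every step. A minimal sketch, with made-up table and column names, and plain SQL strings standing in for whatever migration tool you actually use:

    // Expand/contract sketch: each phase ships as its own deploy, so running code
    // never meets a schema it can't handle. Names and SQL here are hypothetical.
    const phase1Expand = [
      // Deploy 1: add the new column as nullable; old code simply ignores it.
      `ALTER TABLE accounts ADD COLUMN plan_id TEXT NULL`,
    ];
    const phase2Migrate = [
      // Deploy 2: backfill, then ship code that reads plan_id but still writes both.
      `UPDATE accounts SET plan_id = legacy_plan WHERE plan_id IS NULL`,
    ];
    const phase3Contract = [
      // Deploy 3: only after no running node reads legacy_plan anymore.
      `ALTER TABLE accounts DROP COLUMN legacy_plan`,
    ];
    export const rolloutPlan = [phase1Expand, phase2Migrate, phase3Contract];

Rollback also gets easier, since each phase is individually reversible.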
To be a lot more cynical: this is clearly an AI-generated blog post by a fly-by-night OpenAI-wrapper company, and I suspect they have few paying customers, if any, and they probably won't exist in 12 months. And when you have few paying customers, any engineering paradigm works, because it simply does not matter.
I’m not sure why you made the logical leap from having all code stored in a single repo to updating/deploying code in lockstep. Where you put your code (the repo) can and should be decoupled from how you deploy changes.
> you will have the old system using the old schema and the new system using the new schema unless you design for forwards-backwards compatible changes
Of course you design changes to be backwards compatible. Even if you have a single node and have no networked APIs. Because what if you need to rollback?
> Maybe a regression causes something only they use to break. Now the entire org is held hostage by the version needs of one team.
This is an organizational issue not a tech issue. Who gives that one team the power to hold back large changes that benefit the entire org? You need a competent director or lead to say no to this kind of hostage situation. You need defined policies that balance the needs of any individual team versus the entire org. You need to talk and find a mutually accepted middle ground between teams that want new features and teams that want stability and no regressions.
The point is that the realities of not being able to deploy in lockstep erode a lot of the claimed benefit the monorepo gives you in being able to make a change everywhere at once.
If my code has to be backwards compatible to survive the deployment, then having the code in two different repos isn’t such a big deal, because it’ll all keep working while I update the consumer code.
I think I disagree.
We have a monorepo; we use a server framework with automated code generation of API clients for each service, derived from OpenAPI.json. One change cascades into too many changes. We have a custom CI job that trawls git and figures out which projects changed (including dependencies) to compute which services need to be rebuilt. We may just not be at scale—thank God. We're a small team.
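A rough sketch of what such a change-detection job boils down to — the directory names and dependency map here are made up, and a real version needs a proper dependency graph rather than a prefix match:

    // Diff against the merge base and map touched paths to services needing a rebuild.
    import { execSync } from "node:child_process";

    const serviceDirs = ["backend", "frontend", "marketing"];
    // Hypothetical reverse-dependency map: a change in shared/ rebuilds its dependents.
    const dependents: Record<string, string[]> = { shared: ["backend", "frontend"] };

    const changedFiles = execSync("git diff --name-only origin/main...HEAD", { encoding: "utf8" })
      .trim()
      .split("\n")
      .filter(Boolean);

    const toRebuild = new Set<string>();
    for (const file of changedFiles) {
      const top = file.split("/")[0];
      if (serviceDirs.includes(top)) toRebuild.add(top);
      for (const dep of dependents[top] ?? []) toRebuild.add(dep);
    }

    console.log("services to rebuild:", [...toRebuild].join(", ") || "(none)");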
I really have never been able to grasp how people who believe that forward-compatible data schema changes are daunting can ever survive contact with the industry at scale. It's extremely simple to not have this problem. "design for forwards-backwards compatible changes" is what every grown-up adult programmer does.
I am a huge monorepo supporter, including "no development branches".
However there's a big difference between development and releases. You still want to be able to cut stable releases that allow for cherry-picks, for example, especially so in a monorepo.
Atomic changes are mostly a lie when talking about cross-API functions, e.g. a frontend talking to a backend. You should always define some kind of stable API.
We use a monorepo and feature-flag new features, which gives us control over deployment timing.
What do you use for feature flags?
Very interesting points. Would you mind sharing a few examples of when cherry-picking is necessary and why atomic changes are a lie?
I'm using a monorepo for my company across 3+ products and so far we're deploying from stable release to stable release without any issues.
Not sure what GP had in mind, but I have a few reasons:
Cherry picks are useful for fixing releases or adding changes without having to make an entirely new release. This is especially true for large monorepos which may have all sorts of changes in between. Cherry picks are a much safer way to “patch” releases without having to create an entirely new release, especially if the release process itself is long and you want to use a limited scope “emergency” one.
Atomic changes - assuming this is related to releases as well, it’s because the release process for the various systems might not be in sync. If you make a change where the frontend release that uses a new backend feature is released alongside the backend feature itself, you can get version drift issues unless everything happens in lock-step and you have strong regional isolation. Cherry picks are a way to circumvent this, but it’s better to not make these changes “atomic” in the first place.
Atomic changes are a lie in the sense that there is no atomic deployment of a repo.
The moment you have two production services that talk to each other, you end up with one of them being deployed before the other.
Do you take down all of your projects and then bring them back up at the new version? If not, then you have times at which the change is only partially complete.
Each deployment is a separate "atomic change". So if a one-file commit downstream affects 2 databases, 3 websites and 4 APIs (made-up numbers), then that is actually 9 different independent atomic changes.
I like keeping old branches, but a lot of places ditch them; I've never understood why. I also dislike git squash: it means you have to make a brand-new branch for your next PR, a waste of time when I should be able to pull down master / dev / main / whatever and merge it into my working branch. I guess this is another reason I prefer the forking approach of GitHub: let devs have their own sandbox and their own branches, and let them get their work done; they will PR when it's ready.
Squash results in a cleaner commit history. At least that's why we mandate it at my work. Not everyone feels the same about it, I guess.
Squashing only results in a cleaner commit history if you're making a mess of the history on your branches. If you're structuring the commit history on your branches logically, squashing just throws information away.
Not everyone develops and commits the same way and mandating squashing is a much simpler management task than training up everyone to commit in a similar manner.
Besides, people probably shouldn't make the commits within a PR atomic, but should commit as often as needed. It's a good way to avoid losing work. That is in tension with leaving behind clean commits, and squashing resolves it.
True but. There's a huge trade-off in time management.
I can spend hours OCDing over my git branch commit history.
-or-
I can spend those hours getting actual work done and squash at the end to clean up the disaster of commits I made along the way so I could easily roll back when needed.
it's also very easy to rewrite commit history in a few seconds.
If I'm rewriting history ... why not just squash?
But also, rewriting history only works if you haven't pushed code and are working as a solo developer.
It doesn't work when the team is working on a feature in a branch and we need to be pushing to run and test deployment via pipelines.
Good luck getting 100+ devs to all use the same logical commit style. And if tests fail in CI you get the inevitable "fix tests" commit in the branch, which now spams your main branch more than the meaningful changes. You could rebase the history by hand, but what's the point? You'd have to force push anyway. Squashing is the only practical method of clean history for large orgs.
What about separate, atomic, commits? Are they squashed too? Makes reverting a fix harder without impacting the rest, no?
PRs should be atomic; if they need to be separated for reverting, they should be multiple PRs.
I'm very fortunate to not have to use PR style forges at work (branch based, that is). Instead each commit is its own unit of code to review, test, and merge individually. I never touch branches anymore since I also use JJ locally.
I used to be against monorepos... Then I got really into Claude Code, and a monorepo makes sense for the first time in my life, specifically because of tools like Claude. I mean, technically I could open all the different repos from the parent directory, I suppose, but it's much nicer in one spot. Front-end and back-end changes are always in sync this way too.
I guess I could work with either option now.
Claude Code can actually work on multiple directories, so this is not strictly necessary! I do this when I'm working on a project whose dependencies also need to be refactored.
I changed my biggest project to a monorepo based on the same issue. I tinker with a lot of the bleeding-edge LLM tools and it was a nightmare trying to wire them all up properly so they would look at the different bits. So I refactored it into one just to make life easier for a computer.
Opening Claude from the parent directory is what I do, and it seems to work pretty well, but I do like this monorepo idea so that a single commit can change things in the front end and back end together, since this is a use case that's quite common
Yeah, I used to hate it, but as I was building a new project I was like, oh man, I can't believe I'm even thinking of doing this, but it makes more sense, LOL. Instead of prompting twice, I can prompt once in one shot and it has the context of both pieces too. I guess if I ever need them to be separate, I can always do that too.
Except of course rollout will not be atomic anyway, and making changes in a single commit might lead devs to make changes without thinking about backwards compat.
This is a systems problem that can and should be fixed in the system IMO, not by relying on devs executing processes in some correct order.
This is where unit testing / integration testing should be implemented as guard rails in my eyes.
Rollout should be within a minute. Let's say you ship one thing a day and 1/3 of those involve a backwards-incompatible API change. That's 1 minute of breakage per 3 days, i.e. it's broken about 0.02% of the time. Life is too short to worry about such things.
You might have old clients for several hours, days, or forever (mobile). This has to be taken into account, for example by aggressively forcing updates, which can be annoying for users, especially if their hardware doesn't support updating.
> Rollout should be within a minute
And if it's not, it breaks everything. This is an assumption you can't make.
backend-repo $ claude --add-dir ../frontend-repo
Opting for a monorepo because you don't want to alias this flag is.. something you can do, I guess.
What does the flag do? Just allow Claude to access that directory?
Is there any concern/issue regarding Claude’s context limit?
> When you ask Claude to "update the pricing page to reflect the new limits," it can...
wat. You are running the marketing page from the same repo, yet having an LLM make the updates? You have the data file available. Just read the pricing info from your config file and display it?
I built something like this at my previous startup, Pangea [1]. Overall I think looking back on our journey I'd sign up for it again, but it's not a panacea.
Here were the downsides we ran into
- Getting buy-in to do everything through the repo. We had our feature flags controlled via a YAML file in the repo as well, and pretty quickly people got mad at the time it took for us to update a feature flag (open MR -> merge MR -> have CI update the feature flag in our envs), and optimizing that took quite a while. It then made branch invariants harder to reason about (everything in the production branch is what is in our live environments, except for feature flags). So we moved that out of the monorepo into an actual service. (A sketch of the flag-file pattern follows this list.)
- CI time and complexity. When we started getting to around 20 services that deployed independently, GitLab started choking on the size of our CI configuration and we'd see a spinner for about 5 minutes before our pipeline even launched. Couple that with special snowflakes like the feature flag system I mentioned above, and eventually it got to the point that only a few people knew exactly how rollout edge cases worked. The juice was not worth the squeeze at that point (the juice being "the repo is the source of truth for everything").
- Test times. We ran some e2e UI tests with Cypress that required a lot of beefy instances, and for safety we'd run them every single time. Couple that with flakiness, and you'd have a lot of red pipelines when the goal was 100% green all the time.
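A minimal sketch of the flag-file-in-repo pattern from the first bullet — JSON here instead of YAML to keep it dependency-free; the file name, flag names, and CI step are hypothetical:

    // flags.json is versioned with the code, e.g. { "new-billing-page": true }.
    // A CI step applies it to the live environments on merge, which is what made
    // "what's in the production branch is what's live" hard to keep true once
    // flags moved out of the repo.
    import { readFileSync } from "node:fs";

    type Flags = Record<string, boolean>;
    const flags: Flags = JSON.parse(readFileSync("flags.json", "utf8"));

    export function isEnabled(name: string): boolean {
      return flags[name] ?? false;
    }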
That being said, we got a ton of good stuff out of it too. I distinctly remember one day that I updated all but 2 of our services to run on ARM without involving service authors and our compute spend went down by 70% for that month because nobody was using the m8g spot instances, which had just been released.
[1]: https://pangea.cloud/
Did you use turbo, buck or Bazel? Without monorepo tooling (and the blood, sweat, and tears it takes to hone them for your use cases), you start hitting all kinds of scaling limits in CI.
"Conclusion Our monorepo isn't about following a trend. It's about removing friction between things that naturally belong together, something that is critical when related context is everything.
When a feature touches the backend API, the frontend component, the documentation, and the marketing site—why should that be four repositories, four PRs, four merge coordination meetings?
The monorepo isn't a constraint. It's a force multiplier."
Thank you Claude :)
It wrote the code, so it's best placed to write the copy too.
This post is obviously (almost insultingly) written by AI. That being said, the idea behind the post is a good one (IaC taken to an extreme). This leaves me at a really weird spot in terms of how I feel about it.
I like this for adjacent things too.
Having the company website in the same repo means you can find branding material and company tone from blog posts, meaning you can generate customer slides and video demos.
Going further than docs + code, why not also store bugs, issues, etc.? I wonder.
I promise I only self promote when it is relevant, but this is exactly what I am building https://nimbalyst.com/ for.
We build a user-friendly way for non-technical users to interact with a repo using Claude Code. It's especially focused on markdown, giving red/green diffs on RENDERED markdown files which nobody else has. It supports developers as well, but our goal is to be much more user friendly than VSCode forks.
Internally we have been doing a lot of what they talk about here, doing our design work, business planning, and marketing with Claude Code in our main repo.
I'm curious about the author's experience with a monorepo for marketing. I've found that using static site generators with nontechnical PMs resulted in dissatisfaction and more work for engineers on tasks those PMs could have handled independently in Wordpress/Contentful. As a huge believer in monorepos, I'd love to hear how folks have approached incorporating non-engineers into monorepo workflows.
So the insane thing I do is I don't use worktrees. I use multiple Claude Code instances on the same project doing different things at the same time, like one editing the CSS for the login screen while another changes up the settings section of the project.
I have a question about monorepos. Do companies really expose their entire source code in one repo for their devs to download? I understand that people can always do bad things if they want, but with a monorepo, you are literally letting me download everything, right?
This is probably different between startups and enterprises. My background is purely startups, and I can't imagine not having access to 100% of the code for the company I work for.
Hosting a developer environment remotely that you SSH into is very common. That’s how you would approach working with a monorepo that has any serious size to it.
How do you guys share types between your frontend and backend? I've looked into tRPC, but don't like having to use their RPC system.
I do it naively. Maintain the backend and frontend separately. Roll out each change in a backwards compatible manner.
I used to dread this approach (it’s part of why I like Typescript monorepos now), but LLMs are fantastic at translating most basic types/shapes between languages. Much less tedious to do this than several years ago.
Of course, it’s still a pretty rough and dirty way to do it. But it works for small/demo projects.
So in short you don't share types. Manually writing them for both is easy, but also tedious and error prone.
Each layer of your stack should have different types.
Never expose your storage/backend type. Whenever you do, any consumers (your UI, consumers of your API, whatever) will take dependencies on it in ways you will not expect or predict. It makes changes somewhere between miserable and impossible depending on the exact change you want to make.
A UI-specific type means you can refactor the backend, make whatever changes you want, and have it invisible to the UI. When the UI eventually needs to know, you can expose that in a safe way and then update the UI to process it.
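A small sketch of that layering — the names are invented, but the point is that the UI only ever depends on the mapped shape:

    // Storage/backend type: free to change as the schema evolves.
    interface AccountRow {
      id: string;
      plan_id: string;
      created_at: string;     // ISO timestamp straight from the DB driver
      internal_flags: number; // never meant for consumers
    }

    // UI-facing type: the only contract the frontend sees.
    interface AccountView {
      id: string;
      plan: string;
      createdAt: Date;
    }

    // The mapper is the one place that knows both shapes, so backend refactors stay
    // invisible to the UI until you choose to expose them.
    function toAccountView(row: AccountRow): AccountView {
      return { id: row.id, plan: row.plan_id, createdAt: new Date(row.created_at) };
    }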
protobuf?
I like this a lot. Every time I am forced to open Notion or Slite, I just wish so much it would just be .md files in a git repository.
Well written; it anticipated my questions about pain points at the end, except one: have you hit a point yet where deploying is a pain because it's happening so frequently? I understand there's good separation of concerns, so a change in marketing/ won't cause conflicts or anything to impact frontend/, but I have to imagine eventually you'll hit that pain point. FWIW, I'm a big fan of a monorepo containing multiple services, and only breaking up the monorepo when it starts to cause problems. Sounds like the author is doing that.
Interesting approach to giving LLMs full context. My only concern is the "no workspaces" approach; manual cd && npm install usually leads to dependency drift and "it works on my machine" issues once you start sharing logic between the API and the frontend. It’s a great setup for velocity now, but I'm curious if you've hit any friction with types or shared utils without a more formal monorepo tool?
I leverage git submodules and avoid the same pitfalls of monorepo scale hell we had 20 years ago. Glad it works for you though. I feel like this is the path to ARR until you need to scale engineering beyond just you and your small team. The good news here is that the author has those domains segregated out as subfolders so in the future, he/she could just pull that out into its own repo if that time came.
Still averse to the monorepo, though I understand why it's attractive.
Honestly, from the enterprise IT perspective?
Fuck yes I love this attitude to transparency and code-based organization. This is the kind of stuff that gets me going in the morning for work, the kind of organization and utility I honestly aspire to implement someday.
As many commenters rightly point out, this doesn't run the human side of the company. It could, though, if the company took this approach seriously enough. My personal two cents, it could be done as a separate monorepo, provided the company and its staff remain disciplined in its execution and maintenance. It'd be far easier to have a CSV dictate employees and RBAC rather than bootstrapping Active Directory and fussing with its integrations/tentacles. Putting department processes into open documentation removes obfuscation and a significant degree of process politics, enabling more staff to engage in self-service rather than figuring out who wields the power to do a thing.
I really love everything about this, and I'd like to see more of it, AI or not. Less obfuscation and more transparency is how you increase velocity in any organization.
I love the idea. It's bold. But, I hate it from an information architecture perspective.
This is something that is, of course, super relevant given context management for agentic AI. So there's great appeal in doing this.
And today, it might even be the best decision. But this really feels like an alpha version of something that will have much better tooling in the near future. JSON and Markdown are beautifully simple information containers, but they aren't friendly for humans compared with something like Notion or Excel. Again I'll say: I'm confident that in the near future we'll start to see solutions emerge that structure documentation in a way that's friendly to both AIs and humans.
I really want the world to move on from monorepos to multirepos. Git submodules set multirepos back by 10 years, but they still make more sense. They are composable!
For me, integrating features that span multiple repositories means coordinating changes, multiple PRs, and switching branches on many repos to do testing. Quite time-consuming. I did use submodules, but I find a monorepo easier to manage.
My impression is that the world moved on from multirepo to monorepo and I vaguely remember that git submodules have some serious gotchas.
https://diziet.dreamwidth.org/14666.html#what-is-wrong-with-...
The thing I dislike about monorepos is that people don't ship stuff. Multiple versions of numpy and torch exist within the codebase, mitigated by bazel or some other build tool, instead of building binaries and deb packages and shipping actual products with well-documented APIs so that one team never needs to actually touch another team's code to get stuff done.
The people who say polyrepos cause breakage aren't doing it right. When you depend across repos in a polyrepo setup, you should depend on specific versions of things across repos, not the git head. Also, ideally, depend on properly installed binaries, not sources.
That makes sense when you depend on a shared library. However, if service A depends on endpoint x in service B, then you still have to work out synchronized deployments (or have developers handle this by making multiple separate deployments).
To be fair, this problem is not solved at all by monorepos. Basically, only careful use of gRPC (and similar technology) can help solve this… and it doesn’t really solve for application layer semantics, merely wire protocol compatibility. I’m not aware of any general comprehensive and easy solution.
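One partial mitigation, whatever the repo layout, is the tolerant-reader style that protobuf/gRPC encourage: clients ignore fields they don't know and default fields that aren't there yet. A sketch in plain TypeScript, with made-up field names:

    // Service A parsing a response from service B without breaking when B adds a
    // field (ignored) or when A runs against an older B (defaults applied).
    interface Quote {
      price: number;
      currency: string;
      discount: number; // only sent by newer deployments of service B
    }

    function parseQuote(raw: unknown): Quote {
      const obj = (raw ?? {}) as Record<string, unknown>;
      return {
        price: typeof obj.price === "number" ? obj.price : 0,
        currency: typeof obj.currency === "string" ? obj.currency : "USD",
        discount: typeof obj.discount === "number" ? obj.discount : 0,
      };
    }

As the parent notes, this keeps the wire format compatible but says nothing about whether the meaning of those fields stayed the same across versions.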