Maybe this is unfairly off topic, but reading the intro, my thought was that things become brittle and hard to change not because of the inelegance of the abstractions, but because of the banal dynamics of work. Mostly, people act quickly toward short-term goals, without full understanding. They accept technical debt, they hand things off between people, and the numbers just never work out in favor of keeping things from getting kinda gross.
I've also felt that composition is unnecessarily hard in web & data infrastructure. I do game development as a hobby, and web + data technology is in the stone age compared to game engines when it comes to composability.
For example, say I develop some object (scene) in Godot Engine. It interacts with the environment using physics simulation, renders 3D graphics to the screen with some shaders and textures, and plays back audio from its 3D location.
I can send this scene to some other user of Godot, and it will naturally just work in their project, including colliding with their objects, rendering in their viewport (including lighting and effects), and the player will hear the location of the object spatially.
Of course there is much more you can do in Godot, too: network requests, event-driven updates, localization, cross-platform inputs, the list goes on. And all of these compose and scale in a manageable way as projects grow.
Plus the engine provides a common data and behavior backbone that makes it possible for a single project to have code in C++, C#, GDScript, and even other languages simultaneously (all of these languages talk with the engine, and the engine exposes state and behaviors to each language's APIs).
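The scene-tree idea behind that composability can be shown in a toy Python sketch. This is not Godot's actual API, and the class and method names here are invented for illustration; the point is just that every node speaks the same small protocol, so a subtree authored elsewhere participates in the host's update loop with no glue code.

```python
# Toy sketch of a scene tree (hypothetical names, not Godot's real API).
# Every node implements the same protocol, so any subtree composes into
# any parent and "just works" in the shared update loop.

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []

    def add_child(self, node):
        self.children.append(node)

    def update(self, dt):
        # Run this node's own behavior, then recurse into children.
        self._process(dt)
        for child in self.children:
            child.update(dt)

    def _process(self, dt):
        pass  # subclasses override with their own behavior


class PhysicsBody(Node):
    """A node that bundles its own (trivial) physics behavior."""

    def __init__(self, name, velocity=0.0):
        super().__init__(name)
        self.position = 0.0
        self.velocity = velocity

    def _process(self, dt):
        self.position += self.velocity * dt


# A "scene" built elsewhere drops into any tree and participates in the
# same update loop, no integration code required.
world = Node("world")
crate = PhysicsBody("crate", velocity=2.0)
world.add_child(crate)
world.update(0.5)
assert crate.position == 1.0
```

In the real engine the protocol is much richer (physics, rendering, audio, input all hang off the same tree), but the compositional mechanism is the same: a uniform node interface plus a single traversal.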
In fact, I've been thinking about making a Godot-inspired (or perhaps even powered) business application framework because it's just such a productive way of building complex logic and behavior in a way that is still easy to change and maintain.
So I imagine if Cambra can bring a similar level of composability for web & data software, it could dramatically improve the development speed and quality of complex applications.
Not sure if tools and technologies can solve accidental complexity.
In my opinion, a system that has been stable for years isn't 'mature' in a good sense. An exceptional system is one that can still change after many years in production.
I believe this is almost impossible to achieve for enterprise software, because nobody has an incentive to make the (huge) investment in long-term maintainability and changeability.
For me, consistent systematic naming, and prefixes/suffixes that make names unique, are a hint that a person is thinking about this or has experience maintaining old systems. This has a huge effect on how well you can search, analyze, find usages, understand, replace, and change.
That's an interesting post, most of which I agree with. I think the problems it describes were mostly solved (and then rejected by the industry) in the 2010s with stuff like .NET MVC & WebForms (Java had similar frameworks). You'd still have a relational database at the bottom, but everything else was written in .NET/C# under the MVC or WebForms model. Monoliths were the norm, and deployments were "just spin up a second server, and change the IP address when it's ready to go".
It was a very productive way to produce most software. But as soon as you want to do something off-piste, you pay the entire productivity penalty.
This resonates a lot if you’ve ever worked in system integration.
In practice, most of the complexity comes exactly from what’s described here: every system has a rich internal model, but the moment data crosses a boundary, everything degrades into strings, schemas, and implicit contracts.
You end up rebuilding semantics over and over again (validation, mapping, enrichment), and a lot of failures only show up at runtime.
I’m skeptical about “one model to rule them all”, but I strongly agree that losing semantics at system boundaries is the core problem.
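The degradation described above can be shown in a small sketch. The `Reading` type and its fields here are made up for illustration: inside one system the type carries semantics (what the number means, what values are valid), but the moment it crosses a JSON boundary it becomes an untyped bag of strings and numbers, and the receiver has to rebuild the validation and mapping from scratch.

```python
from dataclasses import dataclass
import json


# A "rich" internal model: the type knows this is a temperature
# in Celsius and enforces a physical invariant.
@dataclass
class Reading:
    sensor_id: str
    celsius: float

    def __post_init__(self):
        if self.celsius < -273.15:
            raise ValueError("below absolute zero")


reading = Reading(sensor_id="s-42", celsius=21.5)

# Crossing a boundary: the model degrades into a JSON string.
wire = json.dumps({"sensor_id": reading.sensor_id, "celsius": reading.celsius})

# The receiver gets back an untyped dict. The type, the unit, and the
# invariant are gone; the semantics must be rebuilt by hand.
raw = json.loads(wire)
restored = Reading(sensor_id=raw["sensor_id"], celsius=float(raw["celsius"]))
assert restored == reading
```

Every pair of systems that exchanges this data repeats the same reconstruction (validation, mapping, enrichment), and any mismatch between the two ends only surfaces at runtime.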
It’s one of the reasons AWS ended up happening, when you make REST APIs the only allowed contract.
Ok, this was a nice read. Very expressive, and I agreed with the majority of the points.

I'm not going to lie, I expected "and this is why we built <insert generic gpt wrapper>", but it never came.

Which leads me to my question, and forgive me for my impatience, but _what_ are you selling? Is it a platform? An IDE? Books? So many words, and I ended up with more questions than I started with.

Anywho, good read indeed.
> I'm not going to lie, I expected "and this is why we built <insert generic gpt wrapper>", but it never came.
Uh...
> Implementing it is more than I can do alone, which is why my cofounders, Daniel Mills and Skylar Cook, and I are starting Cambra. We are developing a new kind of programming system that rethinks the traditional internet software stack on the basis of a new model.
That answers nothing. What is a "programming system"? If it were a programming language, they'd say that.
Huh I thought they meant like a new paradigm.
I started getting into webdev using PHP almost 30 years ago. So I'm probably biased. But when you're developing on just one machine in one language and you can do most of the stuff you need to do within that one system, you can make progress very fast, and the system can support you coding fast (I'm not proud of it but I was live patching production code via SSH and refreshing a web page as fast as humanly possible to make sure it didn't break).
I believe there are several ways to achieve something analogous today, even though the technology we have access to (and our own demands) has grown exponentially in complexity. I am happy to see more people thinking about it.
[Side track: I am personally not a fan of "break it up into many tiny systems" (microservices, etc) since it removes that agility of logic/state moving around the system. I just see an attempt to codify the analog of a very large human organization.]
Now that AI lets a single person (and in some cases, no person at all!) write several orders of magnitude more code than they could before, the requirements of our systems will change too, and our old ways of working are cracking at the seams. In a way we're perhaps building up a whole new foundation, sending our AIs to run 50-year-old terminal commands. Maybe that's all we needed all along, but I do find it strange that AI is forced to work within a highly fragmented system, where 95%, if not 99%, of all startups that write code with AI while hiding it from the user are essentially following the recipe of: (1) launch a VM, (2) tell the AI to install Next.js, and good luck.
I too have a horse in this race and have come to similar conclusions as the article: there is a way to create primitives on top of bare metal that work really well for small and large applications alike, and let you express what you really wanted across compute/memory/network. And I believe that with AI we can go back to first principles and rethink how we do things, because this time the technology is not just for groups of humans. I find this really exciting!
If code > infra (wrt 'elegance' or whatever we're talking about), then just use code. I.e. build a monolith (or monoliths) as far as is practical.
right, I think DHH's "Majestic Monolith" is another good approach here
Em dashes, corny subheadings, numbered lists for no reason. I think some of this may be human-written or edited, but it isn't passing my smell test. Sorry if this is mostly or all human-written, but you have a very AI style.
The proposition reminds me of Rama by Red Planet Labs: https://redplanetlabs.com/
This sounds a lot like the original vision of darklang: https://darklang.com
Sounds like the typical "Distributed Monolith" antipattern. And we will see more of that, because "software is cheap": everybody will build microservices and find themselves in hell because nobody stopped to think about whether their domain context is well contained.
This reminds me of Rama [1] from Red Planet Labs
[1] https://redplanetlabs.com/programming-model
> What is Rama?

> Rama is a platform for building distributed backends as single programs. Instead of stitching together databases, queues, caches, and stream processors, you write one application that handles event ingestion, processing, and storage.
Oh good
> "we believe advances in programming language theory and database systems have opened a path that wasn’t available before"
Which is tantamount to waving one's hands and saying there's "New magic!(tm)"
... while standing next to a pile of discarded old magic that didn't work out.
This blog post says nothing about what makes Cambra's approach unique and likely to succeed; it is just a list of (valid) complaints about the status quo.
I'm guessing they want to build a "cathedral" instead of the current "bazaar" of components, perhaps like Heroku or Terraform, but "better"? I wish them luck! They're going to need it...