The key point for me was not the rewrite in Go or even the use of AI; it was that they started with this architecture:
> The reference implementation is JavaScript, whereas our pipeline is in Go. So for years we’ve been running a fleet of jsonata-js pods on Kubernetes - Node.js processes that our Go services call over RPC. That meant that for every event (and expression) we had to serialize, send over the network, evaluate, serialize the result, and finally send it back.
> This was costing us ~$300K/year in compute, and the number kept growing as more customers and detection rules were added.
For something so core to the business, I'm baffled that they let it get to the point where it was costing $300K per year.
The fact that this only took $400 of Claude tokens to completely rewrite makes it even more baffling. I can make $400 of Claude tokens disappear quickly in a large codebase, so if they rewrote the entire thing for $400 it couldn't have been that big: well within the range of something engineers could have migrated by hand in a reasonable time. Those same engineers will now have to review, understand, and eventually improve all of that AI-generated code, which will take time too.
I don't know what to think. These blog articles are supposed to be a showcase of engineering expertise, but bragging about having AI vibecode a replacement for a critical part of your system that was questionably designed and costing as much as a fully-loaded FTE per year raises a lot of other questions.
I mostly agree, but it's more appropriate to weigh contributions against an FTE's output rather than their input. If I have a $10m/yr feature I'm fleshing out now and a few more lined up afterward, it's often not worth the time to properly handle any minor $300k/yr boondoggle. It's only worth comparing to an FTE's fully loaded cost when you're actually able to hire to fix it, and that's trickier since it takes time away from the core team producing those actually valuable features and tends to result in slower progress from large-team overhead even after onboarding. Plus, even if you could hire to fix it, wouldn't you want them to work on those more valuable features first?
They were running a big Kubernetes infrastructure to handle all of these RPC calls.
That takes a lot of engineer hours to set up and maintain. This architecture didn't just happen; it took a lot of FTE hours to get it working and keep it that way.
Yeah, the situation from TFA doesn't make a lot of sense; I was just highlighting that it's not as clear-cut as "costs >1 FTE => fix it."
> If they rewrote the entire thing with $400 of Claude tokens it couldn't have been that big.
The original is ~10k lines of JS + a few hundred for a test harness. You can probably oneshot this with a $20/month Codex subscription and not even use up your daily allowance.
Yeah, it's like those posts "we made it 5,000x faster by actually thinking about what the code is doing."
I think this is pure piggyback marketing on what Cloudflare did with Next.js. In my experience, a company that raised $30MM a month ago is extremely unlikely to be investing energy in cost rationalization/optimization.
edit: saw the total raise not the incremental 30MM
>The approach was the same as Cloudflare’s vinext rewrite: port the official jsonata-js test suite to Go, then implement the evaluator until every test passes.
The first question that comes to mind is: who takes care of this now?
You had a dependency on an open source project. Now your translated copy (fork?) is yours to maintain: 13k lines of Go. How do you make sure it stays updated? Is this maintenance factored in?
I know nothing about JSONata or the problem it solves, but I took a look at the repo and there are 15 PRs and 150 open issues.
That's only important if the plan is to stay feature-compatible with the original going forward.
For this case, where it's used as an internal filtering engine, I expect the goal is fixing bugs that show up and occasionally adding a feature that's needed by this organization.
> expect the goal is fixing bugs that show up and occasionally adding a feature that's needed by this organization.
Even if we assume a clean and bug-free port, no compatibility required going forward, and a scope that doesn't involve security risks, that's already non-trivial, since it's a codebase no one has context on.
Probably not $500k worth of maintenance (because wtf were they doing in the first place), but I don't buy placing the ongoing cost at zero.
This case looks like pure marketing fluff rather than sound engineering, though.
It's all YOLO from here on out ... every major AI decision we're making today feels like a bet that AGI will eventually show up and clean up the mess.
The full translation took 7hrs and $400 in tokens. Applying diffs every quarter using AI is much easier and cheaper. Software engineering has completely changed.
Except there are two Go implementations already, and he burnt $500k per year running a Kubernetes cluster to parse JSON (???), so the total gain is -$500,000 × years - $400 + $1 (deducting a $1 prompt to use an existing implementation).
> the first question that comes to mind is: who takes care of this now?
probably another AI agent at their company, who I'm sure won't make any mistakes
I mean, my first question would be how good the test suite on this project is.
> This was costing us ~$300K/year in compute, and the number kept growing as more customers and detection rules were added.
Maybe I’m out of touch, but I cannot fathom this level of cost for custom lambda functions operating on JSON objects.
They said in the article that they were running up to 200 pods at a time. Doing some back of the envelope math, 200 pods at $300,000 year is about $0.17/hour, which is exactly what an EC2 c5.xlarge costs per hour (on demand). That has 4 vCPUs, so about 800 vCPUs during peak, with $0.0425/CPU-hour.
I do have some questions like:
* Did they estimate cost savings based on peak capacity, as though it were running 24x7x365?
* Did they use auto scaling to keep costs low?
* Were they wasting capacity by running a single-threaded app (Node-based) on multi-CPU hardware? (My guess is no, but anything is possible)
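A quick sketch of that back-of-envelope math, assuming the 200 pods run 24x7x365 at on-demand c5.xlarge pricing (illustrative assumptions, not the company's actual bill):

```go
package main

import "fmt"

func main() {
	const (
		annualCost = 300_000.0 // $/year, from the article
		pods       = 200.0     // peak pod count, from the article
		hoursYear  = 24 * 365.0
	)
	perPodHourly := annualCost / hoursYear / pods
	fmt.Printf("per-pod hourly: $%.3f\n", perPodHourly) // ~$0.171, right at c5.xlarge on-demand

	vcpus := pods * 4 // a c5.xlarge has 4 vCPUs
	fmt.Printf("peak vCPUs: %.0f\n", vcpus)
	fmt.Printf("per-vCPU hourly: $%.4f\n", perPodHourly/4)
}
```

Note this only matches the instance price if the fleet really does run at peak around the clock, which is exactly the first question above.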
At first I thought they were AWS Lambda functions; if those were over-provisioned for very high concurrency or something similar, $25k/month is in the realm of possibility.
But no, the post is talking about plain RPC calls to k8s pods running Docker images. For a saving of $300k/year there, their overall compute bill should be well above $100M/year.
Perhaps if it were Google-scale event volume for billions of daily users, paired with the most inefficient processing engine possible, zero caching, and very badly written rules, it might be possible.
Feels like it is just an SEO article designed to catch readers' attention.
This is where the cost came from.
>The reference implementation is JavaScript, whereas our pipeline is in Go. So for years we’ve been running a fleet of jsonata-js pods on Kubernetes - Node.js processes that our Go services call over RPC. That meant that for every event (and expression) we had to serialize, send over the network, evaluate, serialize the result, and finally send it back.
But either way, we're talking $25k/mo. That's not even remotely difficult to believe.
It has to be satire right? Like, you aren't out of touch on this. I get engineers maybe making the argument that $300k / year on cloud is the same as 1.5 devops engineers managing in-house solutions, but for just json parsing????
For numbers like that, I can never tell whether it's just a vastly larger-scale dataset than any that I've seen as a non-FAANG engineer, OR, a hilariously wasteful application of "mAnAgEd cLoUd sErViCeS" to a job that I could do on a $200/month EC2 instance with one Sinatra app running per core. This is a made-up comparison of course, not a specific claim. But I've definitely run little $40 k8s clusters that replaced $800/month paid services and never even hit 60% CPU.
I wonder if you've ever worked on a web service at scale. JSON serialization and deserialization is notoriously expensive.
They got a 1000x speedup just by switching languages.
I highly doubt the issue was serialization latency, unless they were doing something stupid like reserializing the same payload over and over again.
Well, for starters, they replace the RPC call with an in-process function call. But my point is anybody who's surprised that working with JSON at scale is expensive (because hey it's just JSON!) shouldn't be surprised.
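To make that concrete, here's a rough sketch of the two paths; `fieldEval` is a toy stand-in for a compiled JSONata expression and the loopback callback simulates the network hop, so none of this is the actual library's API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// fieldEval is a toy stand-in for a compiled JSONata expression that just
// extracts one field; the real Go port exposes a richer API.
type fieldEval struct{ field string }

func (f fieldEval) Eval(doc map[string]any) (any, error) { return doc[f.field], nil }

// evalOverRPC models the old path: marshal the event, cross a process
// boundary (simulated here by a callback), unmarshal the result. Every
// event pays two JSON round-trips plus the hop itself.
func evalOverRPC(call func([]byte) ([]byte, error), doc map[string]any) (any, error) {
	payload, err := json.Marshal(doc)
	if err != nil {
		return nil, err
	}
	resp, err := call(payload)
	if err != nil {
		return nil, err
	}
	var out any
	return out, json.Unmarshal(resp, &out)
}

func main() {
	doc := map[string]any{"severity": "high", "rule": "r42"}
	e := fieldEval{field: "severity"}

	// New path: a plain function call, no serialization at all.
	v, _ := e.Eval(doc)
	fmt.Println("in-process:", v)

	// Old path: even a loopback "RPC" does marshal + unmarshal twice.
	loopback := func(b []byte) ([]byte, error) {
		var d map[string]any
		if err := json.Unmarshal(b, &d); err != nil {
			return nil, err
		}
		r, _ := e.Eval(d)
		return json.Marshal(r)
	}
	v2, _ := evalOverRPC(loopback, doc)
	fmt.Println("over RPC: ", v2)
}
```

Both paths print the same answer; the point is everything the second one does per event that the first one doesn't.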
Well, everything is expensive at scale, and any serialization/deserialization step is going to be expensive if you do it enough. However, yes, I would be surprised: JSON parsing is pretty optimized now, and I suspect most "JSON parsing at scale is expensive" is really the fault of other parts of the stack.
It can be, but $500k/year is absurd. It's like they went from the most inefficient system it was possible to create to a regular, normal system that an average programmer could manage.
I have no idea if they are doing orders of magnitude more processing, but I crunch through 60GB of JSON data in about 3000 files regularly on my local 20-thread machine using nodejs workers to do deep and sometimes complicated queries and data manipulation. It's not exactly lightning fast, but it's free and it crunches through any task in about 3 or 4 minutes or less.
The main cost is downloading the compressed files from S3, but if I really wanted to I could process it all in AWS. It also could go much faster on better hardware. If I have a really big task I want done quickly, I can start up dozens or hundreds of EC2 instances to run the task, and it would take practically no time at all... seconds. Still has to be cheaper than what they were doing.
Would it be better or worse if I had that experience and still said it's stupid?
You didn't say it was stupid. If you had, I would have just ignored the comment. But you expressed a level of surprise that led me to believe you're unfamiliar with how much of a pain in the ass JSON parsing is.
I think OP’s point was surprise that a company would spend so much on such inefficient json parsing. I’m agreeing. I get that JSON is not the fastest format to parse, but the overarching point is that you would expect changes to be made well before you’re spending $300k on it. Or in a slightly more ideal world, you wouldn't architect something so inefficient in the first place.
But it's common for engineers to blow insane amounts of money unnecessarily on inefficient solutions for "reasons". Sort of reminds me of SaaS products offering 100 concurrent "serverless" WS connections for like $50/month; some devs buy into this nonsense.
The docs indicate there are already 2 other go implementations. Why not just use one of those? https://docs.jsonata.org/overview.html
Because his prompt said to implement it in Go, not to check whether a Go implementation already exists. They have been running Kubernetes clusters to parse JSON; this is not surprising.
This isn’t the first time I’ve read a ridiculous story like this on Hacker News. It seems to be a symptom of startups who suddenly get a cash injection with no clue how to properly manage it. I have been slowly scaling a product over the past 12 years, on income alone, so I guess I see things differently, but I could never allow such a ridiculous spend on something so trivial to reach even 1% of this level before squashing it.
I'm just kind of confused about what took them so long. So it was costing $300k a year, plus causing deployment headaches, etc.
But it's a relatively simple tool from the looks of it. It seems like there are many competitors, some already written in Go.
It's kind of weird that they waited so long to do this. Why even need AI? This looks like the sort of thing you could port by hand in less than a week (possibly even in a day).
Not saying it is a good thing, but an organization, especially if there has been a lot of turnover, can enter a state of status quo.
> it must have that architecture for a reason, we don't have enough knowledge about it to touch it, etc.
That, or they simply haven't had the time; cost can creep up over time. $300k is a lot though. Especially for just 200 replicas.
Seems wildly inefficient. I also don't understand why you wouldn't just bundle these with the application in question: have the Go service and the Node.js service in the same pod/container. They can even use Unix sockets; RPC between them should be pretty much instant (sub-ms).
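A minimal sketch of that sidecar-over-Unix-socket idea in Go; the echo server stands in for the Node.js process, and the socket path and line-delimited framing are made up for illustration:

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"os"
	"time"
)

// serveEcho stands in for the Node.js sidecar: it owns the Unix socket and
// answers each line-delimited request. In the real pod the jsonata-js
// process would listen here instead.
func serveEcho(sock string) (net.Listener, error) {
	os.Remove(sock)
	ln, err := net.Listen("unix", sock)
	if err != nil {
		return nil, err
	}
	go func() {
		for {
			c, err := ln.Accept()
			if err != nil {
				return
			}
			go func(c net.Conn) {
				defer c.Close()
				s := bufio.NewScanner(c)
				for s.Scan() {
					fmt.Fprintf(c, "evaluated:%s\n", s.Text())
				}
			}(c)
		}
	}()
	return ln, nil
}

// callSidecar is the Go-service side: one local write/read per request,
// no network hop and no per-call TCP handshake.
func callSidecar(sock, payload string) (string, error) {
	conn, err := net.Dial("unix", sock)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	fmt.Fprintln(conn, payload)
	return bufio.NewReader(conn).ReadString('\n')
}

func main() {
	sock := "/tmp/jsonata-demo.sock" // a shared emptyDir path inside the pod (illustrative)
	ln, err := serveEcho(sock)
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	start := time.Now()
	reply, err := callSidecar(sock, `{"event":1}`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("reply %q in %v\n", reply, time.Since(start)) // typically well under 1ms
}
```

You still pay serialization on this path, but you drop the network hop and the separate fleet to scale.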
If I had to guess… the same thing that's happening to a lot of the industry: the era of cheap money is over.
> The approach was the same as Cloudflare’s vinext rewrite: port the official jsonata-js test suite to Go, then implement the evaluator until every test passes.
This makes me wonder: for reimplementation projects like this that aren't lucky enough to have super-extensive test suites, how good are LLMs at taking existing code bases and writing tests for every single piece of logic, every code path? So that you can then do a "cleanish-room" reimplementation in a different language (or even the same language) using these tests?
Obviously the easy part is getting the LLMs to write lots of tests, which is then trivial to iterate on until they all pass on the original code. The hard parts are verifying that the tests cover all possible code paths and edge cases, and reliably triggering certain internal code paths.
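One common shape for this is differential testing: run both implementations on the same inputs and fail on any divergence, using fuzzing to explore paths the hand-written suite misses. A sketch in Go, where `refEval` and `portEval` are hypothetical shims (e.g. the reference driven through a Node.js subprocess) with toy bodies so it compiles:

```go
package main

import "fmt"

// refEval and portEval are hypothetical shims for the reference
// implementation and the port; the toy bodies here just make the
// sketch runnable.
func refEval(expr, doc string) (string, error)  { return expr + "|" + doc, nil }
func portEval(expr, doc string) (string, error) { return expr + "|" + doc, nil }

// differ runs both implementations on the same input and reports any
// divergence, in the result or in whether an error occurs at all.
func differ(expr, doc string) error {
	want, wantErr := refEval(expr, doc)
	got, gotErr := portEval(expr, doc)
	if (wantErr == nil) != (gotErr == nil) {
		return fmt.Errorf("error mismatch on %q: ref=%v port=%v", expr, wantErr, gotErr)
	}
	if wantErr == nil && want != got {
		return fmt.Errorf("divergence on %q: ref=%q port=%q", expr, want, got)
	}
	return nil
}

func main() {
	// Seed with cases from the ported suite; wrapping differ in a
	// `go test -fuzz` target would mutate these, and `go test -cover`
	// shows which evaluator paths the corpus actually reaches.
	seeds := [][2]string{
		{`$.name`, `{"name":"a"}`},
		{`$sum(values)`, `{"values":[1,2,3]}`},
	}
	for _, s := range seeds {
		if err := differ(s[0], s[1]); err != nil {
			panic(err)
		}
	}
	fmt.Println("all seeds agree")
}
```

Coverage tooling answers the "did we hit every path" half of the question empirically, though it still can't prove semantic equivalence.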
Congrats! This author found a sub-optimal microservice and replaced it with inline code. This is the bread and butter work of good engineering. This is also part of the reason that microservices are dangerous.
The bad engineering part is writing your own replacement for something that already exists. As other commenters here have noted, there were already two separate implementations of JSONata in Go. Why spend $400 to have Claude rewrite something when you can just use an already existing, already supported library?
For context, JSONata's reference implementation is 5.5k lines of JavaScript.
If you can incorporate Quamina or similar logic in there, you might be able to save even more… worth looking into, at least
Next maybe they will use a binary format instead of JSON.
Stop reading ahead.
Everyone is surprised at the $300k/year figure, but that seems on the low end. My previous work place spends tens of millions a year on GPU continuous integration tests.
The $300K/year figure is surprising because it was for something that didn't need to exist (RPC calls).
how many billions of compute are wasted because this industry can't align on some binary format across all languages and APIs and instead keep serializing and deserializing things
ASN.1 and its on-the-wire formats BER and DER have been available for 30+ years, and they are running on billions of devices (cryptography, SSL, etc.) and other critical infrastructure.
But it is very boring and stable, which means I can't tell the world about my wartime stories and write a blog about it.
A principal engineer spending his weekend vibe coding some slop at a rate of 13k lines of code in 7h to replace a vendor. Is this really the new direction we want to set for our industry? For the first time ever, I have had a CTO vibe coding something to replace my product [1] even though it cost less than a day of his salary. The direction we are heading makes me want to quit; everything points to software now being worthless.
[1] https://github.com/mickael-kerjean/filestash
These articles remind me so much of those old internet debates about "teleportation" and consciousness.
Your physical form is destructively read into data, sent via radio signal, and reconstructed on the other end. Is it still you? Did you teleport, or did you die in the fancy paper shredder/fax machine?
If vibe code is never fully reviewed and edited, then it's not "alive" and effectively zombie code?
> The reference implementation is JavaScript, whereas our pipeline is in Go. So for years we’ve been running a fleet of jsonata-js pods on Kubernetes - Node.js processes that our Go services call over RPC.
> This was costing us ~$300K/year in compute
Wooof. As soon as that kind of spend hit my radar for this sort of service I would have given my most autistic and senior engineer a private office and the sole task of eliminating this from the stack.
At any point did anyone step back and ask if jsonata was the right tool in the first place? I cannot make any judgements here without seeing real world examples of the rules themselves and the ways that they are leveraged. Is this policy language intentionally JSON for portability with other systems, or for editing by end users?
Your most autistic and senior engineer is now named Claude. Point him at nearly any task, pair-program with codex, and review the results.
Darn, I'd wished they improved one of the existing Go or Rust implementations.
As long as you are using JSON, you will be able to optimize.
Did you know that you can pass numbers up to 2 billion in 4 constant bytes instead of as a dynamic string averaging 20 bytes? Also, fun fact, you can cut your packets in half by not repeating the names of your variables in every packet: you can instead use a positional system where a field's position determines its type.
And you can do all of this with pre AI technology!
Neat trick huh?
No shit, you fixed the leak and it stopped leaking? The bottleneck was actually the bottleneck?! Instrumentation helped you pinpoint the issue?!?! I am flabbergasted