Okay, I'll ask the dumb question: Couldn't you also reduce the number of layers per container? Sure, if you can reuse layers you should, but unless you've done something very clever like 1 package per layer I struggle to think that 50 is really useful?
> unless you've done something very clever like 1 package per layer I struggle to think that 50 is really useful?
1 package per layer can actually be quite nice, since any package update then only affects that layer, meaning that downloading container updates uses much less network bandwidth. This is nice for things like bootc [0] that are deployed on the "edge", but less useful for things deployed in a well-connected server farm.
[0]: https://bootc-dev.github.io/bootc/
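As a toy illustration of why this saves bandwidth (a sketch in Go, with made-up package names and a hypothetical local cache; real clients hash the layer tarballs): registries address layers by content digest, and a client only downloads digests it doesn't already have.

package main

import (
	"crypto/sha256"
	"fmt"
)

// digest stands in for an OCI layer digest; real layers hash the tar bytes.
func digest(content string) string {
	return fmt.Sprintf("sha256:%x", sha256.Sum256([]byte(content)))
}

// layersToPull returns the manifest digests missing from the local cache.
func layersToPull(manifest []string, cache map[string]bool) []string {
	var missing []string
	for _, d := range manifest {
		if !cache[d] {
			missing = append(missing, d)
		}
	}
	return missing
}

func main() {
	// v1 image: a base layer plus one layer per package (hypothetical names).
	v1 := []string{digest("base:1.0"), digest("openssl-3.0"), digest("curl-8.5")}

	// Local cache after pulling v1.
	cache := map[string]bool{}
	for _, d := range v1 {
		cache[d] = true
	}

	// v2 updates only openssl; every other layer is byte-identical.
	v2 := []string{digest("base:1.0"), digest("openssl-3.1"), digest("curl-8.5")}

	// Only the openssl layer's digest is new, so only it gets downloaded.
	fmt.Println(layersToPull(v2, cache))
}

With one big squashed layer, any package update would change that single digest and force a full re-download; per-package layers shrink the delta to just the changed package.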
It doesn't really work this way?
It's called a layer because each layer on top depends on the layers below.
If you change the package defined in the bottom-most layer, all 49 above it are invalidated and need to be re-pulled or re-built.
> If you change the package defined in the bottom-most layer, all 49 above it are invalidated and need to be re-pulled or re-built.
I also initially thought that was the case, but some tools are able to work around it [0] [1] [2]. I have no idea how it works, but it works pretty well in my experience.
[0]: https://github.com/hhd-dev/rechunk/
[1]: https://coreos.github.io/rpm-ostree/container/#creating-chun...
[2]: https://coreos.github.io/rpm-ostree/build-chunked-oci/
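A hedged guess at the mechanism, going by how these tools describe themselves: the built image is re-partitioned into deterministic chunks (e.g., one per package), so a rebuild with one changed package yields byte-identical, digest-identical chunks for everything else, independent of layer ordering and build history. A toy Go sketch of that idea (the types and grouping rule are illustrative, not the tools' actual code):

package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// A file in the flattened image tree, tagged with its owning package.
type file struct{ path, pkg, content string }

// chunkDigests groups files by package, serializes each group in sorted
// path order, and digests it. The serialization is deterministic, so a
// chunk's digest depends only on its own files, not on build order.
func chunkDigests(files []file) map[string]string {
	groups := map[string][]file{}
	for _, f := range files {
		groups[f.pkg] = append(groups[f.pkg], f)
	}
	out := map[string]string{}
	for pkg, fs := range groups {
		sort.Slice(fs, func(i, j int) bool { return fs[i].path < fs[j].path })
		h := sha256.New()
		for _, f := range fs {
			fmt.Fprintf(h, "%s\x00%s\x00", f.path, f.content)
		}
		out[pkg] = fmt.Sprintf("sha256:%x", h.Sum(nil))
	}
	return out
}

func main() {
	v1 := []file{
		{"/usr/lib/libssl.so", "openssl", "v3.0"},
		{"/usr/bin/curl", "curl", "v8.5"},
	}
	v2 := []file{
		{"/usr/lib/libssl.so", "openssl", "v3.1"}, // only openssl changed
		{"/usr/bin/curl", "curl", "v8.5"},
	}
	d1, d2 := chunkDigests(v1), chunkDigests(v2)
	fmt.Println("curl chunk unchanged:", d1["curl"] == d2["curl"])
	fmt.Println("openssl chunk changed:", d1["openssl"] != d2["openssl"])
}

So even though the build cache is invalidated, clients comparing manifests digest-by-digest only re-pull the chunks whose bytes actually changed.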
It's not a dumb question. It seems like with these supposedly high-tech enterprise solutions, so much churn goes into doing something very complex and impressive, like investigating kernel-level performance and figuring out the kernel specifics that are causing slowdowns. Instead, they could put that talent into just writing software without containers that maxes out any EC2 instance at delivering streamed content, and then you don't have to worry about why your containers take so long to load.
I have seen these comments quite a bit, but they gloss over a major feature of a large company.
In a large company you can have thousands of developers just coding away at their features without worrying about how any of it runs. You can dislike that, but that's how it goes.
From a company perspective this is preferable, as those developers are supposedly focused on building the things that make the company money. It also allows you to hire people who might be good at that but have no idea how the deployment actually works or how to optimize it. Meanwhile, with all code running in roughly the same way, the operations side gets easier.
When the company grows and you're dealing with thousands of people contributing code, these optimizations might save a lot of money and time. But those savings might be peanuts compared with every ten devs coming up with their own deployment and the ops overhead of that.
I am not familiar with the nitty-gritty of the container build process, so maybe I'm just not the intended audience, but this part is particularly unclear to me:
> To avoid the costly process of untarring and shifting UIDs for every container, the new runtime uses the kernel’s idmap feature. This allows efficient UID mapping per container without copying or changing file ownership, which is why containerd performs many mounts

Why does using idmap require performing more mounts?

This kind of id mapping works as a mount option (it can also be used on bind mounts). You give it a mapping of "id in filesystem on disk" to "id to return to filesystem APIs" and it's all translated on the fly.
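To make that concrete, here is a minimal sketch of creating an idmapped mount via the new Linux mount API (kernel 5.12+) using golang.org/x/sys/unix. The paths and PID are placeholders, and the user namespace holding the desired mapping is assumed to have been prepared by a helper process; this is not containerd's actual code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// idmapMount mounts src at dst with UIDs/GIDs translated through the
// mapping held by the user namespace at usernsPath. Nothing is copied
// and no file ownership on disk changes.
func idmapMount(src, dst, usernsPath string) error {
	// Detach a clone of the source mount.
	fd, err := unix.OpenTree(unix.AT_FDCWD, src,
		unix.OPEN_TREE_CLONE|unix.OPEN_TREE_CLOEXEC)
	if err != nil {
		return fmt.Errorf("open_tree: %w", err)
	}
	defer unix.Close(fd)

	// The mapping comes from a user namespace, e.g. /proc/<pid>/ns/user
	// of a process set up with the container's UID range.
	userns, err := os.Open(usernsPath)
	if err != nil {
		return err
	}
	defer userns.Close()

	// Attach the idmapping to the detached mount.
	attr := unix.MountAttr{
		Attr_set:  unix.MOUNT_ATTR_IDMAP,
		Userns_fd: uint64(userns.Fd()),
	}
	if err := unix.MountSetattr(fd, "", unix.AT_EMPTY_PATH, &attr); err != nil {
		return fmt.Errorf("mount_setattr: %w", err)
	}

	// Graft the idmapped mount into the filesystem at dst.
	if err := unix.MoveMount(fd, "", unix.AT_FDCWD, dst,
		unix.MOVE_MOUNT_F_EMPTY_PATH); err != nil {
		return fmt.Errorf("move_mount: %w", err)
	}
	return nil
}

func main() {
	// Placeholder invocation; needs CAP_SYS_ADMIN and a prepared userns.
	if err := idmapMount("/images/app/rootfs", "/run/ctr-1/rootfs",
		"/proc/12345/ns/user"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Each container gets its own cheap idmapped mount of the same on-disk image, which is why a runtime using this ends up doing one extra mount per container rather than chowning or copying files.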
Articles like this are pretty cool. It’s so interesting to see the behind-the-scenes work that happens whenever we watch a Netflix movie.
So using the "old" container architecture could have been better than wasting time implementing the new architecture, dealing with the performance issues and wasting more time fixing the issues?
Interesting, another case of removing HT (hyperthreading) improving performance. Reminds me of doing that on Intel CPUs from a few generations ago.
It took them this long to move from docker to containerd?
- Can someone kindly explain why there are 2 websites that both claim to be the Netflix tech blog?
- website 1 https://netflixtechblog.medium.com/
- website 2 https://netflixtechblog.com/
I mean, Netflix is dealing with big, important things like container scaling, creating a million microservices talking to each other, and so on. Having multiple tech blog platforms on Medium is not something they have a spare moment to think about.
Why is this so badly AI-written? Netflix can surely pay for writers.
At this point I refuse to read any content in the AI format of:
- The problem
- The solution
- Why it matters