Does anyone have Jeff Dean's sources for how he computed these numbers? What's the margin of error? How accurate are they now? Is there a set of numbers that also talks about memory bandwidth in GPUs? Are these numbers intel/amd only? How do they differ between an m1 architecture?
Well, it shouldn't be faster than "Read 1,000,000 bytes sequentially from memory" (741ns) which in turn shouldn't be faster than "Read 1,000,000 bytes sequentially from disk" (359 us).
That said, all those numbers feel a bit off by 1.5-2 orders of magnitude -- that disk read speed translates to about 3 GB/s which is well outside the range of what HDDs can achieve.
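A quick back-of-envelope check of that claim in Python, using the byte count and latencies quoted above:

```python
# Implied sequential-read bandwidths from the quoted latencies.
n_bytes = 1_000_000

mem_latency_s = 741e-9    # "from memory": 741 ns
disk_latency_s = 359e-6   # "from disk": 359 us

mem_bw = n_bytes / mem_latency_s     # bytes per second
disk_bw = n_bytes / disk_latency_s

print(f"memory: {mem_bw / 1e9:.1f} GB/s")   # ~1349.5 GB/s
print(f"disk:   {disk_bw / 1e9:.1f} GB/s")  # ~2.8 GB/s
```

2.8 GB/s is SSD territory at best; a spinning disk tops out around 0.2 GB/s sequential, which is where the "off by 1.5-2 orders of magnitude" impression comes from.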
// NIC bandwidth doubles every 2 years
// [source: http://ampcamp.berkeley.edu/wp-content/uploads/2012/06/Ion-stoica-amp-camp-21012-warehouse-scale-computing-intro-final.pdf]
// TODO: should really be a step function
// 1Gb/s = 125MB/s = 125*10^6 B/s in 2003
Which means that in 2026 we'll have seen 11 doublings since gigabit speeds in 2003, so we'll all have > terabit speeds available to us.
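As a sketch, here is what that extrapolation looks like in Python, including the step-function variant the TODO asks for. The 2003 baseline and 2-year doubling period come from the comment above; the function names are mine:

```python
BASE_YEAR = 2003
BASE_BPS = 125e6          # 1 Gb/s = 125 MB/s in 2003, per the comment above
DOUBLING_PERIOD = 2.0     # years per doubling

def nic_bandwidth_smooth(year):
    """Smooth exponential: bandwidth doubles every 2 years."""
    return BASE_BPS * 2 ** ((year - BASE_YEAR) / DOUBLING_PERIOD)

def nic_bandwidth_step(year):
    """Step function per the TODO: only completed doublings count."""
    doublings = int((year - BASE_YEAR) // DOUBLING_PERIOD)
    return BASE_BPS * 2 ** doublings

# 2026: 11 completed doublings since 2003 -> 2^11 * 1 Gb/s = 2.048 Tb/s
print(nic_bandwidth_step(2026) * 8 / 1e12, "Tb/s")
```

Whether real NICs kept doubling on schedule is another matter, but the arithmetic behind the "> terabit by 2026" claim checks out.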
This graph is also very helpful:
http://ithare.com/infographics-operation-costs-in-cpu-clock-...
It raises so many questions. The only one I want the answer to is: Will these numbers help me reach the Doherty threshold?
> Send 2,000 bytes over commodity network: 5ns
Shouldn't this be 5µs?
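Plain wire-time arithmetic supports that suspicion. The link speeds below are illustrative assumptions, not numbers from the chart:

```python
payload_bits = 2_000 * 8   # "Send 2,000 bytes over commodity network"

for gbps in (1, 10, 100):
    t_us = payload_bits / (gbps * 1e9) * 1e6   # microseconds on the wire
    print(f"{gbps:>3} Gb/s: {t_us:.2f} us")

# 5 ns would require 16,000 bits / 5e-9 s = 3.2 Tb/s -- not a commodity NIC.
```

So ~5 µs is in the right ballpark for a few-Gb/s "commodity" link once you add per-packet overhead, while 5 ns isn't physically plausible.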
https://brenocon.com/dean_perf.html indicates the original set of numbers were more like 10us, 250us, and 30ms.
And it links to https://github.com/colin-scott/interactive_latencies, which seems to extrapolate progress from 14 years ago (that's where the NIC-bandwidth comment quoted at the top comes from).
You are right, but my comment was about a trivial observation: 1 green square is 10µs, so half a green square should be 5µs (not 5ns).
So I guess it's a typo, but it makes me doubt the other numbers.
I don't think these numbers mean much to me, but I didn't know this site existed. What an excellent idea.
Can we also add: RDMA (RoCEv2): ~2.5µs?