Idk, to me this is just redescribing what deep neural networks do without actually explaining why anything happens. I guess it "unifies" things, but I'm kinda over most unifying theories. Everything is Bayesian, everything is a graph or a group or some other fancy geometric structure, everything is a category. Ultimately the best framework is whatever is useful enough to explain what's happening in a way that lets a practitioner steer the model toward a desired outcome. In other words: where is the knob? The tool they share may be interesting, and I hope to play with it to see what happens at different levels of noise applied to the labels.
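Concretely, something like this is what I have in mind (a quick sklearn sketch of my own, nothing to do with the authors' tool): flip a growing fraction of training labels and compare train vs. test accuracy. If the reservoir story holds, train accuracy should stay near 100% at interpolation while test accuracy degrades roughly in step with the noise rate.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    rng = np.random.default_rng(0)
    for noise in [0.0, 0.1, 0.2, 0.4]:
        y_noisy = y_tr.copy()
        flip = rng.random(len(y_noisy)) < noise
        # Replace flipped labels with random ones (a few may coincide with the truth).
        y_noisy[flip] = rng.integers(0, 10, flip.sum())
        clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=2000,
                            random_state=0).fit(X_tr, y_noisy)
        print(f"noise={noise:.1f}  train={clf.score(X_tr, y_noisy):.3f}  "
              f"test={clf.score(X_te, y_te):.3f}")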
This is a fascinating mathematical framework, but the post title might be a bit of an overreach. I often wonder if "a theory of deep learning" could exist that could be stated succinctly and that could predict (1) scaling laws and (2) the surprising reliability of gradient descent.
Note that I said "predict" not "describe". It feels like we're still in the era of Kepler, not Newton.
The relevant paper: "A Theory of Generalization in Deep Learning". https://arxiv.org/abs/2605.01172
Interesting read. I remember the grokking paper when it came out, but I don't think I've ever reproduced that classic grokking loss curve myself on real data. Curious if others have seen it more often in practice.
This essay seems to be related to the paper "There Will Be a Scientific Theory of Deep Learning" [1] which was discussed here recently [2].
[1] https://arxiv.org/pdf/2604.21691
[2] https://news.ycombinator.com/item?id=47893779
A very fascinating read.
As a fellow Tufte CSS enjoyer: why is user-select turned off on the sidenotes? I'd really like to be able to copy-paste them.
Layout is fine but the font is atrocious.
Uppercase letters have a different stroke width than lowercase ones; it's like they are *B*old *L*ike this.
Not only that: tracking and kerning are basically non-existent.
Please don't use that open-source font.
You need real Bembo, not that piece of shit.
Does anyone happen to know what font this site is using? It looks really elegant.
It is a modified version of ET_Book called ET_Bembo:
https://github.com/DavidBarts/ET_Bembo
I love u. thanks!
Apparently it's the font used in Edward Tufte's books. It's on GitHub: https://edwardtufte.github.io/et-book/
The "Quantitative Display of Information", which I just checked, is using Monotype Bembo. So still Bembo, but a different version.
This is a beautifully written way of saying “Some parts of what the network memorizes affect test behavior, and some don’t.” But that’s not a theory of deep learning; a grand unified theory would explain why that split happens.
We're given a signal channel and a reservoir. Signal lives in the channel, noise lives in the reservoir, and the reservoir supposedly doesn’t show up at test time.
Okay, but then the obvious question: why would SGD put the right things in the right bucket?
If the answer is “because the reservoir is defined as the stuff that doesn’t transfer to test,” then this is close to circular.
The Borges/Lavoisier stuff is a tell. “We have unified the field” rhetoric should come after nontrivial predictions and results. Claiming to solve benign overfitting, double descent, grokking, implicit bias, estimating population risk from training data, avoiding a validation set, and, last but not least, skipping training by analytically jumping to the end adds up to 6 theory papers, 3 NeurIPS winners, and a $10B startup. Let's get some results before we tell everyone we unified the field. :) I hope you're right.
> why would SGD put the right things in the right bucket?
Think of it as a best fit curve and exceptions to that curve. The noise is essentially this set of exceptions that move points away from where they would otherwise fall on the curve.
Gradient descent wants to be able to make the smallest change that moves the most data points towards the curve. To do this it learns an arrangement where it can change, say, one parameter and have a bunch of points move at once. What does this correspond to? The big common patterns shared by many data points.
Most of the capacity gets soaked up modelling these sorts of common patterns, and after they have been learned the model starts adding exceptions that allow individual points to deviate from the curve.
Because they’re exceptions, they must not impact neighbouring points, or at most only ones within a very short distance. Otherwise they’d drive the error higher by impacting more points than they should. So you end up with very narrow ranges of features that are able to trigger the different sorts of noise.
How narrow they are is shaped by the training data: they’re exactly as narrow as needed not to raise the error. So, assuming the total population has the same distribution, they don’t get hit. Much.
At least, that’s what I take away from it.
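A toy version of that picture, in case it helps (my own illustration, not from the paper): fit an overparameterized MLP to a sine curve with a handful of corrupted points. If the narrow-exceptions story is right, the fit should deviate from sin(x) much more near the corrupted points than anywhere else.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    x = np.linspace(-3, 3, 200)
    y = np.sin(x)
    outliers = rng.choice(len(x), size=5, replace=False)
    y[outliers] += 3.0  # the "exceptions" to the curve

    net = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=20000,
                       n_iter_no_change=1000, random_state=0).fit(x[:, None], y)

    grid = np.linspace(-3, 3, 2000)
    pred = net.predict(grid[:, None])
    err = np.abs(pred - np.sin(grid))
    # Mark grid points within 0.1 of the nearest corrupted training point.
    near = np.min(np.abs(grid[:, None] - x[outliers][None, :]), axis=1) < 0.1
    print("mean deviation near outliers:", err[near].mean())
    print("mean deviation elsewhere:   ", err[~near].mean())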
If that's the case, a way to test the theory and understanding (assuming some parts of the reservoir and signal channel can be reliably identified) would be to prune the high-confidence reservoir, significantly reducing the model size while still getting good predictions. I don't believe the authors mention this (though I only skimmed the paper rather than reading it in detail, so I may be wrong).
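Roughly what I'd try, as a crude sketch (plain sklearn, and the unit-selection heuristic below is entirely made up; reliably identifying the reservoir is the hard part): train with known label noise, flag hidden units whose activation mass is concentrated on the mislabeled examples, zero them out of the output layer, and check whether test accuracy survives.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    rng = np.random.default_rng(0)
    flip = rng.random(len(y_tr)) < 0.2
    y_noisy = y_tr.copy()
    y_noisy[flip] = rng.integers(0, 10, flip.sum())

    clf = MLPClassifier(hidden_layer_sizes=(512,), max_iter=2000,
                        random_state=0).fit(X_tr, y_noisy)

    # First-layer ReLU activations on the training set.
    H = np.maximum(X_tr @ clf.coefs_[0] + clf.intercepts_[0], 0)
    # Fraction of each unit's activation mass coming from flipped examples.
    mass = H[flip].sum(axis=0) / (H.sum(axis=0) + 1e-12)
    suspect = mass > 2 * flip.mean()  # units unusually devoted to the noise

    print("pruning", suspect.sum(), "of", H.shape[1], "units")
    print("test acc before:", clf.score(X_te, y_te))
    clf.coefs_[1][suspect, :] = 0.0  # remove their contribution to the output
    print("test acc after: ", clf.score(X_te, y_te))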
These are the same complaints I had. It also felt like high-quality AI writing, partly because of style choices like "Benign overfitting is noise sitting in the reservoir at interpolation. XYZ is ..." and partly because it resembles the times I ended up with ChatGPT or Gemini producing very detailed and plausible reports about my own crackpot or vague-enough-to-be-useless ideas.
> The Borges/Lavoisier stuff is a tell.
Nah, the softer stuff seems like valuable outreach / good science communication for people who aren't up for the math. That probably includes lots of software engineers who are sick of dumb debates in forums and starting to dip into the real literature and listen to better authorities. More people should do this, really, since it's the only way to see past the marketing and hype from fully entrenched AI boosters or detractors. Neither of those groups is big on critical thinking, and they dominate most of the conversation.
Time and effort from experts who want to make things accessible is a gift! The paper is linked elsewhere in the thread if you want the no-frills version.
Admittedly there's probably some self-aggrandizing boasting here, but I think empirical verification of that Adam modification alone would be a meaningful contribution, unless that's prior work?