Hmm... at Q4_K_M, stock-style quantization retains roughly 99–99.8% of BF16 accuracy, and AutoRound pushes that to roughly 99.4% up to slightly above 100% on some tasks, so the gap is roughly 0.1–0.7 percentage points:

https://github.com/intel/auto-round/blob/main/docs/gguf_alg_...
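For what it's worth, those retention figures are just the quantized score divided by the BF16 score per task, and the "gap" is the difference between the two retentions in percentage points. A minimal sketch, with scores invented purely for illustration (not taken from the AutoRound docs):

    # Hypothetical scores, invented purely for illustration (not from the AutoRound docs).
    bf16     = {"mmlu": 74.2, "gsm8k": 81.0}
    stock_q4 = {"mmlu": 73.6, "gsm8k": 80.6}   # plain llama.cpp-style Q4_K_M
    auto_q4  = {"mmlu": 74.0, "gsm8k": 81.1}   # AutoRound-tuned Q4_K_M

    def retention(quant, ref):
        # Accuracy retained per task, as a percentage of the BF16 score.
        return {task: 100.0 * quant[task] / ref[task] for task in ref}

    stock = retention(stock_q4, bf16)
    ar    = retention(auto_q4, bf16)
    for task in bf16:
        # The "gap" is the difference between the two retention figures, in percentage points.
        print(f"{task}: stock {stock[task]:.1f}%  autoround {ar[task]:.1f}%  "
              f"gap {ar[task] - stock[task]:.2f} pp")

With these made-up numbers you get stock retention around 99.2–99.5%, AutoRound around 99.7–100.1%, and gaps of about 0.5–0.6 percentage points, which is the shape of the comparison being made above.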
Claims of "we preserve 99.9999%" of accuracy are made in practically every quantization paper. The whole subfield acts like it's totally fine that they are testing on datasets that these models have fully trained on.
If we were in any other subfield doing this would be considered cheating and get your paper rejected, but the quantization community really loves to spread FUD claiming that quantization doesn't harm models
Also, there's a similar dynamic with dense vs. sparse MoE models. There's a reason we keep getting dense model releases alongside the MoEs out of China.
Quantization is not free, it causes significant brain damage (especially on very long contexts), and the subfield has enough academic misconduct in it that it's actively screwing up the market. Don't believe me? Go ask your local financial analyst about the market's reaction to TurboQuant and then try to square that circle with this: https://openreview.net/forum?id=tO3ASKZlok (extreme and credible allegations of academic misconduct/fraud)
Most quant papers I've seen report non-trivial degradation on standard benchmarks, something like 1–10% relative to FP16/BF16, especially at 4 bits or lower. For example, I just opened a random paper: https://arxiv.org/pdf/2410.09426 (see Table 1).
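Worth noting that "X% degradation" in those tables usually means a relative drop from the FP16/BF16 score, which reads very differently from the 99%+ retention numbers quoted above. A quick sketch with made-up numbers (not the ones in Table 1):

    # Made-up numbers for illustration; see Table 1 of the linked paper for real ones.
    bf16_score = 70.0   # hypothetical benchmark accuracy at BF16
    int4_score = 66.5   # hypothetical accuracy after 4-bit quantization

    degradation = 100.0 * (bf16_score - int4_score) / bf16_score  # relative drop: 5.0%
    retention   = 100.0 * int4_score / bf16_score                 # equivalently: 95.0% retained
    print(f"degradation {degradation:.1f}%  ==  retention {retention:.1f}%")

A 5% relative drop is the same thing as 95% retention, so the framing alone can make the same result sound either alarming or negligible.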
You can try it with this model: https://hugston.com/models/56tps-tested-autoround-qwen35-35b... It's really well done and runs pretty fast with context up to 300k, at just 11.65 GB. Get the mmproj file as well for vision/image processing.