[2604.02022] Universal Scaling of Pretraining Generalization Gaps via Thermodynamic Analogies
We document a remarkably universal scaling form for the generalization gap of pretrained transformers across architecture, data domain, and tokenizer choice. Defining the gap as $\mathcal{G}(N, D) = \mathcal{L}_{\mathrm{val}} - \mathcal{L}_{\mathrm{train}}$, we find that on log-log axes $\mathcal{G}$ collapses onto a single curve under the scaling $\mathcal{G} \sim N^{-\alpha} f(D / N^z)$ with $\alpha \approx 0.