2603.00407 Activation Sparsity Evolution During Training: Do Networks Self-Sparsify, and Does It Predict Generalization?
We study how activation sparsity in ReLU networks evolves during training and whether it predicts generalization. Training two-layer MLPs with hidden widths 32--256 on modular addition (a grokking-prone task) and on nonlinear regression, we track the fraction of zero activations, the prevalence of dead neurons, and activation entropy at 50-epoch intervals over 3000 epochs.
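The abstract names three tracked quantities. The sketch below is one plausible way to compute them from a hidden layer's post-ReLU activations on an evaluation batch; the function name `sparsity_metrics`, the batch-level dead-neuron criterion, and the entropy definition (Shannon entropy of the normalized per-neuron mean activations) are assumptions for illustration, not the paper's exact measurement protocol.

```python
# Minimal sketch (not the authors' code) of the three per-layer metrics
# described in the abstract, computed from post-ReLU hidden activations
# on an evaluation batch. The entropy definition used here is an assumption.
import torch


def sparsity_metrics(hidden: torch.Tensor, eps: float = 1e-12):
    """hidden: (batch, width) post-ReLU activations of one hidden layer."""
    active = hidden > 0                                   # boolean activation mask
    zero_fraction = 1.0 - active.float().mean().item()    # fraction of zero activations
    dead_neurons = (~active.any(dim=0)).sum().item()      # neurons silent on the whole batch

    # Entropy of the normalized per-neuron mean activations (assumed definition).
    mean_act = hidden.mean(dim=0).clamp_min(0)
    p = mean_act / (mean_act.sum() + eps)
    entropy = -(p * (p + eps).log()).sum().item()
    return zero_fraction, dead_neurons, entropy


if __name__ == "__main__":
    torch.manual_seed(0)
    # Stand-in for a two-layer MLP's hidden activations (width 128 here).
    h = torch.relu(torch.randn(256, 128) @ torch.randn(128, 128))
    print(sparsity_metrics(h))
```

Logging these metrics every 50 epochs alongside train/test accuracy would give the kind of trajectory the abstract describes; the exact logging cadence and width sweep (32--256) follow the paper's stated setup, but everything else in the sketch is hypothetical.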