2603.00419 Symmetry Breaking in Neural Network Training: How Mini-Batch SGD Amplifies Asymmetric Readout from Symmetric Incoming Weights
We study how mini-batch stochastic gradient descent (SGD) changes hidden-layer symmetry when only the incoming hidden weights are initialized identically. We train two-layer ReLU MLPs on modular addition (mod 97), sweeping hidden widths $\{16, 32, 64, 128\}$ and initialization perturbation scales $\varepsilon \in \{0, 10^{-6}, 10^{-4}, 10^{-2}, 10^{-1}\}$.
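The setup can be made concrete with a short sketch. The following is a minimal reconstruction under stated assumptions: the abstract does not specify the input encoding, loss function, biases, or optimizer hyperparameters, so the one-hot encoding, cross-entropy loss, zero hidden biases, batch size, learning rate, and step count below are illustrative placeholders rather than the paper's exact configuration.

```python
# Minimal sketch of the sweep described above (assumptions flagged inline).
import torch
import torch.nn as nn

P = 97  # modulus for the modular-addition task


def make_model(hidden_width: int, eps: float, seed: int = 0) -> nn.Sequential:
    """Two-layer ReLU MLP whose incoming hidden weights start identical
    across hidden units, perturbed at scale eps; the readout layer keeps
    its default (asymmetric) initialization."""
    torch.manual_seed(seed)
    model = nn.Sequential(
        nn.Linear(2 * P, hidden_width),  # incoming hidden weights
        nn.ReLU(),
        nn.Linear(hidden_width, P),      # readout, left asymmetric
    )
    with torch.no_grad():
        W = model[0].weight  # shape: (hidden_width, 2 * P)
        row = W[0].clone()
        # Copy one row to every hidden unit, then add the eps-scaled perturbation.
        W.copy_(row.unsqueeze(0).expand_as(W))
        W.add_(eps * torch.randn_like(W))
        model[0].bias.zero_()  # assumption: identical (zero) hidden biases
    return model


def make_dataset() -> tuple[torch.Tensor, torch.Tensor]:
    """All P^2 pairs (a, b) with label (a + b) mod P, inputs as concatenated
    one-hot vectors (an assumed encoding; the abstract does not specify one)."""
    a, b = torch.cartesian_prod(torch.arange(P), torch.arange(P)).unbind(1)
    x = torch.cat(
        [nn.functional.one_hot(a, P), nn.functional.one_hot(b, P)], dim=1
    )
    return x.float(), (a + b) % P


def train(model, x, y, batch_size=128, lr=0.1, steps=2000):
    """Plain mini-batch SGD with cross-entropy loss (assumed hyperparameters)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        idx = torch.randint(0, len(x), (batch_size,))  # random mini-batch
        opt.zero_grad()
        loss_fn(model(x[idx]), y[idx]).backward()
        opt.step()
    return model


if __name__ == "__main__":
    # Sweep the widths and perturbation scales named in the abstract.
    x, y = make_dataset()
    for width in (16, 32, 64, 128):
        for eps in (0.0, 1e-6, 1e-4, 1e-2, 1e-1):
            model = train(make_model(width, eps), x, y)
```

Note that at $\varepsilon = 0$ the hidden units are exactly interchangeable at initialization, so any divergence between them must enter through the asymmetric readout weights, whose per-unit gradients differ; this is the mechanism the title refers to.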