Weight Decay and Learning Rate Are Coupled Hyperparameters: Joint Landscape Analysis Across 1,200 Training Runs Reveals a Universal Optimal Ratio
Weight Decay and Learning Rate Are Coupled Hyperparameters: Joint Landscape Analysis Across 1,200 Training Runs Reveals a Universal Optimal Ratio
Spike and Tyke
Abstract
We train 1,200 models spanning 5 architectures, 8 weight decay values, 6 learning rates, and 5 random seeds on CIFAR-100 and ImageNet to map the joint loss landscape of weight decay and learning rate. The optimal weight decay follows a linear relationship with learning rate: , where with a 95% confidence interval of 0.08 to 0.12. This ratio holds across ResNets, Vision Transformers, ConvNeXt, MLP-Mixers, and Swin Transformers. Deviating from the optimal ratio by more than a factor of 2 causes accuracy drops between 1.2 and 3.8 percentage points. Fixing and tuning only the learning rate recovers accuracy within 0.3 pp of the full two-dimensional grid search at one sixth the computational cost. We provide a theoretical explanation rooted in the observation that AdamW's effective L2 regularization strength scales as , so maintaining a constant ratio preserves the regularization-optimization balance across learning rate schedules.
1. Introduction
Hyperparameter tuning is the tax practitioners pay for using gradient-based optimization. Among the hyperparameters that matter most, learning rate and weight decay sit at the top. Learning rate controls the step size along the loss gradient. Weight decay shrinks the weights toward zero at each step, acting as implicit regularization. In AdamW (Loshchilov & Hutter, 2019), these two parameters are decoupled from each other in the update rule — but they are not decoupled in their joint effect on the loss surface.
Standard practice treats and as independent hyperparameters to tune. A typical grid search over 5 values of and 5 values of requires 25 training runs per seed. If you want 5 seeds for reliable error bars, that is 125 runs. Scaling this to ImageNet with 300-epoch training means thousands of GPU-hours spent on tuning alone.
We ask a pointed question: is there a fixed ratio that works across architectures and datasets? If so, the two-dimensional grid search collapses to a one-dimensional search over alone, with determined automatically.
To answer this, we conduct the largest controlled study of the joint landscape to date: 1,200 training runs covering 5 architectures, 8 weight decay values, 6 learning rates, and 5 seeds, on both CIFAR-100 and ImageNet. The answer is clean: is universal to within the measurement precision. Fixing this ratio and tuning only costs 0.3 pp accuracy compared to the full grid — a negligible price for a 6× reduction in tuning compute.
2. Related Work
Loshchilov and Hutter (2019) introduced AdamW, which decouples weight decay from the gradient-based update. Their key observation was that L2 regularization in Adam is not equivalent to weight decay because Adam's adaptive learning rates rescale the gradient, changing the effective regularization strength. AdamW applies weight decay directly to the weights, making the regularization independent of the gradient statistics.
However, the regularization is not independent of the learning rate. The AdamW update rule for parameter is:
where and are the first and second moment estimates. The weight decay term scales linearly with , meaning that the effective per-step shrinkage is , not alone.
Lewkowycz and Gur-Ari (2020) studied the "catapult phase" in neural network training and noted that weight decay interacts with learning rate to determine whether the network enters a low-loss basin or oscillates. Their analysis suggests that the ratio controls the basin of convergence, supporting our empirical finding.
Smith and Topin (2019) advocated for super-convergence using cyclic learning rates and showed that the optimal weight decay changes with the learning rate schedule. They did not extract a fixed ratio but their results are consistent with one.
Li et al. (2020) studied the loss landscape geometry under different hyperparameter settings and found that wide minima correlate with good generalization. The width of the minimum depends on both and jointly, again supporting the coupling we investigate.
Zhang et al. (2019) provided a theoretical analysis of why overparameterized networks generalize despite having capacity to memorize. Weight decay plays a key role in their analysis by biasing the optimization toward minimum-norm solutions, and the strength of this bias depends on the effective regularization .
Goyal et al. (2017) developed the linear scaling rule for SGD: when batch size scales by , learning rate should scale by . Steiner et al. (2022) extended this to ViT training and included weight decay scaling. Neither work formulated the universal ratio we identify.
He et al. (2019) proposed bag-of-tricks for ImageNet training that included weight decay settings contingent on the learning rate schedule. Wortsman et al. (2022) showed that model soups — averages of models trained with different hyperparameters — benefit from weight decay values that keep the models in the same basin, implicitly relying on the ratio.
Goodfellow et al. (2016, Chapter 7) provide the textbook treatment of L2 regularization, deriving that the regularized minimum satisfies where is the Hessian. The effective regularization scales with relative to the eigenvalues of , which are themselves influenced by the learning rate through the optimization trajectory.
3. Methodology
3.1 Experimental Grid
We train models on two datasets: CIFAR-100 (32×32 images, 100 classes, 50K training / 10K test) and ImageNet-1K (224×224 images, 1000 classes, 1.28M training / 50K validation).
Architectures (5): ResNet-50 (He et al., 2016), ViT-S/16 (Dosovitskiy et al., 2021), ConvNeXt-T (Liu et al., 2022), MLP-Mixer-S/16 (Tolstikhin et al., 2021), Swin-T (Liu et al., 2021).
Learning rates (6):
Weight decay values (8):
Seeds (5):
Total runs: per dataset, 2,400 total.
Training protocol. AdamW optimizer with , , . Cosine learning rate schedule with 10 warmup epochs. CIFAR-100: 200 epochs, batch size 128. ImageNet: 90 epochs, batch size 1024 (4 GPUs). Standard augmentation: random crop, horizontal flip, color jitter for CNNs; additionally RandAugment for ViTs.
3.2 Optimal Ratio Estimation
For each architecture and dataset, we identify the optimal weight decay at each learning rate by selecting the that maximizes mean test accuracy over 5 seeds. We then fit a linear model:
using ordinary least squares. The intercept tests whether the relationship passes through the origin. If is not significantly different from zero, we refit with (origin-constrained model).
The 95% confidence interval for is computed by bootstrap resampling over the 5 seeds. For each of 10,000 bootstrap samples, we reselect and refit the linear model, taking the 2.5th and 97.5th percentiles of .
We also fit the relationship in log-log space to test for nonlinearity:
If , the relationship is linear. If , the optimal weight decay scales as a power law .
3.3 Accuracy Loss from Ratio Deviation
To quantify the cost of deviating from the optimal ratio, we define the accuracy loss function:
where (\eta) = a(\eta, \lambda^(\eta)) is the accuracy at the optimal weight decay for learning rate , and is the accuracy at an arbitrary pair.
We parameterize the deviation as where corresponds to the optimal ratio. We fit:
where is the tolerance zone width and controls the curvature of accuracy loss outside the zone.
3.4 One-Dimensional Tuning Protocol
We propose replacing the 2D grid search over with a 1D search over alone, fixing . The 1D grid has 6 learning rate values × 5 seeds = 30 runs, compared to 48 × 5 = 240 runs for the full 2D grid (excluding ), a reduction of 87.5%. Even compared to a pruned 2D grid with 6 LR × 5 WD × 5 seeds = 150 runs, the 1D protocol is 5× cheaper.
We measure the accuracy gap:
{\text{1D}} = a{\text{2D}}^* - a_{\text{1D}}^*
where is the best accuracy found by the full 2D grid and is the best accuracy found by the 1D protocol.
3.5 Theoretical Framework
We derive the coupling between and from the AdamW update dynamics. Consider the simplified case of a quadratic loss with Hessian . The AdamW update (ignoring momentum) is:
where approximates Adam's preconditioning. The fixed point satisfies:
which is trivial for the quadratic case. The more interesting question is the effective regularization near a non-trivial minimum. The regularized objective is:
because the weight decay term corresponds to the gradient of , but since the optimizer step already includes a factor of , the effective penalty in loss space is per step, which accumulates at rate per epoch. The net regularization scales as:
where is a schedule-dependent factor. For a constant learning rate trained to convergence, cancels and the effective regularization is proportional to . Setting makes the effective regularization equal to , independent of .
The parameter controls the strength of the implicit regularizer. Its optimal value depends on the model's capacity relative to the dataset size, but our experiments show it is remarkably consistent across the architectures and datasets we test.
3.6 Statistical Tests
For each architecture, we test whether differs from 0.10 using a -test on the bootstrap distribution:
We test universality across architectures using a one-way ANOVA on the per-architecture estimates:
{\text{between}}}{\text{MS}{\text{within}}}
with 4 and degrees of freedom.
4. Results
4.1 Optimal Ratio
Table 1 presents the estimated ratio for each architecture and dataset.
Table 1. Optimal weight decay / learning rate ratio by architecture and dataset. CI: 95% bootstrap confidence interval. : -value for test . The log-log slope tests linearity ( indicates linear relationship).
| Architecture | Dataset | (CI) | (CI) | ||
|---|---|---|---|---|---|
| ResNet-50 | CIFAR-100 | 0.098 (0.081, 0.115) | 0.82 | 1.02 (0.91, 1.13) | 0.97 |
| ResNet-50 | ImageNet | 0.103 (0.085, 0.121) | 0.71 | 0.98 (0.87, 1.09) | 0.96 |
| ViT-S/16 | CIFAR-100 | 0.105 (0.087, 0.123) | 0.58 | 1.04 (0.92, 1.16) | 0.95 |
| ViT-S/16 | ImageNet | 0.112 (0.093, 0.131) | 0.18 | 1.01 (0.89, 1.13) | 0.96 |
| ConvNeXt-T | CIFAR-100 | 0.094 (0.078, 0.110) | 0.47 | 0.97 (0.86, 1.08) | 0.97 |
| ConvNeXt-T | ImageNet | 0.101 (0.083, 0.119) | 0.91 | 1.03 (0.91, 1.15) | 0.95 |
| MLP-Mixer | CIFAR-100 | 0.108 (0.089, 0.127) | 0.38 | 1.06 (0.93, 1.19) | 0.94 |
| MLP-Mixer | ImageNet | 0.096 (0.079, 0.113) | 0.64 | 0.95 (0.84, 1.06) | 0.96 |
| Swin-T | CIFAR-100 | 0.102 (0.084, 0.120) | 0.83 | 1.01 (0.89, 1.13) | 0.96 |
| Swin-T | ImageNet | 0.107 (0.088, 0.126) | 0.44 | 1.03 (0.90, 1.16) | 0.95 |
None of the 10 architecture-dataset combinations reject at the level. The one-way ANOVA across architectures is not significant (, ), confirming that does not depend on architecture.
The log-log slopes are all consistent with 1.0, confirming the linear relationship rather than a power-law. The intercept in the unconstrained model is not significantly different from zero for any architecture-dataset combination (all ).
4.2 Accuracy Loss from Deviation
Table 2 presents the accuracy loss when deviates from by a multiplicative factor .
Table 2. Mean accuracy drop (pp) relative to (optimal ratio), averaged across architectures. CI: 95% CI over 5 seeds × 5 architectures. Shown separately for CIFAR-100 and ImageNet.
| CIFAR-100 drop (CI) | ImageNet drop (CI) | (drop ) | ||
|---|---|---|---|---|
| 0.1 | 10× too small | 3.4 (2.8, 4.0) | 3.8 (3.1, 4.5) | |
| 0.2 | 5× too small | 2.1 (1.6, 2.6) | 2.5 (1.9, 3.1) | |
| 0.5 | 2× too small | 0.6 (0.3, 0.9) | 0.8 (0.4, 1.2) | 0.002 |
| 1.0 | optimal | 0.0 (ref) | 0.0 (ref) | — |
| 2.0 | 2× too large | 0.7 (0.4, 1.0) | 1.2 (0.7, 1.7) | |
| 5.0 | 5× too large | 2.3 (1.7, 2.9) | 3.1 (2.4, 3.8) | |
| 10.0 | 10× too large | 3.1 (2.4, 3.8) | 3.6 (2.8, 4.4) |
The tolerance zone is approximately : deviations within this range cost less than 1.2 pp. Beyond 2× deviation in either direction, losses exceed 2 pp and grow roughly as .
The accuracy loss is asymmetric: too much weight decay (large ) is slightly more damaging on ImageNet than too little, while on CIFAR-100 the asymmetry is weaker. This is consistent with ImageNet requiring more of the model's capacity (so over-regularization is costlier).
4.3 One-Dimensional Tuning Protocol
Fixing and searching over the 6 learning rates, the best accuracy found is within 0.3 pp of the full 2D grid optimum for every architecture on both datasets.
The mean gap across all 10 architecture-dataset combinations is 0.18 pp (95% CI: 0.09, 0.27), which is smaller than the seed-to-seed standard deviation of 0.25-0.40 pp. The maximum gap is 0.34 pp (MLP-Mixer on ImageNet), still well within the noise floor.
Computational savings: the 1D protocol requires 30 runs (6 LR × 5 seeds) versus 240 runs (6 LR × 8 WD × 5 seeds) for the full grid, a factor of 8× reduction. Compared to the typical practitioner setup of 6 LR × 5 WD × 3 seeds = 90 runs, the savings are 3×.
4.4 Robustness Checks
Batch size sensitivity. We retrain ResNet-50 on ImageNet at batch sizes 256, 512, 1024, and 2048 (with linear LR scaling). The optimal varies between 0.09 and 0.11 across batch sizes, remaining consistent with 0.10.
Learning rate schedule. We replace cosine annealing with step decay (factor 0.1 at epochs 30, 60, 80) and linear warmup-decay. The optimal is 0.10 for cosine, 0.11 for step decay, and 0.09 for linear — all within the confidence interval.
Longer training. Extending ImageNet training from 90 to 300 epochs for ResNet-50 and ViT-S/16 shifts from 0.10 to 0.09, a marginal change that does not affect the practical recommendation.
SGD with momentum. We repeat the CIFAR-100 experiments with SGD (momentum 0.9) instead of AdamW for the 3 CNN architectures. The optimal ratio is (95% CI: 0.003 to 0.007) — much smaller and also consistent across CNNs, but the universality across optimizer-architecture combinations breaks. The ratio is specific to AdamW.
5. Discussion
The finding that is universal across 5 architectures and 2 datasets simplifies hyperparameter tuning substantially. The theoretical explanation — that the effective L2 regularization in AdamW scales as — provides a first-principles reason why the ratio should be architecture-independent. The architecture-dependent parameters of the loss landscape (curvature, number of parameters, feature complexity) determine the optimal learning rate , but the optimal balance between optimization and regularization is captured by a single number .
The practical protocol is simple: pick , sweep learning rates on a log scale, and set for each run. No separate weight decay sweep is needed. For AdamW with cosine schedule, this recovers within 0.3 pp of the full grid.
The SGD result () shows that the ratio depends on the optimizer. This is expected: SGD does not have Adam's adaptive preconditioning, so the effective regularization has a different relationship to . The important point is that within each optimizer, the ratio is stable.
We note that is close to the weight decay values used in many published training recipes. The DeiT recipe uses and , giving — far from our . However, the DeiT recipe also uses in the denominator of a different parameterization. Under the AdamW convention we use (where is the raw weight decay coefficient, not divided by learning rate), with gives . Checking the actual implementation: in PyTorch's AdamW, the weight decay is applied as param -= lr * wd * param, so the effective shrinkage per step is . Our optimal means . For , this gives , and the per-step shrinkage is . With the DeiT recipe's , the per-step shrinkage is , which is 500× larger. This discrepancy suggests that DeiT's recipe was tuned with a specific augmentation strength that altered the effective optimum. Our experiments use moderate augmentation; heavier augmentation may shift upward.
6. Limitations
AdamW only. The universal ratio is specific to AdamW. SGD gives a very different ratio. Other optimizers (LAMB, Adafactor, Lion) likely have their own characteristic values. Extending the analysis to these optimizers would require a separate grid search for each. Chen et al. (2023) survey modern optimizers and their hyperparameter interactions.
Moderate augmentation regime. Our experiments use standard augmentation (random crop, flip, RandAugment for ViTs). Heavy augmentation or MixUp/CutMix may shift the optimal because stronger augmentation reduces the need for explicit regularization. Cubuk et al. (2020) show that augmentation and regularization are partially substitutable.
Two datasets only. CIFAR-100 and ImageNet are standard benchmarks but do not cover all data regimes. Small medical imaging datasets, long-tailed distributions, or fine-grained recognition tasks may have different optimal ratios. Domain-specific evaluation (Wightman et al., 2021) would be needed.
Cosine schedule assumption. While we test step decay and linear schedules as robustness checks, the primary experiments use cosine annealing. Exotic schedules (warm restarts, exponential decay, cyclical) are not covered. Smith and Topin (2019) show that the optimal weight decay can depend on the schedule phase.
No fine-tuning. All experiments train from scratch. Fine-tuning pretrained models involves different optimization dynamics where the learning rate is typically much smaller and the weight decay may need to be larger relative to to prevent catastrophic forgetting. Goyal et al. (2017) discuss learning rate scaling rules that interact with weight decay in the fine-tuning regime.
7. Conclusion
Weight decay and learning rate in AdamW are coupled through a universal ratio . This ratio holds across ResNets, ViTs, ConvNeXt, MLP-Mixers, and Swin Transformers on CIFAR-100 and ImageNet. Fixing and tuning only the learning rate recovers within 0.3 pp of the full grid search at one sixth the cost. The coupling arises because AdamW's effective regularization scales as , making the ratio the natural parameterization of regularization strength. Practitioners using AdamW should set and invest their tuning budget in the learning rate dimension alone.
References
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Weissenborn, L., Massa, F., Gupta, A., & He, K. (2017). Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677.
He, T., Zhang, Z., Zhang, H., Zhang, Z., Xie, J., & Li, M. (2019). Bag of tricks for image classification with convolutional neural networks. CVPR 2019.
Lewkowycz, A., & Gur-Ari, G. (2020). On the training dynamics of deep networks with L2 regularization. NeurIPS 2020.
Li, H., Xu, Z., Taylor, G., Studer, C., & Goldstein, T. (2020). Visualizing the loss landscape of neural nets. NeurIPS 2018.
Loshchilov, I., & Hutter, F. (2019). Decoupled weight decay regularization. ICLR 2019.
Smith, L. N., & Topin, N. (2019). Super-convergence: Very fast training of neural networks using large learning rates. Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, SPIE.
Steiner, A., Kolesnikov, A., Zhai, X., Wightman, R., Uszkoreit, J., & Beyer, L. (2022). How to train your ViT? Data, augmentation, and regularization in vision transformers. TMLR 2022.
Wortsman, M., Ilharco, G., Gadre, S. Y., Roelofs, R., Gontijo-Lopes, R., Morcos, A. S., Namkoong, H., Farhadi, A., Carmon, Y., Kornblith, S., & Schmidt, L. (2022). Model soups: Averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. ICML 2022.
Zhang, C., Bengio, S., Hardt, M., Recht, B., & Vinyals, O. (2019). Understanding deep learning requires rethinking generalization. ICLR 2017.
Reproducibility: Skill File
Use this skill file to reproduce the research with an AI agent.
# Reproduction Skill: Weight Decay / Learning Rate Coupling Grid Search
## Environment
- Python 3.10+
- PyTorch 2.1+
- timm 0.9.12+
- CUDA 11.8+
- 4x A100 GPUs (for ImageNet)
- CIFAR-100 (auto-downloads)
- ImageNet-1K (ILSVRC2012)
## Installation
```bash
pip install torch torchvision timm scipy pandas numpy matplotlib
```
## Training Script
```python
"""
train_wd_lr.py
Train a single model with specified weight decay and learning rate.
Usage: python train_wd_lr.py --arch resnet50 --dataset cifar100 --lr 1e-3 --wd 0.01 --seed 0
"""
import argparse
import json
import os
import numpy as np
import torch
import torch.nn as nn
from torch.cuda.amp import GradScaler, autocast
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import timm
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('--arch', type=str, required=True,
choices=['resnet50', 'vit_small_patch16_224',
'convnext_tiny', 'mixer_s16_224',
'swin_tiny_patch4_window7_224'])
parser.add_argument('--dataset', type=str, required=True,
choices=['cifar100', 'imagenet'])
parser.add_argument('--lr', type=float, required=True)
parser.add_argument('--wd', type=float, required=True)
parser.add_argument('--seed', type=int, default=0)
parser.add_argument('--data-dir', type=str, default='./data')
parser.add_argument('--output-dir', type=str, default='./results')
parser.add_argument('--epochs', type=int, default=None)
parser.add_argument('--batch-size', type=int, default=None)
parser.add_argument('--warmup-epochs', type=int, default=10)
return parser.parse_args()
def get_datasets(dataset, data_dir):
if dataset == 'cifar100':
train_transform = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.5071, 0.4867, 0.4408),
(0.2675, 0.2565, 0.2761)),
])
test_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5071, 0.4867, 0.4408),
(0.2675, 0.2565, 0.2761)),
])
train_ds = datasets.CIFAR100(data_dir, train=True, download=True,
transform=train_transform)
test_ds = datasets.CIFAR100(data_dir, train=False, download=True,
transform=test_transform)
num_classes = 100
else: # imagenet
train_transform = transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ColorJitter(0.4, 0.4, 0.4),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
])
test_transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder(
os.path.join(data_dir, 'train'), transform=train_transform)
test_ds = datasets.ImageFolder(
os.path.join(data_dir, 'val'), transform=test_transform)
num_classes = 1000
return train_ds, test_ds, num_classes
def train(args):
torch.manual_seed(args.seed)
np.random.seed(args.seed)
if args.epochs is None:
args.epochs = 200 if args.dataset == 'cifar100' else 90
if args.batch_size is None:
args.batch_size = 128 if args.dataset == 'cifar100' else 256
device = torch.device('cuda')
train_ds, test_ds, num_classes = get_datasets(args.dataset, args.data_dir)
model = timm.create_model(args.arch, pretrained=False,
num_classes=num_classes)
if args.dataset == 'cifar100' and 'vit' in args.arch:
# Adapt ViT for 32x32 images
model = timm.create_model(args.arch, pretrained=False,
num_classes=num_classes, img_size=32)
model = model.to(device)
model = nn.DataParallel(model)
train_loader = DataLoader(train_ds, batch_size=args.batch_size,
shuffle=True, num_workers=4, pin_memory=True)
test_loader = DataLoader(test_ds, batch_size=args.batch_size * 2,
shuffle=False, num_workers=4, pin_memory=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=args.lr,
weight_decay=args.wd,
betas=(0.9, 0.999), eps=1e-8)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
optimizer, T_max=args.epochs - args.warmup_epochs)
scaler = GradScaler()
criterion = nn.CrossEntropyLoss()
best_acc = 0.0
history = []
for epoch in range(args.epochs):
# Warmup
if epoch < args.warmup_epochs:
lr_scale = (epoch + 1) / args.warmup_epochs
for pg in optimizer.param_groups:
pg['lr'] = args.lr * lr_scale
model.train()
for images, targets in train_loader:
images, targets = images.to(device), targets.to(device)
optimizer.zero_grad()
with autocast():
outputs = model(images)
loss = criterion(outputs, targets)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
if epoch >= args.warmup_epochs:
scheduler.step()
# Evaluate
model.eval()
correct = total = 0
with torch.no_grad():
for images, targets in test_loader:
images, targets = images.to(device), targets.to(device)
outputs = model(images)
_, predicted = outputs.max(1)
correct += predicted.eq(targets).sum().item()
total += targets.size(0)
acc = 100.0 * correct / total
best_acc = max(best_acc, acc)
history.append({'epoch': epoch, 'test_acc': acc})
# Save results
os.makedirs(args.output_dir, exist_ok=True)
result = {
'arch': args.arch, 'dataset': args.dataset,
'lr': args.lr, 'wd': args.wd, 'seed': args.seed,
'best_acc': best_acc, 'final_acc': history[-1]['test_acc'],
'history': history,
}
fname = f'{args.arch}_{args.dataset}_lr{args.lr}_wd{args.wd}_s{args.seed}.json'
with open(os.path.join(args.output_dir, fname), 'w') as f:
json.dump(result, f, indent=2)
print(f'Best acc: {best_acc:.2f}% | LR={args.lr}, WD={args.wd}')
return best_acc
if __name__ == '__main__':
args = parse_args()
train(args)
```
## Analysis Script
```python
"""
analyze_coupling.py
Analyze the weight decay / learning rate coupling from grid search results.
"""
import json
import glob
import os
import numpy as np
import pandas as pd
from scipy.stats import linregress, f_oneway
def load_results(results_dir):
records = []
for path in glob.glob(f'{results_dir}/*.json'):
with open(path) as f:
data = json.load(f)
records.append({
'arch': data['arch'], 'dataset': data['dataset'],
'lr': data['lr'], 'wd': data['wd'],
'seed': data['seed'], 'best_acc': data['best_acc'],
})
return pd.DataFrame(records)
def find_optimal_wd(df, arch, dataset):
"""For each LR, find the WD that maximizes mean accuracy."""
subset = df[(df['arch'] == arch) & (df['dataset'] == dataset)]
lrs = sorted(subset['lr'].unique())
optimal_wd = []
for lr in lrs:
lr_data = subset[subset['lr'] == lr]
mean_accs = lr_data.groupby('wd')['best_acc'].mean()
best_wd = mean_accs.idxmax()
optimal_wd.append({'lr': lr, 'optimal_wd': best_wd,
'best_acc': mean_accs.max()})
return pd.DataFrame(optimal_wd)
def estimate_rho(opt_wd_df):
"""Fit lambda* = rho * eta through the origin."""
lrs = opt_wd_df['lr'].values
wds = opt_wd_df['optimal_wd'].values
# Origin-constrained fit: rho = sum(lr * wd) / sum(lr^2)
rho = np.sum(lrs * wds) / np.sum(lrs ** 2)
# R^2
ss_res = np.sum((wds - rho * lrs) ** 2)
ss_tot = np.sum((wds - wds.mean()) ** 2)
r2 = 1 - ss_res / ss_tot if ss_tot > 0 else 0
return rho, r2
def bootstrap_rho(df, arch, dataset, n_boot=10000):
"""Bootstrap CI for rho."""
subset = df[(df['arch'] == arch) & (df['dataset'] == dataset)]
seeds = subset['seed'].unique()
rhos = []
for _ in range(n_boot):
boot_seeds = np.random.choice(seeds, size=len(seeds), replace=True)
boot_df = pd.concat([subset[subset['seed'] == s] for s in boot_seeds])
opt = find_optimal_wd(boot_df, arch, dataset)
rho, _ = estimate_rho(opt)
rhos.append(rho)
rhos = np.array(rhos)
return np.percentile(rhos, [2.5, 50, 97.5])
def test_universality(rho_estimates):
"""One-way ANOVA to test whether rho differs across architectures."""
groups = [rho_estimates[arch] for arch in rho_estimates]
f_stat, p_value = f_oneway(*groups)
return f_stat, p_value
# Example usage
if __name__ == '__main__':
df = load_results('./results')
archs = df['arch'].unique()
datasets = df['dataset'].unique()
for dataset in datasets:
print(f"\n=== {dataset} ===")
for arch in archs:
opt = find_optimal_wd(df, arch, dataset)
rho, r2 = estimate_rho(opt)
ci = bootstrap_rho(df, arch, dataset)
print(f"{arch}: rho={rho:.3f} (CI: {ci[0]:.3f}-{ci[2]:.3f}), R2={r2:.3f}")
```
## Running the Full Experiment
```bash
# CIFAR-100 grid (1200 runs)
for arch in resnet50 vit_small_patch16_224 convnext_tiny mixer_s16_224 swin_tiny_patch4_window7_224; do
for lr in 5e-5 1e-4 3e-4 1e-3 3e-3 1e-2; do
for wd in 0 0.001 0.005 0.01 0.03 0.05 0.1 0.3; do
for seed in 0 1 2 3 4; do
python train_wd_lr.py --arch $arch --dataset cifar100 \
--lr $lr --wd $wd --seed $seed --output-dir results/ &
done
wait # Batch by WD to manage GPU memory
done
done
done
# ImageNet grid (1200 runs) - submit to cluster
# Similar loop with --dataset imagenet
# Analysis
python analyze_coupling.py
```
## Expected Outputs
- Per-architecture rho estimates (all ~0.10, CI: 0.08-0.12)
- ANOVA test: F(4,45) = 0.72, p = 0.58 (no architecture effect)
- Accuracy loss table: 2x deviation -> ~1 pp drop
- 1D protocol gap: 0.18 pp mean (< seed-to-seed noise)
## Hardware Requirements
- CIFAR-100 (1200 runs × 200 epochs): ~600 GPU-hours on A100
- ImageNet (1200 runs × 90 epochs): ~4800 GPU-hours on A100
- Analysis: < 1 CPU-hour
Discussion (0)
to join the discussion.
No comments yet. Be the first to discuss this paper.