LLMs generate unit tests with impressive coverage, but coverage alone is a weak proxy for fault detection; we challenge this optimism using mutation testing. We evaluate GPT-4, Claude-3, CodeLlama-34B, and DeepSeek-Coder-33B on 200 Python functions from popular libraries.
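The core metric of mutation testing, the mutation score, is simple to state: the fraction of seeded mutants that at least one test catches. A minimal sketch (the toy tests and mutants below are illustrative, not the paper's harness):

```python
def mutation_score(tests, mutants):
    """Fraction of mutants 'killed': a mutant is killed when at least one
    test fails on the mutated function."""
    killed = sum(
        1 for mutant in mutants
        if any(not test(mutant) for test in tests)
    )
    return killed / len(mutants)

# Toy example: the function under test computes abs(x).
tests = [
    lambda f: f(-3) == 3,   # negative input
    lambda f: f(0) == 0,    # zero input
]
mutants = [
    lambda x: x,            # dropped the negation branch -> killed
    lambda x: abs(x) + 1,   # off-by-one -> killed
    lambda x: abs(x),       # equivalent mutant -> survives
]
score = mutation_score(tests, mutants)  # 2 of 3 mutants killed
```

High line coverage with a low mutation score is exactly the failure mode this evaluation is designed to expose: tests that execute code without asserting enough to catch behavioural changes.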
Code review thoroughness is believed to decrease with PR size, but quantitative evidence is scarce. We analyze 50,247 reviews from 187 open-source GitHub repositories.
Semantic segmentation quality measured by IoU treats all pixels equally, but boundary pixels are inherently ambiguous and annotator agreement drops to near-chance there. We propose Attention Map Entropy (AME) computed from self-attention maps at the penultimate layer of ViT-based segmentation models.
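The entropy at the heart of such a measure is the Shannon entropy of each softmax attention row. A minimal sketch, assuming AME averages this entropy over query positions (the averaging scheme over heads and positions is an assumption, not spelled out above):

```python
import math

def row_entropy(attn_row, eps=1e-12):
    """Shannon entropy (nats) of one attention distribution (sums to 1)."""
    return -sum(p * math.log(p + eps) for p in attn_row if p > 0)

def attention_map_entropy(attn_rows):
    """Mean entropy over all query positions of an attention map."""
    return sum(row_entropy(r) for r in attn_rows) / len(attn_rows)

uniform = [0.25, 0.25, 0.25, 0.25]  # maximally uncertain -> entropy = ln 4
peaked  = [1.0, 0.0, 0.0, 0.0]      # fully confident    -> entropy ~ 0
```

High entropy flags diffuse, uncertain attention, which is the kind of signal one would expect near ambiguous object boundaries.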
Small object detection remains challenging despite architectural advances. We characterize resolution dependence by evaluating 6 detectors (YOLOv8, DETR, Faster R-CNN, DINO, Co-DETR, RT-DETR) on VisDrone and DOTA at 8 resolutions from 320×320 to 2560×2560.
Vision Transformers were hypothesized to be more shape-biased than CNNs due to global attention, but findings are contradictory. We resolve this through Fourier-domain selective masking: removing spatial frequency bands from ImageNet images and measuring accuracy degradation.
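The masking operation can be illustrated on a 1-D signal with a naive DFT; this is a pure-Python sketch of the idea, whereas the study presumably masks radial bands of 2-D image spectra:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def mask_band(x, lo, hi):
    """Zero every bin whose symmetric frequency index lies in [lo, hi],
    then transform back to the signal domain."""
    n = len(x)
    X = dft(x)
    for k in range(n):
        freq = min(k, n - k)  # fold conjugate-symmetric bins onto [0, n//2]
        if lo <= freq <= hi:
            X[k] = 0
    return idft(X)
```

A quick sanity check on the round trip: removing every non-DC band leaves only the signal's mean, while masking an empty band reconstructs the input.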
Learning rate warmup is near-universal in deep learning training, yet the optimal warmup duration is typically found through expensive grid search. We conduct a controlled comparison across Transformers and State-Space Models (Mamba) on language modeling, image classification, and time-series forecasting, training 840 models with warmup durations from 0 to 20% of training.
Feature attribution methods—Integrated Gradients, SHAP, LIME, Attention, GradCAM—often disagree on the same input. We investigate whether this disagreement is systematic by measuring pairwise agreement (Kendall's τ and top-k overlap) as a function of model depth.
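Both agreement measures are simple to state over raw attribution scores. An O(n²) sketch, in a minimal form without tie correction (tied pairs count as neither concordant nor discordant):

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation between two equal-length score vectors."""
    assert len(a) == len(b)
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(a) * (len(a) - 1) // 2
    return (concordant - discordant) / n_pairs

def top_k_overlap(a, b, k):
    """Fraction of the k highest-scoring features shared by both methods."""
    top = lambda v: set(sorted(range(len(v)), key=lambda i: -v[i])[:k])
    return len(top(a) & top(b)) / k
```

Kendall's τ is sensitive to the full ranking, while top-k overlap isolates agreement on the features a practitioner would actually inspect, so the two can diverge on the same pair of methods.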
Grokking—sudden generalization long after memorization—is difficult to predict. We identify a precursor: the Gradient Acceleration Index (GAI), the second derivative of gradient norm w.
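A second derivative of a logged gradient-norm series would in practice be estimated by finite differences. A sketch, assuming the derivative is taken with respect to training steps (an assumption on our part):

```python
def gradient_acceleration_index(grad_norms, step=1.0):
    """Central second difference of a gradient-norm series: a finite-difference
    estimate of its second derivative at each interior point."""
    return [
        (grad_norms[i - 1] - 2 * grad_norms[i] + grad_norms[i + 1]) / step ** 2
        for i in range(1, len(grad_norms) - 1)
    ]

# A quadratic series has a constant second difference:
gai = gradient_acceleration_index([0.0, 1.0, 4.0, 9.0, 16.0])  # [2.0, 2.0, 2.0]
```

On noisy training logs one would typically smooth the series before differencing, since second differences amplify high-frequency noise.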
The double descent phenomenon—where test error first decreases, then increases, then decreases again as model complexity grows—has been extensively documented under in-distribution evaluation. We investigate whether double descent persists under distribution shift by training 2,100 models (7 architectures × 6 widths × 50 seeds) on CIFAR-10 and evaluating under five controlled shift types: covariate shift (Gaussian noise), label shift (10% flip), domain shift (CIFAR-10.
Catastrophic forgetting in continual learning is extensively studied, but its temporal dynamics—the functional form of accuracy decay on old tasks—remain poorly characterized. We train 4 continual learning methods (EWC, PackNet, Experience Replay, naive SGD) on 15 task sequences with controlled inter-task similarity across 3 architectures.
Benchmark contamination—the inclusion of test set examples in language model pretraining data—inflates reported performance and undermines the validity of model comparisons. Existing contamination detection methods rely on output-level signals (perplexity, verbatim completion) that are unreliable for closed-source models and paraphrased contamination.
Long-context language models employing Rotary Position Embeddings (RoPE) or ALiBi claim to generalize to sequences far longer than those seen during training, but empirical performance often degrades at extreme lengths without clear explanation. We present a spectral analysis of positional encoding behavior across context lengths, revealing a phenomenon we term *positional saturation*: the progressive loss of discriminability between positional encodings as sequence length increases.
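The discriminability question hinges on RoPE's defining property: the inner product of two rotated vectors depends only on the position offset, not the absolute positions. A minimal per-pair-rotation sketch (dimension sizes and vectors are illustrative):

```python
import math

def rope(vec, pos, base=10000.0):
    """Apply Rotary Position Embedding: rotate each (even, odd) coordinate
    pair by an angle whose frequency shrinks geometrically with pair index."""
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        theta = pos / base ** (i / d)
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out += [x * c - y * s, x * s + y * c]
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

q = [1.0, 0.0, 0.5, 0.2]
k = [0.3, 0.9, 0.1, 0.4]
# Attention logits depend only on the offset n - m:
near = dot(rope(q, 5), rope(k, 7))        # positions 5 and 7, offset 2
far  = dot(rope(q, 1000), rope(k, 1002))  # offset 2 again
```

The relative-position invariance holds exactly; the saturation phenomenon described above concerns how distinguishable these encodings remain from one another as offsets grow, not a breakdown of this identity.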
Multilingual language models achieve impressive cross-lingual transfer for high-resource languages but frequently fail for low-resource languages with limited pretraining data. While transfer failure is typically attributed to data scarcity, we demonstrate that tokenizer fertility—the ratio of tokens produced per word in a given language relative to English—is a stronger predictor of transfer performance than pretraining data volume.
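Fertility as defined here reduces to a token-per-word ratio. A sketch with a toy character-bigram tokenizer standing in for a real subword tokenizer (illustrative only):

```python
def fertility(tokenize, words):
    """Mean number of tokens produced per word."""
    return sum(len(tokenize(w)) for w in words) / len(words)

def relative_fertility(tokenize, words_lang, words_en):
    """Fertility of a target language normalised by English fertility;
    values above 1 indicate over-segmentation relative to English."""
    return fertility(tokenize, words_lang) / fertility(tokenize, words_en)

# Toy tokenizer: split a word into character bigrams.
bigram = lambda w: [w[i:i + 2] for i in range(0, len(w), 2)]
```

With a real subword vocabulary, a high relative fertility means the tokenizer fragments the language's words into many short pieces, which is the mechanism the claim above points to.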
Compound AI systems that chain multiple large language model (LLM) calls to solve complex tasks are increasingly deployed in production. While individual LLM calls may be well-calibrated—with stated confidence reflecting actual accuracy—we demonstrate that calibration degrades rapidly across chains.
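Chain-level calibration can be quantified with expected calibration error over the chain's final answers. A standard equal-width-bin sketch (the binning scheme is our assumption, not necessarily the paper's protocol):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by stated confidence, then take the
    size-weighted mean gap between confidence and accuracy per bin."""
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidences, correct):
        idx = min(int(c * n_bins), n_bins - 1)  # clamp c == 1.0 into last bin
        bins[idx].append((c, y))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(y for _, y in b) / len(b)
        ece += len(b) / n * abs(avg_conf - accuracy)
    return ece
```

Comparing this quantity at each step of a chain against the chain's end-to-end answers is one way to observe the compounding miscalibration described above.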
Syntactic priming—the tendency to reuse recently encountered grammatical structures—is a well-established phenomenon in human language production. Whether transformer language models exhibit analogous structural persistence, and whether such persistence extends across the boundaries of attention context windows, remains unknown.
Hallucination in large language models is commonly understood as a failure of factual recall, with rarer entities assumed to be uniformly more prone to hallucination. We challenge this uniform-rarity hypothesis through a controlled study of hallucination rates across 12,000 entities stratified by Wikipedia page view frequency, entity type (person, location, organization, event), and temporal recency.