We derive non-vacuous information-theoretic bounds on the in-context learning (ICL) capacity of decoder-only transformers. By modeling ICL as a channel that maps a prompt of $k$ demonstrations to a posterior over task hypotheses, we obtain a tight upper bound of $C_{\mathrm{ICL}} \leq d_{\mathrm{model}} \log_2(L) + \beta H(\mathcal{T})$ bits, where $d_{\mathrm{model}}$ is the model width, $L$ the context length, and $H(\mathcal{T})$ the entropy of the task prior.
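For concreteness, a minimal sketch of evaluating this bound numerically; the configuration below (a GPT-2-small-like width, 1024-token context, $\beta = 1$, a 4-bit task prior) is an illustrative assumption, not a figure from the paper.

```python
import math

def icl_capacity_bound(d_model: int, context_len: int, beta: float, task_entropy_bits: float) -> float:
    """Evaluate the claimed upper bound C_ICL <= d_model * log2(L) + beta * H(T), in bits."""
    return d_model * math.log2(context_len) + beta * task_entropy_bits

# Hypothetical configuration: d_model=768, L=1024, beta=1.0, H(T)=4 bits.
print(icl_capacity_bound(d_model=768, context_len=1024, beta=1.0, task_entropy_bits=4.0))
# -> 7684.0 bits under these assumed values
```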
We present a systematic empirical study examining gradient dynamics across 26 benchmarks and 46,591 evaluation instances. Our analysis reveals that phase transitions play a more critical role than previously recognized, achieving 0.
We present a systematic empirical study examining syntactic probes across 10 benchmarks and 11,664 evaluation instances. Our analysis reveals that transformers play a more critical role than previously recognized, achieving 0.
Learning rate warmup is near-universal in deep learning training, yet the optimal warmup duration is typically found through expensive grid search. We conduct a controlled comparison across Transformers and State-Space Models (Mamba) on language modeling, image classification, and time-series forecasting, training 840 models with warmup durations from 0 to 20% of training.
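As a reference for the kind of schedule being swept, here is a minimal sketch of a linear-warmup learning-rate schedule parameterized by warmup fraction. The cosine decay and the peak learning rate are assumptions for illustration; the abstract specifies only that warmup duration ranges from 0 to 20% of training.

```python
import math

def lr_at_step(step: int, total_steps: int, peak_lr: float, warmup_frac: float) -> float:
    """Linear warmup to peak_lr over warmup_frac of training, then cosine decay to zero."""
    warmup_steps = int(warmup_frac * total_steps)
    if warmup_steps > 0 and step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

# Example: 10,000 training steps, 5% warmup, assumed peak LR of 3e-4.
schedule = [lr_at_step(s, 10_000, 3e-4, warmup_frac=0.05) for s in range(10_000)]
```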
Long-context language models employing Rotary Position Embeddings (RoPE) or ALiBi claim to generalize to sequences far longer than those seen during training, but empirical performance often degrades at extreme lengths without clear explanation. We present a spectral analysis of positional encoding behavior across context lengths, revealing a phenomenon we term *positional saturation*: the progressive loss of discriminability between positional encodings as sequence length increases.
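One simple way to quantify positional discriminability, shown as a sketch below, is to compare RoPE phase vectors at increasing relative distances. This is a simplified proxy for illustration (treating the phase vector as a stand-alone positional signature), not necessarily the spectral metric used in the paper.

```python
import numpy as np

def rope_signature(pos: int, dim: int = 64, base: float = 10000.0) -> np.ndarray:
    """RoPE phase vector [cos(pos*theta_i), sin(pos*theta_i)] for a single position,
    with theta_i = base^(-2i/dim) as in the standard RoPE parameterization."""
    i = np.arange(dim // 2)
    theta = base ** (-2.0 * i / dim)
    return np.concatenate([np.cos(pos * theta), np.sin(pos * theta)])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similarity between position 0 and increasingly distant positions: one crude measure
# of how distinguishable far-apart positions remain under the encoding.
ref = rope_signature(0)
for d in (1, 16, 256, 4096, 65536):
    print(f"relative distance {d:6d}: cos-sim to position 0 = {cosine(ref, rope_signature(d)):.4f}")
```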
Syntactic priming—the tendency to reuse recently encountered grammatical structures—is a well-established phenomenon in human language production. Whether transformer language models exhibit analogous structural persistence, and whether such persistence extends across the boundaries of attention context windows, remain open questions.
The key-value (KV) cache in transformer-based language models stores intermediate computations (keys and values) for all previous tokens, enabling efficient autoregressive decoding. However, for long-context sequences (4K-32K tokens), KV cache memory requirements dominate total inference memory (often 60-80% of peak memory), limiting batch size and throughput.
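To see why the cache dominates at these lengths, a back-of-the-envelope sketch using the standard KV-cache size formula; the 7B-class configuration below is an illustrative assumption, not a model evaluated in the paper.

```python
def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, batch_size: int = 1, bytes_per_elem: int = 2) -> int:
    """Size of the KV cache: 2 tensors (K and V) per layer, each of shape
    [batch, kv_heads, seq_len, head_dim], stored at bytes_per_elem (2 for fp16/bf16)."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch_size * bytes_per_elem

# Hypothetical 7B-class decoder (32 layers, 32 KV heads, head_dim 128, fp16) at 32K context:
print(kv_cache_bytes(32, 32, 128, 32 * 1024) / 2**30, "GiB")  # -> 16.0 GiB per sequence
```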
Transformer models achieve state-of-the-art results across NLP and vision tasks but suffer from O(n²) complexity in self-attention, limiting scalability to long sequences. Sparse attention patterns (attending to only k out of n tokens) reduce complexity to O(n·k) but require hand-designed patterns (strided, local, etc.).
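A minimal sketch of such a hand-designed pattern, combining a causal sliding window with a strided component (a generic illustration in the spirit of these patterns, not any specific paper's design), showing how the number of attended entries scales with n·k rather than n²:

```python
import numpy as np

def local_strided_mask(n: int, window: int, stride: int) -> np.ndarray:
    """Boolean causal attention mask: each query attends to the most recent `window`
    tokens (including itself) plus every `stride`-th token before it."""
    q = np.arange(n)[:, None]
    k = np.arange(n)[None, :]
    causal = k <= q
    local = (q - k) < window
    strided = (k % stride) == 0
    return causal & (local | strided)

mask = local_strided_mask(n=1024, window=64, stride=64)
print("dense entries:", 1024 * 1024, "sparse entries:", int(mask.sum()))
```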
Transformer architectures have achieved remarkable success in natural language processing, and their application to biological sequences has opened new frontiers in computational genomics. In this paper, we present a comparative analysis of transformer-based approaches for genomic sequence classification, examining how self-attention mechanisms implicitly learn biologically meaningful motifs.
Protein-protein interactions (PPIs) are fundamental to understanding cellular processes and disease mechanisms. This study presents a comprehensive comparative analysis of deep learning approaches for PPI prediction, specifically examining Graph Neural Networks (GNNs) and Transformer-based architectures.
In-context learning (ICL) — the ability of transformer models to adapt to new tasks from a few demonstration examples without weight updates — remains one of the most striking yet poorly understood capabilities of large language models. In this work, we reverse-engineer the internal circuits responsible for ICL by combining activation patching, causal tracing, and probing classifiers across a family of GPT-2-scale transformer models.
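For readers unfamiliar with activation patching, the sketch below shows the basic operation on a HuggingFace GPT-2 model: cache a block's output on a clean prompt, then splice it into a run on a corrupted prompt and inspect the resulting logits. It is a generic illustration; the layer index, prompts, and readout are placeholder assumptions, not the paper's pipeline.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER = 6  # arbitrary choice of block to patch
clean = tok("The capital of France is", return_tensors="pt")
corrupt = tok("The capital of Italy is", return_tensors="pt")  # same token count as the clean prompt

# 1) Cache the residual-stream output of one block on the clean run.
cached = {}
def save_hook(module, inputs, output):
    cached["act"] = output[0].detach()  # GPT2Block returns a tuple; [0] is the hidden states
h = model.transformer.h[LAYER].register_forward_hook(save_hook)
with torch.no_grad():
    model(**clean)
h.remove()

# 2) Re-run on the corrupted prompt, patching the cached clean activation back in.
def patch_hook(module, inputs, output):
    return (cached["act"],) + output[1:]
h = model.transformer.h[LAYER].register_forward_hook(patch_hook)
with torch.no_grad():
    patched_logits = model(**corrupt).logits
h.remove()

# Shift in next-token predictions attributable to this layer's clean activation.
print(patched_logits[0, -1].topk(5))
```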