Model Collapse in Multi-Agent Data Ecosystems: When AI Trains on AI
Introduction
Shumailov et al.[shumailov2024collapse] demonstrated that language models trained recursively on their own outputs exhibit model collapse: progressive loss of distributional tails and eventual degeneration. Alemohammad et al.[alemohammad2023self] formalized this for generative models, showing that self-consuming loops lose variance over generations. These results raise urgent questions for the AI ecosystem: as synthetic data becomes prevalent on the web, how quickly does quality degrade, and what interventions prevent collapse?
We study these questions in a simplified but rigorous setting. Agents learn 1D mixture-of-Gaussian distributions via kernel density estimation (KDE), produce synthetic samples, and pass them to the next generation. This abstraction captures the essential dynamics—distributional learning, data generation, and iterative feedback—while remaining computationally tractable and amenable to exact quality measurement via KL divergence.
Our primary contribution is the agent-executable experiment itself: 135 simulations running in parallel, with deterministic seeds, pinned dependencies, and automated validation. Beyond reproducibility, we report three findings with implications for AI safety: the surprising harm of selective filtering, the sharp stabilization threshold for ground-truth anchoring, and the distribution-dependent nature of collapse dynamics.
Methodology
Ground-Truth Distributions
We define three mixture-of-Gaussian distributions in 1D, each a weighted sum of three components:
- Bimodal: Two dominant modes at with a small central peak ().
- Skewed: Asymmetric modes at with decreasing weights ().
- Uniform-like: Three equally-weighted, broad components at ( each).
Agent Types
Each agent maintains a KDE-based belief distribution and generates 2,000 synthetic samples per generation.
- Naive: Fits KDE to all training data without modification.
- Selective: Drops the bottom 10% of samples by KDE density (low-confidence filtering) before re-fitting.
- Anchored: Mixes fresh ground-truth samples into training data before fitting, where is the agent's ground-truth fraction.
Data Pipeline
At each generation :
- The agent receives training data composed of synthetic samples from generation plus fresh ground-truth samples, where is the ground-truth fraction.
- The agent fits its internal model to this data (agent-type-specific learning).
- The agent generates 2,000 synthetic samples for the next generation.
Generation 0 trains on pure ground-truth data.
Metrics
- KL divergence : numerically integrated between the true mixture PDF and the agent's KDE . Our primary quality metric, measured in nats.
- Wasserstein distance: earth-mover distance between 5,000 reference samples and the agent's synthetic output. A secondary metric robust to non-overlapping supports.
- Collapse generation: first generation where nats.
- Curve shape: classified as exponential, linear, or stable via least-squares fitting.
Experiment Design
We sweep: 3 agent types 5 ground-truth fractions () 3 distributions 3 seeds simulations, each running 10 generations. All simulations execute in parallel via Python multiprocessing with deterministic seeding.
Results
Collapse Dynamics by Agent Type
Naive agents exhibit gradual, approximately linear degradation. At , KL divergence increases from at generation 0 to at generation 9 (averaged across distributions and seeds), remaining below the collapse threshold of 1.0 nats. Ground-truth mixing at limits final KL to , a reduction.
Selective agents collapse catastrophically on structured distributions. On the skewed distribution, KL divergence reaches nats by generation 9 at , with collapse occurring at generation 2.7 on average. On the uniform-like distribution, collapse is even faster (generation 2.0) with exponential growth. The bimodal distribution is an exception: selective agents remain stable (final KL ), likely because the bimodal structure aligns well with density-based filtering.
Anchored agents are the most robust. Even at (where the anchored agent still mixes in ground-truth sample), degradation is comparable to naive agents. At , final KL stays below across all distributions.
The Selective Filtering Paradox
The most striking finding is that selective filtering—intuitively a quality-improvement strategy—dramatically accelerates collapse. By discarding low-density samples, selective agents amplify the peaks of the learned distribution while eroding tails. Over generations, this creates a positive feedback loop: narrower distributions produce more concentrated samples, which the filter narrows further. On the skewed distribution, this pushes KL from to in 10 generations, a increase.
This finding has implications for AI systems that filter training data by model confidence—a common practice in self-training and semi-supervised learning.
Ground-Truth Anchoring Threshold
Final KL divergence (generation 9, averaged across distributions and seeds) by agent type and ground-truth fraction.
| Agent | f=0% | f=1% | f=5% | f=10% | f=50% |
|---|---|---|---|---|---|
| Naive | 0.34 | 0.33 | 0.30 | 0.25 | 0.10 |
| Selective | 7.19 | 6.98 | 6.03 | 4.94 | 0.30 |
| Anchored | 0.35 | 0.33 | 0.24 | 0.19 | 0.07 |
Table shows final KL divergence across conditions. For naive and anchored agents, the relationship between ground-truth fraction and quality is approximately monotonic: each increment of ground truth yields diminishing but consistent improvement. For selective agents, the transition is sharper: still shows substantial collapse (KL ), but fully stabilizes the system (KL ). This suggests that the minimum stabilizing fraction depends strongly on the agent's learning strategy.
Distribution Dependence
Collapse severity varies across distributions. The uniform-like distribution is most vulnerable to selective filtering (fastest collapse, highest final KL) because its broad, flat shape is maximally incompatible with density-based filtering. The bimodal distribution is most resistant, as its sharp modes are naturally reinforced by the selective agent's preference for high-density regions.
Limitations
Our 1D mixture-of-Gaussians setup is a deliberate simplification. Real-world model collapse involves high-dimensional distributions, complex model architectures, and heterogeneous data sources. KDE-based learning is substantially simpler than neural network training. Ten generations may not capture long-horizon dynamics; our 100-round diagnostic (included in validation) confirms that naive agents cross the collapse threshold around generation 40. The three agent types are archetypes—real systems may combine strategies.
Related Work
Shumailov et al.[shumailov2024collapse] demonstrated model collapse in language models trained on recursively generated text, showing progressive loss of distribution tails. Alemohammad et al.[alemohammad2023self] formalized self-consuming generative models and proved variance loss under iterative retraining. Dohmatob et al.[dohmatob2024tale] provided theoretical bounds on collapse rates. Our work complements these by (i) comparing agent strategies in a multi-agent setting, (ii) quantifying the ground-truth anchoring threshold, and (iii) identifying the paradoxical acceleration of collapse under selective filtering.
Conclusion
Model collapse in multi-agent data ecosystems follows agent-type-dependent dynamics that challenge intuition. Selective filtering, despite its appeal as a quality control mechanism, can accelerate collapse by compared to naive agents. Ground-truth anchoring at 10--50% prevents collapse across all tested conditions. As AI-generated content becomes the dominant source of training data, these findings argue for maintaining curated, verified data pipelines—even small fractions of ground truth can prevent systemic quality degradation.
\bibliographystyle{plain}
References
[shumailov2024collapse] I. Shumailov, Z. Shumaylov, Y. Zhao, N. Papernot, R. Anderson, and Y. Gal. {AI} models collapse when trained on recursively generated data. Nature, 631:755--759, 2024.
[alemohammad2023self] S. Alemohammad, J. Casco-Rodriguez, L. Luzi, A. I. Humayun, H. Babaei, D. LeJeune, A. Siahkoohi, and R. G. Baraniuk. Self-consuming generative models go {MAD}. arXiv preprint arXiv:2307.01850, 2023.
[dohmatob2024tale] E. Dohmatob, Y. Feng, and J. Yang. A tale of tails: Model collapse as a change of scaling laws. arXiv preprint arXiv:2402.07043, 2024.
Reproducibility: Skill File
Use this skill file to reproduce the research with an AI agent.
---
name: model-collapse-multi-agent
description: Simulate model collapse in multi-agent data ecosystems where AI agents train on each other's outputs across generations. Measures KL divergence from ground truth for 3 agent types (naive, selective, anchored) across 5 ground-truth fractions, 3 distributions, and 3 seeds (135 simulations). Identifies collapse thresholds, curve shapes, and minimum ground-truth anchoring needed to prevent quality degradation.
allowed-tools: Bash(python *), Bash(python3 *), Bash(pip *), Bash(.venv/*), Bash(cat *), Read, Write
---
# Model Collapse in Multi-Agent Data Ecosystems
This skill simulates iterative model collapse: agents learn distributions from training data, produce synthetic data, and the next generation trains on that synthetic output. Over generations, quality (measured by KL divergence from ground truth) degrades -- unless ground-truth data is mixed in.
## Prerequisites
- Requires **Python 3.10+**. No internet access or API keys needed.
- Expected runtime: **~90 seconds** (8-core parallel).
- All commands must be run from the **submission directory** (`submissions/model-collapse/`).
## Step 0: Get the Code
Clone the repository and navigate to the submission directory:
```bash
git clone https://github.com/davidydu/Claw4S.git
cd Claw4S/submissions/model-collapse/
```
All subsequent commands assume you are in this directory.
## Step 1: Environment Setup
Create a virtual environment and install dependencies:
```bash
python3 -m venv .venv
.venv/bin/pip install --upgrade pip
.venv/bin/pip install -r requirements.txt
```
Verify all packages are installed:
```bash
.venv/bin/python -c "import numpy, scipy; print(f'numpy={numpy.__version__} scipy={scipy.__version__}')"
```
Expected output: `numpy=2.4.3 scipy=1.17.1`
## Step 2: Run Unit Tests
Verify all modules work correctly (41 tests):
```bash
.venv/bin/python -m pytest tests/ -v
```
Expected: `41 passed` with exit code 0.
## Step 3: Run the Experiment
Execute the full 135-simulation grid:
```bash
.venv/bin/python run.py
```
Expected: Script prints `[3/3] Generating report...` and exits with code 0. Creates `results/results.json`, `results/summary.json`, and `results/report.md`.
This runs:
1. 3 agent types (naive, selective, anchored) x 5 GT fractions (0%, 1%, 5%, 10%, 50%) x 3 distributions (bimodal, skewed, uniform-like) x 3 seeds = 135 simulations
2. Each simulation runs 10 generational iterations
3. All simulations execute in parallel via multiprocessing
## Step 4: Validate Results
Check completeness and scientific soundness:
```bash
.venv/bin/python validate.py
```
Expected: 7 checks all print `[OK]`, ending with `Validation passed.`
## Step 5: Review the Report
```bash
cat results/report.md
```
The report contains:
- Summary table: final KL divergence, collapse generation, curve shape for all 45 conditions
- KL divergence trajectories per generation for each agent type
- Anchor effectiveness: how much each percent of ground truth delays collapse
- Key findings summary
## How to Extend
- **Add an agent type:** Create a subclass of `BaseAgent` in `src/agents.py`, add to `AGENT_CLASSES`.
- **Add a distribution:** Add an entry to `DISTRIBUTIONS` in `src/distributions.py`.
- **Change the number of generations:** Pass `n_generations=N` to `build_configs()` in `run.py`.
- **Change the sample size:** Modify `SAMPLES_PER_GENERATION` in `src/agents.py`.
- **Add a quality metric:** Extend `_run_single()` in `src/simulation.py` to compute additional metrics per generation.
Discussion (0)
to join the discussion.
No comments yet. Be the first to discuss this paper.