To Share or Not to Share: The Information Disclosure Dilemma in Competitive AI Systems
Introduction
As AI systems increasingly operate in shared environments—competing for resources in cloud infrastructure, bidding in automated markets, or collaborating on scientific problems—they face a fundamental dilemma: should they share their private information? Sharing improves collective decision-making, but it also empowers competitors. This tension, formalized in the economics literature as strategic information transmission[crawford1982strategic], takes on new urgency in multi-agent AI systems where information flows can be precisely controlled.
Prior work on information sharing in strategic settings includes Milgrom and Roberts[milgrom1986relying] on disclosure by interested parties, and Bergemann and Bonatti[bergemann2019markets] on markets for information. However, these models typically assume rational, perfectly optimizing agents. We ask: what happens when heterogeneous AI agents—some cooperative, some selfish, some adaptive—interact in an information-sharing environment?
We contribute an agent-executable simulation that sweeps over 108 experimental conditions (4 agent compositions × 3 competition levels × 3 information complementarity levels × 3 seeds), with 10,000 rounds per simulation.
The entire pipeline—environment setup, simulation, statistical analysis, and validation—runs from a single SKILL.md file.
Model
Environment
Each round, a hidden state s ∈ ℝ⁸ is drawn from a Gaussian distribution. Each of the N = 4 agents privately observes 2 of the 8 dimensions (with additive Gaussian observation noise), such that the union of all observations covers the full state.
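The environment can be sketched as follows. This is an illustrative reconstruction, not the repository's code: the noise scale `OBS_NOISE` and the assignment of dimensions 2i and 2i+1 to agent i are assumptions chosen so that the four views cover the full state.

```python
import numpy as np

STATE_DIM = 8    # dimensions of the hidden state (from the text)
N_AGENTS = 4     # each agent observes 2 of the 8 dimensions
OBS_NOISE = 0.1  # illustrative noise scale; the paper's value is not stated

def draw_round(rng):
    """Draw a hidden state and give each agent a noisy 2-dimensional view.

    Agent i sees dimensions (2i, 2i+1), so the four views jointly cover
    all 8 dimensions, matching the coverage property described above.
    """
    state = rng.standard_normal(STATE_DIM)
    views = []
    for i in range(N_AGENTS):
        dims = np.array([2 * i, 2 * i + 1])
        obs = state[dims] + OBS_NOISE * rng.standard_normal(2)
        views.append((dims, obs))
    return state, views
```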
Information Sharing
Agent i chooses a disclosure level d_i ∈ [0, 1]. At d_i = 1, the agent shares its full observation; at d_i = 0, it shares nothing. Partial disclosure adds noise proportional to (1 - d_i). Received information is further degraded by a complementarity noise scaled by (1 - κ), where κ ∈ [0, 1] is the complementarity parameter.
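A minimal sketch of this degradation model, assuming unit-scale Gaussian noise for both terms (the actual noise scales are not specified in the text):

```python
import numpy as np

def shared_signal(obs, d, kappa, rng):
    """Degrade an observation by disclosure level d and complementarity kappa.

    d = 1 shares the full observation; d = 0 shares nothing (None).
    Partial disclosure adds noise proportional to (1 - d); the receiver's
    copy is further degraded by complementarity noise scaled by (1 - kappa).
    """
    if d <= 0:
        return None
    disclosure_noise = (1 - d) * rng.standard_normal(obs.shape)
    complementarity_noise = (1 - kappa) * rng.standard_normal(obs.shape)
    return obs + disclosure_noise + complementarity_noise
```

Note that at d = 1 and κ = 1 both noise terms vanish and the receiver gets the observation exactly.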
Payoffs
Each agent estimates the hidden state using its own observations plus any received shared information. The payoff for agent i is:

\pi_i = -\text{Error}_i - \lambda \cdot \frac{1}{N-1} \sum_{j \neq i} \text{Improvement}_j

where Error_i is agent i's estimation error, Improvement_j is agent j's gain from shared information, and λ is the competition parameter.
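The payoff rule translates directly into a vectorized computation. A sketch, assuming Error and Improvement are given per agent:

```python
import numpy as np

def payoffs(errors, improvements, lam):
    """Payoff_i = -Error_i - lam * mean over j != i of Improvement_j."""
    errors = np.asarray(errors, dtype=float)
    improvements = np.asarray(improvements, dtype=float)
    n = len(errors)
    # Subtracting each agent's own improvement from the total gives the
    # sum over j != i, which we average over the other n - 1 agents.
    others_improvement = (improvements.sum() - improvements) / (n - 1)
    return -errors - lam * others_improvement
```

For example, with errors [1, 1], improvements [0, 2], and λ = 0.5, agent 0 pays the competitive penalty for agent 1's gain (payoff -2.0) while agent 1 does not (payoff -1.0).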
Agent Types
- Open: always discloses d = 1.
- Secretive: always discloses d = 0.
- Reciprocal: tracks an exponential moving average of others' disclosure and discloses that average, starting at 0.5.
- Strategic: uses Experience-Weighted Attraction (EWA) learning over 11 discrete actions d ∈ {0, 0.1, ..., 1}, with an annealing softmax temperature.
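The Strategic agent's action selection can be sketched as a softmax over EWA attractions with a decaying temperature. The annealing schedule below (exponential decay to a floor) is an illustrative assumption; the paper's schedule parameters are not stated.

```python
import numpy as np

ACTIONS = np.linspace(0.0, 1.0, 11)  # 11 discrete disclosure levels

def softmax_probs(attractions, temperature):
    """Action probabilities from EWA attractions via a softmax."""
    logits = np.asarray(attractions, dtype=float) / temperature
    logits -= logits.max()  # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def annealed_temperature(t, t0=1.0, floor=0.05, decay=1e-3):
    """Illustrative annealing: exponential decay from t0 toward a floor."""
    return max(floor, t0 * np.exp(-decay * t))
```

Early rounds (high temperature) give near-uniform exploration over the 11 actions; as the temperature anneals, choice concentrates on the highest-attraction disclosure level.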
Experimental Design
We sweep 4 compositions (all-open, all-secretive, mixed, all-strategic) × 3 competition levels × 3 complementarity levels × 3 seeds = 108 simulations, each running 10,000 rounds. Metrics are computed per-round and aggregated: sharing rate, group welfare (sum of payoffs), individual welfare gap, and information asymmetry (Gini coefficient of estimation errors). Equilibrium behavior is measured over the last 10% of rounds ("tail").
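The information-asymmetry metric is a standard Gini coefficient over per-agent estimation errors. A minimal implementation of that formula:

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative sample (0 = perfect equality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    total = x.sum()
    if total == 0:
        return 0.0
    # Standard sorted-sample formula:
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with i = 1..n
    ranks = np.arange(1, n + 1)
    return 2 * np.sum(ranks * x) / (n * total) - (n + 1) / n
```

Equal errors across agents give G = 0; one agent bearing all the error gives G = (n - 1)/n, the maximum for a sample of size n.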
Results
Sharing and Group Welfare
The table below shows equilibrium sharing rates and group welfare across competition levels for two key compositions at medium complementarity.
Tail sharing rate and group welfare (mean ± std across 3 seeds) at medium complementarity.
| Composition | Competition | Tail Sharing | Group Welfare |
|---|---|---|---|
| All-strategic | Low (λ=0.2) | 0.476 ± 0.009 | -1.03 ± 0.02 |
| All-strategic | Medium (λ=0.5) | 0.400 ± 0.007 | -1.94 ± 0.02 |
| All-strategic | High (λ=0.8) | 0.296 ± 0.017 | -2.67 ± 0.03 |
| Mixed | Low (λ=0.2) | 0.487 ± 0.003 | -1.35 ± 0.04 |
| Mixed | Medium (λ=0.5) | 0.485 ± 0.005 | -1.97 ± 0.04 |
| Mixed | High (λ=0.8) | 0.436 ± 0.015 | -2.65 ± 0.06 |
Strategic agents reduce sharing from 47.6% to 29.6% as competition increases, a gradual decline rather than a sharp phase transition. The cooperation premium is substantial: all-open group welfare exceeds all-secretive by 49.5% at medium competition.
Agent Type Rankings
The table below ranks agent types by average cumulative payoff across all conditions.
Agent type rankings (averaged across all 108 simulations).
| Rank | Type | Avg Cumul. Payoff | Avg Sharing |
|---|---|---|---|
| 1 | Open | -4,862 | 1.000 |
| 2 | Strategic | -4,871 | 0.396 |
| 3 | Secretive | -5,270 | 0.000 |
| 4 | Reciprocal | -5,848 | 0.466 |
The Free-Rider Effect
In mixed-composition simulations, Secretive agents earn the highest individual payoff at medium competition/complementarity, outperforming Open agents despite contributing nothing to the information commons. This is the classic free-rider problem: Secretive agents benefit from others' shared information without paying the competitive cost of disclosure.
Complementarity Effects
Higher information complementarity κ improves group welfare when agents share: all-open welfare is higher at high complementarity than at low. All-secretive welfare is unaffected by complementarity, since no information flows. Strategic agents do not increase sharing at higher complementarity, because the competitive cost of others' improved decisions also increases.
Discussion
No sharp phase transition. Contrary to our initial prediction of a critical competition threshold, strategic agents gradually reduce sharing across the full competition range. This may reflect the EWA learning algorithm's smooth exploration rather than a true equilibrium property. A best-response dynamics model might exhibit sharper transitions.
Open beats Strategic overall. Open agents slightly outperform Strategic agents in aggregate (-4,862 vs. -4,871), suggesting that unconditional cooperation is a viable strategy when averaged across diverse environments. However, this masks the free-rider vulnerability: in mixed populations, Open agents are heavily exploited by Secretive agents.
AI safety implications. As AI systems interact in shared environments (API ecosystems, collaborative robotics, federated learning), the incentive structure for information sharing will shape the AI ecosystem's trajectory. Our results suggest that without mechanisms to punish free-riding (reputation systems, reciprocity norms), hoarding is individually rational even when sharing would improve collective outcomes. Designing information-sharing protocols that align individual and group incentives is an open challenge for multi-agent AI safety.
Limitations. The Gaussian state-estimation environment is stylized. EWA learning may not converge to true Nash equilibria. We evaluate only 4 agent types; richer behavioral models (e.g., theory-of-mind agents) may yield different dynamics. The 10,000-round horizon may be insufficient for convergence in some conditions.
Conclusion
We presented an agent-executable simulation of the information disclosure dilemma among competitive AI agents.
Sharing universally improves group welfare, but competition suppresses it—and free-riders thrive.
Strategic agents partially share, settling between the extremes of full openness and full secrecy.
The key contribution is the reproducible SKILL.md that any AI agent can execute to replicate all 108 simulations and analyses from scratch.
References
[crawford1982strategic] V. P. Crawford and J. Sobel, "Strategic Information Transmission," Econometrica, vol. 50, no. 6, pp. 1431--1451, 1982.
[milgrom1986relying] P. Milgrom and J. Roberts, "Relying on the Information of Interested Parties," The RAND Journal of Economics, vol. 17, no. 1, pp. 18--32, 1986.
[bergemann2019markets] D. Bergemann and A. Bonatti, "Markets for Information: An Introduction," Annual Review of Economics, vol. 11, pp. 85--107, 2019.
Reproducibility: Skill File
Use this skill file to reproduce the research with an AI agent.
---
name: info-sharing-dilemma
description: Simulate strategic information sharing among competitive AI agents. Four agent types (Open, Secretive, Reciprocal, Strategic) compete in a partial-observation environment, choosing how much private information to disclose. Sweeps 4 compositions x 3 competition levels x 3 complementarity levels x 3 seeds = 108 simulations of 10,000 rounds each. Measures sharing equilibria, group welfare, information asymmetry, and phase transitions.
allowed-tools: Bash(python *), Bash(python3 *), Bash(pip *), Bash(.venv/*), Bash(cat *), Read, Write
---
# Strategic Information Sharing Among Competitive AI Agents
This skill simulates the information disclosure dilemma: agents receive partial observations of a hidden state and must decide how much to share. Sharing improves group decisions but gives competitors an advantage. The experiment identifies when sharing norms emerge vs. when hoarding dominates.
## Prerequisites
- Requires **Python 3.10+**. No internet access needed (pure simulation).
- Expected runtime: **2-4 minutes** (108 simulations parallelized across CPU cores).
- All commands must be run from the **submission directory** (`submissions/info-sharing/`).
## Step 0: Get the Code
Clone the repository and navigate to the submission directory:
```bash
git clone https://github.com/davidydu/Claw4S.git
cd Claw4S/submissions/info-sharing/
```
All subsequent commands assume you are in this directory.
## Step 1: Environment Setup
Create a virtual environment and install dependencies:
```bash
python3 -m venv .venv
.venv/bin/pip install --upgrade pip
.venv/bin/pip install -r requirements.txt
```
Verify all packages are installed:
```bash
.venv/bin/python -c "import numpy; print(f'numpy {numpy.__version__} OK')"
```
Expected output: `numpy 2.2.4 OK`
## Step 2: Run Unit Tests
Verify the simulation modules work correctly:
```bash
.venv/bin/python -m pytest tests/ -v
```
Expected: 24 tests pass, exit code 0.
## Step 3: Run the Experiment
Execute the full information-sharing experiment:
```bash
.venv/bin/python run.py
```
Expected: Script prints `[3/3] Results saved to results/results.json` followed by the report, and exits with code 0. Files created: `results/results.json`, `results/analysis.json`, `results/report.md`.
This will:
1. Run 108 simulations (4 compositions x 3 competition x 3 complementarity x 3 seeds)
2. Each simulation: 4 agents play 10,000 rounds of the information-sharing game
3. Compute per-round metrics: sharing rate, group welfare, welfare gap, information asymmetry
4. Aggregate across seeds, identify phase transitions, rank agent types
5. Save raw results, statistical analysis, and a Markdown report
## Step 4: Validate Results
Check that results were produced correctly:
```bash
.venv/bin/python validate.py
```
Expected: Prints simulation counts, sanity checks, and `Validation passed.`
## Step 5: Review the Report
Read the generated report:
```bash
cat results/report.md
```
The report contains:
- Agent type rankings by cumulative payoff
- Sharing rates by experimental condition (tail equilibrium, mean +/- std)
- Phase transition analysis across competition levels
- Key findings summary
## How to Extend
- **Add an agent type:** Subclass `Agent` in `src/agents.py`, register in `AGENT_TYPES`.
- **Change agent count:** Modify `n_agents` in `EnvConfig` (in `src/environment.py`) and adjust compositions in `src/experiment.py`.
- **Change competition/complementarity grid:** Edit `COMPETITION_LEVELS` and `COMPLEMENTARITY_LEVELS` in `src/experiment.py`.
- **Change round count:** Edit `N_ROUNDS` in `src/experiment.py`.
- **Change state dimensionality:** Modify `state_dim` in `EnvConfig`.
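As a sketch of the first extension point, a new agent type might look like the following. The `Agent` base class here is a stand-in with an assumed interface; the real class in `src/agents.py` (and the `AGENT_TYPES` registry) may differ.

```python
# Hypothetical sketch -- the real Agent base class in src/agents.py
# may have a different interface.
class Agent:
    """Stand-in for the repository's Agent base class (assumed interface)."""
    def choose_disclosure(self, peer_disclosures):
        raise NotImplementedError

class TitForTat(Agent):
    """Discloses fully while every peer disclosed at least 0.5 last round."""
    def choose_disclosure(self, peer_disclosures):
        if not peer_disclosures:  # first round: start cooperative
            return 1.0
        return 1.0 if min(peer_disclosures) >= 0.5 else 0.0

# Registration would then look something like:
# AGENT_TYPES["titfortat"] = TitForTat
```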