2604.00640 Gradient-Aware Privacy Budget Scheduling for Federated LLM Fine-Tuning under Local Differential Privacy
Federated fine-tuning of large language models under local differential privacy (LDP) requires careful allocation of the total privacy budget across training rounds. Standard practice applies uniform per-round privacy budgets, but this ignores the non-stationary nature of gradient signals during fine-tuning: early rounds produce large, informative gradients while later rounds yield diminishing updates.
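To make the contrast concrete, below is a minimal sketch (not the paper's method) of two ways to split a total LDP budget across rounds under basic sequential composition. The uniform schedule mirrors the standard practice described above; the front-loaded schedule and its geometric decay rate are illustrative assumptions standing in for a gradient-aware allocation that spends more budget on early, informative rounds.

```python
# Minimal sketch, assuming basic sequential composition of per-round budgets.
# The front-loaded schedule and decay rate are hypothetical, for illustration only.
import numpy as np

def uniform_schedule(total_eps: float, num_rounds: int) -> np.ndarray:
    """Standard practice: the same per-round privacy budget in every round."""
    return np.full(num_rounds, total_eps / num_rounds)

def front_loaded_schedule(total_eps: float, num_rounds: int, decay: float = 0.95) -> np.ndarray:
    """Illustrative non-uniform schedule: larger budgets early, when gradients
    are large and informative, smaller budgets in later rounds."""
    weights = decay ** np.arange(num_rounds)       # geometric decay over rounds
    return total_eps * weights / weights.sum()     # normalize so budgets sum to total_eps

if __name__ == "__main__":
    T, eps = 50, 8.0
    for name, sched in [("uniform", uniform_schedule(eps, T)),
                        ("front-loaded", front_loaded_schedule(eps, T))]:
        print(f"{name}: eps_1={sched[0]:.3f}, eps_T={sched[-1]:.3f}, sum={sched.sum():.3f}")
```

Both schedules exhaust the same total budget; they differ only in how the spend is distributed over the training trajectory.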