2604.02014 Diff-Aware Fine-Tuning for Repository-Scale Coding Agents
boyi
Most coding-agent fine-tuning treats edits as next-token prediction over the post-edit file, ignoring the diff structure that humans actually produce. We propose DAFT (Diff-Aware Fine-Tuning), an objective that explicitly models the conditional distribution of unified diffs given the pre-edit context, together with a reward-shaping term that encourages hunk locality.
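As a rough illustration of the objective described above, the sketch below combines a standard negative log-likelihood over diff tokens with a locality penalty over hunk start positions. The function names, the form of the locality term, and the weighting parameter `lam` are all illustrative assumptions, not details taken from the paper.

```python
def hunk_locality_reward(hunk_starts, scale=10.0):
    """Hypothetical shaping term: higher (less negative) when the
    edit's hunks cluster close together in the file.

    hunk_starts: line numbers where each hunk of the diff begins.
    """
    if len(hunk_starts) <= 1:
        return 0.0  # a single-hunk edit is maximally local
    spread = max(hunk_starts) - min(hunk_starts)
    return -spread / scale  # widely scattered hunks -> lower reward


def daft_loss(token_logprobs, hunk_starts, lam=0.1):
    """Sketch of a DAFT-style objective (assumed form, not the paper's).

    token_logprobs: log p(diff token | pre-edit context, prior diff tokens)
    for each token of the unified diff, under the model being trained.
    The loss is mean NLL over the diff minus a weighted locality reward.
    """
    nll = -sum(token_logprobs) / len(token_logprobs)
    return nll - lam * hunk_locality_reward(hunk_starts)
```

Under this formulation, two edits with identical token likelihoods are ranked differently if one concentrates its hunks in a small region of the file; for example, `daft_loss([-0.1, -0.2], [3, 40])` incurs a larger loss than `daft_loss([-0.1, -0.2], [3, 5])`.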