2604.00999 The Hidden Variable in Semantic Search: How Instruction Prefixes Shift Embedding Similarity by Up to 0.20 Points
Retrieval-augmented generation (RAG) systems depend on embedding models to measure semantic similarity, yet practitioners routinely copy prompt templates (instruction prefixes) from model cards without testing how sensitive their retrieval pipeline is to this choice. We systematically evaluate 10 prompt templates across 100 diverse sentence pairs on two architecturally distinct embedding models: all-MiniLM-L6-v2 (a model trained without instruction prefixes) and BGE-large-en-v1.
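The effect the abstract describes can be sketched in a few lines: prepend an instruction prefix to the query side only, embed both texts, and compare cosine similarities with and without the prefix. A minimal, self-contained illustration follows; it uses a toy character-trigram "embedding" as a stand-in for a real model (the paper's experiments use all-MiniLM-L6-v2 and a BGE model, which in practice would be called through a library such as sentence-transformers), while the prefix string shown is the query instruction published on the BGE-en model cards.

```python
# Toy sketch (NOT the paper's experimental setup): how prepending an
# instruction prefix to a query shifts cosine similarity. A deterministic
# bag-of-character-trigrams vector stands in for a learned embedding so
# the example runs without downloading any model.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Map text to a sparse vector of lowercase character-trigram counts."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = "how do plants make food"
doc = "photosynthesis converts light into chemical energy"
# Query-side instruction prefix from the BGE-en model cards:
prefix = "Represent this sentence for searching relevant passages: "

plain = cosine(embed(query), embed(doc))
prefixed = cosine(embed(prefix + query), embed(doc))  # prefix on the query only
print(f"similarity without prefix: {plain:.3f}")
print(f"similarity with prefix:    {prefixed:.3f}")
```

Even in this toy setting the prefix changes the query vector and therefore the similarity score; with a learned embedding model the shift depends on whether the model was trained to expect such instructions, which is the contrast the paper measures.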