Prefix-tuning
Prefix-tuning is a parameter-efficient fine-tuning method for large pretrained language models. It trains a small set of task-specific, continuous prefix embeddings while keeping the base model frozen. The learned prefixes act as soft prompts that condition the model's behavior without updating its weights.
In practice, the method inserts trainable prefix vectors into the model's attention mechanisms across layers. For each layer, a short sequence of prefix key and value vectors is prepended to the attention keys and values, so every token can attend to the learned prefix; only these prefix parameters are optimized (often through a small reparameterization network during training for stability), as sketched below.
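The following PyTorch module is a minimal, illustrative sketch of this mechanism, not the original implementation: the class name, dimensions, and prefix length are assumptions chosen for readability. It prepends trainable prefix keys and values to a frozen self-attention layer so that queries can attend to the learned prefix positions.

```python
# Minimal sketch: prefix key/value vectors prepended to a frozen attention layer.
# Names, sizes, and prefix length are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefixedSelfAttention(nn.Module):
    def __init__(self, d_model=768, n_heads=12, prefix_len=20):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        # Base-model projections stay frozen; they receive no gradients.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        for m in (self.q_proj, self.k_proj, self.v_proj):
            m.requires_grad_(False)
        # Trainable prefix: one key and one value vector per head per prefix slot.
        self.prefix_k = nn.Parameter(torch.randn(n_heads, prefix_len, self.d_head) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(n_heads, prefix_len, self.d_head) * 0.02)

    def forward(self, x):
        b, t, _ = x.shape
        def split(proj):
            # (batch, seq, d_model) -> (batch, heads, seq, d_head)
            return proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = split(self.q_proj), split(self.k_proj), split(self.v_proj)
        # Prepend the learned prefix keys/values so every query attends to them.
        pk = self.prefix_k.unsqueeze(0).expand(b, -1, -1, -1)
        pv = self.prefix_v.unsqueeze(0).expand(b, -1, -1, -1)
        k = torch.cat([pk, k], dim=2)
        v = torch.cat([pv, v], dim=2)
        attn = F.scaled_dot_product_attention(q, k, v)
        return attn.transpose(1, 2).reshape(b, t, -1)

# Usage: only prefix_k / prefix_v would receive gradients during training.
layer = PrefixedSelfAttention()
out = layer(torch.randn(2, 16, 768))   # -> shape (2, 16, 768)
```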
Advantages of prefix-tuning include parameter efficiency, fast adaptation, and the ability to reuse a single large frozen base model across many tasks, since each task only requires storing its own small set of prefix parameters.
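As a hedged usage sketch, the Hugging Face peft library exposes prefix-tuning through PrefixTuningConfig and get_peft_model; the base model name and prefix length below are placeholders, not recommendations.

```python
# Sketch of attaching a trainable prefix to a frozen backbone with the `peft` library.
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")          # shared frozen backbone
config = PrefixTuningConfig(task_type=TaskType.CAUSAL_LM,
                            num_virtual_tokens=20)           # prefix length
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically a small fraction of total parameters
```

Because only the prefix weights are saved per task, many tasks can share a single copy of the backbone and differ only in which prefix is loaded at inference time.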
Limitations include potential reductions in performance on tasks requiring substantial model reorganization, sensitivity to the choice of prefix length and initialization, and the fact that the prefix consumes part of the model's usable context window; in some settings it underperforms full fine-tuning.