Delta tuning
Delta tuning is a technique in machine learning, specifically within the realm of large language models (LLMs), for efficiently adapting pre-trained models to new tasks or datasets. It is a form of parameter-efficient fine-tuning (PEFT). The core idea behind delta tuning is to freeze most of the parameters of the pre-trained LLM and train only a small set of additional, newly introduced parameters. These new parameters are often referred to as "delta" parameters because they represent the difference, or change, from the original model's weights.
Instead of updating the entire model, delta tuning focuses on learning these delta weights, which are then applied on top of the frozen pre-trained weights to produce the adapted model's behavior.
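As a concrete illustration, below is a minimal sketch of one common delta-tuning variant: a LoRA-style low-rank delta applied to a single linear layer in PyTorch. The class name `DeltaLinear` and the chosen rank are illustrative assumptions for this sketch, not part of any particular library's API.

```python
import torch
import torch.nn as nn

class DeltaLinear(nn.Module):
    """Hypothetical sketch: a frozen pre-trained linear layer plus a
    trainable low-rank delta, so that W_effective = W + B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; they receive no gradients.
        for p in self.base.parameters():
            p.requires_grad = False
        # Trainable delta parameters: B @ A is the learned change to W.
        # B starts at zero so the delta is initially a no-op.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the learned delta applied on top.
        return self.base(x) + x @ self.A.T @ self.B.T

layer = DeltaLinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")
```

With a rank of 8 on a 768x768 layer, only about 12,000 of the roughly 590,000 parameters are trainable, which illustrates why training just the delta weights is far cheaper than full fine-tuning.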