
Transfer learning

Transfer learning is a machine learning technique where knowledge gained while solving one problem is applied to a different but related problem. It leverages information from a source domain and task to improve learning in a target domain and task, typically when the target data are limited or costly to obtain.

There are several variants. Inductive transfer learning uses labeled data in the target task; transductive transfer learning involves the same task but different domains; unsupervised transfer learning transfers knowledge between unsupervised tasks. Domain adaptation is a common form, aiming to minimize differences between the source and target domains.

In practice, transfer learning often involves pretraining a model on a large source dataset and transferring the learned representations to the target task. In deep learning, this typically takes the form of feature extraction or fine-tuning of pretrained networks. Common examples include convolutional networks pretrained on ImageNet for new vision tasks, or transformer models pretrained on large text corpora for NLP tasks.

Challenges include domain shift, negative transfer (when source knowledge harms target performance), and differences in label spaces. Mitigation techniques include freezing layers, selective fine-tuning, regularization, data augmentation, and domain adaptation strategies.

Applications span computer vision, natural language processing, speech recognition, and biomedical data.
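The pretrain-then-fine-tune pattern above, including the layer-freezing mitigation, can be sketched in miniature. This is a toy illustration in plain Python, not any library's API: the `train` and `predict` helpers and the synthetic source/target tasks are made up for the example. A linear model is pretrained on a plentiful source task, then its weights are frozen and only the bias is fine-tuned on three target samples:

```python
import random

def predict(w, b, x):
    # Linear model: w . x + b
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def train(data, w, b, lr=0.05, epochs=500, freeze_weights=False):
    # Per-sample gradient descent on squared error.
    # With freeze_weights=True, only the bias is updated (transfer step).
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y
            if not freeze_weights:
                w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

random.seed(0)
# Source task: plentiful data from y = 2*x1 + 3*x2.
source = [((x1, x2), 2 * x1 + 3 * x2)
          for x1, x2 in ((random.uniform(-1, 1), random.uniform(-1, 1))
                         for _ in range(200))]
# Pretrain from scratch on the source task.
w, b = train(source, [0.0, 0.0], 0.0)

# Target task: same weights but a shifted output (y = 2*x1 + 3*x2 + 5),
# with only three labeled samples available.
target = [((0.1, 0.2), 2 * 0.1 + 3 * 0.2 + 5),
          ((-0.3, 0.4), 2 * -0.3 + 3 * 0.4 + 5),
          ((0.5, -0.1), 2 * 0.5 + 3 * -0.1 + 5)]
# Transfer: reuse and freeze the pretrained weights, fine-tune only the bias.
w_t, b_t = train(target, list(w), b, freeze_weights=True)
print(b_t)  # learned offset, close to 5
```

Because the pretrained weights already capture the shared structure, three target samples suffice to recover the new offset; training the same model on the target data alone would have too few samples to pin down all three parameters reliably.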