Hypernetworks
Hypernetworks are neural networks whose primary purpose is to generate the parameters of another network. In a typical configuration, a hypernetwork H takes a conditioning input — for example a task descriptor, style vector, or context — and outputs the weights and biases of a separate target network N. During inference, N uses the generated parameters to compute its outputs, effectively becoming a dynamic or context-dependent model.
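As an illustration, here is a minimal sketch in PyTorch (the class names, dimensions, and one-layer target are assumptions made for the example, not a specific published design): a hypernetwork H maps a conditioning vector to the flattened weight and bias of a single linear target layer, and the target N is applied functionally with those generated parameters, owning no parameters of its own.

```python
# Minimal sketch (assumed names and shapes): a hypernetwork that maps a
# conditioning vector to the weight and bias of a one-layer target network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNet(nn.Module):
    def __init__(self, cond_dim, target_in, target_out, hidden=64):
        super().__init__()
        self.target_in, self.target_out = target_in, target_out
        # H: conditioning input -> flattened parameters of N
        self.body = nn.Sequential(
            nn.Linear(cond_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, target_out * target_in + target_out),
        )

    def forward(self, cond):
        params = self.body(cond)                       # shape (num_params,) for an unbatched cond
        w = params[: self.target_out * self.target_in]
        b = params[self.target_out * self.target_in:]
        return w.view(self.target_out, self.target_in), b

def target_forward(x, weight, bias):
    # N applies the generated parameters; it holds no parameters itself.
    return F.linear(x, weight, bias)

# Usage: generate weights from a task/style/context vector, then run N.
hyper = HyperNet(cond_dim=8, target_in=16, target_out=4)
cond = torch.randn(8)        # e.g. a task descriptor
x = torch.randn(32, 16)      # a batch of inputs to N
w, b = hyper(cond)
y = target_forward(x, w, b)  # shape (32, 4)
```

Only the hypernetwork's parameters are trained here; gradients flow through the generated weights back into H.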
Implementation variations include generating all weights of N, generating only a subset (such as a single layer or block) while the remaining parameters of N are trained directly, or generating compact factors (such as low-rank matrices or per-layer scaling vectors) that modulate a fixed base network.
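A sketch of the subset variation, under the same assumptions as above (PyTorch, hypothetical class names and dimensions): only the final layer of N is produced by the hypernetwork, while the earlier layers remain ordinary, directly trained modules.

```python
# Sketch of the "subset" variation (assumed structure): only the last layer of N
# is generated from the conditioning input; earlier layers are trained directly.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartiallyGeneratedNet(nn.Module):
    def __init__(self, in_dim=16, hidden=32, out_dim=4, cond_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # The hypernetwork only produces the final layer's weight matrix and bias.
        self.hyper = nn.Linear(cond_dim, out_dim * hidden + out_dim)
        self.hidden, self.out_dim = hidden, out_dim

    def forward(self, x, cond):
        h = self.backbone(x)                        # ordinary, directly trained features
        p = self.hyper(cond)
        w = p[: self.out_dim * self.hidden].view(self.out_dim, self.hidden)
        b = p[self.out_dim * self.hidden:]
        return F.linear(h, w, b)                    # generated final layer

net = PartiallyGeneratedNet()
y = net(torch.randn(32, 16), torch.randn(8))        # shape (32, 4)
```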
Origin and use cases: the concept was introduced in 2016 by Ha, Dai, and Le as a method in which one network generates the weights of another, demonstrated on recurrent and convolutional models; hypernetworks have since been applied in areas such as multi-task learning, continual learning, and neural architecture search.
Advantages include parameter sharing and dynamic specialization: a single hypernetwork plus a small conditioning vector per task can stand in for a separate full parameter set for each task.
Related concepts include dynamic or conditional computation and other meta-learning approaches that adapt model parameters on the fly based on the input or task.