iMAML
iMAML, short for implicit Model-Agnostic Meta-Learning, is a gradient-based meta-learning method that aims to improve the efficiency and stability of fast adaptation to new tasks. It builds on the MAML framework, but instead of differentiating through every inner-loop optimization step, it uses the implicit function theorem to compute meta-gradients. This approach removes the memory and computational burden associated with long inner-loop unrolls, since intermediate inner-loop iterates no longer need to be stored.
Conceptually, iMAML treats the inner optimization as defining an optimal point w(θ) for each task, where w(θ) minimizes the task loss regularized toward the meta-parameters θ. Because w(θ) is characterized by a stationarity condition rather than by a particular optimization trajectory, the implicit function theorem yields the meta-gradient directly from that condition. The resulting expression involves the inverse of a regularized Hessian evaluated at the solution, but no derivatives of the individual inner-loop updates.
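This view can be made concrete with the regularized inner problem used in the original iMAML formulation; the symbols below (task loss L_task, regularization strength λ) follow that convention:

```latex
% Regularized inner problem defining the per-task solution
w^{\star}(\theta) \;=\; \arg\min_{w} \; \mathcal{L}_{\mathrm{task}}(w) \;+\; \frac{\lambda}{2}\,\lVert w - \theta \rVert^{2}

% Differentiating the stationarity condition
% \nabla_w \mathcal{L}_{\mathrm{task}}(w^{\star}) + \lambda (w^{\star} - \theta) = 0
% via the implicit function theorem gives the Jacobian of the inner solution:
\frac{d\,w^{\star}}{d\theta} \;=\; \left( I \;+\; \tfrac{1}{\lambda}\,\nabla^{2}_{w}\,\mathcal{L}_{\mathrm{task}}\bigl(w^{\star}\bigr) \right)^{-1}
```

The meta-gradient of an outer (test) loss evaluated at w⋆ is then obtained by applying this inverse to the outer gradient, which is exactly the linear system that must be solved in practice.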
Practically, the inverse Hessian term is approximated using techniques such as the conjugate gradient method combined with Hessian-vector products, which require only a constant amount of memory regardless of the number of inner-loop steps and never form the Hessian explicitly. This makes iMAML practical for inner loops with many gradient steps, where standard MAML's unrolled differentiation would be prohibitively expensive.
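The approximation step can be illustrated with a minimal NumPy sketch on a toy quadratic task loss, where the Hessian is a known matrix A so the conjugate-gradient result can be checked against a direct solve. The function names (`conjugate_gradient`, `hvp`) and the quadratic setup are illustrative assumptions, not part of the original method description:

```python
import numpy as np

def conjugate_gradient(matvec, g, iters=50, tol=1e-10):
    """Solve matvec(x) = g for SPD matvec, using only matrix-vector products."""
    x = np.zeros_like(g)
    r = g - matvec(x)          # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)        # SPD Hessian of the toy task loss
b_train = rng.standard_normal(d)   # toy train loss: 0.5 w^T A w - b_train^T w
b_test = rng.standard_normal(d)    # toy test loss:  0.5 w^T A w - b_test^T w
lam = 10.0                         # regularization strength lambda
theta = rng.standard_normal(d)     # meta-parameters

# Inner solution w*(theta) of L_train(w) + (lam/2) ||w - theta||^2
# (closed form here because the toy loss is quadratic)
w_star = np.linalg.solve(A + lam * np.eye(d), b_train + lam * theta)

# Outer gradient of the test loss at w*
g = A @ w_star - b_test

def hvp(v):
    # (I + H/lam) v, using only a Hessian-vector product with H = A
    return v + (A @ v) / lam

# Meta-gradient via conjugate gradient, versus the explicit inverse
meta_grad = conjugate_gradient(hvp, g)
exact = np.linalg.solve(np.eye(d) + A / lam, g)
```

Because conjugate gradient only ever calls `hvp`, a real implementation can supply the Hessian-vector product via automatic differentiation of the task loss, keeping memory independent of the inner-loop length.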
iMAML is model-agnostic and applicable to various domains, including supervised few-shot learning and reinforcement learning, wherever the inner-loop adaptation can be posed as (approximately) solving a regularized optimization problem.