Quasi-Newton methods
Quasi-Newton methods are a family of iterative methods for unconstrained optimization that aim to minimize a differentiable function by updating an approximation to the Hessian matrix of second-order partial derivatives, or to its inverse. The core idea is to use gradient information from successive iterates to progressively improve a curvature estimate, avoiding the cost of computing the exact Hessian at every step.
In a typical iteration, the current point x_k has gradient g_k = ∇f(x_k). A search direction p_k is obtained by solving B_k p_k = −g_k, where B_k is the current Hessian approximation, or directly as p_k = −H_k g_k when the inverse approximation H_k is maintained. A line search along p_k then produces the next iterate x_{k+1} = x_k + α_k p_k, and the displacement s_k = x_{k+1} − x_k together with the gradient change y_k = g_{k+1} − g_k is used to update the approximation so that it satisfies the secant equation B_{k+1} s_k = y_k.
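A minimal sketch of one such iteration, assuming the inverse approximation H_k is maintained and using a simple Armijo backtracking line search (the helper names here are illustrative, not taken from any particular library):

```python
import numpy as np

def backtracking_line_search(f, x, g, p, alpha=1.0, shrink=0.5, c=1e-4):
    """Armijo backtracking: shrink alpha until sufficient decrease holds."""
    while f(x + alpha * p) > f(x) + c * alpha * (g @ p):
        alpha *= shrink
    return alpha

def quasi_newton_step(f, grad, x, H):
    """One quasi-Newton iteration with an inverse-Hessian approximation H."""
    g = grad(x)
    p = -H @ g                                  # direction p_k = -H_k g_k
    alpha = backtracking_line_search(f, x, g, p)
    x_new = x + alpha * p                       # x_{k+1} = x_k + alpha_k p_k
    s = x_new - x                               # displacement s_k
    y = grad(x_new) - g                         # gradient change y_k
    return x_new, s, y                          # (s, y) feed the curvature update
```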
Two prominent update formulas are the DFP (Davidon-Fletcher-Powell) update and the BFGS (Broyden-Fletcher-Goldfarb-Shanno) update; the latter is generally preferred in practice because of its robustness with inexact line searches. Both are rank-two corrections that satisfy the secant equation. In its inverse form the BFGS update reads H_{k+1} = (I − ρ_k s_k y_k^T) H_k (I − ρ_k y_k s_k^T) + ρ_k s_k s_k^T, where ρ_k = 1/(y_k^T s_k).
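A sketch of the inverse BFGS formula above, with s, y, H standing for s_k, y_k, H_k from the iteration sketch; it assumes the curvature condition y^T s > 0 holds (a line search satisfying the Wolfe conditions guarantees this):

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS rank-two update of the inverse-Hessian approximation H.

    Implements H_{k+1} = (I - rho s y^T) H (I - rho y s^T) + rho s s^T
    with rho = 1 / (y^T s); requires the curvature condition y^T s > 0.
    """
    rho = 1.0 / (y @ s)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)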
Quasi-Newton methods often exhibit superlinear convergence under standard assumptions and are widely used in scientific computing and machine learning.
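In practice one typically calls an existing implementation rather than coding the update by hand; as an illustration, SciPy's generic minimizer exposes BFGS (the Rosenbrock test function and its gradient ship with scipy.optimize):

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])
result = minimize(rosen, x0, method='BFGS', jac=rosen_der)
print(result.x)  # converges to the minimizer [1.0, 1.0]
```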