Fehlergradienten
Fehlergradienten (German for "error gradients") are the gradients of an error or loss function with respect to a model's parameters, and they lie at the heart of gradient descent, a fundamental optimization algorithm used widely in machine learning and deep learning. The purpose of gradient descent is to find a minimum of a function, typically a loss function, by iteratively moving in the direction of steepest descent. The "gradient" is the vector of partial derivatives of the function with respect to its parameters, and it points in the direction of greatest increase of the function. Therefore, by moving in the opposite direction of the gradient, we can effectively decrease the function's value.
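As a minimal numeric illustration of this idea (the function f(x) = x² and all names here are hypothetical, chosen only for the example): the gradient of x² is 2x, so stepping against the gradient from any nonzero point decreases the function's value.

```python
def f(x):
    # A simple convex function with its minimum at x = 0 (assumed for illustration).
    return x ** 2

def grad_f(x):
    # Derivative of x**2; this is the "gradient" in one dimension.
    return 2 * x

x = 3.0
step = 0.1
x_new = x - step * grad_f(x)  # move opposite the gradient
# f(x_new) is smaller than f(x): the step against the gradient reduced the value.
```

Here f(3.0) = 9.0, while after one step x_new = 2.4 and f(2.4) = 5.76, confirming that a step against the gradient lowers the function's value.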
The algorithm starts with an initial set of parameters. In each iteration, the gradient of the loss function with respect to those parameters is computed, and each parameter is updated by taking a small step, scaled by a learning rate, in the direction opposite its gradient. This process repeats until the loss converges or a maximum number of iterations is reached.
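The iteration described above can be sketched as a short loop. This is a one-dimensional sketch under assumed names (the function, learning rate, and iteration count are illustrative, not taken from the text):

```python
def gradient_descent(grad, x0, lr=0.1, n_iters=100):
    """Repeatedly step against the gradient of a 1-D function.

    grad: function returning the derivative at a point
    x0: initial parameter value
    lr: learning rate (step size)
    """
    x = x0
    for _ in range(n_iters):
        x = x - lr * grad(x)  # update rule: move opposite the gradient
    return x

# Minimize f(x) = (x - 4)**2, whose gradient is 2 * (x - 4).
x_min = gradient_descent(lambda x: 2 * (x - 4), x0=0.0)
```

With these settings each step multiplies the distance to the true minimizer 4.0 by 0.8, so after 100 iterations x_min is numerically indistinguishable from 4.0.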
Fehlergradienten can be applied in various forms, such as batch gradient descent, where the gradient is computed over the entire training set; stochastic gradient descent, where it is estimated from a single randomly chosen example; and mini-batch gradient descent, which averages the gradient over small subsets of the data, trading some noise in the estimate for faster updates.
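The mini-batch variant can be sketched as follows for a toy one-parameter least-squares fit. Everything here (the data, model y ≈ w·x, learning rate, and batch size) is an assumed illustration, not a prescribed implementation:

```python
import random

def sgd_minibatch(data, w0, lr=0.05, batch_size=2, epochs=200):
    """Mini-batch gradient descent for fitting y ≈ w * x by least squares.

    The per-example loss is (w*x - y)**2, whose gradient in w is
    2 * x * (w*x - y); each update averages this over one mini-batch.
    """
    w = w0
    for _ in range(epochs):
        random.shuffle(data)  # fresh batch composition each epoch
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            g = sum(2 * x * (w * x - y) for x, y in batch) / len(batch)
            w -= lr * g
    return w

# Noise-free toy data generated from y = 3 * x (assumed for illustration).
points = [(x, 3 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w_fit = sgd_minibatch(points, w0=0.0)
```

Because the toy data are noise-free, every mini-batch update contracts w toward the generating value 3.0, so w_fit converges to it; with real, noisy data the estimates would fluctuate around the minimizer instead.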