Hyperparameters
Hyperparameters are configuration settings that influence the behavior of a learning algorithm but are not learned from data during training. Unlike model parameters, which are updated by the optimization procedure, hyperparameters are specified before training and remain fixed throughout the process. Common examples include the learning rate, the number of hidden layers in a neural network, tree depth in decision trees, batch size, regularization coefficients, and the number of clusters in clustering algorithms.
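As a minimal sketch of this distinction, the snippet below configures several of the hyperparameters named above for a scikit-learn multilayer perceptron; the specific values are illustrative assumptions, not recommendations. The weights and biases (the model parameters) are learned only when fit is called.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Hyperparameters: chosen before training and held fixed throughout.
# These values are arbitrary examples for illustration.
clf = MLPClassifier(
    hidden_layer_sizes=(64, 32),  # number and width of hidden layers
    learning_rate_init=0.01,      # learning rate for the optimizer
    batch_size=32,                # mini-batch size
    alpha=1e-4,                   # L2 regularization coefficient
    max_iter=200,
    random_state=0,
)

# Model parameters (the network's weights and biases) are learned here,
# from the data, by the optimization procedure.
X, y = make_classification(n_samples=500, random_state=0)
clf.fit(X, y)
```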
The concept of hyperparameters emerged with early machine learning methods in the 1980s and has become central to modern machine learning practice.
Typical hyperparameter selection strategies include manual tuning, grid search, random search, Bayesian optimization, gradient-based approaches, and evolutionary algorithms; a grid-search sketch follows below.
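To illustrate one of these strategies, the sketch below runs an exhaustive grid search over two decision-tree hyperparameters using scikit-learn's GridSearchCV; the parameter grid, dataset, and cross-validation settings are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Candidate values for two hyperparameters; grid search evaluates
# every combination via cross-validation.
param_grid = {
    "max_depth": [2, 4, 8, None],
    "min_samples_leaf": [1, 5, 10],
}

search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid,
    cv=5,                 # 5-fold cross-validation per combination
    scoring="accuracy",
)
search.fit(X, y)

# Best combination found and its cross-validated score.
print(search.best_params_, search.best_score_)
```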
Successfully tuned hyperparameters can yield significant improvements in accuracy, convergence speed, and generalization. Conversely, poorly chosen hyperparameters can lead to underfitting, overfitting, or slow and unstable training.