Data Poisoning (Adatmérgezés)
Adatmérgezés, Hungarian for data poisoning, is a type of attack on machine learning models in which malicious actors inject corrupted or misleading samples into the training dataset. The goal is to compromise the integrity and performance of the resulting model. This can manifest in several ways: causing the model to misclassify specific inputs (a targeted attack), degrading its overall accuracy (an availability attack), or rendering it unusable altogether.
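The mechanics of a targeted poisoning attack can be sketched with a toy example. The snippet below, a minimal illustration using made-up one-dimensional data and a nearest-centroid classifier (not any particular real-world system), shows how a few injected points with wrong labels can drag a class centroid toward the decision boundary and flip the prediction for a chosen input:

```python
# Sketch of a label-poisoning attack on a minimal nearest-centroid
# classifier. All data values are synthetic and chosen for illustration.

def centroids(samples):
    """Mean feature value per class label."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(x, cents):
    """Assign x to the class with the nearest centroid."""
    return min(cents, key=lambda label: abs(x - cents[label]))

# Clean training set: class 0 clusters near 0, class 1 near 10.
clean = [(0.0, 0), (1.0, 0), (2.0, 0), (9.0, 1), (10.0, 1), (11.0, 1)]

# The attacker injects points that sit in class-0 territory but carry
# label 1, pulling the class-1 centroid toward the decision boundary.
poison = [(0.0, 1), (0.0, 1), (0.0, 1)]
poisoned = clean + poison

x = 4.0  # a borderline input the attacker wants misclassified
print(classify(x, centroids(clean)))     # clean model → 0
print(classify(x, centroids(poisoned)))  # poisoned model → 1
```

With the clean data the centroids sit at 1.0 and 10.0, so the input 4.0 falls on the class-0 side; after poisoning, the class-1 centroid shifts to 5.0 and the same input is misclassified as class 1.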
The effectiveness of adatmérgezés depends on the attacker's knowledge of the model's architecture, training process, and training data. A white-box attacker with full knowledge can craft carefully optimized poisoned samples, whereas a black-box attacker must rely on injecting broadly misleading data and observing the model's behavior from the outside.
Preventing adatmérgezés involves robust data validation and sanitization techniques. These can include anomaly detection to identify samples that deviate from the expected data distribution, provenance tracking to vet data sources, and robust training methods that limit the influence of any single sample.
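One simple anomaly-detection heuristic along these lines is to flag training points that lie far from the median of their labeled class; the median is used rather than the mean because it is less sensitive to a small fraction of poisoned points. The sketch below uses an illustrative threshold and synthetic data, and is a heuristic example rather than a complete defense:

```python
# Minimal sanitization sketch: drop training points whose feature value is
# far from the median of their labeled class. Data and the threshold of
# 3.0 are illustrative assumptions.

def class_medians(samples):
    """Median feature value per class label."""
    by_label = {}
    for x, label in samples:
        by_label.setdefault(label, []).append(x)
    meds = {}
    for label, xs in by_label.items():
        xs, n = sorted(xs), len(xs)
        meds[label] = xs[n // 2] if n % 2 else (xs[n // 2 - 1] + xs[n // 2]) / 2
    return meds

def sanitize(samples, threshold=3.0):
    """Keep only points within `threshold` of their class median."""
    meds = class_medians(samples)
    return [(x, label) for x, label in samples
            if abs(x - meds[label]) <= threshold]

# Training data with one injected point labeled 1 but located deep in
# class-0 territory.
tainted = [(0.0, 0), (1.0, 0), (2.0, 0),
           (9.0, 1), (10.0, 1), (11.0, 1), (0.0, 1)]
print(sanitize(tainted))  # the outlier (0.0, 1) is removed
```

This kind of filter only works when poisoned points are statistical outliers relative to their class; attacks that place poison close to the legitimate distribution require stronger defenses such as robust training.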