Shadowrobustness
Shadowrobustness is a concept related to the resilience of machine learning models, particularly under adversarial attack. It refers to a model's ability to maintain its accuracy even when faced with inputs that have been subtly manipulated to mislead it. Such a manipulation is known as an "adversarial perturbation," and it can be small enough to be imperceptible to humans.
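For concreteness, here is a minimal sketch of one standard way such a perturbation is generated, the Fast Gradient Sign Method (FGSM), written in PyTorch. The function and parameter names (fgsm_perturb, model, x, y, epsilon) are illustrative, and this is only one of many perturbation methods, not one specific to shadowrobustness:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Create an adversarial example with the Fast Gradient Sign Method.

    The perturbation is bounded by `epsilon` in the L-infinity norm, so it
    can stay visually imperceptible while still degrading the prediction.
    """
    # Track gradients with respect to the input, not the model weights.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the result a valid image (pixel values in [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()
```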
The "shadow" in shadowrobustness alludes to the hidden nature of these manipulations: a shadowrobust model is one that resists such concealed, or "shadow," attacks.
Research into shadowrobustness explores various techniques to fortify models. These include adversarial training, where models are deliberately exposed to adversarial examples during training so that they learn to classify perturbed inputs correctly.
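As a rough illustration, one adversarial-training step might look like the following sketch, which reuses the fgsm_perturb helper above. The single-step structure and all names are assumptions for illustration, not a reference implementation:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """Run one training step on adversarially perturbed inputs."""
    model.train()
    # Generate perturbed inputs from the current model state.
    x_adv = fgsm_perturb(model, x, y, epsilon)
    # Clear gradients accumulated while crafting the perturbation.
    optimizer.zero_grad()
    # Fit the model to the perturbed batch so it learns to resist it.
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Many variants follow this same pattern, for example mixing clean and perturbed batches or replacing FGSM with a stronger multi-step attack such as PGD.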