reflejabais
Reflejabais is a neologism used in discussions of artificial intelligence and data ethics to describe a bias that arises when a model's outputs effectively reflect the biases present in its training data or environment. The term is often used to emphasize that such biases are not merely carried into the model but are reinforced and potentially amplified through data collection, labeling practices, and deployment contexts. The word blends the Spanish verb reflejar (“to reflect”) with the English noun bias. It has appeared in some Spanish- and English-language writings since the early 2020s, but it does not have a single, formal definition in major taxonomies of algorithmic fairness.
Contexts and examples: In natural language processing, reflejabais can occur when biased language in training corpora is reproduced, and sometimes amplified, in a model's outputs, so that the model's behavior mirrors the skew of the data it was trained on.
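A minimal sketch of this effect follows; the toy corpus, word lists, and scoring function are illustrative assumptions rather than material from any published study. It counts how often occupation terms co-occur with gendered pronouns in a small skewed corpus, then shows that a "model" built from those counts simply reflects that skew back in its association scores.

    from collections import Counter
    from itertools import product

    # Toy "training corpus" with a deliberately skewed distribution
    # (illustrative data, not drawn from any real dataset).
    corpus = [
        "she is a nurse", "she is a nurse", "she is a teacher",
        "he is an engineer", "he is an engineer", "he is a doctor",
        "she is an engineer",  # a single counter-example
    ]

    pronouns = {"she", "he"}
    occupations = {"nurse", "teacher", "engineer", "doctor"}

    # Count pronoun/occupation co-occurrences within each sentence.
    counts = Counter()
    for sentence in corpus:
        tokens = set(sentence.split())
        for p, o in product(tokens & pronouns, tokens & occupations):
            counts[(p, o)] += 1

    def association(pronoun, occupation):
        # P(occupation | pronoun) estimated from raw co-occurrence counts.
        total = sum(c for (p, _), c in counts.items() if p == pronoun)
        return counts[(pronoun, occupation)] / total if total else 0.0

    # The "model" (here just conditional frequencies) reflects the corpus skew:
    for p, o in product(sorted(pronouns), sorted(occupations)):
        print(f"P({o!r} | {p!r}) = {association(p, o):.2f}")

Running the script prints near-zero probabilities for pairings absent from the corpus and high probabilities for the over-represented ones, which is the reflected bias the term describes.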
Relation to other concepts: It overlaps with broader notions of dataset bias, model bias, and bias amplification.
Mitigation: Approaches include auditing datasets for representativeness, adversarial or fairness-aware training, debiasing techniques, diverse labeling practices, and monitoring of model behavior after deployment.
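As one concrete illustration of the reweighting family of debiasing techniques, the sketch below computes inverse-frequency sample weights so that each group of a sensitive attribute contributes equally to a training objective. The attribute name "group" and the records are assumed for illustration; this is a simplified instance of fairness-aware training, not a complete method.

    from collections import Counter

    # Hypothetical training records with a sensitive attribute "group"
    # (values are illustrative assumptions).
    records = [
        {"group": "A", "label": 1},
        {"group": "A", "label": 0},
        {"group": "A", "label": 1},
        {"group": "B", "label": 0},
    ]

    # Inverse-frequency reweighting: each group's total weight is equalized,
    # so an under-represented group is not drowned out during training.
    group_counts = Counter(r["group"] for r in records)
    n_groups = len(group_counts)
    n_records = len(records)

    weights = [
        n_records / (n_groups * group_counts[r["group"]])
        for r in records
    ]

    for r, w in zip(records, weights):
        print(r["group"], round(w, 3))
    # Group A records get weight 4/(2*3) = 0.667 and group B gets 4/(2*1) = 2.0,
    # so each group's summed weight is 2.0. In practice such weights would be
    # passed as per-sample weights to a loss function or estimator.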
Criticism: Some scholars argue that reflejabais is ambiguous or redundant with existing terms such as dataset bias and bias amplification.