underfiltering
Underfiltering is a statistical concept describing a situation in which a model or statistical test does not account for enough of the variability or uncertainty in the data, which can lead to overly optimistic or inaccurate conclusions. In hypothesis testing, underfiltering occurs when the significance level (alpha) is set too high, allowing more false positives through. In machine learning, it can arise when a model is not complex enough to capture the underlying patterns in the data (a situation commonly called underfitting), leading to poor generalization to new data.
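As a rough illustration of the hypothesis-testing case, the sketch below simulates many t-tests on pure-noise data, so every rejection is a false positive. The sample sizes, alpha values, and number of tests are arbitrary choices for the example, not values from any particular study.

```python
# Minimal simulation sketch (illustrative values only): with both groups
# drawn from the same distribution, every "significant" result is a false
# positive, and a lenient alpha lets many more of them through.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests = 1000          # number of independent hypothesis tests
alpha_lenient = 0.10    # "too high" significance level
alpha_strict = 0.01     # stricter alternative

false_pos_lenient = 0
false_pos_strict = 0
for _ in range(n_tests):
    a = rng.normal(size=30)   # group 1: pure noise
    b = rng.normal(size=30)   # group 2: pure noise
    p = stats.ttest_ind(a, b).pvalue
    false_pos_lenient += p < alpha_lenient
    false_pos_strict += p < alpha_strict

print(f"false positives at alpha=0.10: {false_pos_lenient} / {n_tests}")
print(f"false positives at alpha=0.01: {false_pos_strict} / {n_tests}")
```

On average, roughly 10% of the null tests are rejected at alpha = 0.10 versus about 1% at alpha = 0.01, which is the false-positive inflation described above.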
Underfiltering is often the result of failing to account for multiple comparisons or for the complexity of the data.
To mitigate underfiltering, it is important to account for the variability and complexity of the data. This can mean using a stricter significance level, correcting for multiple comparisons, or choosing a model flexible enough to capture the underlying patterns.
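One common way to account for multiple comparisons is to adjust the p-values before declaring significance. The sketch below uses simulated placeholder p-values and statsmodels' multipletests helper with Bonferroni and Benjamini-Hochberg corrections; in practice the p-values would come from your own tests.

```python
# Hedged sketch of one mitigation: multiple-comparison correction.
# The p-values here are simulated stand-ins for 200 null tests.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
pvals = rng.uniform(size=200)   # placeholder p-values

# Bonferroni controls the family-wise error rate.
reject_bonf, p_bonf, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
# Benjamini-Hochberg controls the false discovery rate.
reject_bh, p_bh, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

print("uncorrected rejections:", int((pvals < 0.05).sum()))
print("Bonferroni rejections:", int(reject_bonf.sum()))
print("Benjamini-Hochberg rejections:", int(reject_bh.sum()))
```

Because all of the simulated tests are null, the corrected procedures reject far fewer (typically zero) hypotheses than the uncorrected alpha = 0.05 threshold does.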