KappaWert
KappaWert, also known as Cohen's kappa or Cohen's kappa coefficient, is a statistic used to measure inter-rater reliability or agreement for categorical items. It quantifies the level of agreement between two raters (or one rater at two different times) who are classifying items into mutually exclusive categories.
The kappa coefficient corrects for the agreement that might occur by chance. A kappa value of 1 indicates perfect agreement between the raters, a value of 0 indicates agreement no better than chance, and negative values indicate agreement worse than chance.
The formula for kappa is κ = (Po − Pe) / (1 − Pe), where Po is the observed proportion of agreement between the raters and Pe is the proportion of agreement expected by chance, computed from each rater's marginal category frequencies.
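To make the formula concrete, the following is a minimal Python sketch that computes kappa directly from two raters' label lists. The function name cohens_kappa and the example data are illustrative assumptions, not part of any particular library.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Compute Cohen's kappa for two raters' categorical labels.

    ratings_a, ratings_b: equal-length sequences of category labels,
    one entry per rated item.
    """
    if len(ratings_a) != len(ratings_b):
        raise ValueError("Both raters must rate the same items")
    n = len(ratings_a)

    # Po: observed proportion of items on which the two raters agree.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Pe: chance agreement, from each rater's marginal category frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    # Kappa = (Po - Pe) / (1 - Pe); undefined when Pe == 1.
    return (p_o - p_e) / (1 - p_e)


# Example: two raters classify 10 items as "yes" or "no".
rater_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater_2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
print(round(cohens_kappa(rater_1, rater_2), 3))  # Po = 0.8, Pe = 0.52, kappa ≈ 0.583
```

In this example the raters agree on 8 of 10 items (Po = 0.8), but because both label "yes" 60% of the time, chance alone would produce Pe = 0.52, so kappa ≈ 0.583 rather than 0.8.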
KappaWert is widely used in various fields, including psychology, medicine, and artificial intelligence, to assess the reliability of categorical judgments such as diagnostic classifications or annotation labels.