íthet
íthet is a neologism used in theoretical discussions of artificial intelligence alignment to denote an internal ethical checkpoint within an agent’s decision-making process. The term was introduced in early 2020s scholarly discourse by researchers exploring how internalized guidelines could constrain actions before external objectives are pursued. The word combines a prefix suggestive of interiority (i-) with a root resembling theta, a symbol often associated with thresholds, tests, or conditions.
An íthet mechanism is described as a cognitive module or algorithm that evaluates predicted actions against internalized ethical guidelines before external objectives are pursued.
Íthet typically involves a two-stage evaluation: first, a constraint assessment that maps candidate actions to ethical risk scores; second, a threshold test that permits or vetoes each action based on those scores.
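The two-stage evaluation described above can be sketched in code. The following is a minimal illustrative sketch, not a reference implementation; the class name, the keyword-based risk function, and the threshold value are all hypothetical choices made for this example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class IthetCheckpoint:
    """Hypothetical two-stage internal checkpoint (names are illustrative)."""
    risk_fn: Callable[[str], float]  # stage 1: map a proposed action to an ethical risk score
    threshold: float                 # stage 2: risk above this value vetoes the action

    def evaluate(self, action: str) -> bool:
        """Return True if the proposed action passes the internal checkpoint."""
        return self.risk_fn(action) <= self.threshold

# Toy usage: a keyword-based risk function standing in for a real assessment.
risky_terms = {"deceive", "harm"}
checkpoint = IthetCheckpoint(
    risk_fn=lambda a: 1.0 if any(t in a for t in risky_terms) else 0.0,
    threshold=0.5,
)
print(checkpoint.evaluate("summarize the report"))  # True: low risk, action permitted
print(checkpoint.evaluate("deceive the auditor"))   # False: high risk, action vetoed
```

The separation of the risk function from the threshold test mirrors the two stages in the description: the first stage produces a score, and the second stage applies the threshold condition before any external objective is pursued.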
The concept remains largely theoretical, with limited empirical validation. It is often discussed alongside other internal evaluation mechanisms in artificial agents, such as introspection and self-monitoring.
Keywords: AI alignment, safety, ethical reasoning, introspection, internal evaluation.