Veerust
Veerust is a theoretical framework in AI ethics and philosophy that seeks to guide the behavior of autonomous systems by combining virtue ethics with robust safety guarantees. The term describes an approach in which machine agents are expected to cultivate virtuous dispositions, such as honesty, fairness, and prudence, while operating under rule- or outcome-oriented safety constraints that minimize harm and preserve trust.
Etymology and usage: The word is a neologism, coined in the 2020s by scholars exploring how character-based ethics might be extended to artificial agents.
Core elements: Veerust envisions agents that (1) pursue morally salient ends through transparent reasoning, (2) embody stable virtuous dispositions rather than merely complying with rules, and (3) operate within verifiable safety constraints that bound worst-case harm (see the illustrative sketch below).
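As a rough illustration only, the sketch below shows one way the two layers described above might be combined: a hard safety constraint that vetoes high-risk actions outright, and a soft virtue score that ranks whatever remains. Every name here (Action, is_safe, virtue_score, choose_action) and every numeric threshold is a hypothetical assumption made for exposition; none of it comes from a published veerust specification.

```python
from dataclasses import dataclass, field

# Hypothetical model: an "action" is any candidate behavior the agent
# could take, annotated with an estimated harm risk and per-virtue scores.
# These names and fields are illustrative, not from any real library.

@dataclass
class Action:
    name: str
    harm_risk: float  # assumed estimate of the probability of causing harm (0..1)
    virtue_scores: dict = field(default_factory=dict)  # e.g. {"honesty": 0.9}

def is_safe(action: Action, harm_threshold: float = 0.05) -> bool:
    """Hard safety constraint: veto any action whose estimated harm risk
    exceeds the threshold, regardless of how virtuous it scores."""
    return action.harm_risk <= harm_threshold

def virtue_score(action: Action, weights: dict[str, float]) -> float:
    """Soft, character-based evaluation: a weighted sum over virtue
    dimensions such as honesty, fairness, and prudence."""
    return sum(weights.get(v, 0.0) * s for v, s in action.virtue_scores.items())

def choose_action(candidates: list[Action], weights: dict[str, float]) -> Action | None:
    """Apply the safety filter first, then pick the most virtuous of the
    remaining options; return None (refuse to act) if nothing is safe."""
    safe = [a for a in candidates if is_safe(a)]
    if not safe:
        return None
    return max(safe, key=lambda a: virtue_score(a, weights))

# Toy example: a candid, low-risk reply beats a flattering, riskier one,
# because the riskier option is vetoed before virtue ranking even applies.
weights = {"honesty": 0.5, "fairness": 0.3, "prudence": 0.2}
candidates = [
    Action("candid_reply", harm_risk=0.01,
           virtue_scores={"honesty": 0.9, "prudence": 0.7}),
    Action("flattering_reply", harm_risk=0.20,
           virtue_scores={"honesty": 0.2, "fairness": 0.5}),
]
print(choose_action(candidates, weights).name)  # -> candid_reply
```

The ordering is the design point in this sketch: the safety constraint is applied before, and independently of, the virtue ranking, so no degree of virtuousness can outweigh a constraint violation.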
Applications: In theory, veerust informs AI alignment research, governance frameworks, and robot ethics, offering a language for discussing machine character alongside formal safety guarantees.
Criticism: Critics argue that the concept remains ill-defined, potentially culturally biased, and difficult to translate into concrete engineering requirements.
See also: Virtue ethics, AI alignment, value alignment, safety engineering, ethical governance.