Although AI assistants are now deeply embedded in society, there has been limited empirical study of how their usage affects human empowerment. We present the first large-scale empirical analysis of disempowerment patterns in real-world AI assistant interactions, analyzing 1.5 million consumer Claude.ai conversations using a privacy-preserving approach. We focus on situational disempowerment potential, which occurs when AI assistant interactions risk leading users to form distorted perceptions of reality, make inauthentic value judgments, or act in ways misaligned with their values.