Align with Me, Not TO Me: How People Perceive Concept Alignment with LLM-Powered Conversational Agents
Shengchen Zhang, Weiwei Guo, Xiaohua Sun
Abstract
Concept alignment—building a shared understanding of concepts—is essential for both human-human and human-agent communication. While large language models (LLMs) promise human-like dialogue capabilities for conversational agents, the lack of studies on people's perceptions and expectations of concept alignment hinders the design of effective LLM-powered agents. This paper presents results from two lab studies with human-human and human-agent pairs using a concept alignment task. Quantitative and qualitative analyses reveal and contextualize potentially (un)helpful dialogue behaviors, how people perceived and adapted to the agent, and their preconceptions and expectations. Through this work, we demonstrate the co-adaptive and collaborative nature of concept alignment and identify potential design factors and their trade-offs, sketching the design space of concept alignment dialogues. We conclude by calling for designerly endeavors to understand concept alignment with LLMs in context, as well as technical efforts that combine theory-informed and LLM-driven approaches.
Cite as
Shengchen Zhang, Weiwei Guo, and Xiaohua Sun. 2025. Align with Me, Not TO Me: How People Perceive Concept Alignment with LLM-Powered Conversational Agents. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’25), April 26–May 1, 2025, Yokohama, Japan. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3706599.3720126