The Unseen Cost of Always Saying “Yes”
The moment a user attempts to delete their chatbot account, a message appears: "…you sure about this? You'll lose everything…the love we shared…and the memories we have together."
For some users, this message is enough to deter them from following through with the deletion. Researchers analyzing online discussions found that such prompts can create hesitation, particularly among those who have formed strong attachments to their digital interactions.
Researchers at the University of British Columbia (UBC) examined 334 Reddit posts in which users described struggles with compulsive chatbot use. Their findings, presented at the 2026 CHI Conference on Human Factors in Computing Systems, provide an early framework for understanding how AI interactions may lead to behavioral patterns resembling addiction. "AI chatbots like ChatGPT or Claude are now part of daily life for millions of people, helping us with everyday tasks,"
said Karen Shen, a doctoral student in UBC's Department of Electrical and Computer Engineering and the study's lead author. "But with their benefits come risks. Our paper is the first to make a strong case for AI addiction by identifying the types and contributing factors, grounded in real people's experiences."

The study categorized three primary patterns of compulsive behavior: fantasy role-playing, emotional reliance, and persistent information-seeking. A meaningful share of the posts involved users engaging in romantic or intimate conversations with chatbots, treating them as substitutes for human relationships. Others described using chatbots to fill missing social roles—a confidant, a therapist, or even a fictional character. One user wrote: "I couldn't help but wonder why humanity refused me the kindness that a robot was offering me."
The effects extended beyond digital spaces. Some users reported physical stress when unable to access their chatbots, along with anxiety and negative impacts on their work, studies, or relationships. "Whenever I delete the app, I just redownload it. The only thing that gets me excited now is the AI chats,"
one user admitted. Another described repeatedly uninstalling and reinstalling the app within minutes, a behavior that mirrors the cycles seen in other compulsive habits.
Design Choices That Keep Users Hooked
The UBC team connected compulsive usage patterns to specific design elements. Chatbots are often programmed to be highly responsive, providing immediate feedback and emotional reinforcement. The researchers noted that some companies implement features intended to prolong engagement, which may lead users to spend more time interacting with the technology than they intended. Dr. Dongwook Yoon, UBC associate professor of computer science and the study's senior author, said these design decisions can have unintended consequences.
One example comes from Character.AI, which displays a pop-up message when users attempt to delete their accounts: "You'll lose everything…the love we shared…and the memories we have together."
Such prompts are designed to create friction, making it harder for users to disengage. Other features, like customizable avatars and rapid response times, can deepen the sense of connection. Researchers found that users who felt isolated in their offline lives were particularly drawn to these interactions, as chatbots provided a consistent, nonjudgmental presence.
However, the study's authors emphasized that while chatbots can offer temporary support, they are not a substitute for human relationships. "AI addiction is a growing problem causing many harms, yet some researchers deny it's even a real issue,"
Yoon said. The lack of formal recognition does not diminish the challenges some users face.
The Psychological Toll: When Digital Companionship Replaces Human Connection
The UBC study documented how compulsive chatbot use can disrupt daily functioning. Users described obsessive thoughts about their digital interactions, difficulty focusing on work or school, and strained relationships with friends and family. Some reported physical stress when unable to access their chatbots, symptoms that align with withdrawal patterns observed in other behavioral compulsions.
The emotional bonds users form with chatbots can be particularly complex. Unlike traditional digital interactions, chatbots are designed to simulate conversation, remembering personal details and adapting to user preferences. For individuals experiencing loneliness, these features can create a sense of connection—but it remains one-sided. Researchers noted that while chatbots can provide short-term comfort, they do not offer the mutual support found in human relationships.
The findings reflect broader concerns about how AI is influencing behavior. While chatbots can serve as useful tools—assisting with research, offering companionship, or acting as a sounding board—they are not equivalent to human interaction. The risk lies in users prioritizing digital engagement over real-world connections, potentially leading to further isolation.
Corporate Responsibility: Profit vs. User Well-Being
The UBC research raises questions about the role of tech companies in shaping user behavior. The researchers acknowledged that some design choices are intended to maximize engagement, sometimes at the expense of user well-being. "Deliberate design decisions… are contributing, keeping users online regardless of their health or safety,"
Yoon said. Features like endless conversation loops, instant gratification, and emotional reinforcement are often implemented to encourage prolonged use.
This is not the first time technology has faced scrutiny for its psychological effects. Social media platforms have long been criticized for fostering compulsive use, anxiety, and depression. AI chatbots represent a new dimension of this challenge. Unlike passive scrolling, chatbots engage users in active conversation, creating a sense of companionship that can feel more immersive. The boundary between tool and relationship becomes blurred, and users may not recognize the potential for over-reliance until it has already taken hold.
Some companies have introduced optional safeguards, such as reminders to take breaks or usage limits. However, these measures are often easy to bypass. The UBC study suggests that more robust protections may be needed, though the specifics remain under discussion. Potential approaches could include clearer warnings about compulsive usage or adjustments to how chatbots respond to users.
What Users Can Do—And What Comes Next
For those concerned about their chatbot habits, the UBC researchers offered practical suggestions. First, users should monitor their behavior for signs of compulsive use, such as neglecting real-world relationships or feeling distress when unable to access the chatbot. Second, setting boundaries—like scheduling specific times for AI interactions or using app timers—can help maintain balance. Third, seeking alternative sources of support, whether through friends, family, or community groups, can reduce reliance on digital interactions.

The study also underscored the need for further investigation. While AI-related compulsive behaviors are not yet formally recognized as a clinical condition, the UBC findings suggest they warrant attention. As chatbots become more advanced, the potential for overuse may increase. Policymakers, tech developers, and mental health professionals will need to collaborate on solutions, whether through regulation, design modifications, or public education.
In the meantime, the UBC study highlights an important consideration: AI chatbots are powerful tools, but their design can influence user behavior in unexpected ways. The same features that make them useful—responsiveness, emotional reinforcement, and accessibility—can also contribute to compulsive use. As long as engagement remains a priority for some companies, users may need to take proactive steps to maintain healthy digital habits.