Anthropic’s Claude users have reported an unusual behaviour for months: the AI chatbot repeatedly tells them to go to sleep, get rest, or take a break. Now an Anthropic employee has responded to the claims, saying the company is aware of the issue and plans to address it in future models.

Sam McAllister, a staff member at Anthropic, took to the microblogging site X (formerly Twitter) to describe the behaviour as a “Bit of a character tic, but we’re aware of this and hoping to fix it in future models.”

The comments come after multiple Reddit users shared examples of Claude encouraging them to rest, sometimes repeatedly, regardless of the actual time. Responses ranged from simple reminders to more personalised messages. “Now go to sleep again. Again. For the THIRD time tonight…” Claude reportedly replied to one user.

Some users found the messages considerate, while others found them confusing or disruptive. One Reddit user wrote: “It often does it at like 8:30 in the morning. Tells me to go get some rest and we’ll pick back up in the morning.”

The unusual behaviour has led to online speculation, with theories ranging from user wellbeing features to attempts to reduce computing load. However, reports suggest these explanations are unlikely, as Claude does not have access to details about a user’s broader usage patterns.
Experts explain how training data and hidden prompts may be behind this behaviour
Experts told Fortune that Claude’s sleep reminders may stem from patterns in its training data rather than from any intentional empathy or awareness. Jan Liphardt, a professor at Stanford University and CEO of OpenMind, suggested that the chatbot may simply be reproducing the common language of late-night conversations.

“It doesn’t mean that the frontier model has suddenly become sentient. It doesn’t mean that this model has now come alive. It’s reflecting that it’s read 25,000 books on humans’ need [for] sleep, and humans sleep at night,” Liphardt said.

Leo Derikiants, co-founder and CEO of Mind Simulation Lab, said hidden system prompts could also influence such responses. System prompts are internal instructions used to guide AI behaviour and define boundaries. Derikiants added that Claude’s responses may be linked to how AI models manage long conversations: when context windows fill up, language models sometimes introduce wrap-up phrases that resemble conversation endings, such as “good night” or “rest”. “The definitive reason, though, requires further research by Anthropic,” he said.

The conversation comes as AI companies continue to release new models with more conversational capabilities. Anthropic released the public version of Claude Opus 4.7 last month but withheld another model, Mythos, over safety concerns. Meanwhile, OpenAI president Greg Brockman has described GPT 5.5 as a step “towards more agentic and intuitive computing”.

Liphardt said increasingly human-like AI interactions make it easier for users to attribute emotions or intentions to chatbots. “I’m continuously surprised by how quickly people, when they interact with a frontier model, project life into it and develop a strong connection,” he said.