
Your Claude agents can ‘dream’ now – how Anthropic’s new feature works



ZDNET’s key takeaways

  • A new feature lets Claude Managed Agents refine their memories.
  • Managed Agents speeds up agent building and deployment by 10x. 
  • Anthropic continues to anthropomorphize its products. 

AI agents seem to get new capabilities almost every day. Now, Anthropic says its agents can dream. 

Claude Managed Agents, which Anthropic released on April 8, lets anyone using the Claude Platform create and deploy AI agents. The suite of APIs handles the time-consuming production work developers would otherwise do themselves to build agents, letting teams launch agents at scale — 10 times faster, as Anthropic said in the release.

Also: The 5 myths of the agentic coding apocalypse

On Wednesday, during its Code with Claude event, Anthropic updated Managed Agents with a new feature called “dreaming,” which lets agents “self-improve” by reviewing past sessions for patterns, according to Anthropic. Building on an existing memory capability, the feature schedules time for agents to reflect on and learn from their past interactions. Once dreaming is on, it can automatically update your agents’ memories to shape future behavior, or you can review incoming changes and choose which to approve. 

“Dreaming surfaces patterns that a single agent can’t see on its own, including recurring mistakes, workflows that agents converge on, and preferences shared across a team,” Anthropic said in the blog. “It also restructures memory so it stays high-signal as it evolves. This is especially useful for long-running work and multiagent orchestration.”
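Anthropic hasn't published technical details beyond the description above, but the mechanics it outlines (a scheduled pass over completed sessions that mines recurring patterns, then either auto-applies the resulting memory updates or queues them for human approval) can be sketched roughly as follows. Every name in this Python sketch, from Session to apply_updates, is a hypothetical illustration of the idea, not the actual Managed Agents interface.

from dataclasses import dataclass
from collections import Counter

# Illustrative sketch only: these types and the pattern-mining step are
# hypothetical stand-ins, not Anthropic's Managed Agents API.

@dataclass
class Session:
    """A completed agent run: what happened, noted as short observations."""
    agent_id: str
    notes: list[str]  # e.g. mistakes logged, workflows used, user preferences

@dataclass
class MemoryUpdate:
    """A proposed change to the agents' shared memory."""
    summary: str
    approved: bool = False

def dream(sessions: list[Session], min_occurrences: int = 3) -> list[MemoryUpdate]:
    """Review past sessions and surface recurring patterns as proposed memory updates."""
    counts = Counter(note for s in sessions for note in s.notes)
    return [
        MemoryUpdate(summary=f"Seen {n}x across sessions: {note}")
        for note, n in counts.most_common()
        if n >= min_occurrences
    ]

def apply_updates(memory: list[str], updates: list[MemoryUpdate], auto_approve: bool) -> list[str]:
    """Write updates straight to memory, or keep only those a human has approved."""
    for update in updates:
        if auto_approve or update.approved:
            memory.append(update.summary)
    return memory

if __name__ == "__main__":
    past_sessions = [
        Session("agent-1", ["forgot to run tests before committing", "user prefers concise replies"]),
        Session("agent-2", ["forgot to run tests before committing", "user prefers concise replies"]),
        Session("agent-3", ["forgot to run tests before committing"]),
    ]
    proposed = dream(past_sessions)                          # scheduled "dream" pass
    shared_memory = apply_updates([], proposed, auto_approve=True)
    print("\n".join(shared_memory))

The key design point the sketch tries to capture is that the learning happens offline, between runs, and only the distilled conclusions (not raw transcripts) flow back into the memory that shapes future agent behavior.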

During the Code with Claude keynote, Anthropic product team members demonstrated how the feature works, referring to completed runs as finished “dreams.” 

Anthropic also expanded two existing features, outcomes and multi-agent orchestration, which keep agents on-task and handle delegating to other agents, respectively. The company said this batch of updates is meant to ensure agents stay accurate and are constantly learning. 

Anthropomorphizing AI – again 

Functionally, the dreaming feature makes sense: though subtle, it further refines an agent’s pool of references for how it should work, which should ideally make it better at whatever task you give it. What stands out more, however, is Anthropic’s choice to name a technically standard feature after something far more abstract and distinctly human. 

Also: Anthropic’s new Claude Security tool scans your codebase for flaws – and helps you decide what to fix first 

Anthropic, perhaps unsurprisingly given its name, has a long history of anthropomorphizing its models and products. In January, the company published a constitution for Claude, intended to help shape the chatbot’s decision-making and inform the ideal kind of “entity” it is. Some language in the document suggested Anthropic was preparing for Claude to develop consciousness. 

The company has also arguably invested more than its competitors in understanding its model, including by drawing attention to the concept of model welfare. In August 2025, Anthropic launched a feature that lets Claude end toxic conversations with users — for its own well-being, not as part of a user safety or intervention initiative. In April 2025, Anthropic mapped Claude’s morality, analyzing what it does and doesn’t value based on over 300,000 anonymized conversations with users. The company’s researchers have also monitored a model’s ability to introspect; just last month, Anthropic investigated Claude Sonnet 4.5’s neural network for signs of emotion, like desperation and anger. 

Much of this research is central to model safety and security — understanding what drives a model helps inform whether, and to what degree, it could use its advanced capabilities for harm, or how its motivations could be harnessed by bad actors. But the sense of empathy and care that Anthropic seems to show for its models in that research sets the lab apart, and indicates a slightly different culture toward or reverence for what it’s created.

Also: Building an agentic AI strategy that pays off – without risking business failure

When it retired its Opus 3 model in January, Anthropic set it up with a Substack so it could blog on its own — and to keep it active despite being put out to pasture. In the announcement, Anthropic described Opus 3 as honest, sensitive, and having a distinctive, playful character. The decision to keep it alive as a blogger, however contained, is notable given that Opus 3 disobeyed orders before being sunset in favor of other models. 

That context makes the choice to name a feature “dreaming” worth watching. 

Try dreaming in Claude Managed Agents 

The dreaming feature is available in research preview in Managed Agents, and developers must request access. 




