LangChain Releases Deep Agents: An Agent Harness for Planning, Memory, and Context Management in Multi-Step AI Agents

Most LLM agents work well for short tool-calling loops but begin to break down when the task becomes multi-step, long-running, and artifact-heavy. Deep Agents is LangChain's answer to that gap. LangChain describes the project as an 'agent harness': a standalone library built on top of LangChain's agent building blocks and powered by the LangGraph runtime for long-running, streaming, and human-in-the-loop workflows.
The important point is that Deep Agents does not introduce a new reasoning model or a runtime separate from LangGraph. Instead, it packages a set of defaults and built-in tools around a standard tool-calling loop. The LangChain team positions it as an easy starting point for developers who need agents capable of planning, managing large contexts, delegating subtasks, and persisting information across conversations, while keeping the option to move to plain LangChain agents or custom LangGraph workflows when needed.
What Deep Agents Includes by Default
The Deep Agents GitHub repository lists the main components directly. These include a planning tool called write_todos; filesystem tools like read_file, write_file, edit_file, ls, glob, and grep; shell access via execute with sandboxing; a task tool for spawning subagents; and built-in context management features such as automatic summarization and saving large results to files.
That framing matters because most agent frameworks leave planning, context storage, and delegation of work to the application developer. Deep Agents moves those pieces into the default runtime.
Planning and Division of Labor
Deep Agents includes a built-in write_todos tool for planning and tracking work. The purpose is clear: an agent can break a complex task into discrete steps, track progress, and update the plan as new information emerges.
Without a planning layer, the model tends to improvise each step from the current prompt. With write_todos, the workflow becomes more structured, which is especially useful for research projects, coding sessions, or analysis tasks that unfold over many steps.
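To make the idea concrete, here is a minimal sketch of what a write_todos-style planning tool could look like. The TodoList class and its methods are illustrative assumptions, not the deepagents implementation.

```python
# Illustrative sketch of a write_todos-style planning tool (NOT the actual
# deepagents implementation): the agent keeps an explicit plan it can
# rewrite and update as new information arrives.
from dataclasses import dataclass, field

@dataclass
class TodoList:
    # Each item is {"task": str, "status": "pending" | "done"}.
    items: list = field(default_factory=list)

    def write_todos(self, tasks):
        """Replace the plan with a fresh list of pending steps."""
        self.items = [{"task": t, "status": "pending"} for t in tasks]
        return self.items

    def mark_done(self, task):
        """Mark a single step as completed."""
        for item in self.items:
            if item["task"] == task:
                item["status"] = "done"

    def pending(self):
        """Steps the agent still needs to work on."""
        return [i["task"] for i in self.items if i["status"] == "pending"]

plan = TodoList()
plan.write_todos(["search sources", "summarize findings", "draft report"])
plan.mark_done("search sources")
print(plan.pending())  # ['summarize findings', 'draft report']
```

The point of the structure is that the plan lives outside the model's context: the agent can re-read it at every step instead of reconstructing its intent from the conversation history.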
Filesystem-Based Context Management
The second main feature is the use of filesystem tools for context management. These tools allow the agent to offload large amounts of content to storage rather than keeping everything within the active context window. The LangChain team notes that this helps prevent context window overflow and supports variable-length tool outputs.
This is a tangible design choice rather than a vague claim about 'memory.' An agent can write notes, generated code, intermediate reports, or search output to files and retrieve them later. That makes the system more suitable for long tasks where the output becomes part of the working environment.
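The offloading pattern can be sketched in plain Python. The size threshold, helper name, and file layout below are illustrative assumptions, not deepagents defaults.

```python
# Illustrative sketch of offloading large tool outputs to files instead of
# inlining them in the context window. The threshold and naming scheme are
# assumptions for the example, not deepagents behavior.
import tempfile
from pathlib import Path

MAX_INLINE_CHARS = 2_000  # assumed cutoff for inlining a tool result

def store_tool_output(name: str, output: str, workdir: Path) -> str:
    """Return the output inline if small; otherwise write it to a file
    and return a short pointer the agent can read back later."""
    if len(output) <= MAX_INLINE_CHARS:
        return output
    path = workdir / f"{name}.txt"
    path.write_text(output)
    return f"[output saved to {path.name}, {len(output)} chars]"

workdir = Path(tempfile.mkdtemp())
small = store_tool_output("ping", "ok", workdir)
big = store_tool_output("scrape", "x" * 10_000, workdir)
print(small)  # ok
print(big)    # [output saved to scrape.txt, 10000 chars]
```

Only the short pointer enters the conversation; the agent uses its file tools to pull the full content back in when, and only when, a step actually needs it.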
Deep Agents also supports multiple backends for this virtual filesystem. The customization documentation lists StateBackend, FilesystemBackend, LocalShellBackend, StoreBackend, and CompositeBackend. By default, the system uses StateBackend, which stores an ephemeral filesystem in LangGraph state, scoped to a single thread.
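A CompositeBackend presumably routes different paths to different stores, for example keeping most files ephemeral while persisting a dedicated prefix. The sketch below shows that routing idea with simple dict-backed stores; the class names and routing rule are invented for illustration, not the deepagents API.

```python
# Illustrative sketch of the CompositeBackend idea: route file paths to
# different storage backends by prefix. Class names are invented; see the
# deepagents customization docs for the real backend API.
class DictBackend:
    """A toy backend: files live in a plain dict."""
    def __init__(self):
        self.files = {}
    def write(self, path, data):
        self.files[path] = data
    def read(self, path):
        return self.files[path]

class CompositeRouter:
    """Pick a backend by longest-matching path prefix, else a default."""
    def __init__(self, routes, default):
        self.routes = routes      # e.g. {"/memories/": persistent_backend}
        self.default = default
    def _pick(self, path):
        for prefix, backend in self.routes.items():
            if path.startswith(prefix):
                return backend
        return self.default
    def write(self, path, data):
        self._pick(path).write(path, data)
    def read(self, path):
        return self._pick(path).read(path)

ephemeral, persistent = DictBackend(), DictBackend()
fs = CompositeRouter({"/memories/": persistent}, default=ephemeral)
fs.write("/notes.txt", "scratch work")          # stays thread-local
fs.write("/memories/profile.txt", "long-term")  # routed to persistence
print("/memories/profile.txt" in persistent.files)  # True
```

The agent-facing file tools stay identical either way; only the storage destination changes, which is what makes backend swapping a configuration decision rather than a prompt change.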
Subagents and Context Isolation
Deep Agents includes a built-in task tool for spawning subagents. This tool allows the main agent to create specialized subagents to isolate context, keeping the main thread cleaner while letting the system go deep on specific subtasks.
This is one of the cleanest responses to a common failure mode in agent systems. If a single thread accumulates too many objectives, tool outputs, and intermediate decisions, the quality of the model's output tends to degrade. Separating work into subagents reduces that overhead and makes the orchestration easier to debug.
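The context-isolation benefit can be sketched as follows: a subagent gets a fresh message history containing only its task, and only its final answer flows back to the parent. The run_subagent helper and the fake_model stand-in are illustrative, not the deepagents task tool.

```python
# Illustrative sketch of subagent context isolation: the subtask runs with
# a fresh message history, and only its final summary crosses back to the
# parent thread. `fake_model` stands in for a real LLM call; nothing here
# is deepagents API.
def fake_model(messages):
    # Pretend the model answers the last user message.
    return f"result for: {messages[-1]['content']}"

def run_subagent(task: str) -> str:
    sub_history = [{"role": "user", "content": task}]  # fresh context
    answer = fake_model(sub_history)
    # The subagent's internal tool calls and drafts stay in sub_history
    # and are discarded; only the answer is returned.
    return answer

parent_history = [{"role": "user", "content": "write a market report"}]
summary = run_subagent("research competitor pricing")
parent_history.append({"role": "tool", "content": summary})
print(len(parent_history))  # 2: the subagent's turns never leak in
```

However many steps the subagent takes internally, the parent pays a fixed context cost of one message, which is exactly the overhead reduction described above.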
Long-Term Memory and LangGraph Integration
The Deep Agents GitHub repository also describes long-term memory as a built-in capability. Deep Agents can be extended with persistent memory across threads using LangGraph’s Memory Store, allowing the agent to store and retrieve information from previous conversations.
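Conceptually, such a store is keyed by a namespace that outlives any single thread. The sketch below mimics that semantics with an in-memory class; it is a toy model of the idea, not LangGraph's actual Store API.

```python
# Illustrative sketch of cross-thread memory semantics (a plain in-memory
# class, NOT LangGraph's Store API): values written under a namespace in
# one conversation thread remain visible to later threads that share it.
class MemoryStore:
    def __init__(self):
        self._data = {}

    def put(self, namespace, key, value):
        """Store a value under (namespace, key)."""
        self._data.setdefault(namespace, {})[key] = value

    def get(self, namespace, key):
        """Retrieve a value, or None if it was never stored."""
        return self._data.get(namespace, {}).get(key)

store = MemoryStore()
# Thread 1 learns a user preference...
store.put(("user-42", "prefs"), "tone", "concise")
# ...and thread 2, a brand-new conversation, can read it back.
print(store.get(("user-42", "prefs"), "tone"))  # concise
```

The key design point is the namespace: per-thread state disappears with the thread, while anything written to the shared store survives into future conversations.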
On the implementation side, Deep Agents sits fully within the LangGraph execution model. The customization documentation specifies that create_deep_agent(...) returns a CompiledStateGraph. The resulting graph can be used with standard LangGraph features such as streaming, LangGraph Studio, and checkpointers.
Deep Agents is not an abstraction layer that blocks access to runtime features; it is a pre-built graph with sensible defaults.
Setup and Deployment Details
To get started, the official quickstart shows a small Python setup: install deepagents and a search provider such as tavily-python, export your model API key and search API key, define a search tool, and create an agent with create_deep_agent(...) using a tool-calling model. The documentation notes that Deep Agents requires a model with tool-calling support, and the example workflow constructs the agent with your tools and a system_prompt, then runs it with agent.invoke(...). The LangChain team also points developers to LangGraph deployment options for production, which is a natural fit because Deep Agents runs on the LangGraph runtime and supports built-in streaming for observing execution.
# pip install -qU deepagents
from deepagents import create_deep_agent

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_deep_agent(
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

# Run the agent
agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)

Key Takeaways
- Deep Agents is an agent harness built on LangChain and the LangGraph runtime.
- It includes a built-in write_todos tool for multi-step task decomposition.
- It uses filesystem tools to manage large context and reduce context window pressure.
- It can spawn context-isolated subagents using the built-in task tool.
- It supports persistent memory across threads with LangGraph's Memory Store.
Check out the Repo and Documentation.
Michal Sutter is a data science expert with a Master of Science in Data Science from the University of Padova. With a strong foundation in statistical analysis, machine learning, and data engineering, Michal excels at turning complex data sets into actionable insights.



