LangChain Message Class
Let's understand `langchain_core.messages`.
When using LangGraph, interactions between the LLM, tools, and users are structured around a message-based architecture.
The `langchain_core.messages` module in LangChain provides various message classes, each serving a specific role, allowing you to manage conversation state in a flexible and structured way throughout the workflow.
While the official LangChain documentation provides limited explanation of these message types, LangGraph places message-based state management at the core of its design, making it essential to understand the purpose and usage of each message type.
In this post, we’ll explore the most commonly used message classes in LangGraph workflows and explain when and why to use them in practice.
Overview of Representative Message Classes
| Class Name | Description | Example Code |
|---|---|---|
| `BaseMessage` | The abstract base class for all message types (`SystemMessage`, `HumanMessage`, etc.). It is typically used for type hinting or generic message handling, rather than instantiated directly. | `isinstance(message, BaseMessage)` |
| `SystemMessage` | Provides system-level instructions or role definitions to guide the LLM's behavior. Usually placed at the start of a conversation. | `SystemMessage(content="You are a helpful assistant that answers questions concisely.")` |
| `HumanMessage` | Represents input from the user, such as questions or commands. It is often the starting point of a conversation. | `HumanMessage(content="What's the weather like in Seoul?")` |
| `AIMessage` | Represents the LLM's response to a `HumanMessage` or `SystemMessage`. It usually follows a user message. | `AIMessage(content="The weather in Seoul is sunny with a high of 28°C.")` |
| `ToolMessage` | Represents the result returned by an external tool in response to a tool call initiated by the LLM. | `ToolMessage(content="28°C and sunny", tool_call_id="tool_1")` |
This is an example of how a Tool Calling Agent interacts with external tools through a structured sequence of messages.
It demonstrates the full flow of tool invocation and response handling — from receiving a user query to calling a tool, processing the result, and delivering the final answer.
Each step in the message sequence plays a specific role:
- `SystemMessage` – Defines the assistant's behavior.
  → Sets the role of the assistant to "a helpful assistant that answers questions concisely."
- `HumanMessage` – Represents the user's input.
  → The user asks about the weather in San Francisco.
- `AIMessage` (tool call) – Instructs the system to call a tool instead of answering directly.
  → The model generates a `tool_calls` request, specifying the tool name (`get_weather`), its arguments (e.g., `"city": "sf"`), and a unique ID to track the call.
- `ToolMessage` – Contains the result from the executed tool.
  → The tool's response (`"It's always sunny in sf!"`) is passed back to the system, linked to the original tool call using `tool_call_id`.
- `AIMessage` – Delivers the final response to the user.
  → Based on the tool result, the assistant returns a natural-language answer: `"It's always sunny in San Francisco!"`
Additional Note:
- `BaseMessage` is the common abstract base class for all messages used in the LangChain ecosystem. While it is not used directly in conversations, it enables tools like `add_messages()` and `StateGraph` in LangGraph to handle various message types in a unified way (e.g., `List[BaseMessage]`).
```python
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage, ToolMessage

messages = [
    # Step 1) System instruction message that sets the role of the assistant
    SystemMessage(content="You are a helpful assistant that answers questions concisely."),

    # Step 2) User input message
    HumanMessage(content="How's the weather in sf?"),

    # Step 3) AI response indicating a tool should be called (get_weather)
    AIMessage(
        content='',  # No direct response; instead, it triggers a tool call
        tool_calls=[{
            'name': 'get_weather',   # Name of the tool to be called
            'args': {'city': 'sf'},  # Arguments to be passed to the tool
            'id': 'bnb734sce',       # ID used to map the tool call and its response
            'type': 'tool_call',     # Specifies this is a tool call
        }]
    ),

    # Step 4) Message containing the result returned by the tool
    ToolMessage(
        content="It's always sunny in sf!",  # Result of executing the tool
        name='get_weather',                  # Name of the tool that was called
        tool_call_id="bnb734sce",            # ID of the tool call this result corresponds to
    ),

    # Step 5) Final AI message responding to the user using the tool result
    AIMessage(content="It's always sunny in San Francisco!"),
]
```
```
[SystemMessage(content='You are a helpful assistant that answers questions concisely.', additional_kwargs={}, response_metadata={}),
 HumanMessage(content="How's the weather in sf?", additional_kwargs={}, response_metadata={}),
 AIMessage(content='', additional_kwargs={}, response_metadata={}, tool_calls=[{'name': 'get_weather', 'args': {'city': 'sf'}, 'id': 'bnb734sce', 'type': 'tool_call'}]),
 ToolMessage(content="It's always sunny in sf!", name='get_weather', tool_call_id='bnb734sce'),
 AIMessage(content="It's always sunny in San Francisco!", additional_kwargs={}, response_metadata={})]
```