Exploring LangGraph Agents: A Shift Towards Graph-Based AI
Introduction
Recently, there has been a noticeable resurgence in using graph-based data structures for AI applications and agents. Notable tools that have emerged include GALE from kore.ai, LangGraph from LangChain, Workflows from LlamaIndex, and Deepset Studio.
In computer science, a graph is an abstract data type (ADT) that is characterized by its behavior (semantics) from the user's perspective.
Historically, graph representations have been fundamental in the traditional chatbot domain, facilitating the construction and maintenance of flows via graphical user interfaces (GUIs). The rise of AI agents initially drove a shift away from this approach; we are now witnessing a renewed focus on graph-based flow systems.
Two Approaches
There are essentially two methodologies for representing data in graphs:
- In a pro-code model, the flow is established through code, with graphical representations generated accordingly. For instance, LangGraph allows limited interaction via its GUI, where the visual representation is created from the underlying code.
- Conversely, some systems enable users to define and manage flows entirely through the GUI, with each node configured visually; pro-code access is then reserved for detailed, node-level configuration.
Why Now?
Graph data, as an abstract data type, serves as a theoretical model, focusing on how a data type functions rather than its specific implementation.
This model simplifies the interpretation and management of application flows compared to traditional data representations, which often rely on the physical organization of data—a concern primarily for technical implementers.
Many AI agents, particularly those involved in knowledge representation or natural language processing, deal with interconnected entities. Graphs inherently represent these complex relationships, making it easier to model and navigate connections between various data points.
Knowledge Graphs serve as an example: in these structures, entities (such as people, places, or concepts) act as nodes, while the relationships between them are represented as edges, enabling efficient querying and reasoning.
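This node-and-edge structure can be sketched in a few lines of plain Python. The class and fact triples below are illustrative only, not drawn from any specific knowledge-graph library:

```python
# A minimal knowledge-graph sketch: entities as nodes, typed
# relationships as directed, labelled edges stored per subject.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # subject -> list of (relation, object) pairs
        self.edges = defaultdict(list)

    def add_fact(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def query(self, subject, relation):
        """Return all objects linked to `subject` via `relation`."""
        return [o for r, o in self.edges[subject] if r == relation]

kg = KnowledgeGraph()
kg.add_fact("LangGraph", "built_by", "LangChain")
kg.add_fact("LangGraph", "represents", "agent flows")
print(kg.query("LangGraph", "built_by"))  # ['LangChain']
```

Because relationships are first-class edges rather than foreign keys, traversal and reasoning reduce to simple lookups and walks over the adjacency structure.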
Balancing Rigidity & Flexibility
The integration of graph-based data representations must not compromise the flexibility required in certain instances. These representations can be utilized for designing, building, and executing applications using a pipeline or process automation approach.
Alternatively, they can support an agent-centric approach, which necessitates greater adaptability. Here, the fundamental steps can be outlined, and the agent can traverse through these steps until a conclusion is reached.
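The agent-centric approach can be illustrated framework-free: the graph fixes the possible steps, while a router chooses the next edge at run time based on state. All names below are illustrative assumptions, not part of any library:

```python
# Sketch of agent-style traversal: nodes transform state, and the
# router picks the next node dynamically, looping until it signals
# a conclusion by returning None.

def run_agent(nodes, router, start, state, max_steps=10):
    """Walk the graph from `start`, letting `router` pick each next node."""
    current = start
    for _ in range(max_steps):
        state = nodes[current](state)
        current = router(current, state)
        if current is None:  # the agent has reached a conclusion
            return state
    raise RuntimeError("No conclusion reached within step budget")

nodes = {
    "plan": lambda s: {**s, "plan": "lookup"},
    "act": lambda s: {**s, "result": "42"},
}
# The router encodes flexible, state-dependent transitions
router = lambda node, s: "act" if node == "plan" else None
print(run_agent(nodes, router, "plan", {}))  # {'plan': 'lookup', 'result': '42'}
```

The rigid part (the set of nodes) and the flexible part (the routing decision) are cleanly separated, which is exactly the balance graph-based agent frameworks aim for.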
LangGraph By LangChain
We begin our working notebook with a code snippet that installs or upgrades the langgraph and langchain_anthropic Python packages in a Jupyter notebook, capturing the installation output while still surfacing errors:
%%capture --no-stderr
%pip install --quiet -U langgraph langchain_anthropic
This code securely prompts for an API key and sets it as an environment variable for the current session:
import getpass
import os
def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")

_set_env("ANTHROPIC_API_KEY")
The following Python snippet sets up several environment variables that configure the LangChain LangSmith project for tracing and monitoring with a designated endpoint:
import os
from uuid import uuid4
unique_id = uuid4().hex[0:8]
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = f"LangGraph_HumanInTheLoop - {unique_id}"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "<Your API Key>"
In this framework, each step of a process is depicted as a node within a graph:
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from IPython.display import Image, display
class State(TypedDict):
    input: str

def step_1(state):
    print("---Step 1---")
    pass

def step_2(state):
    print("---Step 2---")
    pass

def step_3(state):
    print("---Step 3---")
    pass
builder = StateGraph(State)
builder.add_node("step_1", step_1)
builder.add_node("step_2", step_2)
builder.add_node("step_3", step_3)
builder.add_edge(START, "step_1")
builder.add_edge("step_1", "step_2")
builder.add_edge("step_2", "step_3")
builder.add_edge("step_3", END)
# Set up memory
memory = MemorySaver()

# Add a breakpoint: pause before step_3
graph = builder.compile(checkpointer=memory, interrupt_before=["step_3"])
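The semantics of interrupt_before can be mimicked without the framework. This sketch (illustrative names only, not the LangGraph implementation) runs steps in order, pauses before any breakpoint step, and resumes from a saved position, which is the role the checkpointer plays above:

```python
# Sketch of breakpoint semantics: execution stops *before* any step
# listed in `breakpoints`, and the returned index lets the caller
# resume later, much as a checkpointer enables in LangGraph.

def run(steps, breakpoints, start=0):
    """Execute `steps` from `start`; pause before a breakpoint step
    (except the one being resumed into). Returns the pause index,
    or None when the run completed."""
    for i in range(start, len(steps)):
        name, fn = steps[i]
        if name in breakpoints and i != start:
            return i  # paused; caller may resume with start=i
        fn()
    return None

steps = [("step_1", lambda: print("---Step 1---")),
         ("step_2", lambda: print("---Step 2---")),
         ("step_3", lambda: print("---Step 3---"))]

pos = run(steps, breakpoints={"step_3"})  # runs steps 1 and 2, then pauses
if pos is not None:
    # a human approval check would happen here
    run(steps, breakpoints={"step_3"}, start=pos)  # executes step_3
```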
A visual representation of the graph can be generated with display(Image(graph.get_graph().draw_mermaid_png())).
This code snippet extends the state management process using langgraph, allowing user interaction to decide whether to proceed with a specific step in the graph:
# Input
initial_input = {"input": "hello world"}

# Thread
thread = {"configurable": {"thread_id": "1"}}

# Run the graph until the first interruption
for event in graph.stream(initial_input, thread, stream_mode="values"):
    print(event)

user_approval = input("Do you want to go to Step 3? (yes/no): ")

if user_approval.lower() == "yes":
    # If approved, continue the graph execution
    for event in graph.stream(None, thread, stream_mode="values"):
        print(event)
else:
    print("Operation cancelled by user.")
And the expected output when the code is executed:
{'input': 'hello world'}
---Step 1---
---Step 2---
Do you want to go to Step 3? (yes/no): yes
---Step 3---
AI Agent
In the realm of agents, breakpoints are essential for manually approving specific actions taken by the agent.
To illustrate this, below is a simple ReAct-style agent that performs tool calls, with a breakpoint set just before the action node is executed:
# Set up the tool
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.graph import MessagesState, START
from langgraph.prebuilt import ToolNode
from langgraph.graph import END, StateGraph
from langgraph.checkpoint.memory import MemorySaver
@tool
def search(query: str):
    """Call to surf the web."""
    # This is a placeholder for the actual implementation
    return [
        "It's sunny in San Francisco, but you better look sudden weather changes later afternoon!"
    ]
tools = [search]
tool_node = ToolNode(tools)
# Set up the model
model = ChatAnthropic(model="claude-3-5-sonnet-20240620")
model = model.bind_tools(tools)
# Define nodes and conditional edges
def should_continue(state):
    messages = state["messages"]
    last_message = messages[-1]
    # If there is no function call, then we finish
    if not last_message.tool_calls:
        return "end"
    # Otherwise, we continue
    else:
        return "continue"

def call_model(state):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}
# Define a new graph
workflow = StateGraph(MessagesState)
# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
# Set the entrypoint as agent
workflow.add_edge(START, "agent")
# We now add a conditional edge
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "action",
        "end": END,
    },
)
# We now add a normal edge from tools to agent
workflow.add_edge("action", "agent")
# Set up memory
memory = MemorySaver()
# Finally, we compile it!
app = workflow.compile(checkpointer=memory, interrupt_before=["action"])
display(Image(app.get_graph().draw_mermaid_png()))
The display call above renders a visual representation of the flow.
Now, we can interact with the agent. Notice that it pauses before invoking a tool, as interrupt_before is set to trigger before the action node:
from langchain_core.messages import HumanMessage
thread = {"configurable": {"thread_id": "3"}}
inputs = [HumanMessage(content="search for the weather in sf now?")]
for event in app.stream({"messages": inputs}, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()
And the output would be:
================================= Human Message =================================
search for the weather in sf now?
================================== AI Message ==================================
[{'text': "Certainly! I can help you search for the current weather in San Francisco. To do this, I'll use the search function to look up the latest weather information. Let me do that for you right away.", 'type': 'text'}, {'id': 'toolu_0195ZVcpdHkUrcgtZWmxVTue', 'input': {'query': 'current weather in San Francisco'}, 'name': 'search', 'type': 'tool_use'}]
Tool Calls:
  search (toolu_0195ZVcpdHkUrcgtZWmxVTue)
 Call ID: toolu_0195ZVcpdHkUrcgtZWmxVTue
  Args:
    query: current weather in San Francisco
Next, we can call the agent again without providing inputs to continue. This will execute the tool as originally requested. When you run an interrupted graph with None as the input, it instructs the graph to proceed as if the interruption hadn’t occurred:
for event in app.stream(None, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()
And the output would be:
=============================== Tool Message ================================
Name: search
["It's sunny in San Francisco, but you better look sudden weather changes later afternoon!"]
================================== AI Message ==================================
Based on the search results, here's the current weather information for San Francisco:
It's currently sunny in San Francisco. However, it's important to note that there might be sudden weather changes later in the afternoon.
This means that while the weather is pleasant right now, you should be prepared for potential changes as the day progresses. It might be a good idea to check the weather forecast again later or carry appropriate clothing if you plan to be out for an extended period.
Is there anything else you'd like to know about the weather in San Francisco or any other information I can help you with?
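Framework aside, the control loop the agent just followed (model call, optional tool call, result fed back, repeat) can be sketched in plain Python. The model and tool below are stand-in stubs, not real Anthropic or search API calls:

```python
# Plain-Python sketch of the ReAct-style cycle shown above: the "model"
# either requests a tool call or returns a final answer; tool results
# are appended to the message list and fed back to the model.

def fake_model(messages):
    # Stub: ask for a search until a tool result is present, then answer
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "ai",
                "tool_call": {"name": "search", "query": "weather in SF"}}
    return {"role": "ai", "content": "It's sunny in San Francisco."}

def fake_search(query):
    # Stub tool, standing in for the @tool-decorated search above
    return "It's sunny in San Francisco."

def react_loop(model, tools, messages, max_turns=5):
    for _ in range(max_turns):
        reply = model(messages)
        messages.append(reply)
        call = reply.get("tool_call")
        if call is None:  # no tool call: the agent is done
            return reply["content"]
        result = tools[call["name"]](call["query"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("no final answer")

answer = react_loop(fake_model, {"search": fake_search},
                    [{"role": "human", "content": "weather in sf?"}])
print(answer)  # It's sunny in San Francisco.
```

A breakpoint, as in the LangGraph example, would simply sit between the model's tool request and the tool invocation inside this loop.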
I’m currently the Chief Evangelist @ Kore AI. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.
- [Add breakpoints](https://langchain-ai.github.io) - Build language agents as graphs.
- [Cobus Greyling](https://www.cobusgreyling.com) - Exploring the intersection of AI & Language | NLP/NLU/LLM, Chat/Voicebots, CCAI.