September 28, 2025

LangGraph




What is LangGraph?

LangGraph is an open-source framework for building stateful, controllable AI agent workflows as graphs. You describe your app as nodes (steps) and edges (how to move between steps), with a shared, typed state flowing through the graph. It’s designed for agents and multi-step LLM apps where you need loops, branching, tool use, memory, human-in-the-loop, and reliability. 

Core ideas:

  • StateGraph: declare the shape of the shared state. Each node outputs a partial update (a diff). LangGraph applies it to the state—no in-place mutation.

  • Nodes & Edges: register node functions and connect them with edges; use conditional edges for branching; special START and END markers define entry/exit. 

  • Persistence (checkpointers): when compiled with a checkpointer, the graph saves a checkpoint each super-step into a thread, enabling resumability, memory, time-travel, fault tolerance, and human review. 

  • Helpful built-ins & optional platform: quickstarts and message-centric state helpers in the docs to get started fast, plus an optional LangGraph Platform to deploy and operate long-running, stateful agent workflows. 
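The "partial update" idea from the first bullet can be sketched in plain Python. This is a conceptual toy, not LangGraph's actual implementation: the helper names (apply_update, REDUCERS) are invented here, but the merge behavior mirrors what the docs describe — each key gets a reducer, "messages" appends, and everything else is overwritten by the node's diff.

```python
from typing import Any, Callable, Dict

# Append-style reducer, like the one MessagesState uses for "messages"
def add_messages(existing: list, update: list) -> list:
    return (existing or []) + update

# Default reducer: the node's new value simply overwrites the old one
def overwrite(existing: Any, update: Any) -> Any:
    return update

REDUCERS: Dict[str, Callable] = {"messages": add_messages}

def apply_update(state: dict, update: dict) -> dict:
    """Merge a node's partial update into the shared state, key by key."""
    merged = dict(state)
    for key, value in update.items():
        reducer = REDUCERS.get(key, overwrite)
        merged[key] = reducer(state.get(key), value)
    return merged

state = {"messages": [], "league": None}
state = apply_update(state, {"league": "nba"})  # one node's diff
state = apply_update(state, {"messages": [{"role": "ai", "content": "hi"}]})  # another node's diff
```

Note that neither node ever mutated the state it received; each returned only the keys it changed, and the merge produced a fresh state.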

The smallest possible example (Python)

This is a literal “hello world” graph: one node that replies, wired from START → node → END.

# pip install --pre -U langgraph
from langgraph.graph import StateGraph, MessagesState, START, END

# A node: takes the current state, returns updates to it
def say_hello(state: MessagesState):
    # Append one AI message to the conversation
    return {"messages": [{"role": "ai", "content": "hello world"}]}

# 1) Define a graph over a message-based state
graph = StateGraph(MessagesState)

# 2) Register the node (name inferred from the function)
graph.add_node(say_hello)

# 3) Wire edges: START -> say_hello -> END
graph.add_edge(START, "say_hello")
graph.add_edge("say_hello", END)

# 4) Compile to a runnable graph
app = graph.compile()

# 5) Run it (you can pass an initial state; here we start empty)
result = app.invoke({"messages": []})
# MessagesState coerces dicts into message objects, so read .content
print(result["messages"][-1].content)  # -> hello world

This mirrors the official quickstart, just with an explicit print. The key parts to notice are StateGraph(MessagesState), add_node, add_edge(START, ...), add_edge(..., END), and compile().
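To see what compile() and invoke() are doing conceptually, here is a toy graph interpreter in plain Python — a sketch of the execution model (follow the edge from START, run each node, merge its diff, stop at END), not LangGraph's internals; the names run, nodes, and edges are invented for illustration.

```python
from typing import Callable, Dict

START, END = "__start__", "__end__"

nodes: Dict[str, Callable[[dict], dict]] = {}
edges: Dict[str, str] = {}

def say_hello(state: dict) -> dict:
    # Return a partial update; the runner merges it into the state
    return {"messages": state["messages"] + [{"role": "ai", "content": "hello world"}]}

nodes["say_hello"] = say_hello
edges[START] = "say_hello"
edges["say_hello"] = END

def run(state: dict) -> dict:
    """Walk the graph from START to END, applying each node's diff."""
    current = edges[START]
    while current != END:
        state = {**state, **nodes[current](state)}
        current = edges[current]
    return state

result = run({"messages": []})
print(result["messages"][-1]["content"])  # -> hello world
```

The real library adds typed state, reducers, streaming, and checkpointing on top, but the node/edge walk is the same shape.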

A Tiny Graph: NFL/NBA Score Router

This minimal example shows how LangGraph models an agent as a few small nodes connected by edges. A router node inspects the latest user message to determine the league (NFL or NBA), then the graph branches to fetch_nfl or fetch_nba, each returning a short list of mock scores; finally, a finalize node formats a concise reply. If the league is unclear, the router asks for it and routes again once the user replies. The key idea is that each node returns only partial updates (like {"league": "..."} or {"scores": [...]}), which LangGraph merges into the shared state. That gives you predictable, readable control flow without in-place mutation, and you can swap in real API calls without changing the graph's structure.

START → Router ──┬──→ FetchNFL ──→ Finalize → END
                 └──→ FetchNBA ──→ Finalize → END
(if league unknown: Router asks → user replies → Router again)


# pip install --pre -U langgraph
from typing import Optional, List, Dict, Literal

from langgraph.graph import StateGraph, MessagesState, START, END

class ScoreState(MessagesState):
    league: Optional[str]         # "nfl" | "nba"
    scores: Optional[List[Dict]]  # [{"home": ..., "away": ..., "home_score": ..., "away_score": ..., "status": ...}]

def detect_league(text: str) -> Optional[str]:
    t = text.lower()
    if "nfl" in t or "football" in t:
        return "nfl"
    if "nba" in t or "basketball" in t:
        return "nba"
    return None

def router(state: ScoreState):
    """Record the league from the latest message, or ask for it."""
    # MessagesState stores message objects, so read .content
    last = state["messages"][-1].content if state["messages"] else ""
    league = detect_league(last) or state.get("league")
    if league is None:
        # Ask and end this turn; with a checkpointer, the user's reply
        # re-enters the graph at START -> router on the same thread.
        return {"messages": [{"role": "ai", "content": "Which league would you like: NFL or NBA?"}]}
    return {"league": league}

def route_league(state: ScoreState) -> Literal["fetch_nfl", "fetch_nba", "__end__"]:
    """Conditional edge: pick the next node from the merged state."""
    if state.get("league") == "nfl":
        return "fetch_nfl"
    if state.get("league") == "nba":
        return "fetch_nba"
    return END

def fetch_nfl(state: ScoreState):
    return {"scores": [{"away": "Cowboys", "away_score": 20, "home": "Chiefs", "home_score": 24, "status": "Final"}]}

def fetch_nba(state: ScoreState):
    return {"scores": [{"away": "Warriors", "away_score": 105, "home": "Lakers", "home_score": 108, "status": "Final"}]}

def finalize(state: ScoreState):
    league = (state.get("league") or "UNKNOWN").upper()
    scores = state.get("scores") or []
    if not scores:
        return {"messages": [{"role": "ai", "content": f"No {league} scores found."}]}
    lines = [f"{league} scores:"]
    for g in scores:
        lines.append(f"- {g['away']} {g['away_score']} @ {g['home']} {g['home_score']} — {g['status']}")
    return {"messages": [{"role": "ai", "content": "\n".join(lines)}]}

g = StateGraph(ScoreState)
g.add_node("router", router)
g.add_node("fetch_nfl", fetch_nfl)
g.add_node("fetch_nba", fetch_nba)
g.add_node("finalize", finalize)

g.add_edge(START, "router")
g.add_conditional_edges(
    "router",
    route_league,
    {"fetch_nfl": "fetch_nfl", "fetch_nba": "fetch_nba", END: END},
)
g.add_edge("fetch_nfl", "finalize")
g.add_edge("fetch_nba", "finalize")
g.add_edge("finalize", END)

app = g.compile()

# Example:
# app.invoke({"messages": [{"role": "human", "content": "NBA"}]})
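One practical payoff of this structure: nodes and routing functions are plain Python functions, so they can be unit-tested without compiling the graph at all. A small sketch (restating detect_league from above, plus a routing function that maps the merged state to the next node name, with "__end__" standing in for LangGraph's END):

```python
from typing import Optional

def detect_league(text: str) -> Optional[str]:
    t = text.lower()
    if "nfl" in t or "football" in t:
        return "nfl"
    if "nba" in t or "basketball" in t:
        return "nba"
    return None

def route_league(state: dict) -> str:
    """Mirror of the conditional edge: merged state -> next node name."""
    league = state.get("league")
    if league == "nfl":
        return "fetch_nfl"
    if league == "nba":
        return "fetch_nba"
    return "__end__"

# Exercise the routing logic directly, no graph required
print(detect_league("Any football scores tonight?"))  # -> nfl
print(route_league({"league": "nba"}))                # -> fetch_nba
print(route_league({}))                               # -> __end__
```

Because the graph wiring and the decision logic are separate, tests like these keep working even if you later swap the mock fetchers for real API calls.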

LangChain and LangGraph

LangChain is a general LLM orchestration framework that covers models, prompts, retrievers, tools, and LCEL for composing linear flows, plus LangServe and LangSmith for deployment and observability. LangGraph, created by the same team, adds a graph-centric layer for stateful, multi-step, agentic workflows in which nodes transition via edges, supporting loops, branching, backtracking, multi-agent patterns, and checkpointed state.

Use plain LangChain and LCEL for simple, mostly acyclic pipelines such as retrieve-then-generate-then-post-process. Prefer LangGraph when you need iterative tool use, richer control flow, memory, or recovery. The two interoperate seamlessly: LangGraph nodes can call LangChain runnables, tools, and retrievers, and a complete LangGraph app can be wrapped as a runnable inside larger LangChain compositions. Both integrate with LangSmith for tracing and evaluation and can be exposed through LangServe. As a rule of thumb: keep state-transition logic in LangGraph; factor text, I/O, and post-processing into small LangChain runnables for reuse and testing; and start with the simplest approach, introducing a graph only when control-flow complexity or durability (via checkpointers such as SQLite or Postgres) justifies it.
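The "wrap a graph as a runnable" point can be illustrated without either library. In this stdlib-only sketch, a "compiled graph" is just a callable from state dict to state dict, and chain() is invented here as a rough stand-in for the composition LCEL's | operator gives you — the shapes are the point, not the APIs:

```python
from typing import Callable

def compiled_graph(state: dict) -> dict:
    # Stand-in for app = g.compile(); a compiled graph is callable on state
    return {**state, "answer": f"scores for {state['league'].upper()}"}

def preprocess(inp: dict) -> dict:
    # Stand-in for a LangChain runnable that normalizes user input
    return {"league": inp["question"].strip().lower()}

def postprocess(state: dict) -> str:
    # Stand-in for a small output-formatting runnable
    return state["answer"].title()

def chain(*steps: Callable) -> Callable:
    """Compose callables left to right, roughly like LCEL's `|`."""
    def run(x):
        for step in steps:
            x = step(x)
        return x
    return run

# The graph sits mid-pipeline like any other runnable
pipeline = chain(preprocess, compiled_graph, postprocess)
print(pipeline({"question": " NBA "}))
```

This is why the division of labor in the paragraph above works: as long as each piece maps inputs to outputs, graph-shaped and chain-shaped components compose freely.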


Reference

https://langchain-ai.github.io/langgraph/reference/
https://docs.langchain.com/oss/python/langgraph/quickstart
https://github.com/langchain-ai/langgraph
https://www.langchain.com/langgraph
https://docs.langchain.com/langgraph-platform/index
https://docs.langchain.com/oss/python/langchain/overview
