LangGraph with Human-in-the-Loop: Building Responsible AI Workflows


Introduction

In today’s AI-powered world, automation without accountability can be risky. Enter LangGraph with Human-in-the-Loop (HITL) — a powerful approach for building safe, controllable, and modular agent workflows using LangChain’s LangGraph framework. Whether you’re building an AI assistant for legal review, customer support, or sensitive decision-making, HITL ensures humans retain control over critical checkpoints.

In this blog, we’ll walk through:

  • What LangGraph is and why it matters
  • Real-world HITL use cases
  • Building a HITL agent workflow step-by-step:
    • Base agent setup
    • Agent state design
    • Human approval node
    • Graph construction and execution
    • Pausing and resuming workflows

Real-World Use Case: AI Legal Assistant

Imagine a law firm using an AI assistant to draft contracts. You want the AI to handle:

  • Initial document creation
  • Summarization
  • Risk detection

…but before sending the final draft, a human lawyer must approve it. That’s the perfect use case for LangGraph with HITL.


Part 1: Base Agent Setup

Define LLM + Tools:

from dotenv import load_dotenv
load_dotenv()
from langchain_groq import ChatGroq
llm = ChatGroq(model_name="deepseek-r1-distill-llama-70b")

Add Tools:

from langchain_core.tools import tool
from langchain_community.tools.tavily_search import TavilySearchResults

@tool
def add(x: int, y: int) -> int:
    """Add two integers."""  # @tool requires a docstring to describe the tool
    return x + y

@tool
def search(query: str) -> str:
    """Search the web for the given query."""
    tavily = TavilySearchResults()
    result = tavily.invoke(query)
    return f"Result for {query}:\n{result}"

tools = [add, search]
llm_with_tools = llm.bind_tools(tools)

You can test tool binding with:

result = llm_with_tools.invoke("What is the capital of India?")
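When the model decides a tool is needed, the returned message carries structured tool calls rather than a final answer. The sketch below shows, with plain Python and hypothetical example data shaped like the entries in a LangChain `AIMessage.tool_calls` list, how such calls could be dispatched to the bound tools:

```python
# Plain-Python tools standing in for the @tool-decorated versions above
def add(x: int, y: int) -> int:
    return x + y

def search(query: str) -> str:
    return f"Result for {query}: ..."

# Registry mapping tool names to callables
tool_registry = {"add": add, "search": search}

# Hypothetical tool calls, mimicking the shape of AIMessage.tool_calls
tool_calls = [
    {"name": "add", "args": {"x": 2, "y": 3}},
    {"name": "search", "args": {"query": "capital of India"}},
]

# Dispatch each call to the matching tool with its arguments
outputs = [tool_registry[tc["name"]](**tc["args"]) for tc in tool_calls]
print(outputs)  # [5, 'Result for capital of India: ...']
```

In a real run, a tool node performs exactly this lookup-and-invoke step before feeding results back to the model.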

Part 2: Agent State Design

LangGraph requires a state model. Here’s our basic structure:

from typing import TypedDict, Sequence, Annotated
import operator
from langchain_core.messages import BaseMessage

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]

This state tracks all messages exchanged with the agent.
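The `Annotated[..., operator.add]` reducer is what makes this work: when a node returns a partial state update, the new messages are appended to the existing list instead of replacing it. A stdlib-only sketch of that merging behavior (the `merge_state` helper is illustrative, not LangGraph's actual implementation):

```python
import operator

def merge_state(current: dict, update: dict, reducers: dict) -> dict:
    """Apply per-key reducers when merging a node's update into state."""
    merged = dict(current)
    for key, value in update.items():
        if key in reducers and key in merged:
            # Reducer present: combine old and new values (here, list concat)
            merged[key] = reducers[key](merged[key], value)
        else:
            # No reducer: last write wins
            merged[key] = value
    return merged

reducers = {"messages": operator.add}

state = {"messages": ["Get me GDP of India"]}
update = {"messages": ["<LLM draft answer>"]}

state = merge_state(state, update, reducers)
print(state["messages"])  # ['Get me GDP of India', '<LLM draft answer>']
```

Without the reducer annotation, each node's return value would overwrite the message history rather than extend it.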


Part 3: Human Approval Node

This is where LangGraph truly shines by allowing us to pause execution and involve a human reviewer mid-agent run.

from langgraph.types import interrupt
from langchain_core.messages import HumanMessage

def human_node(state: AgentState):
    # Pause the graph and request manual review
    value = interrupt({
        "text_to_revise": state["messages"][-1].content
    })
    return {
        "messages": [HumanMessage(content=value["text_to_revise"])]
    }

What is interrupt() doing?

The interrupt() function in LangGraph creates a controlled pause point. When this node executes:

  • Graph execution halts at the interrupt() call.
  • The payload you pass (in this case, the last LLM response) is surfaced to the outside world — typically a UI or a backend service.
  • A human can then review or modify the AI’s response before the graph proceeds.

Example Returned Interrupt:

When the graph pauses, the result of invoke() includes an __interrupt__ key. In recent LangGraph versions this holds a sequence of Interrupt objects, with your payload in the value field:

{
  "__interrupt__": [
    Interrupt(value={"text_to_revise": "GDP of India"})
  ]
}

This output can now be shown in a UI or CLI to ask the user: “Do you want to approve or revise this?”
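A minimal sketch of that approve-or-revise step, using plain Python. The `review_payload` function name and its signature are illustrative, not part of LangGraph; in practice a UI or CLI prompt would collect the decision:

```python
from typing import Optional

def review_payload(payload: dict, decision: str,
                   revision: Optional[str] = None) -> dict:
    """Build the resume value to feed back into the paused graph."""
    text = payload["text_to_revise"]
    if decision == "approve":
        # Approved as-is: echo the original text back
        return {"text_to_revise": text}
    if decision == "revise" and revision is not None:
        # Reviewer supplied an edited version
        return {"text_to_revise": revision}
    raise ValueError("decision must be 'approve' or 'revise' (with a revision)")

payload = {"text_to_revise": "GDP of India"}
print(review_payload(payload, "approve"))
print(review_payload(payload, "revise", "GDP of India in 2024, with sources"))
```

Whatever dictionary this step produces is exactly what you later pass to Command(resume=...).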


Visual: Human-in-the-Loop Flow

  1. LLM generates tool call
  2. Human review is triggered via interrupt
  3. Graph resumes after manual input using resume

Part 4: Graph and Execution

For demonstration, we compile a minimal graph containing only the human node; in a full agent you would also register the LLM and tool nodes and connect them with edges.

from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver

graph_builder = StateGraph(AgentState)
graph_builder.add_node("human", human_node)
graph_builder.set_entry_point("human")
graph_builder.set_finish_point("human")

graph = graph_builder.compile(checkpointer=MemorySaver())
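The checkpointer plus a thread_id is what makes pause/resume possible: the graph state is snapshotted per conversation thread so a paused run can be picked up later. A conceptual stdlib sketch of that idea (the `TinyMemorySaver` class mirrors the role of MemorySaver, not its actual implementation):

```python
class TinyMemorySaver:
    """Store one state snapshot per thread_id, like a toy checkpointer."""

    def __init__(self):
        self._checkpoints = {}

    def save(self, thread_id: str, state: dict) -> None:
        # Snapshot the state under this thread's id
        self._checkpoints[thread_id] = dict(state)

    def load(self, thread_id: str) -> dict:
        # Retrieve the snapshot (empty state if the thread is unknown)
        return dict(self._checkpoints.get(thread_id, {}))

saver = TinyMemorySaver()
saver.save("session-001", {"messages": ["Get me GDP of India"],
                           "paused_at": "human"})

# Later, a resume call looks up the same thread_id to continue the run
snapshot = saver.load("session-001")
print(snapshot["paused_at"])  # human
```

This is why the same `{"configurable": {"thread_id": ...}}` config must be passed on both the initial invoke and the resume.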

Part 5: Run + Pause Execution

Run the graph and observe how it halts at the human review node:

config = {"configurable": {"thread_id": "session-001"}}

result = graph.invoke({"messages": [HumanMessage(content="Get me GDP of India")]}, config=config)

print(result["__interrupt__"])

Resume Execution: What is Command(resume=)?

Once the human has revised or approved the content, you can resume the graph using:

from langgraph.types import Command

graph.invoke(
    Command(resume={"text_to_revise": "GDP of India"}),
    config=config
)

How it works:

  • Command(resume={...}) tells LangGraph to continue from the pause point.
  • The value you pass becomes the return value of the paused interrupt() call inside the node.
  • Execution picks up smoothly from where it left off.
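Conceptually, this pause-and-resume behaves like a Python generator: the node yields a payload out to the caller, then receives a value back at the exact pause point via send(). The sketch below is an analogy for the mechanics, not LangGraph's internals:

```python
def human_node(last_message: str):
    # "interrupt": hand the payload out and wait for a resume value
    value = yield {"text_to_revise": last_message}
    # Execution continues here after resume, with the reviewer's value
    yield {"messages": [value["text_to_revise"]]}

node = human_node("GDP of India")

payload = next(node)   # graph "pauses"; payload surfaces to the caller
print(payload)         # {'text_to_revise': 'GDP of India'}

# "Command(resume=...)": send the reviewed value back into the pause point
result = node.send({"text_to_revise": "GDP of India (reviewed)"})
print(result)          # {'messages': ['GDP of India (reviewed)']}
```

Just as `send()` resumes the generator at the `yield`, Command(resume=...) resumes the node at its interrupt() call.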

Conclusion

LangGraph makes it easy to combine powerful AI agents with human intelligence — ideal for building real-world production workflows.

With built-in support for pause/resume logic, interrupt() and Command(resume=...) form the foundation for trustworthy AI automation, empowering teams to confidently deploy LLMs where oversight matters.


Prem Kumar