LangGraph Pipeline Example

This example demonstrates how to build a multi-agent pipeline for scientific discovery using LangGraph. Five specialized agents work in sequence to tackle a research goal, with each agent contributing its expertise before passing results to the next.

Code: github.com/agents4science/agents4science.github.io/tree/main/Capabilities/local-agents/AgentsLangGraph

The Application

The pipeline addresses a sample scientific goal: “Find catalysts that improve CO2 conversion at room temperature.”

The workflow proceeds through five stages:

| Agent     | Role                                            | Input         | Output                 |
|-----------|-------------------------------------------------|---------------|------------------------|
| Scout     | Surveys the problem space, identifies anomalies | Goal          | Research opportunities |
| Planner   | Designs workflows, allocates resources          | Opportunities | Workflow plan          |
| Operator  | Executes the planned workflow safely            | Plan          | Execution results      |
| Analyst   | Summarizes findings, quantifies uncertainty     | Results       | Analysis summary       |
| Archivist | Documents everything for reproducibility        | Summary       | Documented provenance  |

Each agent implementation is a skeleton demonstrating the pattern.

Requirements: Python 3.10+, LangGraph 1.0+, LangChain 1.0+

LLM Configuration

The example supports OpenAI, FIRST (HPC inference), Ollama (local), or mock mode. Mock mode returns realistic example outputs for the scientific workflow, so the pipeline can run offline.

See LLM Configuration for details on configuring LLM backends, including Argonne’s FIRST service.
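As a sketch of how backend selection might work (the actual `pipeline/llm.py` may differ; `get_llm` and the `LLM_BACKEND` variable are illustrative names, not part of the repository):

```python
import os

def get_llm(backend: str = "mock"):
    """Illustrative backend selector; names and defaults are hypothetical."""
    if backend == "openai":
        from langchain_openai import ChatOpenAI  # needs OPENAI_API_KEY
        return ChatOpenAI(model="gpt-4o-mini")
    if backend == "ollama":
        from langchain_ollama import ChatOllama  # needs a local Ollama server
        return ChatOllama(model="llama3")

    class MockLLM:
        # Deterministic stand-in so the pipeline runs without any backend.
        def invoke(self, prompt: str) -> str:
            return f"[mock response to: {prompt}]"

    return MockLLM()

llm = get_llm(os.environ.get("LLM_BACKEND", "mock"))
print(llm.invoke("Survey CO2 catalysis literature"))
```

Keeping backend selection behind one factory function means the agent nodes never need to know which LLM they are talking to.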

Why LangGraph?

LangGraph provides several advantages over plain LangChain for multi-agent workflows:

- Explicit graph structure: the flow between agents is declared as nodes and edges rather than implicit chain composition
- Typed shared state: a single TypedDict flows through the pipeline, with reducers controlling how updates merge
- Streaming: intermediate results can be observed as each node completes
- Checkpointing: compiled graphs can persist state, enabling resumption and inspection

Implementation

The code uses LangGraph’s StateGraph for workflow definition. State is passed between agent nodes:

from typing import TypedDict, Annotated
from operator import add

class PipelineState(TypedDict):
    goal: str
    scout_output: str
    planner_output: str
    operator_output: str
    analyst_output: str
    archivist_output: str
    messages: Annotated[list[str], add]  # Accumulates across nodes
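The `add` reducer on `messages` tells LangGraph to merge node updates by concatenation instead of overwriting. A minimal illustration of that merge semantics, outside LangGraph:

```python
from operator import add

# Keys without a reducer are overwritten; `messages` is merged via operator.add.
state = {"scout_output": "", "messages": []}
update = {"scout_output": "opportunities found", "messages": ["Scout: done"]}

state["scout_output"] = update["scout_output"]                  # plain key: replace
state["messages"] = add(state["messages"], update["messages"])  # reducer: append
print(state["messages"])
```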

Each agent is a node function that receives state and returns updates:

def scout_node(state: PipelineState) -> dict:
    chain = _create_chain("You are the Scout agent...")
    output = chain.invoke({"input": state["goal"]})
    return {
        "scout_output": output,
        "messages": ["Scout: Identified opportunities"]
    }

The graph defines the flow between agents:

from langgraph.graph import StateGraph, START, END

graph = StateGraph(PipelineState)

# Add nodes
graph.add_node("scout", scout_node)
graph.add_node("planner", planner_node)
graph.add_node("operator", operator_node)
graph.add_node("analyst", analyst_node)
graph.add_node("archivist", archivist_node)

# Define edges
graph.add_edge(START, "scout")
graph.add_edge("scout", "planner")
graph.add_edge("planner", "operator")
graph.add_edge("operator", "analyst")
graph.add_edge("analyst", "archivist")
graph.add_edge("archivist", END)

# Compile to executable
app = graph.compile()

Running the pipeline with streaming:

for event in app.stream(initial_state):
    for node_name in event:
        print(f"{node_name} completed")
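Conceptually, the compiled graph threads the state through the five nodes in order, applying the reducer after each one. A dependency-free sketch of that data flow, with mock node functions standing in for the real agents:

```python
from operator import add

def make_node(name: str):
    # Mock node: writes its output key and appends a progress message,
    # mirroring the update dicts the real node functions return.
    def node(state: dict) -> dict:
        return {f"{name}_output": f"{name} result",
                "messages": [f"{name} completed"]}
    return node

state = {"goal": "Find catalysts that improve CO2 conversion", "messages": []}
for name in ["scout", "planner", "operator", "analyst", "archivist"]:
    update = make_node(name)(state)
    for key, value in update.items():
        # Apply the reducer for the annotated key; overwrite otherwise.
        state[key] = add(state[key], value) if key == "messages" else value
    print(f"{name} completed")
```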

Directory Structure

AgentsLangGraph/
├── main.py                # Entry point
├── requirements.txt       # Dependencies
└── pipeline/
    ├── state.py           # PipelineState TypedDict
    ├── nodes.py           # Agent node functions
    ├── graph.py           # StateGraph definition
    ├── llm.py             # LLM configuration (OpenAI/FIRST/mock)
    └── tools/
        └── analysis.py    # analyze_dataset tool
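The `analyze_dataset` tool in `analysis.py` is a skeleton; one plausible shape (the signature and statistics below are assumptions, not the repository's actual code) is a function returning summary statistics an agent can reason over:

```python
import csv
import statistics
from io import StringIO

def analyze_dataset(csv_text: str, column: str) -> dict:
    """Hypothetical sketch: summary statistics for one numeric CSV column."""
    values = [float(row[column]) for row in csv.DictReader(StringIO(csv_text))]
    return {
        "n": len(values),
        "mean": statistics.mean(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
    }

sample = "conversion\n0.12\n0.18\n0.15\n"
print(analyze_dataset(sample, "conversion"))  # mean of 0.12, 0.18, 0.15 is 0.15
```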

Running the Example

cd Capabilities/local-agents/AgentsLangGraph
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python main.py

Custom goal:

python main.py --goal "Design a catalyst for ammonia synthesis"

Quiet mode (less output):

python main.py --quiet

Comparison with Academy Version

| Aspect              | LangGraph                   | Academy                       |
|---------------------|-----------------------------|-------------------------------|
| Workflow definition | StateGraph with typed state | Agent-to-agent messaging      |
| State management    | Typed PipelineState dict    | Passed between agents         |
| Flow control        | Graph edges                 | Pipeline or hub-and-spoke     |
| Distribution        | Single process              | Supports federated execution  |

LangGraph is particularly useful when workflows need:

- Conditional branching or cycles (for example, retrying a failed analysis step)
- Typed, inspectable state shared across many agents
- Streaming of intermediate results as each node completes
- Checkpointing for long-running or interruptible pipelines

See Also