This example demonstrates how to build a multi-agent pipeline for scientific discovery using LangGraph. Five specialized agents work in sequence to tackle a research goal, with each agent contributing its expertise before passing results to the next.
The pipeline addresses a sample scientific goal: “Find catalysts that improve CO2 conversion at room temperature.”
The workflow proceeds through five stages:
| Agent | Role | Input | Output |
|---|---|---|---|
| Scout | Surveys the problem space, identifies anomalies | Goal | Research opportunities |
| Planner | Designs workflows, allocates resources | Opportunities | Workflow plan |
| Operator | Executes the planned workflow safely | Plan | Execution results |
| Analyst | Summarizes findings, quantifies uncertainty | Results | Analysis summary |
| Archivist | Documents everything for reproducibility | Summary | Documented provenance |
Each agent implementation is a skeleton demonstrating the pattern.
Requirements: Python 3.10+, LangGraph 1.0+, LangChain 1.0+
The example supports three modes:
| Mode | Environment Variable | Description |
|---|---|---|
| OpenAI | `OPENAI_API_KEY` | Uses OpenAI's gpt-4o-mini |
| FIRST | `FIRST_API_KEY` | Uses the FIRST HPC inference service |
| Mock | (none) | Demonstrates the pattern with hardcoded responses |
Mock mode runs without any API key, showing realistic example outputs for the scientific workflow.
Mock mode runs without any API key, showing realistic example outputs for the scientific workflow.
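The backend can be selected by inspecting which API key is present, falling back to mock mode when none is set. A minimal sketch of that logic (the function name is illustrative; the actual `llm.py` may differ):

```python
import os

def detect_mode() -> str:
    """Pick the inference backend from environment variables.

    Hypothetical helper: priority order (OpenAI, then FIRST, then mock)
    is an assumption, not taken from the example's llm.py.
    """
    if os.environ.get("OPENAI_API_KEY"):
        return "openai"
    if os.environ.get("FIRST_API_KEY"):
        return "first"
    return "mock"  # no key set: fall back to hardcoded responses
```

This keeps the rest of the pipeline agnostic to the backend: nodes call the same chain interface regardless of which mode was detected.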
LangGraph provides several advantages over plain LangChain for multi-agent workflows: typed shared state, an explicit graph of node-to-node transitions, and built-in streaming of per-node events.
The code uses LangGraph’s StateGraph for workflow definition. State is passed between agent nodes:
```python
from typing import TypedDict, Annotated
from operator import add

class PipelineState(TypedDict):
    goal: str
    scout_output: str
    planner_output: str
    operator_output: str
    analyst_output: str
    archivist_output: str
    messages: Annotated[list[str], add]  # Accumulates across nodes
```
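The `add` annotation on `messages` is a reducer: LangGraph concatenates each node's returned list onto the accumulated state instead of overwriting it. Conceptually, the merge behaves like this plain-Python sketch:

```python
from operator import add

# How LangGraph merges a `messages` update into accumulated state:
# `add` on two lists is list concatenation, so updates append.
accumulated = ["Scout: Identified opportunities"]
update = ["Planner: Drafted workflow"]
accumulated = add(accumulated, update)  # same as accumulated + update
```

Fields without a reducer (such as `scout_output`) are simply replaced by the last node that writes them.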
Each agent is a node function that receives state and returns updates:
```python
def scout_node(state: PipelineState) -> dict:
    chain = _create_chain("You are the Scout agent...")
    output = chain.invoke({"input": state["goal"]})
    return {
        "scout_output": output,
        "messages": ["Scout: Identified opportunities"],
    }
```
The graph defines the flow between agents:
```python
from langgraph.graph import StateGraph, START, END

graph = StateGraph(PipelineState)

# Add nodes
graph.add_node("scout", scout_node)
graph.add_node("planner", planner_node)
graph.add_node("operator", operator_node)
graph.add_node("analyst", analyst_node)
graph.add_node("archivist", archivist_node)

# Define edges
graph.add_edge(START, "scout")
graph.add_edge("scout", "planner")
graph.add_edge("planner", "operator")
graph.add_edge("operator", "analyst")
graph.add_edge("analyst", "archivist")
graph.add_edge("archivist", END)

# Compile to executable
app = graph.compile()
```
Running the pipeline with streaming:
```python
initial_state = {
    "goal": "Find catalysts that improve CO2 conversion at room temperature",
    "messages": [],
}

for event in app.stream(initial_state):
    for node_name in event:
        print(f"{node_name} completed")
```
```
AgentsLangGraph/
├── main.py              # Entry point
├── requirements.txt     # Dependencies
└── pipeline/
    ├── state.py         # PipelineState TypedDict
    ├── nodes.py         # Agent node functions
    ├── graph.py         # StateGraph definition
    ├── llm.py           # LLM configuration (OpenAI/FIRST/mock)
    └── tools/
        └── analysis.py  # analyze_dataset tool
```
```bash
cd Capabilities/local-agents/AgentsLangGraph
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

# Run with mock responses (no API key needed)
python main.py

# Or with OpenAI
export OPENAI_API_KEY=<your_api_key>
python main.py

# Or with FIRST (HPC environments)
export FIRST_API_KEY=<your_token>
export FIRST_API_BASE=https://your-first-endpoint/v1
python main.py
```
Custom goal:
```bash
python main.py --goal "Design a catalyst for ammonia synthesis"
```
Quiet mode (less output):
```bash
python main.py --quiet
```
| Aspect | LangGraph | Academy |
|---|---|---|
| Workflow definition | StateGraph with typed state | Agent-to-agent messaging |
| State management | Typed PipelineState dict | Passed between agents |
| Flow control | Graph edges | Pipeline or hub-and-spoke |
| Distribution | Single process | Supports federated execution |
LangGraph is particularly useful when workflows need typed shared state, an explicit execution graph, and streaming visibility into each node's progress within a single process.