Workflows

Sequential Workflows

Chain AI agents together in a linear pipeline for predictable, step-by-step automation using the unified workflow builder.

April 1, 2026
7 min read

Sequential workflows connect agents in a linear chain where each agent's output flows directly into the next — like an assembly line. This is the simplest and most predictable way to coordinate multiple agents, and it's built directly into the unified workflow builder.

How Sequential Workflows Work

Each agent task completes before passing its result to the next task in the chain:

Trigger → Agent A → Agent B → Agent C → Output

Execution is deterministic — you define the order, and that's the order it runs. There's no branching or dynamic routing in a purely sequential flow.
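The assembly-line model above can be sketched in a few lines of Python. This is a minimal illustration, not the platform's actual runtime: each "agent" is a plain function, and the runner simply pipes one output into the next.

```python
# Minimal sketch of a sequential workflow: each agent is a plain function,
# and the runner feeds each one the previous output. All names here are
# hypothetical, not the platform's real API.

def run_sequential(trigger_input, agents):
    """Run agents in order; each starts only after the previous one finishes."""
    result = trigger_input
    for agent in agents:
        result = agent(result)
    return result

# Three toy "agents" standing in for Agent A -> Agent B -> Agent C
agent_a = lambda text: f"researched({text})"
agent_b = lambda text: f"drafted({text})"
agent_c = lambda text: f"edited({text})"

final = run_sequential("topic", [agent_a, agent_b, agent_c])
print(final)  # edited(drafted(researched(topic)))
```

Because the runner is just a loop, the execution order is exactly the order of the list — which is what makes sequential flows deterministic.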

When to Use Sequential Workflows

Sequential works best when your process has clear, known steps that always happen in the same order.

| Scenario | Why Sequential Works |
| --- | --- |
| Content pipelines | Research → Write → Edit → Publish always happens in order |
| Data transformation | Input flows through predictable processing stages |
| Quality assurance | Review and validation stages are always required |
| Document processing | Extract → Analyze → Summarize → Report |
| Compliance workflows | Every step must be completed and logged |

Good Use Cases

  • Content creation: Research → Write → Edit → Format
  • Lead processing: Capture → Enrich → Score → Route
  • Document review: Extract → Analyze → Summarize → Report
  • Customer onboarding: Verify → Setup → Welcome → Notify

When Sequential May Not Be Enough

If the steps themselves depend on runtime conditions — for example, if you don't know ahead of time which agents should be involved — consider a Dynamic Workflow instead. Dynamic workflows use the same builder but add hub-style delegation so agents can route work adaptively.

Building a Sequential Workflow

Step 1: Create a Workflow

  1. Navigate to Workflows in the sidebar
  2. Click Create Workflow
  3. Give it a descriptive name (e.g., "Blog Post Pipeline")

Step 2: Add a Trigger

Select how the workflow starts:

  • Manual — Useful for testing or one-off runs
  • Webhook — External system sends data to kick off the flow
  • Schedule — Runs automatically on a cron schedule
  • Form — Collects structured input before execution begins

[!TIP] Start with a Manual trigger while building. You can switch to automated triggers once your pipeline is working correctly.
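To make the trigger types concrete, here is a hypothetical sketch of the configuration each one might carry. The dict shapes, paths, and field names are invented for illustration; the builder stores this for you.

```python
# Hypothetical trigger configurations -- illustrative only, not the
# platform's actual schema.
triggers = {
    "manual":   {"type": "manual"},                          # started by hand
    "webhook":  {"type": "webhook", "path": "/hooks/blog"},  # external POST starts it
    "schedule": {"type": "schedule", "cron": "0 9 * * 1"},   # every Monday at 09:00
    "form":     {"type": "form", "fields": ["topic", "audience"]},
}

def starts_automatically(trigger):
    """Webhook and schedule triggers fire on their own; manual and form wait for a person."""
    return trigger["type"] in ("webhook", "schedule")

print(starts_automatically(triggers["schedule"]))  # True
```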

Step 3: Add Agent Tasks in Order

Click Add Agent after the trigger and configure each task:

| Setting | Description |
| --- | --- |
| Agent | Which AI agent handles this task |
| Task Name | A clear, descriptive name (e.g., "Research Topic") |
| Instructions | Task-specific guidance for this step |
| Context Control | Whether the agent sees previous output or starts fresh |

Repeat for each step in your pipeline.

Step 4: Connect the Nodes

  • Drag from the right handle of one node to the left handle of the next
  • Repeat until all tasks are connected in sequence
  • Use Auto-Layout in the toolbar to clean up the visual arrangement

The execution order follows the connections you draw.

Step 5: Test and Activate

  1. Click Test and provide sample input
  2. Watch execution flow through each node
  3. Review outputs at each stage and adjust agent instructions as needed
  4. Toggle Active when the workflow is ready to go live

Context Control

You control how much information each agent receives from the previous step:

| Setting | Behavior | Use When |
| --- | --- | --- |
| Full Context | Agent sees all prior conversation history | Agent needs to understand earlier decisions |
| Isolated | Agent only sees the immediate output of the previous task | Agent should focus solely on its own task |

Guidance

  • First task: Isolated is usually best — starts fresh from the trigger input
  • Tasks that build on prior work: Full Context lets the agent reference earlier outputs
  • Independent processing tasks: Isolated keeps focus tight and reduces noise
  • Final formatting/output tasks: Often Isolated — only needs the polished content
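One way to picture the difference between the two settings is how they shape the input an agent receives. This is a sketch under assumed semantics, not the platform's implementation: "full" concatenates every prior output, while "isolated" keeps only the most recent one.

```python
# Sketch of how the two context settings might shape an agent's input.
# Assumed behavior for illustration: "full" passes the whole running
# transcript; "isolated" passes only the previous task's output.

def build_agent_input(history, mode):
    """history is the list of prior task outputs, oldest first."""
    if mode == "full":
        return "\n\n".join(history)            # agent sees every earlier output
    if mode == "isolated":
        return history[-1] if history else ""  # only the last step's output
    raise ValueError(f"unknown context mode: {mode}")

history = ["research notes", "first draft"]
print(build_agent_input(history, "isolated"))  # first draft
```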

Example: Content Review Pipeline

Agents

  1. Research Agent — Gathers facts and sources on the topic
  2. Writer Agent — Creates a draft from the research
  3. Editor Agent — Polishes clarity and tone
  4. SEO Agent — Optimizes headings, meta description, and keywords

Workflow

[Form: Collect Topic]
        ↓
[Research Agent]
  Instructions: "Research the topic thoroughly.
  Provide 5-7 key facts with sources."
  Context: Isolated
        ↓
[Writer Agent]
  Instructions: "Write a 1000-word article
  using the research provided."
  Context: Full (sees research)
        ↓
[Editor Agent]
  Instructions: "Improve clarity, fix errors,
  enhance readability."
  Context: Full (sees research + draft)
        ↓
[SEO Agent]
  Instructions: "Add meta description,
  optimize headings, suggest keywords."
  Context: Isolated (only needs final content)
        ↓
[Output: Optimized Article]

Why This Works

  • Research starts isolated — no prior context needed, just the topic
  • Writer has full context — uses the research to write the draft
  • Editor has full context — understands the research and draft together
  • SEO is isolated — only needs the final polished text
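The whole example can be sketched as data plus a tiny runner. Agent behavior is faked with string tags here, and the context flags mirror the diagram; none of this is the platform's real API.

```python
# The content review pipeline as data plus a toy runner. Each "agent call"
# just wraps its input in a tag so the flow of context is visible.

def run_pipeline(topic, steps):
    history = [topic]
    for name, context in steps:
        seen = "\n".join(history) if context == "full" else history[-1]
        history.append(f"{name}({seen})")  # stand-in for a real agent call
    return history[-1]

steps = [
    ("research", "isolated"),  # only needs the topic
    ("write",    "full"),      # sees topic + research
    ("edit",     "full"),      # sees everything so far
    ("seo",      "isolated"),  # only the edited draft
]

print(run_pipeline("AI trends", steps))
```

Note how the final SEO step receives only the edited draft, while the writer and editor see the accumulated transcript — exactly the context choices the pipeline above makes.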

Best Practices

Design Clear Handoffs

Each agent should produce output the next agent can easily use:

❌ "I found some information about the topic."
✅ "## Research Findings\n\n1. Key fact with source...\n2. Key fact with source..."

Add instructions like: "Format your output as structured markdown so the next agent can parse it easily."
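A lightweight way to enforce this is to check the handoff before passing it on. The sketch below treats "structured" as simply "has a markdown heading plus at least one numbered finding" — the thresholds are invented for illustration.

```python
import re

# Illustrative check that a handoff is structured enough for the next agent
# to parse: a markdown heading plus numbered findings. Thresholds are made up.

def is_parseable_handoff(text):
    has_heading = text.lstrip().startswith("##")
    findings = re.findall(r"^\d+\.\s+\S", text, flags=re.MULTILINE)
    return has_heading and len(findings) >= 1

bad = "I found some information about the topic."
good = "## Research Findings\n\n1. Key fact with source\n2. Key fact with source"

print(is_parseable_handoff(bad), is_parseable_handoff(good))  # False True
```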

Keep Pipelines Focused

  • 3–5 tasks is usually the right size
  • Each task should have one clear purpose
  • Too many steps slow execution and increase cost

Add Guardrails Between Critical Steps

Insert a validation task between steps where bad output would propagate:

[Writer] → [Quality Check] → [Editor]

See Adding Guardrails for details.
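Conceptually, the quality check is a task that either passes the draft through unchanged or stops the pipeline. This sketch uses made-up checks (a minimum word count and a TODO scan); a real guardrail would encode your own acceptance criteria.

```python
# Sketch of a validation task wedged between Writer and Editor. If the draft
# fails basic checks, the pipeline stops instead of pushing bad output
# downstream. Checks and thresholds are invented for illustration.

def quality_check(draft, min_words=50):
    problems = []
    if len(draft.split()) < min_words:
        problems.append(f"draft shorter than {min_words} words")
    if "TODO" in draft:
        problems.append("draft contains unresolved TODOs")
    return problems

def guarded_handoff(draft):
    problems = quality_check(draft)
    if problems:
        raise ValueError("quality check failed: " + "; ".join(problems))
    return draft  # safe to pass to the Editor

try:
    guarded_handoff("TODO write intro")
except ValueError as e:
    print(e)
```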

Test Incrementally

  1. Test each agent individually first
  2. Then test adjacent pairs
  3. Finally run the full pipeline

This makes it much easier to find where things go wrong.

Troubleshooting

Output not reaching the next task

  • Check that all nodes are connected
  • Confirm the previous task completed without errors
  • Review what the previous task actually produced

Context not being used

  • Make sure context is set to "Full Context" on that task
  • Verify the previous task produced meaningful, structured output

Execution is slow

  • Sequential tasks run one at a time by design
  • Consider combining simple tasks into one
  • Use faster models for lower-complexity steps

Related Guides