Sequential Mode

Learn how sequential workflows chain AI agents together in a linear pipeline for predictable, step-by-step automation.

February 6, 2024
7 min read

Sequential mode chains multiple AI agents together in a linear pipeline, where each agent's output flows directly to the next agent in sequence.

What is Sequential Mode?

In sequential mode, tasks execute one after another in a predetermined order—like an assembly line. Each agent completes its work before passing the result to the next agent.

Input → Agent A → Agent B → Agent C → Output

This is the most straightforward way to coordinate multiple AI agents, perfect for processes with clear, predictable steps.
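
In code terms, a sequential pipeline is just function composition: each agent takes the previous agent's output as its input. A minimal sketch (the `run_agent` helper below is a stand-in for however your agents are actually invoked, not part of AffinityBots):

```python
def run_agent(name: str, prompt: str) -> str:
    # Placeholder: in a real workflow this would call your agent
    # (an LLM API, an SDK method, etc.). Here it just labels the hand-off.
    return f"[{name} output based on: {prompt[:60]}]"

def sequential_pipeline(user_input: str) -> str:
    research = run_agent("Agent A", user_input)   # Input -> Agent A
    draft = run_agent("Agent B", research)        # Agent A -> Agent B
    final = run_agent("Agent C", draft)           # Agent B -> Agent C
    return final                                  # -> Output
```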

When to Use Sequential Mode

Sequential mode works best when:

| Scenario | Why Sequential Works |
| --- | --- |
| Predictable processes | Steps are known in advance |
| Content pipelines | Each stage builds on the previous |
| Data transformation | Input flows through processing stages |
| Quality assurance | Review stages happen in order |
| Compliance workflows | Audit trail is important |

Good Use Cases

  • Content creation: Research → Write → Edit → Format
  • Lead processing: Capture → Enrich → Score → Route
  • Document review: Extract → Analyze → Summarize → Report
  • Customer onboarding: Verify → Setup → Welcome → Train

When NOT to Use Sequential

  • Tasks where the best approach isn't known upfront
  • Complex problems requiring adaptive decision-making
  • Situations where agents need to collaborate dynamically

For these cases, consider Orchestrator Mode.

How Sequential Mode Works in AffinityBots

The Flow

  1. Trigger fires (manual, webhook, schedule, or form)
  2. First task receives the trigger input
  3. Agent executes and produces output
  4. Output passes to the next task as input
  5. Process repeats until all tasks complete
  6. Final output is returned
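
The same flow, written as a loop over an ordered task list: the trigger payload feeds the first task, and each task's output becomes the next task's input. This is an illustrative sketch, not the AffinityBots API (the `Task` class and `run_agent` helper are placeholders); the comments map back to the numbered steps above.

```python
from dataclasses import dataclass

@dataclass
class Task:
    agent: str          # which agent handles this task
    instructions: str   # task-specific guidance

def run_agent(agent: str, instructions: str, payload: str) -> str:
    # Placeholder for the real agent call.
    return f"[{agent}: {instructions} | input: {payload[:40]}]"

def run_workflow(trigger_payload: str, tasks: list[Task]) -> str:
    current = trigger_payload                 # steps 1-2: trigger input feeds the first task
    for task in tasks:                        # step 5: repeat until all tasks complete
        current = run_agent(task.agent,       # step 3: agent executes and produces output
                            task.instructions,
                            current)          # step 4: output becomes the next task's input
    return current                            # step 6: final output is returned
```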

Context Control

You control how much information flows between agents:

| Setting | Behavior |
| --- | --- |
| Full Context | Agent sees all previous conversation history |
| Isolated | Agent only sees the immediate input from the previous task |
| Custom Instructions | Add task-specific guidance |

Full Context is useful when later agents need to understand earlier decisions. Isolated is better when you want each agent to focus only on its specific task.
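
One way to picture the difference: with Full Context the agent's prompt includes everything produced so far, while Isolated passes only the immediately preceding output. A hedged sketch, assuming a simple list of prior outputs as the history format:

```python
def build_prompt(context_mode: str, history: list[str],
                 previous_output: str, custom_instructions: str = "") -> str:
    if context_mode == "full":
        # Full Context: the agent sees every earlier step's output.
        context = "\n\n".join(history)
    else:
        # Isolated: the agent only sees the previous task's output.
        context = previous_output
    return f"{custom_instructions}\n\n{context}".strip()
```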

Using Sequential Mode in the Playground

The Playground lets you test sequential agent interactions before building a formal workflow.

Testing Agent Chains

  1. Go to Playground in the sidebar
  2. Select your first agent
  3. Send a message and get a response
  4. Copy the response
  5. Switch to your second agent
  6. Paste the previous response as input
  7. Continue the chain manually

Quick Sequential Test

For rapid testing:

  1. Create a "test orchestrator" agent with instructions like:

    You coordinate a content pipeline:
    1. First, act as a researcher and gather information
    2. Then, act as a writer and create content
    3. Finally, act as an editor and refine
    
    Show your work at each stage.
    
  2. Test the full flow in one conversation

  3. Once satisfied, build the formal workflow

Building Sequential Workflows

Step 1: Create the Workflow

  1. Navigate to Workflows
  2. Click Create Workflow
  3. Select Sequential as the workflow type
  4. Name it descriptively (e.g., "Blog Post Pipeline")

Step 2: Add Your Trigger

Choose how the workflow starts:

  • Manual: Test via UI or API
  • Webhook: External system sends data
  • Schedule: Run on a cron schedule
  • Form: Collect structured input first
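
For the Webhook option, the external system typically POSTs a JSON payload to the workflow's trigger URL. A hedged example using Python's `requests`; the URL and payload fields below are placeholders, not a documented AffinityBots endpoint.

```python
import requests

# Hypothetical trigger URL copied from the workflow's Webhook settings.
WEBHOOK_URL = "https://example.com/workflows/blog-post-pipeline/trigger"

payload = {
    "topic": "Sequential workflows for AI agents",   # data the first task will receive
    "audience": "technical readers",
}

response = requests.post(WEBHOOK_URL, json=payload, timeout=30)
response.raise_for_status()
print(response.status_code)
```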

Step 3: Add Tasks in Order

Add tasks in the sequence you want them to execute:

[Trigger] → [Task 1: Research] → [Task 2: Write] → [Task 3: Edit] → [Output]

For each task, configure:

| Setting | Description |
| --- | --- |
| Agent | Which AI agent handles this task |
| Instructions | Task-specific guidance |
| Context | Full history or isolated |
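
Conceptually, the configured pipeline boils down to an ordered list of these settings. A sketch of how the "Blog Post Pipeline" above might be described in data (the field names are illustrative, not an export format):

```python
blog_post_pipeline = [
    {"agent": "Research Agent", "instructions": "Gather 5-7 key facts with sources.", "context": "isolated"},
    {"agent": "Writer Agent",   "instructions": "Write the article from the research.", "context": "full"},
    {"agent": "Editor Agent",   "instructions": "Improve clarity and fix errors.",      "context": "full"},
]
```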

Step 4: Connect the Nodes

In the visual builder:

  1. Drag from the right handle of one node
  2. Connect to the left handle of the next node
  3. Repeat for all tasks

The visual flow shows the execution order clearly.

Step 5: Test and Activate

  1. Click Test to run with sample input
  2. Review outputs at each stage
  3. Adjust agent instructions as needed
  4. Toggle Active when ready

Example: Content Review Pipeline

Here's a complete sequential workflow example:

The Agents

  1. Research Agent

    • Model: GPT-4
    • Purpose: Fact-check and gather sources
  2. Writer Agent

    • Model: Claude
    • Purpose: Create engaging content
  3. Editor Agent

    • Model: GPT-4
    • Purpose: Polish and improve
  4. SEO Agent

    • Model: GPT-4
    • Purpose: Optimize for search

The Workflow

[Form: Collect Topic]
        ↓
[Research Agent]
  Instructions: "Research the topic thoroughly.
  Provide 5-7 key facts with sources."
  Context: Isolated
        ↓
[Writer Agent]
  Instructions: "Write a 1000-word article
  using the research provided."
  Context: Full (sees research)
        ↓
[Editor Agent]
  Instructions: "Improve clarity, fix errors,
  enhance readability."
  Context: Full (sees research + draft)
        ↓
[SEO Agent]
  Instructions: "Add meta description,
  optimize headings, suggest keywords."
  Context: Isolated (only needs final content)
        ↓
[Output: Optimized Article]

Why This Works

  • Research is isolated—starts fresh with just the topic
  • Writer has full context—uses research to create content
  • Editor has full context—understands the research and draft
  • SEO is isolated—only needs the final polished content
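
A compact sketch of this exact mix of settings, reusing the simple `run_agent` placeholder idea from earlier: isolated tasks receive only the latest output, while full-context tasks receive the accumulated history.

```python
def run_agent(agent: str, prompt: str) -> str:
    # Placeholder for a real agent call.
    return f"[{agent} -> {prompt[:50]}]"

def content_review_pipeline(topic: str) -> str:
    history: list[str] = [f"Topic: {topic}"]

    stages = [
        ("Research Agent", "isolated"),   # starts fresh with just the topic
        ("Writer Agent",   "full"),       # uses the research
        ("Editor Agent",   "full"),       # sees research + draft
        ("SEO Agent",      "isolated"),   # only needs the final content
    ]

    for agent, context in stages:
        prompt = "\n\n".join(history) if context == "full" else history[-1]
        output = run_agent(agent, prompt)
        history.append(output)

    return history[-1]   # the optimized article
```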

Best Practices

1. Design Clear Handoffs

Each agent should produce output that the next agent can use effectively:

❌ Bad: "I found some information about the topic."
✅ Good: "## Research Findings\n\n1. Key fact...\n2. Key fact..."

Add instructions like: "Format your output as structured markdown that the next agent can easily parse."

2. Use Appropriate Context Settings

| Task Type | Recommended Context |
| --- | --- |
| First task | Isolated (fresh start) |
| Tasks needing history | Full Context |
| Independent processing | Isolated |
| Final formatting | Often Isolated |

3. Add Guardrails

Insert validation tasks between critical steps:

[Writer] → [Quality Check] → [Editor]

The quality check agent can catch issues before they propagate.
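
A guardrail can be as simple as a check on the previous task's output before the pipeline continues; a dedicated quality-check agent serves the same purpose with more nuance. A hedged sketch of a programmatic check (the thresholds are examples):

```python
def quality_check(draft: str, min_words: int = 800) -> str:
    """Raise if the draft fails basic checks, otherwise pass it through unchanged."""
    problems = []
    if len(draft.split()) < min_words:
        problems.append(f"draft is shorter than {min_words} words")
    if "TODO" in draft or "lorem ipsum" in draft.lower():
        problems.append("draft contains placeholder text")
    if problems:
        # Stop the pipeline here instead of letting issues propagate to the editor.
        raise ValueError("; ".join(problems))
    return draft
```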

4. Keep Pipelines Focused

  • 3-5 tasks is usually optimal
  • Each task should have a clear, single purpose
  • Too many tasks can slow execution and increase costs

5. Test Each Stage

When debugging:

  1. Test each agent individually first
  2. Then test pairs of agents
  3. Finally test the full pipeline

Troubleshooting

Output not reaching next task

  • Check that nodes are properly connected
  • Verify the previous task completed successfully
  • Review the task's output format

Context not being used

  • Confirm context is set to "Full Context"
  • Check if previous tasks produced meaningful output
  • Add explicit instructions to reference earlier context

Tasks running out of order

  • In sequential mode, order is determined by connections
  • Verify edges connect in the correct sequence
  • Use auto-layout to visualize the flow clearly

Slow execution

  • Each task runs serially (one at a time)
  • Consider combining simple tasks
  • Use faster models for less complex tasks
