Sequential Mode
Learn how sequential workflows chain AI agents together in a linear pipeline for predictable, step-by-step automation.
Sequential mode chains multiple AI agents together in a linear pipeline, where each agent's output flows directly to the next agent in sequence.
What is Sequential Mode?
In sequential mode, tasks execute one after another in a predetermined order—like an assembly line. Each agent completes its work before passing the result to the next agent.
Input → Agent A → Agent B → Agent C → Output
This is the most straightforward way to coordinate multiple AI agents, perfect for processes with clear, predictable steps.
When to Use Sequential Mode
Sequential mode works best when:
| Scenario | Why Sequential Works |
|---|---|
| Predictable processes | Steps are known in advance |
| Content pipelines | Each stage builds on the previous |
| Data transformation | Input flows through processing stages |
| Quality assurance | Review stages happen in order |
| Compliance workflows | Audit trail is important |
Good Use Cases
- Content creation: Research → Write → Edit → Format
- Lead processing: Capture → Enrich → Score → Route
- Document review: Extract → Analyze → Summarize → Report
- Customer onboarding: Verify → Setup → Welcome → Train
When NOT to Use Sequential
- Tasks where the best approach isn't known upfront
- Complex problems requiring adaptive decision-making
- Situations where agents need to collaborate dynamically
For these cases, consider Orchestrator Mode.
How Sequential Mode Works in AffinityBots
The Flow
- Trigger fires (manual, webhook, schedule, or form)
- First task receives the trigger input
- Agent executes and produces output
- Output passes to the next task as input
- Process repeats until all tasks complete
- Final output is returned
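Conceptually, this flow is a simple loop: start from the trigger payload, run each task's agent in order, and carry the output forward. The TypeScript sketch below only illustrates that loop; it is not AffinityBots' actual engine, and the `Agent` type and function names are assumptions.

```typescript
// Conceptual sketch of a sequential run. This is not the AffinityBots engine;
// an "agent" here is just an async function from input text to output text.
type Agent = (input: string) => Promise<string>;

async function runSequentialWorkflow(tasks: Agent[], triggerInput: string): Promise<string> {
  let current = triggerInput;        // the first task receives the trigger input
  for (const task of tasks) {
    current = await task(current);   // each agent runs, and its output feeds the next task
  }
  return current;                    // the final output is returned
}
```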
Context Control
You control how much information flows between agents:
| Setting | Behavior |
|---|---|
| Full Context | Agent sees all previous conversation history |
| Isolated | Agent only sees the immediate input from previous task |
| Custom Instructions | Add task-specific guidance |
Full Context is useful when later agents need to understand earlier decisions. Isolated is better when you want each agent to focus only on its specific task.
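One way to picture the difference: with Full Context the agent receives the accumulated history, while with Isolated it receives only the previous task's output. The sketch below models that distinction; the types and field names are illustrative assumptions, not the product's data model.

```typescript
// Illustrative sketch of context control. Field names are assumptions.
type ContextMode = "full" | "isolated";

interface SequentialTask {
  agent: (prompt: string) => Promise<string>;
  instructions: string;   // task-specific guidance
  context: ContextMode;   // how much history this task sees
}

async function runWithContext(tasks: SequentialTask[], triggerInput: string): Promise<string> {
  const history: string[] = [triggerInput];
  for (const task of tasks) {
    // Full Context: the agent sees all previous outputs.
    // Isolated: the agent sees only the immediately preceding output.
    const visible =
      task.context === "full" ? history.join("\n\n") : history[history.length - 1];
    history.push(await task.agent(`${task.instructions}\n\n${visible}`));
  }
  return history[history.length - 1];
}
```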
Using Sequential Mode in the Playground
The Playground lets you test sequential agent interactions before building a formal workflow.
Testing Agent Chains
- Go to Playground in the sidebar
- Select your first agent
- Send a message and get a response
- Copy the response
- Switch to your second agent
- Paste the previous response as input
- Continue the chain manually
Quick Sequential Test
For rapid testing:
- Create a "test orchestrator" agent with instructions like:
  You coordinate a content pipeline:
  1. First, act as a researcher and gather information
  2. Then, act as a writer and create content
  3. Finally, act as an editor and refine
  Show your work at each stage.
- Test the full flow in one conversation
- Once satisfied, build the formal workflow
Building Sequential Workflows
Step 1: Create the Workflow
- Navigate to Workflows
- Click Create Workflow
- Select Sequential as the workflow type
- Name it descriptively (e.g., "Blog Post Pipeline")
Step 2: Add Your Trigger
Choose how the workflow starts:
- Manual: Test via UI or API
- Webhook: External system sends data
- Schedule: Run on a cron schedule
- Form: Collect structured input first
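As an example of the webhook option, the external system simply POSTs JSON to the workflow's endpoint. The snippet below is hypothetical: the URL, auth header, and payload fields are assumptions, not AffinityBots' documented API, so use the endpoint shown in your workflow's trigger settings.

```typescript
// Hypothetical webhook call. The URL, auth header, and payload shape are
// assumptions for illustration only; use the endpoint and credentials shown
// in your workflow's trigger settings.
await fetch("https://example.com/webhooks/blog-post-pipeline", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer YOUR_WEBHOOK_TOKEN",
  },
  body: JSON.stringify({
    topic: "How heat pumps work",
    audience: "homeowners",
  }),
});
```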
Step 3: Add Tasks in Order
Add tasks in the sequence you want them to execute:
[Trigger] → [Task 1: Research] → [Task 2: Write] → [Task 3: Edit] → [Output]
For each task, configure:
| Setting | Description |
|---|---|
| Agent | Which AI agent handles this task |
| Instructions | Task-specific guidance |
| Context | Full history or isolated |
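Thinking of each task as a small configuration object can help when planning the pipeline. Here is a hypothetical sketch of the Research → Write → Edit sequence above; the field names are illustrative, not an exported AffinityBots schema.

```typescript
// Hypothetical task configuration for the Research → Write → Edit pipeline.
// Field names are illustrative only, not an AffinityBots schema.
const tasks = [
  {
    agent: "Research Agent",
    instructions: "Research the topic thoroughly. Provide 5-7 key facts with sources.",
    context: "isolated",   // fresh start: only the trigger input
  },
  {
    agent: "Writer Agent",
    instructions: "Write a 1000-word article using the research provided.",
    context: "full",       // sees the research
  },
  {
    agent: "Editor Agent",
    instructions: "Improve clarity, fix errors, enhance readability.",
    context: "full",       // sees the research and the draft
  },
];
```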
Step 4: Connect the Nodes
In the visual builder:
- Drag from the right handle of one node
- Connect to the left handle of the next node
- Repeat for all tasks
The visual flow shows the execution order clearly.
Step 5: Test and Activate
- Click Test to run with sample input
- Review outputs at each stage
- Adjust agent instructions as needed
- Toggle Active when ready
Example: Content Review Pipeline
Here's a complete sequential workflow example:
The Agents
- Research Agent
  - Model: GPT-4
  - Purpose: Fact-check and gather sources
- Writer Agent
  - Model: Claude
  - Purpose: Create engaging content
- Editor Agent
  - Model: GPT-4
  - Purpose: Polish and improve
- SEO Agent
  - Model: GPT-4
  - Purpose: Optimize for search
The Workflow
[Form: Collect Topic]
↓
[Research Agent]
Instructions: "Research the topic thoroughly.
Provide 5-7 key facts with sources."
Context: Isolated
↓
[Writer Agent]
Instructions: "Write a 1000-word article
using the research provided."
Context: Full (sees research)
↓
[Editor Agent]
Instructions: "Improve clarity, fix errors,
enhance readability."
Context: Full (sees research + draft)
↓
[SEO Agent]
Instructions: "Add meta description,
optimize headings, suggest keywords."
Context: Isolated (only needs final content)
↓
[Output: Optimized Article]
Why This Works
- Research is isolated—starts fresh with just the topic
- Writer has full context—uses research to create content
- Editor has full context—understands the research and draft
- SEO is isolated—only needs the final polished content
Best Practices
1. Design Clear Handoffs
Each agent should produce output that the next agent can use effectively:
❌ Bad: "I found some information about the topic."
✅ Good: "## Research Findings\n\n1. Key fact...\n2. Key fact..."
Add instructions like: "Format your output as structured markdown that the next agent can easily parse."
2. Use Appropriate Context Settings
| Task Type | Recommended Context |
|---|---|
| First task | Isolated (fresh start) |
| Tasks needing history | Full Context |
| Independent processing | Isolated |
| Final formatting | Often Isolated |
3. Add Guardrails
Insert validation tasks between critical steps:
[Writer] → [Quality Check] → [Editor]
The quality check agent can catch issues before they propagate.
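Conceptually, the quality check is a gate that inspects the draft before handing it on. The sketch below shows one way such a check could look if written as plain code; the specific criteria are assumptions, so adapt them to your own standards, or express them as instructions for a checking agent.

```typescript
// Illustrative quality gate between Writer and Editor. The criteria below are
// assumptions; replace them with your own acceptance rules.
function qualityCheck(draft: string): string {
  const issues: string[] = [];
  if (draft.split(/\s+/).length < 800) issues.push("Draft is shorter than expected.");
  if (!draft.includes("#")) issues.push("Draft has no headings.");

  if (issues.length === 0) return draft;

  // Annotate the draft so the Editor agent knows what to fix,
  // or throw here if you prefer to stop the run instead.
  return `${draft}\n\n## Quality check notes\n- ${issues.join("\n- ")}`;
}
```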
4. Keep Pipelines Focused
- 3-5 tasks is usually optimal
- Each task should have a clear, single purpose
- Too many tasks can slow execution and increase costs
5. Test Each Stage
When debugging:
- Test each agent individually first
- Then test pairs of agents
- Finally test the full pipeline
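For instance, a hypothetical debugging pass could call the agents one stage at a time; the agent parameters here are stand-ins for however you actually invoke your agents (for example, in the Playground or through your own client code).

```typescript
// Hypothetical incremental test. The agent parameters are stand-ins for
// however you actually call your agents.
type Agent = (input: string) => Promise<string>;

async function debugPipeline(researcher: Agent, writer: Agent, editor: Agent) {
  const research = await researcher("Topic: heat pumps"); // 1. researcher alone
  console.log(research);

  const draft = await writer(research);                   // 2. researcher → writer pair
  console.log(draft);

  const edited = await editor(draft);                     // 3. full pipeline
  console.log(edited);
}
```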
Troubleshooting
Output not reaching next task
- Check that nodes are properly connected
- Verify the previous task completed successfully
- Review the task's output format
Context not being used
- Confirm context is set to "Full Context"
- Check if previous tasks produced meaningful output
- Add explicit instructions to reference earlier context
Tasks running out of order
- In sequential mode, order is determined by connections
- Verify edges connect in the correct sequence
- Use auto-layout to visualize the flow clearly
Slow execution
- Each task runs serially (one at a time)
- Consider combining simple tasks
- Use faster models for less complex tasks
Related Guides
- Orchestrator Mode - Dynamic agent coordination
- Creating a Workflow - Step-by-step workflow guide
- Adding Guardrails - Validate agent outputs
- Workflow Triggers - Configure how workflows start