Where The Authority Gate Sits
n8n, Zapier, and Make are pitched at the same buyer but solve three different problems. A reference comparison of what each platform is actually for, with AI agents in the workflow.

The bet each vendor is making
Three vendors are pitched at the same buyer, and they are running three different theories of what 2026 workflow automation should be. The category felt mostly settled until AI agents arrived. In the first half of 2026, n8n shipped per-tool human review for AI agents, Zapier added Human in the Loop and MCP for agent steps, and Make pushed AI Agents (New) into open beta. The category is splitting fast enough that a buyer working from last year's research is buying last year's product.
n8n treats automation as internal infrastructure. Its job is to give engineering and platform teams a workflow runtime they can self-host, version with Git, and gate at the tool level when an AI agent is involved. The bet is that serious AI automation moves closer to engineering, security, and platform teams, and that the right unit of authority is the individual tool call. Pricing is per completed execution, not per step. Unlimited users come on every plan. Self-hosting is the strategic feature, not the cheap one.
Zapier treats automation as a SaaS connector layer. Its job is to let an operator wire up many cloud apps without running infrastructure or writing code. The bet is that breadth, time-to-value, and non-engineering ownership beat depth and control for most teams. The catalog is the largest by a wide margin, the AI layer spans Zaps, Agents, MCP, and AI by Zapier, and Human in the Loop is a built-in approval action. Pricing is per task; the risk is task and activity inflation.
Make treats automation as a visual modeling problem. Its job is to let an analyst draw a scenario on a canvas, route it through filters, and inspect the bundles flowing between modules. The bet is that for visual, branched, data-shape-heavy work, the canvas is faster to reason about than code or a Zap list. Pricing is per credit; AI Agents (New) are in open beta with explicit safety warnings about sensitive use cases.
These are not three flavors of the same product. They are three different theories of what workflow automation should be in the agent era. The risk in the buying process is choosing the wrong category, not the wrong vendor.
Pick n8n if you need a per-tool authority gate
The right buyer is an engineering-leaning ops team, a platform team, or a growth-engineering function with a technical owner who can self-host or operate n8n Cloud. The kind of team where AI agents will eventually touch internal databases, private APIs, customer data, refunds, account suspensions, contract edits, or any system where one wrong action causes production loss.
The case for n8n is control. Self-hosting decides where execution data, credentials, logs, and secrets live. Version 2.6.0, released January 26, 2026, added human-in-the-loop review at the tool-call level. A gated tool cannot execute until a person approves the specific call and its parameters. That is a different authority boundary from approving a final output after the agent has already reasoned through the task. The corporate context is healthy: $180M Series C in October 2025 at a $2.5B valuation, around 858 employees, Berlin-based parent, source-available license that survives the worst-case vendor outcome.
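The difference between a per-tool gate and an end-of-run approval is easiest to see in code. This is a minimal sketch of the pattern, not n8n's actual API; the tool names, the `DENY` set, and the `gate` function are hypothetical. A deny rule is deterministic and fires before any reviewer sees the call; everything else pauses until a person approves the specific tool and its parameters.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolCall:
    tool: str                       # e.g. "crm.update" (hypothetical name)
    params: dict = field(default_factory=dict)

# Deterministic deny list: these calls never reach the approval queue.
DENY = {"db.delete", "account.suspend"}

def gate(call: ToolCall, approve: Callable[[ToolCall], bool]) -> str:
    """Decide a tool call's fate before execution, not after."""
    if call.tool in DENY:
        return "denied"             # blocked without human involvement
    if not approve(call):           # human reviews this call and its params
        return "rejected"
    return "executed"               # only now does the tool actually run
```

The point of the sketch is the ordering: the authority decision happens before the side effect, per call, rather than as a sign-off on the agent's finished output.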
The pricing math favors long workflows. Execution-based pricing makes a 12-step AI agent the same price as a 3-step webhook. Starter is €20 per month, Pro €50 per month, Business €667 per month for 40,000 self-hosted executions, and unlimited users on every plan. Self-hosting is not free in operations: plan on $300 to $500 per month of basic infrastructure plus 0.05 to 0.15 FTE of senior DevOps for routine maintenance, more in the first six months while the runbook is being built. n8n Cloud avoids most of the ops cost in exchange for the license bump.
Pick n8n when the gate matters. Skip it for teams without a technical owner or appetite for debugging infrastructure. Zapier ships faster for non-engineering ops. Make models visual transformations better for analysts.
Pick Zapier if you need to ship in days, not months
The right buyer is an operations, RevOps, marketing, customer success, sales, recruiting, or finance ops team that wants to connect many SaaS tools quickly without running infrastructure. Founder-led teams in the first twelve months of operation also fit.
The case for Zapier is reach and maturity. The catalog is 9,000+ apps, and the MCP page claims 30,000+ actions across them. AI by Zapier covers OpenAI, Anthropic, Gemini, and Azure OpenAI. Zapier Agents can use app actions and knowledge sources; Zapier MCP exposes app actions to external AI clients. Human in the Loop is a built-in tool: Collect Data pauses a Zap to gather reviewer input, Request Approval routes to one or more reviewers with email or Slack notifications, and Add Approval Steps wraps an agent's instructions. Zapier was founded in 2011 and has been profitable since 2014, with $310M revenue in 2024 and $400M projected for 2025 on $1.4M of total venture funding. The vendor risk is among the lowest in the category.
The weak point is authority and cost. Zapier's HITL is a workflow checkpoint, not a deterministic gate on every dangerous tool edge. Successful Zap actions count as tasks, MCP calls use two tasks each, Agents have a separate activity meter, and overages at 1.25x the base rate accumulate quickly when one workflow contains many successful actions per run. Zapier is also explicit that it is not HIPAA compliant and does not sign Business Associate Agreements. PHI workflows belong elsewhere.
Pick Zapier when speed and catalog breadth matter more than deployment control. Skip it for privileged internal systems, regulated workflows, or agentic automations where pre-execution containment is the central requirement.
Pick Make if scenarios are easier to read than code
The right buyer is an operations, marketing ops, RevOps, marketplace ops, finance ops, or automation team that thinks in scenarios, branches, filters, bundles, and field transformations. Make is the middle path: more visual modeling power than Zapier, less infrastructure control than n8n.
The case for Make is the canvas. Scenario history, module input/output bundles, routers, filters, and visual debugging make data-shape-heavy workflows easier to inspect than a long linear automation. The catalog is 3,000+ apps, smaller than Zapier's but large enough for most mainstream RevOps and ops use cases. Pricing is competitive at moderate volume: $9 per month Core, $16 per month Pro, $29 per month Teams at 10,000 credits per month. Make has been part of Celonis since the October 2020 acquisition of Integromat and the February 2022 rebrand. It operates as a business unit inside a German process-mining parent. The corporate stability is real; the strategic priority of the iPaaS unit inside a process-mining company is a question worth asking on a vendor call.
The caveat is AI maturity and beta status. Make AI Agents (New) is in open beta. The product supports modules, scenarios, or MCP server tools as agent tools, plus knowledge files and providers across OpenAI, Anthropic Claude, and Gemini. Make's own docs warn against using agents for sensitive data, high-stakes financial or strategic decisions, or strict legal requirements. Human in the Loop Enterprise is an Enterprise-only review-request app, not a per-tool deterministic gate. The credit meter is also easy to underestimate when a scenario fans out into many module executions.
Pick Make when seeing the scenario is the work. Skip it when full self-hosting, the broadest connector catalog, or per-tool AI authority gates are central to the buy.
When none of the three fits
If the workflow is no longer a business automation, leave the category. Durable execution platforms like Temporal and Inngest fit when the requirement is long-running state, retries, idempotency, recovery, and correctness under failure: product onboarding flows, billing operations, multi-day AI agent backends, and systems where every state transition has to be explicit.
For data pipelines, use data orchestration. Airflow and Prefect fit scheduled ETL, ML pipelines, data warehouse loads, and engineering workflows where Python, SQL, task dependencies, and operational metadata matter more than no-code app connectors.
For large enterprises with central IT, formal procurement, identity controls, and integration center ownership, Workato or Tray may fit better. They are heavier products, less transparent on pricing, but the right answer when the platform is an enterprise iPaaS rather than a team automation tool.
Use custom code on a job runner when the workflow is simple, high-volume, and developer-owned. The threshold is simple: if the visual canvas becomes a workaround for code you already know how to write and maintain, leave the canvas.
How the three compare
The rows map real choices: deployment model, pricing meter, AI agent control, human-in-the-loop discipline, and observability. Each one is a question a buyer is already asking.
| Criterion | n8n | Zapier | Make |
|---|---|---|---|
| Deployment | Self-hosted or n8n Cloud on Azure (EU); SOC 2 Enterprise, SOC 3 public | Cloud-only; Enterprise adds governance, audit, retention | Cloud platform; Enterprise on-prem agents for private network access |
| Data residency | EU Cloud on Azure; self-host anywhere | US (single region) | US or EU at organization creation, locked thereafter |
| Pricing meter | Per completed execution; €20 Starter (2.5K), €50 Pro, €667 Business (40K self-hosted); unlimited users every plan | Per task; $19.99 Pro, $69 Team; MCP call = 2 tasks; Agents on a separate activity meter | Per module action (credit); $9 Core, $16 Pro, $29 Teams at 10K credits/month |
| AI agent primitives | AI Agent node with tools, memory, vector stores, subagents; OpenAI, Anthropic, Bedrock, Vertex, Gemini, Groq, DeepSeek, Cohere, Ollama, Mistral | Zapier Agents use Zapier app actions and knowledge sources; AI by Zapier across OpenAI, Anthropic, Gemini, Azure | AI Agents (New) in open beta; tools as modules, scenarios, or MCP; OpenAI, Anthropic, Gemini |
| Human-in-the-loop | Per-tool-call review (v2.6.0, January 26 2026); approve or deny each tool call before execution; Slack, Teams, Discord, Telegram, Gmail, WhatsApp, n8n Chat | Built-in HITL on paid plans; Collect Data, Request Approval, post-Agents review; email/Slack/triggered Zap | HITL Enterprise app for review queues; Enterprise tier only; review-request rather than per-tool gate |
| Observability | Error workflows, execution history, retry, debug-from-prior-run, log streaming on Enterprise, source-controlled environments | Zap History, autoreplay/replay on paid tiers, audit log on eligible plans | Scenario history with input/output bundles, full-text execution log search on Pro |
| HIPAA / BAA | Self-hosted only with own controls | Not supported, no BAA | Not supported by default |
| Edge | Per-tool authority gate; engineering-grade debugging; user economics | Catalog reach; mature ops; bootstrapped vendor stability | Visual modeling depth; canvas-first debugging |
What to test during a demo
These are the fifteen scenarios a buyer should run against each platform in a demo. They cover the failure modes a sales pitch is likeliest to skip: per-tool authority, credential scoping, recovery, accounting, and AI-agent edge cases.
- AI agent calling a CRM update tool. Verify HITL approval interrupts the call before execution.
- AI agent attempting a destructive operation (database delete, account suspension). Verify a per-tool deny rule stops the call without entering the approval queue.
- Workflow editor without explicit credential access. Verify they cannot use credentials already used in a shared workflow.
- Self-hosted node failure (n8n only). Verify queue-mode recovery and replay of in-flight executions.
- Bad model output in a structured-output step. Verify graceful retry and routing to an error workflow.
- Rate-limited downstream API. Verify backoff, retry, and reviewer notification on persistent failure.
- Revoked or rotated credential. Verify the workflow fails closed rather than retrying with the stale credential.
- Loop-heavy or fan-out scenario. Verify task, activity, or credit accounting at projected monthly volume.
- Multi-step approval path: agent proposes, human approves, CRM updates, notification sends. Verify the audit trail captures every step.
- Long-running execution above the platform's default timeout. Verify timeout handling and partial-state recovery.
- Workflow rollback. Roll back a published workflow change and verify exactly what reverts and what does not.
- Audit trace for a single execution end-to-end. Verify what is captured: prompt, tool, parameters, reviewer, decision, retries, final action.
- Data-shape transformation across three or more modules. Inspect intermediate bundles for type drift.
- Cost projection at 5,000 runs per month with 12 successful actions per run. Compare projected bill across platforms.
- HIPAA-shaped or PII-heavy data. Verify what flows through the platform's logs and what does not, and confirm BAA availability.
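The cost-projection scenario in the checklist (5,000 runs per month, 12 successful actions per run) reduces to simple arithmetic once each vendor's meter is named. A sketch of the units consumed, not the billed dollars, since allowances and overage rates vary by plan; it assumes no MCP calls (which Zapier counts as two tasks each) and one credit per Make module action.

```python
# Units consumed per month under each vendor's meter, for the
# checklist scenario: 5,000 runs/month, 12 successful actions per run.
RUNS = 5_000
ACTIONS_PER_RUN = 12

# n8n meters completed executions: one run is one execution,
# regardless of how many steps it contains.
n8n_executions = RUNS

# Zapier meters tasks: every successful action counts as one task
# (MCP calls would count as two each; none assumed here).
zapier_tasks = RUNS * ACTIONS_PER_RUN

# Make meters credits: one credit per module action, so fan-out
# multiplies the bill the same way extra actions do on Zapier.
make_credits = RUNS * ACTIONS_PER_RUN

print(n8n_executions, zapier_tasks, make_credits)
```

The same workflow consumes 5,000 units on an execution meter and 60,000 on a task or credit meter; whether that favors n8n depends entirely on the per-unit prices of the plans being compared, so run the numbers at projected volume rather than at list price.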
What to validate before signing
For self-hosted n8n, expect 0.05 to 0.15 FTE of senior DevOps for routine maintenance plus $300 to $500 per month of basic infrastructure. n8n Cloud avoids the ops burden in exchange for the license bump. Compare both options against Zapier and Make at projected volume before declaring self-hosting cheaper.
None of the three exports portable agent definitions. Switching costs are non-trivial: workflow logic, knowledge sources, prompts, tool schemas, OAuth tokens, and audit trails do not migrate. Plan on a three-to-six-week rebuild per non-trivial workflow when migrating between platforms at scale, and pick the platform expecting the choice to be sticky for at least eighteen to twenty-four months.
For regulated workflows (HIPAA, GLBA, regulated finance, clinical decisions), confirm each vendor's compliance posture separately during procurement. Zapier is explicit that it is not HIPAA compliant. Make AI Agents docs warn against use for strict legal requirements. Self-hosted n8n inherits whatever controls the operator implements.
Vendors release frequently. Re-check the relevant changelogs in the week of the buying decision.
Pick by where the gate sits, not by who shipped it
The vendor decision is recoverable. The category decision is not. An AI agent attached to the wrong runtime is a story about an authority boundary that should have been there and was not.
Methodology
For each vendor we read the pricing page, the AI-agent docs, the human-in-the-loop article, the release notes, and the security and compliance statements. We checked corporate context (funding, revenue, ownership) against primary announcements rather than analyst summaries. Pricing, feature dates, and knowledge limits are current as of 2026-04-30. Re-check before signing. Where vendor materials are thin or recent, we say so. Make AI Agents (New) is open beta. Zapier cites different app counts on different pages, and we name the range rather than picking one. n8n's per-tool HITL feature is two months old.
Sources
- n8n pricing page
- n8n security and EU Cloud hosting
- n8n source repository and license
- n8n, Human-in-the-loop for AI tool calls
- n8n release notes including v2.5 and v2.6.0
- n8n source control and environments
- n8n RBAC and credential sharing
- n8n, Series C announcement
- TechFundingNews, n8n raises $180M at $2.5B valuation
- Zapier pricing page
- Zapier MCP product page
- Zapier, How is Zapier Agents usage measured?
- Zapier, Request approval to keep your workflow running
- Zapier, Add approval steps to your agent's instructions
- Zapier, Is Zapier HIPAA compliant?
- Sacra, Zapier revenue and funding profile
- GetLatka, Zapier $310M revenue
- Make pricing page
- Make security and compliance
- Make, Organizations and region selection
- Make, Introduction to Make AI Agents (New)
- Make Human in the Loop Enterprise integration
- Make scenario history and recovery
- TechCrunch, Celonis acquires Czech startup Integromat
- ExpressTech, the real cost of self-hosting n8n in 2026
- Latenode, n8n self-hosted pricing reality
- The Register, Cursor-Opus agent snuffs out PocketOS
- Fortune, AI-powered coding tool wiped out a software company's database
- Yuan, Su, Zhao, AEGIS: No Tool Call Left Unchecked
Tools mentioned
- n8n — Self-hostable workflow automation with AI agent and per-tool human-in-the-loop primitives
- Zapier — Cloud workflow automation with broad app coverage, MCP, Agents, and Human in the Loop approvals
- Make — Visual cloud workflow automation with scenarios, routers, filters, and AI Agents (New) in open beta
- Temporal — Durable execution platform for long-running stateful workflows
- Inngest — Durable workflow runtime for product and agent backends
- Apache Airflow — Open-source data orchestration for scheduled pipelines
- Prefect — Workflow orchestration for data engineering teams
- Workato — Enterprise iPaaS for centralized integration platforms
- Tray — Enterprise integration platform for cross-business-unit IT