Open source · MIT license · Public Beta

Orchestrate AI + human
workflows via API

Real DAG execution, human-in-loop approval gates, multi-agent routing. Start with 3 managed trial runs, then bring your own provider or self-host the same engine.

Real DAG execution · Human checkpoints · Multi-agent routing · Deploy in 15 minutes
Get your API key · View API docs →
POST /api/v1/runs
{
  "task": "Plan and launch a faceless YouTube channel about AI tools for freelancers",
  "agents": [
    { "name": "research" },
    { "name": "analyst" },
    { "name": "writer" }
  ],
  "ask_me_about": ["brand voice", "target audience"],
  "require_approval": false
}

← 202 Accepted in milliseconds
← Orchflow pauses only when human input is needed
← resumes automatically after your response

// how it works

From goal to result, automatically

Orchflow handles the orchestration layer so your product can focus on decisions, not glue code.

Task split
LLM splits your goal into 3-6 actionable subtasks based on the context of the run.
DAG built
LLM determines real dependencies between tasks. Parallel tasks can run at the same time.
Agents routed
Tasks are matched to your named agents by role and executed with shared run context.
Human gates
Run pauses when human input is needed, then resumes through a simple API call or webhook-driven flow.
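
To make that lifecycle concrete, here is a minimal client-side polling sketch. Only POST /api/v1/runs and the run_id field appear elsewhere on this page; the status endpoint, the /respond endpoint, and the status and answer field names below are illustrative assumptions, not documented API.

# Polling sketch. The GET status endpoint, the /respond endpoint, and the
# "status" / "question" / "answer" field names are assumptions for illustration.
import time
import requests

BASE = "https://api.orchflow.cloud/api/v1"
HEADERS = {"X-API-Key": "orch_your_key"}

run = requests.post(f"{BASE}/runs", headers=HEADERS, json={
    "task": "Plan and launch a faceless YouTube channel about AI tools for freelancers",
    "agents": [{"name": "research"}, {"name": "analyst"}, {"name": "writer"}],
    "ask_me_about": ["brand voice", "target audience"],
}).json()

while True:
    state = requests.get(f"{BASE}/runs/{run['run_id']}", headers=HEADERS).json()
    if state["status"] == "waiting_for_human":          # assumed status value
        # Answer the checkpoint question; the run resumes automatically.
        requests.post(
            f"{BASE}/runs/{run['run_id']}/respond",      # assumed endpoint
            headers=HEADERS,
            json={"answer": "Friendly but expert tone, aimed at solo freelancers."},
        )
    elif state["status"] in ("completed", "failed"):
        break
    time.sleep(5)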

// real example

What changes when you use Orchflow

Take a real workflow like competitor research, pricing analysis, and launch planning. The difference is not just output quality but how the work is executed and controlled.

Without Orchflow
# one-shot prompting + app-side glue code
prompt = "Research competitors, compare pricing, define positioning, ask for missing context, then create a launch strategy."

result = llm.generate(prompt)

# app now has to do the rest itself
# - no visible task graph
# - no built-in human pause
# - no task-level progress state
# - no resumable workflow after approval
# - retries, persistence, and routing are custom code
You get one large response back, and your app has to handle the workflow logic itself.
With Orchflow
run = orchflow.start({
  "task": "Research competitors, compare pricing, then draft our launch strategy.",
  "agents": [{"name": "research"}, {"name": "analyst"}],
  "ask_me_about": ["company details", "brand voice"]
})

# Orchflow handles the execution layer
# - task split + dependency graph
# - human checkpoint only where needed
# - visible run + task state mid-flight
# - automatic resume after human input
# - one final synthesized output
One API call starts a workflow your app can track, pause, and resume cleanly.

// deployment modes

Hosted first. Self-host when you need it.

Same engine, two ways to use Orchflow depending on how much control you want.

Hosted Orchflow

Best for getting from zero to a working run quickly.

  • Register and start immediately
  • 3 managed trial runs included
  • Add your own provider later

Self-hosted Orchflow

Best for teams that want BYOK, full deployment control, and ownership of their own infrastructure.

  • BYOK-first setup
  • Docker + Redis + Postgres
  • Same engine and API shape
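
Because the engine and API shape stay the same, switching between hosted and self-hosted can be as small as changing the base URL your backend points at. A minimal sketch, where the environment variable names and the local port are placeholders rather than documented defaults:

# Point the same client code at hosted Orchflow or your own deployment.
# ORCHFLOW_BASE_URL / ORCHFLOW_API_KEY and the port below are placeholder assumptions.
import os
import requests

BASE_URL = os.getenv("ORCHFLOW_BASE_URL", "https://api.orchflow.cloud")
API_KEY = os.getenv("ORCHFLOW_API_KEY", "orch_your_key")

def start_run(payload: dict) -> str:
    resp = requests.post(
        f"{BASE_URL}/api/v1/runs",
        headers={"X-API-Key": API_KEY},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]

# Self-hosted: set ORCHFLOW_BASE_URL=http://localhost:8080 and make the same call.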

// features

Everything you'd otherwise build yourself

Stop rebuilding orchestration infrastructure around raw model calls for every product.

DAG()
Real dependency graphs
LLM determines which tasks genuinely depend on each other. Parallel execution where possible. Not a fixed linear chain.
human.wait()
Human-in-loop
Suspend runs at checkpoints. State persists. Resume with one API call. Your response flows into the next task as context.
agent.route()
Named agent routing
Use specialist agents with custom prompts and model overrides. Tasks route by name and share run context.
retry()
Self-evaluation + retry
Agents evaluate their own output, retry with failure context, and escalate to human when needed.
persist()
Durable state
Redis for hot execution state. Postgres for durable run and task history. Runs survive restarts and tasks are visible mid-run.
byok()
Multi-LLM support
Gemini, OpenAI, Anthropic, and Ollama support. Register your provider once and use different models per task or agent.
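
As one concrete example of named agent routing with per-agent model overrides, a run payload might look like the sketch below. Only the agents array with a name field is shown elsewhere on this page; the system_prompt and model fields are assumed names used for illustration, not confirmed request fields.

# Hypothetical request body: "system_prompt" and "model" are assumed field
# names illustrating custom prompts and per-agent model overrides.
payload = {
    "task": "Research competitors, compare pricing, then draft our launch strategy.",
    "agents": [
        {
            "name": "research",
            "system_prompt": "You are a market researcher. Cite sources.",  # assumed field
            "model": "gemini-1.5-pro",                                      # assumed field
        },
        {
            "name": "analyst",
            "system_prompt": "You turn research into pricing recommendations.",
            "model": "gpt-4o",
        },
    ],
    "ask_me_about": ["brand voice"],
}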

// implementation examples

How Orchflow fits into a real product

Orchflow works best as infrastructure inside your own app. Here are two lightweight ways to use it.

Content workflow app

Your product sends a brief, Orchflow runs research and writing, then pauses for editor approval before final delivery.

  • Start run from your backend
  • Show task status in your UI
  • Resume after editor approval

Internal ops assistant

Your team starts a workflow for research, planning, or documentation, then Orchflow requests human context only when it is actually needed.

  • Use named specialist agents
  • Route brand or approval steps to humans
  • Store final output in your app
Python backend example
# Start a workflow from your own backend
import requests

response = requests.post(
  "https://api.orchflow.cloud/api/v1/runs",
  headers={"X-API-Key": "orch_your_key"},
  json={
    "task": "Research competitors, compare pricing, then draft our launch strategy.",
    "agents": [{"name": "research"}, {"name": "analyst"}],
    "ask_me_about": ["company details"]
  }
)

response.raise_for_status()
run_id = response.json()["run_id"]
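
For the editor-approval flow described above, a webhook-driven variant could look like the sketch below. The webhook payload shape, the event type, and the /respond endpoint are not documented on this page, so those names are assumptions for illustration.

# Hypothetical webhook receiver: the event fields and the /respond endpoint
# are assumptions; only POST /api/v1/runs and X-API-Key appear above.
from flask import Flask, request
import requests

app = Flask(__name__)
HEADERS = {"X-API-Key": "orch_your_key"}

@app.post("/orchflow/webhook")
def orchflow_webhook():
    event = request.get_json()
    if event.get("type") == "human_input_required":          # assumed event type
        run_id = event["run_id"]
        answer = get_editor_response(event.get("question"))  # your app's approval UI
        requests.post(
            f"https://api.orchflow.cloud/api/v1/runs/{run_id}/respond",  # assumed endpoint
            headers=HEADERS,
            json={"answer": answer},
        )
    return "", 204

def get_editor_response(question: str) -> str:
    # Placeholder: route the question to your editor and return their reply.
    return "Approved. Keep the casual tone."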

// comparison

vs. direct LLM calls

Feature                  | Direct LLM call | Orchflow
Multi-step execution     | manual          | automatic DAG
State persistence        | none            | Redis + Postgres
Human approval gates     | not built in    | suspend + resume
Survives server restart  | no              | yes
Multi-day workflows      | no              | yes
Specialized agents       | one prompt      | named agents + routing
Self-evaluation + retry  | no              | automatic
Webhook notifications    | no              | yes

Start in minutes

Register, try your first managed workflow, then bring your own provider or self-host when you're ready.

Get your API key · View on GitHub →