AI agent orchestration on WebAssembly.
Agents with tool access need API keys. A prompt injection or misbehaving agent can exfiltrate secrets and make unauthorized calls.
Multi-turn agentic loops are long-running and fragile. A crash mid-loop loses all context, wastes tokens, and leaves inconsistent state.
Agents sleeping between steps — waiting for approvals or scheduling future actions — hold processes and resources open for the entire duration.
Running agents for different customers on shared runtimes creates cross-tenant data-leak risks that most workflow engines don't address.
Workflows compile to WebAssembly and run in wasmtime. No filesystem, no raw network, no credential access. The sandbox enforces what agents can and cannot do.
API keys live in the connection service. A host proxy injects them at request time. Even a compromised agent cannot exfiltrate secrets.
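A minimal sketch of the idea, in Rust. The names (`secret_store`, `inject_credentials`) and the in-memory map are hypothetical stand-ins for the connection service; the point is that the guest names only a connection id and the host attaches the key before forwarding:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the connection service's secret store.
// In the real system keys live server-side, never inside the sandbox.
fn secret_store() -> HashMap<&'static str, &'static str> {
    HashMap::from([
        ("conn_openai_gpt4o", "sk-live-xxxx"),
        ("conn_stripe", "sk-stripe-yyyy"),
    ])
}

// The agent's outbound request names only a connection id; the host
// proxy injects the Authorization header at request time. Agent code
// never observes the key, so a prompt injection has nothing to steal.
fn inject_credentials(
    connection: &str,
    mut headers: HashMap<String, String>,
) -> Result<HashMap<String, String>, String> {
    let store = secret_store();
    let key = store
        .get(connection)
        .ok_or_else(|| format!("unknown connection: {connection}"))?;
    headers.insert("Authorization".into(), format!("Bearer {key}"));
    Ok(headers)
}

fn main() {
    let headers = inject_credentials("conn_stripe", HashMap::new()).unwrap();
    println!("{}", headers["Authorization"]); // Bearer sk-stripe-yyyy
}
```

An unknown connection id is rejected outright, so an agent cannot probe for credentials it was not granted.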
Every tool call is checkpointed. Crashes resume from the last step with full conversation history intact. No repeated API calls, no wasted tokens.
JSON workflows compile to WASM modules — immutable, versioned artifacts. Zero interpreter overhead, smaller attack surface than scripted engines.
Each customer gets a fully isolated environment — own database, credentials, and runtime. No shared state, no cross-tenant risks.
Agents waiting for approvals or external events release all resources. The engine wakes them automatically when it's time — hours or days later.
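One way to sketch that wake-up mechanism, assuming a simple deadline queue (the `Scheduler` type and integer timestamps are illustrative, not the engine's actual schema). A sleeping workflow is just a persisted deadline; no process stays resident:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Min-heap of (wake_at, workflow_id): the earliest deadline is always
// on top. Timestamps are plain seconds for the sketch.
struct Scheduler {
    queue: BinaryHeap<Reverse<(u64, u32)>>,
}

impl Scheduler {
    fn new() -> Self {
        Scheduler { queue: BinaryHeap::new() }
    }

    // Record that a workflow sleeps until `wake_at`, then release it.
    fn sleep_until(&mut self, workflow_id: u32, wake_at: u64) {
        self.queue.push(Reverse((wake_at, workflow_id)));
    }

    // Pop every workflow whose deadline has passed; the engine would
    // relaunch each one from its last checkpoint.
    fn due(&mut self, now: u64) -> Vec<u32> {
        let mut ready = Vec::new();
        while let Some(&Reverse((t, id))) = self.queue.peek() {
            if t > now {
                break;
            }
            self.queue.pop();
            ready.push(id);
        }
        ready
    }
}

fn main() {
    let mut s = Scheduler::new();
    s.sleep_until(1, 100);     // waiting on a human approval
    s.sleep_until(2, 500_000); // scheduled days out
    println!("{:?}", s.due(200)); // [1]
}
```

Whether the wake-up lands an hour or a week later, the cost of waiting is one row, not one open process.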
JSON DSL compiles to sandboxed WASM. Tools are graph edges. Credentials stay server-side.
```json
{
  "name": "support-agent",
  "steps": [
    {
      "id": "triage",
      "type": "AiAgent",
      "connection": "conn_openai_gpt4o",
      "systemPrompt": "Triage tickets. Look up customers and issue refunds.",
      "maxIterations": 10
    },
    {
      "id": "lookup_customer",
      "type": "Http",
      "connection": "conn_crm",
      "url": "/customers/:id"
    },
    {
      "id": "issue_refund",
      "type": "Http",
      "connection": "conn_stripe",
      "method": "POST",
      "url": "/v1/refunds"
    }
  ],
  "edges": [
    { "from": "triage", "to": "lookup_customer", "label": "lookup_customer" },
    { "from": "triage", "to": "issue_refund", "label": "issue_refund" }
  ]
}
```

Write agent workflows as JSON DSL with steps, tools, and connections.
Build to a standalone WASM module — immutable, versioned, reproducible.
Run in wasmtime. No filesystem, no raw network. All I/O mediated by the host.
AI agent calls tools via host proxy. Credentials injected at request time, invisible to agent code.
After each tool call: conversation history, tool results, and iteration count saved to database.
Crash at any point? Relaunch from last checkpoint. No lost context, no repeated calls.
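The checkpoint/resume cycle can be sketched like this, assuming a simple iteration-keyed store (the `Checkpoint` and `CheckpointStore` types are illustrative; the real engine persists to a database):

```rust
use std::collections::BTreeMap;

// After every tool call the engine persists the conversation so far
// and the iteration count.
#[derive(Clone, Debug, PartialEq)]
struct Checkpoint {
    iteration: u32,
    conversation: Vec<String>, // messages plus tool results, serialized
}

// Durable store stand-in, keyed by iteration.
struct CheckpointStore {
    rows: BTreeMap<u32, Checkpoint>,
}

impl CheckpointStore {
    fn new() -> Self {
        CheckpointStore { rows: BTreeMap::new() }
    }

    fn save(&mut self, cp: Checkpoint) {
        self.rows.insert(cp.iteration, cp);
    }

    // On relaunch, resume from the highest-numbered checkpoint:
    // full history intact, no completed tool call repeated.
    fn resume(&self) -> Option<Checkpoint> {
        self.rows.values().next_back().cloned()
    }
}

fn main() {
    let mut store = CheckpointStore::new();
    store.save(Checkpoint {
        iteration: 1,
        conversation: vec!["user: refund ticket".into(), "tool: lookup_customer ok".into()],
    });
    store.save(Checkpoint {
        iteration: 2,
        conversation: vec!["user: refund ticket".into(), "tool: lookup_customer ok".into(), "tool: issue_refund ok".into()],
    });
    // Simulated crash here: a fresh process picks up at iteration 2.
    let cp = store.resume().unwrap();
    println!("resume at iteration {}", cp.iteration);
}
```

Because the resumed agent sees the saved tool results in its history, it continues the loop instead of re-issuing the calls.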
| | Runtara | LangGraph | CrewAI | Temporal + LLM |
|---|---|---|---|---|
| Agent isolation | WASM sandbox (enforced) | Optional (Pyodide) | Docker (optional) | Process-level |
| Credential security | Proxy injection | Manual | Manual | Manual |
| Crash recovery | Per-tool checkpoint | Optional checkpoint | Task replay | Event sourcing |
| Tenant isolation | Dedicated env | Application-level | Enterprise only | Namespaces |
| Runtime | Compiled WASM | Python / JS/TS | Python | Multi-language |
LLM triages tickets, looks up customer data, takes actions. Checkpoints after each step. Survives deploys without losing conversation state.
AI analyzes a request, produces a recommendation, sleeps until human approval — days later, resumes from checkpoint and executes the decision.
Run customer-provided AI logic in WASM sandboxes within dedicated per-customer environments. No access to other tenants' data or credentials.
Chain AI agent steps — one analyzes, one plans, one executes — each with different LLM providers and tool sets, all checkpointed independently.
Open-source. Self-hosted or cloud. Written in Rust.