// the operator's manual for Cloudflare's edge platform — Workers, Pages, Queues, Workers for Platforms, and the Tunnel sibling
Wrangler is the official command-line interface for Cloudflare's developer platform. One CLI manages every primitive: Workers, Pages, KV, R2, D1, Queues, Vectorize, Workers AI, Hyperdrive, Durable Objects, Workflows, Containers, dispatch namespaces (Workers for Platforms), cron triggers, secrets, and TypeScript binding generation. If it lives on Cloudflare's edge, you provision and ship it through wrangler.
Wrangler 4.x is the current line. cloudflared (Cloudflare Tunnel) is a separate binary covered briefly in §tunnel for completeness, since the two are commonly paired.
# global (system-wide)
npm install -g wrangler
# or per-project (recommended for reproducibility)
npm install --save-dev wrangler
# ad-hoc
npx wrangler@latest --version
# bootstrap a new project end-to-end (Worker, Pages, Workflow, etc.)
npm create cloudflare@latest my-project
| command | does |
|---|---|
| wrangler login | OAuth flow in the browser. Opens a token-scoped session. |
| wrangler login --scopes <scope> | Restricted token (e.g. workers:write account:read) for least privilege. |
| wrangler logout | Revokes the local token. |
| wrangler whoami | Prints the active account, email, and granted scopes. |
| CLOUDFLARE_API_TOKEN=... | Bypass login with an env var (CI / scripts). |
| CLOUDFLARE_ACCOUNT_ID=... | Pins the target account when you have access to more than one. |
Every project has one config file at the repo root, in TOML or JSONC (wrangler.toml or wrangler.jsonc). It declares the script entry, build settings, bindings, environments, and triggers.
# wrangler.toml
name = "my-worker"
main = "src/index.ts"
compatibility_date = "2026-04-01"
compatibility_flags = ["nodejs_compat"]
[[kv_namespaces]]
binding = "CACHE"
id = "abc123..."
[[d1_databases]]
binding = "DB"
database_name = "prod"
database_id = "def456..."
[[queues.producers]]
binding = "INGEST"
queue = "events"
[[queues.consumers]]
queue = "events"
max_batch_size = 10
max_batch_timeout = 5
[env.staging]
name = "my-worker-staging"
[env.staging.vars]
ENV = "staging"
Compatibility date. Pin it. Cloudflare ships breaking runtime changes behind dated flags; compatibility_date freezes you to a known set of runtime semantics. Update it deliberately, and retest when you do.
| command | purpose |
|---|---|
| wrangler dev | Local dev server using workerd (the actual edge runtime). Hot reload on save. |
| wrangler dev --remote | Run against the real Cloudflare edge (real KV/R2/D1 instead of local stubs). |
| wrangler deploy | Push the Worker to production. |
| wrangler deploy --env staging | Deploy a named environment. |
| wrangler tail | Live-tail logs from a deployed Worker (filter by status or sampling rate). |
| wrangler versions upload | Upload a version without promoting it; pair with gradual deploys. |
| wrangler versions deploy | Promote an uploaded version with a traffic split. |
| wrangler rollback | Roll back to a prior version by id. |
A Worker is an ESM module exporting handlers (fetch, scheduled, queue, email, tail, trace). Wrangler bundles, uploads, and binds it to a workers.dev subdomain or your custom route.
// src/index.ts
export default {
async fetch(req: Request, env: Env, ctx: ExecutionContext) {
return new Response("hello edge");
},
};
A binding is a typed handle injected as env.<BINDING>. Every Cloudflare resource your Worker uses is a binding declared in wrangler.toml. wrangler types generates a matching TypeScript interface.
wrangler types # writes worker-configuration.d.ts
wrangler types --x-include-runtime # also pulls in @cloudflare/workers-types
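For the bindings declared earlier (CACHE, DB, INGEST), the generated interface looks roughly like the sketch below. The stub method signatures are stand-ins for illustration; the real types come from the generated worker-configuration.d.ts, not hand-written code.

```typescript
// Stand-in for the KV type the generated file would reference;
// illustrative only, not the full @cloudflare/workers-types surface.
type KVNamespaceStub = {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
};

// Roughly what `wrangler types` emits for the earlier wrangler.toml.
interface Env {
  CACHE: KVNamespaceStub; // [[kv_namespaces]]
  DB: unknown;            // [[d1_databases]] (D1Database in the real file)
  INGEST: unknown;        // [[queues.producers]] (Queue in the real file)
}

// With Env in scope, binding access is compile-time checked:
async function readFlag(env: Env): Promise<string | null> {
  return env.CACHE.get("feature-flag:beta"); // a typo'd binding name fails to compile
}
```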
wrangler kv namespace create CACHE
wrangler kv namespace list
wrangler kv key put --binding=CACHE "feature-flag:beta" "on"
wrangler kv key get --binding=CACHE "feature-flag:beta"
wrangler kv key list --binding=CACHE --prefix="feature-flag:"
wrangler kv bulk put --binding=CACHE ./bulk.json
wrangler kv key delete --binding=CACHE "feature-flag:beta"
wrangler r2 bucket create my-bucket
wrangler r2 bucket list
wrangler r2 object put my-bucket/path/file.bin --file=./local.bin
wrangler r2 object get my-bucket/path/file.bin --pipe > out.bin
wrangler r2 object delete my-bucket/path/file.bin
wrangler r2 bucket notification create my-bucket --event-type object-create
wrangler d1 create prod
wrangler d1 list
wrangler d1 execute prod --command="SELECT 1"
wrangler d1 execute prod --file=./schema.sql
wrangler d1 export prod --output=./backup.sql
wrangler d1 migrations create prod add_users
wrangler d1 migrations apply prod
Cloudflare Queues are a durable message bus glued to Workers. A producer Worker enqueues; a consumer Worker (or HTTP pull) drains in batches. Wrangler manages both ends.
wrangler queues create events
wrangler queues list
wrangler queues info events
wrangler queues update events --max-retries 5 --dead-letter-queue events-dlq
wrangler queues delete events
[[queues.producers]]
binding = "INGEST" # env.INGEST.send(msg) inside the Worker
queue = "events"
[[queues.consumers]]
queue = "events" # Worker exports a queue() handler
max_batch_size = 10
max_batch_timeout = 5 # seconds
max_retries = 5
dead_letter_queue = "events-dlq"
// producer (a Worker that received an HTTP request)
await env.INGEST.send({ type: "clip.rendered", slug: "CAPI-03a" });
await env.INGEST.sendBatch([msg1, msg2, msg3]);
// consumer (separate Worker — same or different)
export default {
async queue(batch: MessageBatch<EventBody>, env: Env) {
for (const msg of batch.messages) {
try {
await handle(msg.body);
msg.ack(); // remove from queue
} catch (e) {
msg.retry({ delaySeconds: 60 });
}
}
},
};
# HTTP-pull mode for non-Worker consumers (e.g. a process on claw)
wrangler queues consumer http add events
# gives back an endpoint to POST pull/ack/retry against
Dead-letter queue first. Always create the DLQ before the live consumer. A bad batch with no DLQ + max_retries=0 silently drops messages. With a DLQ wired, you can replay later.
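Until the DLQ catches a message, each retry can be delayed. A common pattern is exponential backoff keyed off the message's delivery attempt count (msg.attempts in the Queues runtime API); the helper below is an illustrative sketch, not part of wrangler.

```typescript
// Exponential backoff with a cap, for msg.retry({ delaySeconds: ... }).
// attempt 1 → 10s, attempt 2 → 20s, attempt 3 → 40s, capped at one hour.
function backoffSeconds(attempt: number, baseSeconds = 10, capSeconds = 3600): number {
  return Math.min(baseSeconds * 2 ** (attempt - 1), capSeconds);
}

// inside the consumer's catch block:
//   msg.retry({ delaySeconds: backoffSeconds(msg.attempts) });
```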
Durable Objects have no create command; the namespace is declared in config and created on first deploy by a migration.
# in wrangler.toml
[[durable_objects.bindings]]
name = "CHATROOMS"
class_name = "ChatRoom"
[[migrations]]
tag = "v1"
new_sqlite_classes = ["ChatRoom"] # or new_classes for legacy KV-storage DOs
wrangler vectorize create my-index --dimensions=1536 --metric=cosine
wrangler vectorize list
wrangler vectorize info my-index
wrangler vectorize insert my-index --file=./vectors.ndjson
wrangler vectorize query my-index --vector="[0.1, 0.2, ...]" --top-k=10
wrangler vectorize create-metadata-index my-index --property-name=category --type=string
wrangler vectorize delete my-index
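The insert file is NDJSON, one vector object per line. The ids, values, and metadata below are illustrative and truncated; a real vector must match the index's dimensions (1536 above).

```json
{"id": "doc-1", "values": [0.12, 0.98, 0.37], "metadata": {"category": "docs"}}
{"id": "doc-2", "values": [0.55, 0.11, 0.83], "metadata": {"category": "blog"}}
```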
wrangler ai models # list available models
wrangler ai run @cf/meta/llama-3.1-8b-instruct \
--prompt="summarize this doc..."
wrangler ai finetune list
# in wrangler.toml
[ai]
binding = "AI" # env.AI.run("@cf/meta/...")
wrangler hyperdrive create prod-pg \
--connection-string="postgres://user:pass@host/db"
wrangler hyperdrive list
wrangler hyperdrive update prod-pg --caching-disabled
wrangler hyperdrive delete prod-pg
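The commands above manage the Hyperdrive config itself; to query through it from a Worker you also declare a binding. The id placeholder stands in for the value hyperdrive create prints:

```toml
# in wrangler.toml: env.HYPERDRIVE.connectionString then feeds a normal
# driver such as node-postgres
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<config-id>"
```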
What we've been using all over the Organized AI hub.
wrangler pages project create my-site --production-branch=main
wrangler pages project list
wrangler pages deploy ./dist --project-name=my-site
wrangler pages deployment list --project-name=my-site
wrangler pages deployment tail --project-name=my-site
# Pages with Functions
# functions/api/[[catchall]].ts → server-side handlers in the same project
Workers for Platforms (WFP) lets you run user-supplied Workers inside your own Worker — multi-tenant, isolated, sandboxed. The user uploads a script; you load it via a dispatch namespace and call it like any other binding.
This is the pattern platforms like Shopify Oxygen use under the hood: every customer's code runs as its own Worker, not as a stringified function inside yours.
wrangler dispatch-namespace create customer-scripts
wrangler dispatch-namespace list
wrangler dispatch-namespace get customer-scripts
wrangler dispatch-namespace rename customer-scripts customer-prod
wrangler dispatch-namespace delete customer-scripts
# tenant-side wrangler.toml
name = "acme-corp"
main = "src/index.ts"
compatibility_date = "2026-04-01"
# deploy into the namespace, not as a top-level Worker
wrangler deploy --dispatch-namespace=customer-scripts
# dispatch Worker wrangler.toml
[[dispatch_namespaces]]
binding = "DISPATCHER"
namespace = "customer-scripts"
// dispatch Worker code
export default {
async fetch(req: Request, env: Env) {
const tenant = req.headers.get("x-tenant") ?? "acme-corp";
const userWorker = env.DISPATCHER.get(tenant); // load by name
return userWorker.fetch(req); // run their code
},
};
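Tenant resolution is up to the dispatch Worker. A header lookup (above) is the simplest; mapping the request's subdomain to a script name is another common approach. The helper below is a hypothetical sketch (example-platform.com is a placeholder domain):

```typescript
// Map "acme.example-platform.com" → "acme". Returns null for the apex
// domain, nested subdomains, or foreign hosts, so the dispatcher can
// respond 404 instead of calling DISPATCHER.get() with a bad name.
function tenantFromHost(host: string, platformDomain = "example-platform.com"): string | null {
  const suffix = "." + platformDomain;
  if (!host.endsWith(suffix)) return null;
  const sub = host.slice(0, -suffix.length);
  return sub && !sub.includes(".") ? sub : null;
}
```

In the fetch handler you would then guard: if tenantFromHost(new URL(req.url).hostname) is null, return a 404 before touching the dispatcher.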
[[dispatch_namespaces]]
binding = "DISPATCHER"
namespace = "customer-scripts"
outbound = { service = "outbound-proxy" }
# every fetch() inside a tenant Worker goes through your outbound-proxy Worker first
# — perfect for rate-limiting, cost capping, allowlist-only egress.
Why this matters. WFP turns "let users write code" from a security nightmare into a routine deploy. Each tenant Worker has its own isolate, CPU/memory limits, observability, and quota. You charge them; Cloudflare bills you per CPU-ms.
Long-running multi-step processes that survive failures and deployments. Each step's result is checkpointed; if the Worker dies mid-flow, it resumes from the last completed step on the next invocation.
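The checkpoint semantics can be sketched with a toy in-memory model. This is not the real Workflows API (which hands your class a step argument); it only illustrates why completed steps don't re-run after a crash:

```typescript
// Toy model of Workflows checkpointing: step.do() persists each step's
// result, so re-running the flow after a crash skips completed steps.
type Checkpoints = Map<string, unknown>;

function makeStep(store: Checkpoints) {
  return {
    async do<T>(name: string, fn: () => Promise<T>): Promise<T> {
      if (store.has(name)) return store.get(name) as T; // resume: reuse result
      const result = await fn();
      store.set(name, result); // checkpoint before continuing
      return result;
    },
  };
}

// A two-step flow whose second step fails on the first attempt.
async function renderFlow(store: Checkpoints, log: string[], failRender: boolean) {
  const step = makeStep(store);
  const meta = await step.do("fetch-metadata", async () => {
    log.push("fetched");
    return { slug: "CAPI-03a" };
  });
  return step.do("render", async () => {
    if (failRender) throw new Error("render crashed");
    return `rendered ${meta.slug}`;
  });
}
```

Run it twice against the same store: the first invocation crashes in "render", the second reuses the "fetch-metadata" checkpoint and only re-runs the failed step.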
wrangler workflows list
wrangler workflows describe my-flow
wrangler workflows trigger my-flow --params='{"clip":"CAPI-03a"}'
wrangler workflows instances list my-flow
wrangler workflows instances describe my-flow <instance-id>
wrangler workflows instances terminate my-flow <instance-id>
# in wrangler.toml
[[workflows]]
name = "render-flow"
binding = "RENDER_FLOW"
class_name = "RenderFlow"
Run actual containers alongside Workers — for tools that don't fit the V8 isolate model (ffmpeg, headless Chromium, Python ML, etc.). Wrangler manages the image and the binding.
wrangler containers list
wrangler containers info my-image
wrangler containers logs my-image
wrangler containers update my-image --instances=10
# in wrangler.toml
[[containers]]
class_name = "Renderer"
image = "./Dockerfile"
max_instances = 10
# in wrangler.toml
[triggers]
crons = ["0 4 * * *", "*/15 * * * *"]
wrangler triggers deploy # sync trigger config without redeploying code
// in code
export default {
async scheduled(event: ScheduledController, env: Env, ctx: ExecutionContext) {
// runs at 04:00 UTC daily and every 15 min
},
};
wrangler secret put OPENAI_API_KEY # prompts for value
wrangler secret list
wrangler secret delete OPENAI_API_KEY
wrangler secret bulk ./secrets.json # bulk upload
# local dev secrets
# .dev.vars file at repo root, NOT committed
OPENAI_API_KEY=sk-...
META_APP_SECRET=...
# auto-generates worker-configuration.d.ts from your wrangler.toml bindings
wrangler types
# after adding a new binding to wrangler.toml — re-run to refresh types
wrangler types --x-include-runtime # also include runtime types
cloudflared creates an outbound-only encrypted tunnel from a private origin (your laptop, a home-lab box, claw) to Cloudflare's edge — without opening inbound ports. It's a different binary from wrangler, but pairs naturally: a Worker on the edge can act as the auth/routing layer in front of a Tunneled origin.
brew install cloudflared
cloudflared --version
cloudflared tunnel login # browser OAuth, downloads cert
cloudflared tunnel create my-tunnel
cloudflared tunnel list
# map a hostname to a local service via DNS
cloudflared tunnel route dns my-tunnel app.example.com
# ad-hoc — quick try
cloudflared tunnel --url http://localhost:5173
# config file (~/.cloudflared/config.yml)
tunnel: my-tunnel
credentials-file: /home/me/.cloudflared/<UUID>.json
ingress:
- hostname: dashboard.example.com
service: http://localhost:5173
- hostname: api.example.com
service: http://localhost:8787
- service: http_status:404
# run as a foreground process
cloudflared tunnel run my-tunnel
# or install as a system service (macOS / Linux)
sudo cloudflared service install
sudo cloudflared service uninstall
The pairing in practice: the tunnel exposes a local service (say localhost:5173) with no infra changes; the Worker in front of it redeploys with wrangler deploy; the tunnel itself runs as a normal cloudflared service install.
# GitHub Actions sketch
- name: Deploy Worker
uses: cloudflare/wrangler-action@v3
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
command: deploy --env production
Use Cloudflare API tokens (not global keys) with the minimum scopes: Account.Workers Scripts: Edit, Account.Workers KV Storage: Edit, etc. Token created in the dashboard, stored as a GitHub secret.
- compatibility_date is load-bearing. Pin it; never let it float. Yesterday's "fixed" runtime bug is tomorrow's regression for you.
- wrangler dev uses workerd locally — but service bindings to other Workers are stubbed unless you also --remote or run them locally too. Multi-Worker setups need wrangler dev per Worker on different ports, then service bindings pointing at http://localhost:<n>.
- max_batch_timeout is in seconds, not ms. Easy to write 5000 thinking ms and end up with 83-minute batches.
- In a Workflow, any work not wrapped in step.do() is gone on resume. Wrap everything stateful.
- If a scheduled() handler throws, that tick is lost. Self-retry by enqueueing to a Queue from inside the cron.
- cloudflared tunnels survive Worker deploys — but if you change the hostname routing, the DNS entry needs cloudflared tunnel route dns again.