Wrangler CLI

// the operator's manual for Cloudflare's edge platform — Workers, Pages, Queues, Dynamic Workers, and the Tunnel sibling

What Wrangler is

Wrangler is the official command-line interface for Cloudflare's developer platform. One CLI manages every primitive: Workers, Pages, KV, R2, D1, Queues, Vectorize, Workers AI, Hyperdrive, Durable Objects, Workflows, Containers, dispatch namespaces (Workers for Platforms), cron triggers, secrets, and TypeScript binding generation. If it lives on Cloudflare's edge, you provision and ship it through wrangler.

Wrangler 4.x is the current line. cloudflared (Cloudflare Tunnel) is a separate binary covered briefly in §tunnel for completeness, since the two are commonly paired.

Install

# global (system-wide)
npm install -g wrangler

# or per-project (recommended for reproducibility)
npm install --save-dev wrangler

# ad-hoc
npx wrangler@latest --version

# bootstrap a new project end-to-end (Worker, Pages, Workflow, etc.)
npm create cloudflare@latest my-project

Auth

command                              does
wrangler login                       OAuth flow in the browser. Opens a token-scoped session.
wrangler login --scopes <scope>      Restricted token (e.g. workers:write account:read) for least privilege.
wrangler logout                      Revokes the local token.
wrangler whoami                      Prints the active account, email, and granted scopes.
CLOUDFLARE_API_TOKEN=...             Bypasses login with an env var (CI / scripts).
CLOUDFLARE_ACCOUNT_ID=...            Selects the account when you have access to several.

Config — wrangler.toml / wrangler.jsonc

Every project has one config at the repo root, in TOML or JSONC. It declares the script entry, build, bindings, environments, and triggers.

# wrangler.toml
name = "my-worker"
main = "src/index.ts"
compatibility_date = "2026-04-01"
compatibility_flags = ["nodejs_compat"]

[[kv_namespaces]]
binding = "CACHE"
id = "abc123..."

[[d1_databases]]
binding = "DB"
database_name = "prod"
database_id = "def456..."

[[queues.producers]]
binding = "INGEST"
queue = "events"

[[queues.consumers]]
queue = "events"
max_batch_size = 10
max_batch_timeout = 5

[env.staging]
name = "my-worker-staging"
[env.staging.vars]
ENV = "staging"

Compatibility date. Pin it. Cloudflare ships breaking runtime changes behind dated flags; compatibility_date freezes your Worker on the runtime semantics you tested against. Update it deliberately.

Dev loop

command                        purpose
wrangler dev                   Local dev server using workerd (the actual edge runtime). Hot reload on save.
wrangler dev --remote          Run against the real Cloudflare edge (real KV/R2/D1 instead of local stubs).
wrangler deploy                Push the Worker to production.
wrangler deploy --env staging  Deploy a named environment.
wrangler tail                  Live-tail logs from a deployed Worker (filter by status / sampling rate).
wrangler versions upload       Upload a version without promoting; pair with gradual deploys.
wrangler versions deploy       Promote an uploaded version with a traffic split.
wrangler rollback              Roll back to a prior version by id.

Workers

A Worker is an ESM module exporting handlers (fetch, scheduled, queue, email, tail, trace). Wrangler bundles, uploads, and binds it to a workers.dev subdomain or your custom route.

// src/index.ts
export default {
  async fetch(req: Request, env: Env, ctx: ExecutionContext) {
    return new Response("hello edge");
  },
};

Bindings

A binding is a typed handle injected as env.<BINDING>. Every Cloudflare resource your Worker uses is a binding declared in wrangler.toml. wrangler types generates a matching TypeScript interface.

wrangler types                      # writes worker-configuration.d.ts
wrangler types --x-include-runtime  # also emit runtime types (standalone replacement for @cloudflare/workers-types)

KV — eventually-consistent key/value

wrangler kv namespace create CACHE
wrangler kv namespace list
wrangler kv key put --binding=CACHE "feature-flag:beta" "on"
wrangler kv key get --binding=CACHE "feature-flag:beta"
wrangler kv key list --binding=CACHE --prefix="feature-flag:"
wrangler kv bulk put --binding=CACHE ./bulk.json
wrangler kv key delete --binding=CACHE "feature-flag:beta"
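The same namespace is read from Worker code through the binding. A minimal sketch with an in-memory stand-in for the real namespace (the KVLike interface, MemoryKV stub, and isBetaOn helper are illustrative, not part of Wrangler):

```typescript
// Subset of the KV binding surface used below; real code receives env.CACHE.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// In-memory stand-in so the sketch runs without a real namespace.
class MemoryKV implements KVLike {
  private store = new Map<string, string>();
  async get(key: string) { return this.store.get(key) ?? null; }
  async put(key: string, value: string) { this.store.set(key, value); }
}

// Worker-style logic: read the feature flag set via `wrangler kv key put`.
async function isBetaOn(kv: KVLike): Promise<boolean> {
  return (await kv.get("feature-flag:beta")) === "on";
}
```

In the deployed Worker the stub disappears: env.CACHE already implements get/put (plus list, delete, and metadata variants).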

R2 — S3-compatible object storage, no egress fees

wrangler r2 bucket create my-bucket
wrangler r2 bucket list
wrangler r2 object put my-bucket/path/file.bin --file=./local.bin
wrangler r2 object get my-bucket/path/file.bin --pipe > out.bin
wrangler r2 object delete my-bucket/path/file.bin
wrangler r2 bucket notification create my-bucket --event-type object-create

D1 — serverless SQLite at the edge

wrangler d1 create prod
wrangler d1 list
wrangler d1 execute prod --command="SELECT 1"
wrangler d1 execute prod --file=./schema.sql
wrangler d1 export   prod --output=./backup.sql
wrangler d1 migrations create prod add_users
wrangler d1 migrations apply  prod
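From Worker code, the DB binding speaks prepared statements: prepare, then bind, then all/first/run. A hedged sketch against a minimal interface (the D1Like types are illustrative stubs; the prepare → bind → all chain mirrors the real binding):

```typescript
// Subset of the D1 binding surface used below; real code receives env.DB.
interface D1RowsLike<T> { results: T[] }
interface D1Like {
  prepare(sql: string): {
    bind(...params: unknown[]): { all<T>(): Promise<D1RowsLike<T>> };
  };
}

// Worker-style query helper with a parameterized statement.
async function activeUsers(db: D1Like): Promise<string[]> {
  const { results } = await db
    .prepare("SELECT name FROM users WHERE active = ?")
    .bind(1)
    .all<{ name: string }>();
  return results.map((r) => r.name);
}
```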

Queues — push/pull message bus, at-least-once delivery

Cloudflare Queues are a durable message bus glued to Workers. A producer Worker enqueues; a consumer Worker (or HTTP pull) drains in batches. Wrangler manages both ends.

Provision the queue

wrangler queues create events
wrangler queues list
wrangler queues info events
wrangler queues update events --max-retries 5 --dead-letter-queue events-dlq
wrangler queues delete events

Wire producer + consumer in wrangler.toml

[[queues.producers]]
binding = "INGEST"             # env.INGEST.send(msg) inside the Worker
queue   = "events"

[[queues.consumers]]
queue              = "events"   # Worker exports a queue() handler
max_batch_size     = 10
max_batch_timeout  = 5         # seconds
max_retries        = 5
dead_letter_queue  = "events-dlq"

Send + handle

// producer (a Worker that received an HTTP request)
await env.INGEST.send({ type: "clip.rendered", slug: "CAPI-03a" });
await env.INGEST.sendBatch([msg1, msg2, msg3]);

// consumer (separate Worker — same or different)
export default {
  async queue(batch: MessageBatch<EventBody>, env: Env) {
    for (const msg of batch.messages) {
      try {
        await handle(msg.body);
        msg.ack();           // remove from queue
      } catch (e) {
        msg.retry({ delaySeconds: 60 });
      }
    }
  },
};

Pull from the outside

# HTTP-pull mode for non-Worker consumers (e.g. a process on claw)
wrangler queues consumer http add events
# gives back an endpoint to POST pull/ack/retry against

Dead-letter queue first. Always create the DLQ before the live consumer: once retries are exhausted, a message with no DLQ is silently dropped. With a DLQ wired, failed messages park there and can be replayed later.

Durable Objects — strongly-consistent stateful actors

wrangler durable-objects namespace create CHATROOMS
wrangler durable-objects namespace list

# in wrangler.toml
[[durable_objects.bindings]]
name = "CHATROOMS"
class_name = "ChatRoom"

[[migrations]]
tag = "v1"
new_sqlite_classes = ["ChatRoom"]   # or new_classes for legacy KV-storage DOs

Vectorize — vector index for RAG

wrangler vectorize create my-index --dimensions=1536 --metric=cosine
wrangler vectorize list
wrangler vectorize info my-index
wrangler vectorize insert my-index --file=./vectors.ndjson
wrangler vectorize query  my-index --vector="[0.1, 0.2, ...]" --top-k=10
wrangler vectorize create-metadata-index my-index --property-name=category --type=string
wrangler vectorize delete my-index
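To query the index from a Worker, declare a binding in wrangler.toml (the binding name here is an example):

```toml
[[vectorize]]
binding    = "VEC_INDEX"   # env.VEC_INDEX.query(...) inside the Worker
index_name = "my-index"
```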

Workers AI — model catalog at the edge

wrangler ai models                 # list available models
wrangler ai run @cf/meta/llama-3.1-8b-instruct \
  --prompt="summarize this doc..."
wrangler ai finetune list

# in wrangler.toml
[ai]
binding = "AI"                  # env.AI.run("@cf/meta/...")

Hyperdrive — connection-pool front of Postgres / MySQL

wrangler hyperdrive create prod-pg \
  --connection-string="postgres://user:pass@host/db"
wrangler hyperdrive list
wrangler hyperdrive update prod-pg --caching-disabled
wrangler hyperdrive delete prod-pg

Pages — static + Functions

What we've been using all over the Organized AI hub.

wrangler pages project create my-site --production-branch=main
wrangler pages project list
wrangler pages deploy ./dist --project-name=my-site
wrangler pages deployment list --project-name=my-site
wrangler pages deployment tail --project-name=my-site

# Pages with Functions
# functions/api/[[catchall]].ts → server-side handlers in the same project

Dynamic Workers — Workers for Platforms (dispatch namespaces)

Workers for Platforms (WFP) lets you run user-supplied Workers inside your own Worker — multi-tenant, isolated, sandboxed. The user uploads a script; you load it via a dispatch namespace and call it like any other binding.

This is the pattern hosting platforms such as Shopify Oxygen use under the hood: every customer's code runs as its own Worker, not as a stringified function inside yours.

Provision a dispatch namespace

wrangler dispatch-namespace create customer-scripts
wrangler dispatch-namespace list
wrangler dispatch-namespace get customer-scripts
wrangler dispatch-namespace rename customer-scripts customer-prod
wrangler dispatch-namespace delete customer-scripts

Upload a tenant Worker into the namespace

# tenant-side wrangler.toml
name = "acme-corp"
main = "src/index.ts"
compatibility_date = "2026-04-01"

# deploy into the namespace, not as a top-level Worker
wrangler deploy --dispatch-namespace=customer-scripts

Call tenant Worker from your dispatch Worker

// dispatch Worker wrangler.toml
[[dispatch_namespaces]]
binding = "DISPATCHER"
namespace = "customer-scripts"

// dispatch Worker code
export default {
  async fetch(req: Request, env: Env) {
    const tenant = req.headers.get("x-tenant") ?? "acme-corp";
    const userWorker = env.DISPATCHER.get(tenant);  // load by name
    return userWorker.fetch(req);                  // run their code
  },
};

Outbound proxying — control what tenant code can reach

[[dispatch_namespaces]]
binding   = "DISPATCHER"
namespace = "customer-scripts"
outbound  = { service = "outbound-proxy" }

// every fetch() inside a tenant Worker goes through your outbound-proxy Worker first
// — perfect for rate-limiting, cost capping, allowlist-only egress.

Why this matters. WFP turns "let users write code" from a security nightmare into a routine deploy. Each tenant Worker has its own isolate, CPU/memory limits, observability, and quota. You charge them; Cloudflare bills you per CPU-ms.

Workflows — durable execution

Long-running multi-step processes that survive failures and deployments. Each step's result is checkpointed; if the Worker dies mid-flow, it resumes from the last completed step on the next invocation.
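The checkpointing idea can be sketched in plain TypeScript: each named step's result is memoized, so a rerun after a crash skips completed steps. This toy runner is illustrative only — in the real API your class extends Cloudflare's WorkflowEntrypoint and calls step.do():

```typescript
// Toy durable-execution core: results are keyed by step name, so replaying
// the flow reuses checkpointed results instead of redoing the work.
type Checkpoints = Map<string, unknown>;

async function doStep<T>(
  cp: Checkpoints,
  name: string,
  fn: () => Promise<T>,
): Promise<T> {
  if (cp.has(name)) return cp.get(name) as T; // already checkpointed: skip
  const result = await fn();
  cp.set(name, result);                       // persist before the next step
  return result;
}

// A two-step flow; if it dies after "render", a rerun only redoes "publish".
async function renderFlow(cp: Checkpoints, work: (s: string) => Promise<string>) {
  const clip = await doStep(cp, "render", () => work("render"));
  const url  = await doStep(cp, "publish", () => work("publish"));
  return { clip, url };
}
```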

wrangler workflows list
wrangler workflows describe my-flow
wrangler workflows trigger  my-flow --params='{"clip":"CAPI-03a"}'
wrangler workflows instances list my-flow
wrangler workflows instances describe my-flow <instance-id>
wrangler workflows instances terminate my-flow <instance-id>

// in wrangler.toml
[[workflows]]
name = "render-flow"
binding = "RENDER_FLOW"
class_name = "RenderFlow"

Containers

Run actual containers alongside Workers — for tools that don't fit the V8 isolate model (ffmpeg, headless Chromium, Python ML, etc.). Wrangler manages the image and the binding.

wrangler containers list
wrangler containers info my-image
wrangler containers logs my-image
wrangler containers update my-image --instances=10

# in wrangler.toml
[[containers]]
class_name = "Renderer"
image = "./Dockerfile"
max_instances = 10

Cron triggers

# in wrangler.toml
[triggers]
crons = ["0 4 * * *", "*/15 * * * *"]

wrangler triggers deploy        # sync trigger config without redeploying code

// in code
export default {
  async scheduled(event: ScheduledEvent, env: Env, ctx: ExecutionContext) {
    // runs at 04:00 UTC daily and every 15 min
  },
};

Secrets

wrangler secret put OPENAI_API_KEY                       # prompts for value
wrangler secret list
wrangler secret delete OPENAI_API_KEY
wrangler secret bulk ./secrets.json                      # bulk upload

# local dev secrets
# .dev.vars file at repo root, NOT committed
OPENAI_API_KEY=sk-...
META_APP_SECRET=...

TypeScript types

# auto-generates worker-configuration.d.ts from your wrangler.toml bindings
wrangler types

# after adding a new binding to wrangler.toml — re-run to refresh types
wrangler types --x-include-runtime    # also include runtime types
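For the bindings declared earlier, the generated Env comes out shaped roughly like this. Illustrative only — the real file is generated, and its binding types come from the runtime type definitions, not these stand-in interfaces:

```typescript
// Minimal stand-in declarations so the sketch is self-contained; the generated
// file references the real KVNamespace / D1Database / Queue types instead.
interface KVNamespaceLike { get(key: string): Promise<string | null> }
interface D1DatabaseLike { prepare(sql: string): unknown }
interface QueueLike { send(msg: unknown): Promise<void> }

// Roughly what `wrangler types` derives from the wrangler.toml shown above.
interface Env {
  CACHE: KVNamespaceLike;    // [[kv_namespaces]]    binding = "CACHE"
  DB: D1DatabaseLike;        // [[d1_databases]]     binding = "DB"
  INGEST: QueueLike;         // [[queues.producers]] binding = "INGEST"
}
```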

Cloudflare Tunnel — sibling tool, separate CLI

cloudflared creates an outbound-only encrypted tunnel from a private origin (your laptop, a home-lab box, claw) to Cloudflare's edge — without opening inbound ports. It's a different binary from wrangler, but pairs naturally: a Worker on the edge can act as the auth/routing layer in front of a Tunneled origin.

Install

brew install cloudflared
cloudflared --version

Authenticate & create a tunnel

cloudflared tunnel login                              # browser OAuth, downloads cert
cloudflared tunnel create my-tunnel
cloudflared tunnel list

# map a hostname to a local service via DNS
cloudflared tunnel route dns my-tunnel app.example.com

Run the tunnel

# ad-hoc — quick try
cloudflared tunnel --url http://localhost:5173

# config file (~/.cloudflared/config.yml)
tunnel: my-tunnel
credentials-file: /home/me/.cloudflared/<UUID>.json
ingress:
  - hostname: dashboard.example.com
    service: http://localhost:5173
  - hostname: api.example.com
    service: http://localhost:8787
  - service: http_status:404

# run as a foreground process
cloudflared tunnel run my-tunnel

# or install as a system service (macOS / linux)
sudo cloudflared service install
sudo cloudflared service uninstall

Why pair with Workers

The Tunnel gets traffic to a private origin without opening inbound ports; a Worker in front of it is the natural place for auth, routing, and caching at the edge, so the origin never sees unauthenticated traffic.
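A minimal sketch of that pairing — a Worker that gates a Tunneled origin behind a shared secret before proxying. The hostname, header name, and API_KEY binding are all hypothetical, and body forwarding is omitted for brevity:

```typescript
interface Env { API_KEY: string }

// Hostname mapped by `cloudflared tunnel route dns` (illustrative).
const ORIGIN = "dashboard.example.com";

const handler = {
  async fetch(req: Request, env: Env): Promise<Response> {
    // Reject before anything reaches the private origin.
    if (req.headers.get("x-api-key") !== env.API_KEY) {
      return new Response("forbidden", { status: 403 });
    }
    const url = new URL(req.url);
    url.hostname = ORIGIN; // re-aim the request at the Tunnel hostname
    return fetch(url.toString(), { method: req.method, headers: req.headers });
  },
};

export default handler;
```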

CI / CD

# GitHub Actions sketch
- name: Deploy Worker
  uses: cloudflare/wrangler-action@v3
  with:
    apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
    command: deploy --env production

Use Cloudflare API tokens (not the global API key) with the minimum scopes: Account.Workers Scripts: Edit, Account.Workers KV Storage: Edit, etc. Create the token in the dashboard and store it as a GitHub secret.

Pitfalls