The API Guys
Cloudflare Pages and Workers: What They Actually Do

Most developers think of Cloudflare as the thing that sits in front of their server - DNS, CDN, DDoS protection, maybe a WAF. That picture has been accurate for a long time, but it is increasingly incomplete. Over the past few years Cloudflare has built a substantial compute platform on top of that network, and Workers and Pages are where most of it surfaces for web developers.

This is a practical overview of what those products actually do, where they fit in a typical web stack, and when it makes sense to use them versus the alternatives.

What Cloudflare Workers is

Workers is a serverless compute platform that runs JavaScript, TypeScript, Python, and Rust at Cloudflare's edge - meaning on servers geographically close to your users rather than in a single data centre region. You write a function that receives an HTTP request and returns an HTTP response, deploy it, and Cloudflare handles the rest.

The runtime is not Node.js. Workers uses V8 isolates - the same JavaScript engine that runs in Chrome - which are orders of magnitude cheaper to create than containers or processes and are reused across requests. This makes cold starts effectively zero (sub-millisecond): there is no container to boot or runtime to initialise, so an isolate can start and handle a request in roughly the time it takes V8 to execute your code. The trade-off is that Workers does not have access to Node.js APIs: no fs, no child_process, no native modules. A compatibility layer covers many common Node.js built-ins, but it is not a full Node.js environment.

Workers runs in over 300 data centres worldwide, and your code runs in whichever location is closest to the incoming request. There is no region selection to configure.
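The programming model above can be sketched in a few lines. This is a minimal Worker: an object with a fetch handler that receives a Request and returns a Response (the route and response body here are illustrative):

```javascript
// A minimal Worker. The fetch handler receives every incoming
// request and must return a Response.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      return new Response(JSON.stringify({ message: "hello from the edge" }), {
        headers: { "content-type": "application/json" },
      });
    }
    // Anything else gets a synthetic 404 without touching an origin
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```

Because the handler is built entirely on web-standard APIs (Request, Response, URL), the same code runs unchanged in any runtime that implements them.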

What Cloudflare Pages is

Pages started as a static site hosting service with Git integration - push to a branch, get a preview URL; push to main, deploy to production. In that basic form it is a competitor to Netlify and Vercel for static sites.

The more interesting part is Pages Functions, which layers Workers on top of a Pages site. A file-based routing convention (similar to Next.js's pages directory) maps files in a functions/ directory to server-side handlers. A request to /api/orders hits functions/api/orders.ts, which runs as a Worker. Your static assets and your server-side logic share the same deployment and the same domain, without needing to stitch them together yourself.
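A Pages Function for that /api/orders route might look like the following sketch (written in JavaScript for brevity; the order data is made up):

```javascript
// functions/api/orders.js - Pages Functions maps this file to /api/orders.
// The exported name encodes the HTTP method: onRequestGet handles GET.
export async function onRequestGet(context) {
  // context carries the request, env bindings, and route params;
  // this example does not need any of them.
  const orders = [
    { id: 1, status: "shipped" },
    { id: 2, status: "pending" },
  ];
  return new Response(JSON.stringify(orders), {
    headers: { "content-type": "application/json" },
  });
}
```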

For teams deploying Next.js applications, Cloudflare has built an official adapter via OpenNext, and the stable Adapter API shipped in Next.js 16.2 earlier this month formalises this relationship. A full Next.js application - including server-side rendering, API routes, and middleware - can now be deployed to Cloudflare Pages with first-class support, rather than relying on community workarounds.

The supporting ecosystem

Workers and Pages become significantly more useful when combined with Cloudflare's storage and compute primitives. These are available to any Worker or Pages Function:

KV (Key-Value Storage): A globally distributed key-value store with eventual consistency. Values are replicated across Cloudflare's network so reads from any location are fast. Well suited to configuration, feature flags, user session tokens, and any data that is read far more often than it is written. Not suited to data that changes frequently and needs to be immediately consistent everywhere.
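A feature-flag lookup is a typical KV read path. In this sketch, env.FLAGS stands for a KV namespace binding configured in wrangler.toml; the binding name and flag names are illustrative:

```javascript
// Feature-flag lookup backed by KV. env.FLAGS is a KV namespace
// binding; Cloudflare injects it as the second handler argument.
const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);
    const flag = url.searchParams.get("flag");
    if (!flag) return new Response("missing ?flag=", { status: 400 });
    // KV reads are eventually consistent: a recent write made
    // elsewhere may not be visible at this location yet.
    const value = await env.FLAGS.get(flag);
    return new Response(value ?? "false");
  },
};

export default worker;
```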

Durable Objects: Single-threaded, stateful objects that live at a specific location in Cloudflare's network. Unlike KV, a Durable Object guarantees that all requests to a given object are serialised through one instance. This makes them suitable for collaborative applications, WebSocket connections, and any use case that requires consistent shared state - things like live cursors, presence indicators, or distributed locks.
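The serialisation guarantee is easiest to see with a counter. This is a minimal sketch of a Durable Object class - the class name is invented, and the env wiring and wrangler.toml configuration are omitted:

```javascript
// A minimal Durable Object. All requests routed to a given object id
// run through one instance, one at a time, so the counter never races.
export class Counter {
  constructor(state) {
    // state.storage is the object's persistent, transactional store
    this.state = state;
  }

  async fetch(request) {
    let count = (await this.state.storage.get("count")) ?? 0;
    count += 1;
    await this.state.storage.put("count", count);
    return new Response(String(count));
  }
}
```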

R2: Object storage compatible with the S3 API but with no egress fees. Cloudflare does not charge for data transferred out of R2 to users via Workers or the public internet. For applications that store and serve large files - images, documents, video - this can represent a meaningful cost difference versus S3. R2 also supports custom domains and public access without requiring a Worker in front of it.

D1: SQLite databases running at the edge. D1 is built on the SQLite engine, supports standard SQL, and can be queried from any Worker. It is designed for read-heavy workloads with globally distributed replicas; the primary write location is a single region, and reads are served from the closest replica. D1 suits applications that need a relational database for structured data but do not require the full feature set of Postgres or MySQL.
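Querying D1 from a Worker follows a prepare/bind/all pattern. In this sketch, env.DB stands for a D1 binding, and the orders table and its columns are assumptions for illustration:

```javascript
// Query a D1 database from a Worker. env.DB is a D1 binding
// configured in wrangler.toml.
const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);
    const status = url.searchParams.get("status") ?? "pending";
    // Parameterised statement: D1 prepares and binds like SQLite,
    // so user input never gets interpolated into the SQL string.
    const { results } = await env.DB
      .prepare("SELECT id, status FROM orders WHERE status = ?")
      .bind(status)
      .all();
    return new Response(JSON.stringify(results), {
      headers: { "content-type": "application/json" },
    });
  },
};

export default worker;
```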

Workers AI: Inference on Cloudflare's GPU infrastructure, accessible as a Workers API. A growing catalogue of open models is available - text generation, embeddings, image classification, speech recognition - billed per token or per request. Running inference in a Worker means your AI call and your request handling share the same execution context, without a round-trip to a separate AI service. For simpler AI tasks (classification, embeddings, summarisation), this can simplify the architecture considerably.

What Workers is actually good for

The most immediately useful applications of Workers do not require the full storage ecosystem. Several common patterns deliver immediate value for any team already behind Cloudflare:

Request transformation and proxying. A Worker can rewrite request paths, add or strip headers, forward traffic to different origins based on any logic you can express in code, and return synthetic responses without touching the origin server. A/B testing, gradual rollouts, and header-based authentication checks are all solvable at the Worker layer before a request ever reaches your application server.
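As a sketch of the transformation pattern, this Worker maps a legacy path prefix onto a new one and strips an internal header before forwarding. The paths and header names are invented, and the rewrite logic is pulled into a pure helper so it is easy to test:

```javascript
// Rewrite the request before it reaches the origin.
function rewriteRequest(request) {
  const url = new URL(request.url);
  // Map the legacy /v1/ prefix onto the current /api/ prefix
  if (url.pathname.startsWith("/v1/")) {
    url.pathname = "/api/" + url.pathname.slice("/v1/".length);
  }
  const headers = new Headers(request.headers);
  headers.delete("x-internal-debug"); // never forward internal headers
  return new Request(url, { method: request.method, headers });
}

const worker = {
  async fetch(request) {
    // Forward the rewritten request to the origin
    return fetch(rewriteRequest(request));
  },
};

export default worker;
```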

Edge caching with custom logic. Cloudflare's built-in caching respects standard cache headers, but a Worker lets you implement custom caching strategies that the cache headers cannot express - different TTLs per user segment, stale-while-revalidate for specific paths, or cache key construction based on request body content.

Authentication and authorisation at the edge. Validating a JWT or checking an API key in a Worker means the check happens at the edge, in sub-millisecond time, without a round-trip to your origin. Requests that fail the check are rejected before they consume any origin resources. For APIs with high invalid-request rates - whether from misconfigured clients or malicious traffic - this is a meaningful efficiency and security improvement.
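An API-key check at the edge can be as small as the following sketch. The hard-coded key set is purely illustrative - in practice keys would live in KV or a secret binding rather than in source:

```javascript
// Reject unauthenticated requests before they reach the origin.
const VALID_KEYS = new Set(["key-abc123"]); // illustrative only

const worker = {
  async fetch(request) {
    const key = request.headers.get("x-api-key");
    if (!key || !VALID_KEYS.has(key)) {
      // Rejected at the edge: the origin never sees this request
      return new Response("Unauthorized", { status: 401 });
    }
    // Valid key: pass the request through to the origin unchanged
    return fetch(request);
  },
};

export default worker;
```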

Lightweight APIs. Not every API endpoint needs a full application server behind it. A Worker with KV or D1 is sufficient for configuration endpoints, feature flag lookups, simple CRUD operations on small datasets, and webhook processors. Keeping these at the Worker layer reduces the load on your primary application server and adds geographic distribution at no architectural cost.

Where Workers fits with a Laravel backend

PHP does not run natively in Workers - the V8 runtime does not support it. Workers is not a replacement for your Laravel application. It is more useful as a layer in front of it.

A common pattern is to put a Worker between Cloudflare's edge network and a Laravel API. The Worker handles concerns that do not require application state: authentication token validation, rate limiting, request logging, geographic routing (directing EU users to an EU-hosted origin, for example), and response caching. The Laravel application handles everything that requires application logic and database access.
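The geographic-routing piece of that pattern can be sketched like this. The hostnames are invented, the EU country list is truncated for brevity, and request.cf.country is the country code Cloudflare attaches to each request at the edge:

```javascript
// Route EU visitors to an EU-hosted Laravel origin, everyone else
// to the default origin.
const EU_COUNTRIES = new Set(["DE", "FR", "NL", "IE", "ES", "IT"]); // abridged

function pickOrigin(country) {
  return EU_COUNTRIES.has(country)
    ? "https://eu.api.example.com"
    : "https://api.example.com";
}

const worker = {
  async fetch(request) {
    const origin = pickOrigin(request.cf?.country);
    const url = new URL(request.url);
    // Re-target the request at the chosen origin, keeping path and query
    return fetch(origin + url.pathname + url.search, request);
  },
};

export default worker;
```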

This separation keeps your application server focused on what it does well and offloads infrastructure concerns to a layer that is globally distributed and scales automatically. It also means that Cloudflare's network - rather than your server - absorbs the majority of invalid and abusive requests.

Workers vs Vercel Edge Functions vs AWS Lambda@Edge

All three run code at or near the edge, but the execution models differ. Vercel Edge Functions run on the same V8-based runtime as Workers and are tightly integrated with Next.js - they are the natural choice if you are deploying a Next.js application on Vercel and want edge middleware. They are not a general-purpose compute layer outside of the Vercel ecosystem.

AWS Lambda@Edge runs Node.js or Python functions triggered by CloudFront events, with cold starts measured in hundreds of milliseconds to seconds rather than sub-millisecond, and significant constraints on execution time and payload size. Lambda@Edge is more powerful in some respects - access to the full Node.js runtime, longer execution time limits - but the operational overhead is higher and the latency characteristics are worse for frequently cold functions.

Workers has the broadest network coverage, the fastest cold starts, and the most integrated ecosystem of storage primitives. For teams already using Cloudflare for DNS and CDN, it is the lowest-friction entry point into edge compute.

The practical starting point

If you are already behind Cloudflare, the most practical first step is not to architect a new system around Workers - it is to identify a specific request-level concern in your current application that does not require application state and move it to a Worker. Authentication header validation, rate limiting by IP or API key, and path-based routing are all good candidates.

The deployment model is straightforward: Workers deploy in seconds via the Wrangler CLI or CI/CD, and the free tier covers 100,000 requests per day, metered on CPU time rather than wall-clock time spent waiting on I/O. The learning curve for the runtime is low for anyone familiar with the Fetch API and standard web platform APIs.

Pages is worth evaluating if you are currently deploying a static or hybrid site to Netlify, Vercel, or a cloud storage bucket. The built-in Git integration, preview deployments, and tight Workers integration make it a capable alternative, and the pricing model is often more favourable at scale.

What are you currently using Cloudflare for, and is edge compute something you have looked at for your stack?

Ready to Start Your Project?

Get in touch with our Leeds-based team to discuss your Laravel or API development needs.