Most developers spend $50–$500 a month on infrastructure for side projects that make $0. A VPS for the backend, S3 for storage, RDS for the database, a CDN on top — and you haven't even added monitoring yet.
Cloudflare solved this problem years ago. They just didn't make it obvious. Here's the unfair advantage: you can run a full production-grade web application on Cloudflare's free tier, indefinitely, at actual scale.
This is not "good enough for a demo." This stack runs at Cloudflare's edge in 300+ locations worldwide. It's faster than most paid setups, and it costs exactly $0 until you're generating serious revenue.
The Stack Overview
Here's what we're building with and what each piece does:
| Service | What it replaces | Free limit |
|---|---|---|
| Cloudflare Pages | Vercel / Netlify | Unlimited requests, 500 builds/month |
| Workers | Express / Lambda / EC2 | 100K requests/day, 10ms CPU/req |
| D1 (SQLite) | PlanetScale / Supabase / RDS | 5M rows read, 100K writes/day |
| R2 (Object Storage) | AWS S3 | 10GB storage, 1M Class A ops/month |
| KV (Key-Value) | Redis / DynamoDB | 100K reads/day, 1K writes/day |
The kicker: These services aren't "free with credit card" — they're genuinely free indefinitely. Cloudflare's business model is selling enterprise contracts. The free tier is their developer acquisition play.
Prerequisites
You need three things before we start:
- A Cloudflare account (free at cloudflare.com)
- Node.js 18+ and npm
- The Wrangler CLI — Cloudflare's command-line tool
Install Wrangler globally:
npm install -g wrangler
# Authenticate with your Cloudflare account
wrangler login

Wrangler opens a browser window for OAuth. Authorize it, come back, you're set.
Step 1: Cloudflare Pages (Frontend)
Cloudflare Pages deploys your static site or frontend framework (React, Astro, Next.js, SvelteKit, etc.) from a Git repository with zero config. It's faster than Vercel for most static assets because it's serving from Cloudflare's actual CDN edge nodes, not regional servers.
Connecting your repository
In the Cloudflare dashboard → Workers & Pages → Create application → Pages → connect your GitHub or GitLab repo.
Every push to main deploys automatically. Every pull request gets its own preview URL. This is standard behavior that costs extra on Vercel's paid plans.
Build configuration
For an Astro project (like this site is built with):
# Framework preset: Astro
Build command: npm run build
Build output: dist/
Root directory: /
Alternatively, configure via wrangler.toml at your project root for version-controlled deployments:
name = "my-app"
compatibility_date = "2026-01-01"
pages_build_output_dir = "./dist"

Free static site hosting with unlimited requests, automatic HTTPS, and global CDN. No credit card required.
Step 2: Workers (Serverless API)
Cloudflare Workers is where the real unfair advantage lives. It's a V8 isolate runtime — not a container or VM — which means cold starts take single-digit milliseconds, not the 500ms–2s you can see with Lambda.
The free tier gives you 100,000 requests per day. For a side project or early-stage SaaS, you will not hit this.
Your first Worker
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
// Simple router
if (url.pathname === '/api/health') {
return Response.json({ status: 'ok', timestamp: Date.now() });
}
if (url.pathname === '/api/users' && request.method === 'GET') {
const users = await env.DB.prepare('SELECT * FROM users LIMIT 20').all();
return Response.json(users.results);
}
return new Response('Not found', { status: 404 });
},
} satisfies ExportedHandler<Env>;
The Env type is generated by Wrangler from your wrangler.toml bindings (run wrangler types). You get full TypeScript support for all bound services.
Routing API requests from Pages
Here's the clever bit: you can bind a Worker to your Pages project as a Functions handler. Create a /functions directory in your project root:
my-app/
├── dist/ # Static frontend output
├── functions/
│ └── api/
│ ├── [[route]].ts # Catch-all API route
│ └── health.ts # /api/health endpoint
└── wrangler.toml

// functions/api/health.ts
export const onRequest: PagesFunction = async (context) => {
return Response.json({
status: 'ok',
region: context.request.cf?.colo ?? 'unknown',
});
};
Now /api/* routes go to your Worker functions, and everything else serves static files. One deployment, one domain, zero configuration.
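The catch-all [[route]].ts from the tree above can be sketched like this. This is a hedged sketch, not the guide's exact code: the binding shape is narrowed to an assumed DB binding, and the /api/users route is illustrative.

```typescript
// functions/api/[[route]].ts — illustrative sketch.
// RouteEnv is deliberately narrowed to just what this handler touches;
// in a real project, Wrangler's generated Env type replaces it.
interface RouteEnv {
  DB: {
    prepare(sql: string): { all(): Promise<{ results: unknown[] }> };
  };
}

type Ctx = { request: Request; env: RouteEnv };

export const onRequest = async ({ request, env }: Ctx): Promise<Response> => {
  const { pathname } = new URL(request.url);

  // The catch-all handles anything under /api/ without a dedicated file
  if (pathname === '/api/users' && request.method === 'GET') {
    const { results } = await env.DB.prepare('SELECT * FROM users LIMIT 20').all();
    return Response.json(results);
  }

  return new Response('Not found', { status: 404 });
};
```

Dedicated files like health.ts win over the catch-all when both match, so the catch-all only has to cover the leftovers.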
Step 3: D1 Database (SQLite at the Edge)
D1 is Cloudflare's serverless SQL database built on SQLite. It can replicate reads to locations near your users (read replication is an opt-in feature, not automatic). The free tier: 5 million row reads per day, 100,000 row writes per day.
For context: a typical SaaS with 500 daily active users might read 50,000 rows a day. You have 100x that headroom.
Creating a database
# Create the database
wrangler d1 create my-app-db
# Output:
# [[d1_databases]]
# binding = "DB"
# database_name = "my-app-db"
# database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

Copy the output into your wrangler.toml:
name = "my-app"
compatibility_date = "2026-01-01"
[[d1_databases]]
binding = "DB"
database_name = "my-app-db"
database_id = "your-database-id-here"

Running migrations
Create a migrations directory and write SQL:
-- migrations/0001_initial.sql
CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
email TEXT UNIQUE NOT NULL,
created_at INTEGER DEFAULT (unixepoch()) NOT NULL
);
CREATE TABLE IF NOT EXISTS posts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL REFERENCES users(id),
title TEXT NOT NULL,
body TEXT NOT NULL,
published INTEGER DEFAULT 0,
created_at INTEGER DEFAULT (unixepoch()) NOT NULL
);
CREATE INDEX idx_posts_user_id ON posts(user_id);
CREATE INDEX idx_posts_published ON posts(published, created_at DESC);

# Apply to local dev database
wrangler d1 migrations apply my-app-db --local
# Apply to production
wrangler d1 migrations apply my-app-db --remote

Querying from a Worker
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const { pathname } = new URL(request.url);
if (pathname === '/api/posts') {
// Parameterized queries — never concatenate user input
const { results } = await env.DB
.prepare('SELECT id, title, created_at FROM posts WHERE published = 1 ORDER BY created_at DESC LIMIT ?')
.bind(20)
.all();
return Response.json(results, {
headers: {
'Cache-Control': 'public, max-age=60', // CDN caches for 60s
},
});
}
if (pathname.startsWith('/api/posts/') && request.method === 'GET') {
const id = pathname.split('/').pop();
const post = await env.DB
.prepare('SELECT * FROM posts WHERE id = ? AND published = 1')
.bind(id)
.first();
if (!post) return new Response('Not found', { status: 404 });
return Response.json(post);
}
return new Response('Not found', { status: 404 });
},
} satisfies ExportedHandler<Env>;
Note the Cache-Control header on the list endpoint. Browsers and downstream caches can reuse that response, and inside the Worker you can pair it with the Cache API (caches.default) so repeat requests are answered from Cloudflare's edge cache instead of re-querying D1. That's how you stay under free tier limits even at scale.
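That read-through pattern can be sketched as a small helper. This is hedged: the cache is injected behind a minimal interface so the logic is plain TypeScript, where a real Worker would pass caches.default.

```typescript
// Read-through cache helper: compute() runs only on a cache miss.
// In a Worker you would pass caches.default as `cache`.
interface EdgeCache {
  match(request: Request): Promise<Response | undefined>;
  put(request: Request, response: Response): Promise<void>;
}

async function cachedFetch(
  cache: EdgeCache,
  request: Request,
  compute: () => Promise<Response>,
): Promise<Response> {
  const hit = await cache.match(request);
  if (hit) return hit;

  const fresh = await compute();
  if (fresh.ok) {
    // Cache a clone so the original body can still be returned to the caller.
    // The response's Cache-Control header tells the cache how long to keep it.
    await cache.put(request, fresh.clone());
  }
  return fresh;
}
```

With this in place, the D1 list query from above only runs when the cached copy has expired.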
Step 4: R2 Storage (File Uploads)
R2 is S3-compatible object storage. The killer feature: zero egress fees. AWS charges you every time someone downloads a file from S3. Cloudflare R2 does not. For anything serving files to end users, this alone can save hundreds of dollars a month.
Free tier: 10GB storage, 1M Class A operations (writes), 10M Class B operations (reads) per month.
# Create a bucket
wrangler r2 bucket create my-app-uploads

# wrangler.toml
[[r2_buckets]]
binding = "UPLOADS"
bucket_name = "my-app-uploads"

// Handle file upload in a Worker
if (pathname === '/api/upload' && request.method === 'POST') {
const formData = await request.formData();
const file = formData.get('file') as File;
if (!file || file.size > 10 * 1024 * 1024) {
return Response.json({ error: 'Invalid file (max 10MB)' }, { status: 400 });
}
const key = 'uploads/' + crypto.randomUUID() + '/' + file.name;
await env.UPLOADS.put(key, file.stream(), {
httpMetadata: { contentType: file.type },
});
return Response.json({
url: 'https://your-bucket.your-subdomain.r2.dev/' + key,
key,
});
}

S3-compatible object storage with no egress fees. 10GB free, then $0.015/GB/month — vs S3's $0.023/GB plus costly egress.
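The upload handler's counterpart is serving files back out. A hedged sketch, with the route shape (/files/...) as an assumption and the bucket narrowed to a minimal interface that R2's real binding satisfies:

```typescript
// Serve a stored object back from R2. Because R2 has no egress fees, this
// download path costs only Class B operations (10M/month free).
interface R2ObjectLike {
  body: ReadableStream;
  httpMetadata?: { contentType?: string };
}
interface UploadsBucket {
  get(key: string): Promise<R2ObjectLike | null>;
}

async function serveUpload(bucket: UploadsBucket, pathname: string): Promise<Response> {
  // Map /files/uploads/<uuid>/<name> to the R2 key uploads/<uuid>/<name>
  const key = pathname.replace(/^\/files\//, '');
  const object = await bucket.get(key);
  if (!object) return new Response('Not found', { status: 404 });

  return new Response(object.body, {
    headers: {
      'Content-Type': object.httpMetadata?.contentType ?? 'application/octet-stream',
      // Keys contain a random UUID, so content at a key never changes
      'Cache-Control': 'public, max-age=31536000, immutable',
    },
  });
}
```

The long immutable Cache-Control means repeat downloads are served straight from Cloudflare's CDN cache, not from your bucket.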
Step 5: KV (Sessions & Caching)
Workers KV is a globally distributed key-value store. It's eventually consistent (writes can take up to 60 seconds to propagate globally), which makes it a great fit for read-heavy data: session storage, feature flags, and cached API responses. Avoid it for anything that needs immediate read-after-write consistency.
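The feature-flag use case, for instance, is a few lines. A hedged sketch: FLAGS is an assumed KV namespace binding, not one from this guide's config, and the key layout is illustrative.

```typescript
// Read a feature flag from KV; a missing key means the flag is off.
// KV's eventual consistency is fine here: a flag flip taking up to a
// minute to reach every edge location is rarely a problem.
interface FlagsKV {
  get(key: string): Promise<string | null>;
}

async function isFeatureEnabled(flags: FlagsKV, name: string): Promise<boolean> {
  return (await flags.get(`flag:${name}`)) === 'true';
}
```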
# wrangler.toml
[[kv_namespaces]]
binding = "SESSIONS"
id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

// Simple session helpers
async function getSession(env: Env, request: Request) {
const sessionId = getCookie(request, 'session_id');
if (!sessionId) return null;
return env.SESSIONS.get<{ userId: number; email: string }>(sessionId, 'json');
}
async function createSession(env: Env, userId: number, email: string) {
const sessionId = crypto.randomUUID();
await env.SESSIONS.put(
sessionId,
JSON.stringify({ userId, email }),
{ expirationTtl: 60 * 60 * 24 * 7 } // 7 days
);
return sessionId;
}
function getCookie(request: Request, name: string): string | null {
const header = request.headers.get('Cookie') ?? '';
// Note the double backslash: '\\s' in a string literal becomes \s in the regex
const match = header.match(new RegExp('(^|;\\s*)' + name + '=([^;]+)'));
return match ? decodeURIComponent(match[2]) : null;
}

Environment Variables & Secrets
Never commit secrets. Wrangler handles this cleanly:
# Set a secret (prompts for value — never stored in wrangler.toml)
wrangler secret put JWT_SECRET
wrangler secret put STRIPE_SECRET_KEY
# List secrets (shows names only, never values)
wrangler secret list
# Non-sensitive vars go in wrangler.toml under [vars]
# [vars]
# ENVIRONMENT = "production"
# API_BASE_URL = "https://api.example.com"

Secrets are encrypted at rest and only available inside the Worker runtime. Their values never appear in logs or the dashboard.
// Type your environment bindings for full autocomplete
interface Env {
// Services
DB: D1Database;
UPLOADS: R2Bucket;
SESSIONS: KVNamespace;
// Secrets
JWT_SECRET: string;
STRIPE_SECRET_KEY: string;
// Vars
ENVIRONMENT: string;
}

Free Tier Limits (What You Actually Get)
Let's be real about the numbers. Here's what the free tier means in actual user traffic:
| Limit | Equals approximately |
|---|---|
| 100K Worker requests/day | ~1,000 DAU doing 100 actions each |
| 5M D1 row reads/day | ~5,000 DAU reading 1,000 rows each |
| 100K D1 writes/day | ~10,000 users creating 10 records each |
| 10GB R2 storage | ~10,000 average-sized profile photos |
| 100K KV reads/day | Essentially unlimited for session checks |
When you exceed these, Cloudflare's paid tiers are extremely cheap — Workers starts at $5/month for 10M requests. By the time you need to pay, you have paying users.
The Bottom Line
This stack isn't a hack or a compromise. It's what you'd architect if you had a senior platform engineer designing your infrastructure with cost optimization as a first-class concern.
The companies that know this are already running production workloads on it. The developers who don't know it are paying $200/month for side projects.
That's the unfair advantage. You now have it.
Tools used in this guide
- Cloudflare Pages + Workers + D1 + R2 + KV: the entire stack described in this guide. The free tier is genuinely free, no credit card required.
- Astro: the static site framework this guide's site is built with. It ships zero JavaScript by default, which makes it a natural fit for Cloudflare Pages.