Serverless
Serverless computing has evolved from a specialized cloud service to a fundamental architectural pattern. The approach abstracts away infrastructure management, letting developers focus purely on application logic while cloud providers handle provisioning and scaling.
What Defines Serverless
The core model is Function-as-a-Service (FaaS). Application logic is broken into individual functions that execute in response to events. Functions are stateless, ephemeral, and automatically scaled by the cloud provider.
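The model above can be made concrete with a minimal sketch. This is a Lambda-style Python handler (the `event`/`context` signature is AWS's convention; the event fields are hypothetical): one invocation per event, a response returned, and no local state that survives between calls.

```python
import json

def handler(event, context):
    """Lambda-style entry point: invoked once per event.

    The function is stateless -- anything computed here is gone when
    the invocation ends, so all inputs arrive in `event` and all
    durable effects must go to external services.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Locally you can call `handler({"name": "dev"}, None)`; in production the provider constructs the event from the trigger (HTTP request, queue message, file upload) and supplies the context object.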
The Major Implementations
Edge serverless
A category distinct from cloud-native FaaS, running functions closer to users than centralized cloud regions:
- Cloudflare Workers — V8 isolates instead of containers, sub-millisecond cold starts, runs at 300+ edge locations.
- AWS Lambda@Edge — Lambda functions executed at CloudFront edge locations.
- Vercel Functions / Netlify Functions — frontend-optimized, integrate with the deployment pipeline of Next.js / JAMstack sites.
- Deno Deploy — modern JS/TS runtime with global edge distribution.
Why Teams Adopt It
The draws are operational: pay-per-execution pricing, automatic scaling, and no servers to provision, patch, or capacity-plan.
Where It Fits
Serverless suits variable, spiky, event-driven, and background workloads — cases where paying per execution beats keeping idle capacity running.
The Real Costs
Cold start latency
The most-cited problem. When a function hasn’t been invoked recently, the provider has to initialize a new runtime instance before handling the request — adding anywhere from tens of milliseconds to several seconds of latency, depending on the runtime and the size of the deployment artifact.
Mitigations:
- Provisioned concurrency — pay to keep a pool of instances warm.
- Language choice — lighter runtimes initialize faster.
- Smaller deployment artifacts — less code to load on startup.
- Edge-runtime alternatives — Cloudflare Workers cold-start in ~5 ms.
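A related mitigation is structuring the function itself to pay initialization costs only once. Most FaaS runtimes reuse a warm instance across invocations, so anything created at module scope (SDK clients, connection pools, loaded config) survives between calls. A sketch, with a `time.sleep` standing in for the expensive setup:

```python
import time

# Module scope runs once per runtime instance -- during the cold
# start -- not on every invocation. Put expensive setup here.
_START = time.monotonic()
time.sleep(0.05)                 # stand-in for loading config / opening connections
_CLIENT = {"connected": True}    # stand-in for e.g. a database client

def handler(event, context):
    # Warm invocations reuse _CLIENT; only the first call on a fresh
    # instance pays the initialization delay above.
    return {
        "client_reused": _CLIENT["connected"],
        "runtime_age_s": round(time.monotonic() - _START, 3),
    }
```

The flip side is that module-level state is a cache, not a guarantee: the provider may recycle instances at any time, so nothing durable should live there.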
Vendor lock-in
Each provider implements serverless with unique APIs, deployment mechanisms, and feature sets. Migrating between providers can require significant rework.
Mitigations:
- Serverless Framework (serverless.com) — unified deployment interface across providers.
- Kubernetes-native serverless (Knative) — provider-portable.
- CloudEvents — industry standard for event format.
- Clean separation between business logic and cloud-specific integrations.
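The last mitigation is the cheapest to adopt. The idea is to keep business logic in plain functions with no provider imports, and confine provider-specific shapes to a thin adapter — only the adapter changes in a migration. A minimal sketch (the discount rule and function names are hypothetical; the adapter mimics a Lambda-shaped event):

```python
import json

# Portable core: pure business logic, no cloud-provider imports.
def calculate_discount(order_total: float, loyalty_years: int) -> float:
    """Hypothetical rule: 5% per loyalty year, capped at 25%."""
    rate = min(0.05 * loyalty_years, 0.25)
    return round(order_total * rate, 2)

# Thin provider adapter: translates the provider's event shape into
# plain arguments and back. This is the only layer that would be
# rewritten for a different FaaS platform.
def lambda_handler(event, context):
    body = json.loads(event["body"])
    discount = calculate_discount(body["total"], body["loyalty_years"])
    return {"statusCode": 200, "body": json.dumps({"discount": discount})}
```

The core function is also trivially unit-testable without any cloud emulator, which is a second benefit of the split.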
Debugging and observability
Distributed serverless systems defeat traditional debugging: there is no long-lived process to attach to, and a single request may cross many short-lived functions. Distributed tracing, structured logs, and platform-specific tools (AWS X-Ray, GCP Cloud Trace) become essential.
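The structured-logging half of that is straightforward to sketch: emit one JSON object per log line and propagate a correlation ID through every hop, so a log backend can join the pieces of a request back together. The field names here are illustrative, not any platform's required schema:

```python
import json
import time
import uuid

def log(level: str, message: str, **fields):
    """Emit one JSON object per line (machine-parseable, joinable)."""
    record = {"ts": time.time(), "level": level, "msg": message, **fields}
    print(json.dumps(record))
    return record  # returned so the pattern is easy to test

def handler(event, context):
    # Reuse the caller's correlation ID if present, otherwise start one;
    # downstream functions receive it and log the same ID.
    corr_id = event.get("correlation_id") or str(uuid.uuid4())
    log("info", "invocation started", correlation_id=corr_id)
    result = {"ok": True, "correlation_id": corr_id}
    log("info", "invocation finished", correlation_id=corr_id)
    return result
```

With every function logging the same `correlation_id`, a query in the log backend reconstructs the whole request path without attaching a debugger to anything.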
Long-running and stateful workloads
Serverless functions have execution time limits (15 minutes on Lambda, 9 minutes on 1st-gen Cloud Functions). Stateful workloads need to externalize state to managed services (DynamoDB, Redis, Cloud Firestore). This forces some architectural patterns that wouldn’t otherwise be necessary.
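One such forced pattern is checkpointing: a long job is split across invocations, each processing a slice and saving a cursor externally so the next invocation can resume. In this sketch an in-memory dict stands in for a managed store like DynamoDB or Redis, and the batch "work" is a trivial placeholder:

```python
# Stand-in for an external store; real function instances are
# ephemeral, so this would live in DynamoDB / Redis / Firestore.
STORE = {}

def save_checkpoint(job_id: str, cursor: int) -> None:
    STORE[job_id] = {"cursor": cursor}

def load_checkpoint(job_id: str) -> int:
    return STORE.get(job_id, {}).get("cursor", 0)

def process_batch(job_id: str, items: list, batch_size: int = 2):
    """Process one slice per invocation and checkpoint progress.

    Each call stays well under the execution-time limit; a scheduler
    or queue re-invokes until the returned 'finished' flag is True.
    """
    start = load_checkpoint(job_id)
    batch = items[start:start + batch_size]
    done = [item.upper() for item in batch]   # stand-in for real work
    save_checkpoint(job_id, start + len(batch))
    return done, load_checkpoint(job_id) >= len(items)
```

Provider orchestrators (e.g. AWS Step Functions) package this loop as a managed service, but the underlying shape — externalized cursor, idempotent batches — is the same.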
When Serverless Is the Right Choice
In short: reach for serverless when workloads are variable, spiky, event-driven, or background; look elsewhere when cold-start latency, execution-time limits, or lock-in risk outweigh the operational savings.
Serverless and Multi-Tenant SaaS
For multi-tenant SaaS specifically, serverless has interesting properties:
- Per-tenant isolation — tenants can be isolated to separate function executions naturally.
- Tenant-specific scaling — a tenant’s spike in traffic auto-scales independently.
- Cost attribution — usage-based costs are easy to attribute per tenant.
- Easier multi-region — deploy the same functions to multiple regions for data-residency compliance.
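The cost-attribution point can be sketched with a per-tenant usage ledger. In production the numbers would come from provider billing tags or per-invocation metrics; here both the ledger and the per-GB-second rate are illustrative:

```python
from collections import defaultdict

# Hypothetical ledger keyed by tenant; in production this data would
# come from billing tags / per-invocation platform metrics.
USAGE = defaultdict(lambda: {"invocations": 0, "gb_seconds": 0.0})

def record_invocation(tenant_id: str, duration_s: float, memory_gb: float):
    """Charge one invocation's compute (duration x memory) to a tenant."""
    USAGE[tenant_id]["invocations"] += 1
    USAGE[tenant_id]["gb_seconds"] += duration_s * memory_gb

def attributed_cost(tenant_id: str, price_per_gb_s: float = 0.0000167):
    """Compute cost attributable to one tenant (illustrative rate)."""
    return USAGE[tenant_id]["gb_seconds"] * price_per_gb_s
```

Because every invocation already carries a tenant context in a multi-tenant SaaS, this attribution falls out almost for free — unlike shared servers, where apportioning idle capacity across tenants is guesswork.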
The workloads a SaaS typically offloads to serverless — webhook handlers, image/PDF processing, scheduled tenant cleanup, email/notification dispatch — are exactly the spiky, event-driven jobs where serverless is most economical.
Recap
- Serverless = Function-as-a-Service. Event-triggered, autoscaled, pay-per-execution, no infrastructure management.
- Major platforms split into cloud-native FaaS (Lambda, GCF, Azure Functions), Kubernetes-native (Knative, OpenFaaS), and edge (Cloudflare Workers, Lambda@Edge).
- Best fit: variable / spiky / event-driven / background workloads.
- Watch out for: cold starts, vendor lock-in, debugging complexity, execution-time limits.
- For multi-tenant SaaS, serverless naturally supports per-tenant scaling and cost attribution.