Serverless Architecture: Four Patterns and the One That Wins
Four serverless patterns we've shipped into production, the tradeoffs each one buys, and the one we now reach for first when the requirements line up.

Serverless gets marketed as "no servers, no ops, infinite scale." In production, it is a set of constraints you trade against a set of constraints you already have. After 23 years of running infrastructure for customers across K-12, healthcare, and mid-market, we've settled on four patterns that earn their keep. One of them now gets picked first whenever the requirements fit. Here is the honest version.
Pattern 1: The Event-Driven Glue Layer
This is the pattern everyone starts with and most teams stop at. A file lands in object storage, an event fires, a function runs, something happens downstream. Think image thumbnails, PDF extraction, a webhook that writes a row to a database, a Slack notifier that watches a queue.
The reason it works is that there is no request-response latency budget to defend. If cold start costs you 800 milliseconds on the first invocation after an idle period, nobody notices because the downstream consumer is a database or another queue, not a human staring at a browser tab. The function does one thing, you can reason about it in isolation, and you pay almost nothing when the event stream is quiet.
Where this pattern breaks down is state. The moment your glue function needs to remember what it saw last time, or coordinate with another function, you are writing a distributed state machine without realizing it. At that point you either reach for a workflow service (Step Functions, Durable Functions, Temporal) or you regret your choices. We've seen both.
Use this pattern when: the work is idempotent, the trigger is genuinely event-shaped, and state lives somewhere else.
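The glue shape above can be sketched in a few lines. This is a hypothetical handler for an S3-style object-created event; the field names follow the AWS S3 event payload, and the downstream write is stubbed out. Note how the idempotency rule from the checklist shows up in code: the derived record's key comes from the event itself, so a redelivered event overwrites rather than duplicates.

```python
import json

def handle_object_created(event):
    """Minimal event-glue handler: S3-style event in, derived records out.

    Hypothetical sketch -- the bucket/key fields follow the AWS S3 event
    shape; the downstream write (database, queue) is left as a stub.
    """
    records = []
    for rec in event.get("Records", []):
        bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]
        # Idempotency: derive the row's primary key from the event itself,
        # so redelivery of the same event overwrites rather than duplicates.
        records.append({
            "id": f"{bucket}/{key}",
            "bucket": bucket,
            "key": key,
            "action": "thumbnail-requested",
        })
    return records

# Example invocation with a trimmed S3 event payload
event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                             "object": {"key": "scan-001.pdf"}}}]}
print(json.dumps(handle_object_created(event)))
```

The whole function fits on one screen, which is the point: you can reason about it in isolation, and state lives in the record key, not in the function.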
Pattern 2: The Synchronous API Backend
This is the pattern where serverless stops being cheap and starts being a different kind of operational problem. You put API Gateway in front of a function, the function talks to a database, and you call it a REST API. It works. It scales on paper. And then the bill arrives.
The cost profile of a serverless API is the inverse of a VM-backed API. VMs are expensive when idle and cheap per request once loaded. Functions are close to free when idle, but their fixed per-request cost adds up under sustained load. The crossover point, in our experience, sits somewhere around 1 to 2 million invocations per month for a typical CRUD endpoint. Below that, serverless wins on total cost. Above it, a pair of small VMs behind a load balancer beats Lambda or Cloud Run by a factor of 3 to 5.
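The crossover is easy to check on the back of an envelope. The prices below are illustrative assumptions, not quotes: a blended per-million-invocation cost for a light CRUD handler (request fee plus GB-seconds), against a flat monthly cost for a pair of small VMs and their share of a load balancer. Plug in your own numbers; the shape of the math is what matters.

```python
# Illustrative prices only -- substitute your own provider's rates.
COST_PER_MILLION_INVOCATIONS = 25.0  # assumed: request fee + GB-seconds, light CRUD handler
VM_PAIR_MONTHLY = 40.0               # assumed: two small VMs + load balancer share

def serverless_monthly(invocations: int) -> float:
    """Monthly function bill: scales linearly with invocations."""
    return invocations / 1_000_000 * COST_PER_MILLION_INVOCATIONS

def crossover() -> int:
    """Invocations/month at which the function bill matches the flat VM bill."""
    return int(VM_PAIR_MONTHLY / COST_PER_MILLION_INVOCATIONS * 1_000_000)

print(crossover())  # 1,600,000/month under these assumed prices
```

Under these assumptions the lines cross at 1.6 million invocations a month, which is why the 1-to-2-million rule of thumb keeps showing up.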
The second problem is cold starts. A Python or Node function on Lambda can cold start in 200 to 500 milliseconds. A JVM or .NET function can take 2 to 6 seconds. Provisioned concurrency papers over this, but provisioned concurrency is just a VM with extra steps and a worse price.
Use this pattern when: traffic is genuinely spiky, you are early enough in the product that you do not know the load profile yet, or you want to stop thinking about capacity planning for a workload that will never get big.
Pattern 3: The Scheduled Batch Worker
This is the pattern that quietly saves people the most money. You take a cron job that used to run on a VM and you replace it with a scheduled function. Nightly report generator, weekly data export, monthly invoice batch — anything that ran on a server that was 95 percent idle.
The math here is unambiguous. A VM running 24 hours a day to execute a 12-minute job at 2 a.m. is waste. Moving that job to a scheduled function cuts the compute bill by a factor of 100 or more and eliminates an entire server you have to patch, monitor, and wake up for at 3 a.m. when it fails.
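The factor-of-100 claim is worth checking rather than trusting. Assuming the same per-minute compute rate for both options, the comparison is just billed minutes per day: 24 hours for the always-on VM versus 12 minutes for the scheduled job.

```python
# Billed minutes per day, assuming the same per-minute compute rate
# for the VM and the scheduled function.
MINUTES_PER_DAY = 24 * 60
JOB_MINUTES = 12

always_on = MINUTES_PER_DAY   # idle VM: billed around the clock
pay_per_run = JOB_MINUTES     # scheduled job: billed only while running

print(always_on / pay_per_run)  # 120.0 -- "a factor of 100 or more"
```

That ratio ignores the patching, monitoring, and 3 a.m. pages, which all go to zero as well.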
The only trap is the execution time limit. Lambda caps out at 15 minutes. Cloud Run Jobs and Azure Container Apps Jobs give you hours, which is usually enough, but you need to pick the right service. If your batch reliably runs longer than an hour, just put it on a VM or a container job with a real runtime budget.
Use this pattern when: the work is periodic, bounded, and currently sitting on an otherwise idle server.
Pattern 4: The Fan-Out Data Pipeline
This is the pattern we now reach for first when the requirements line up, and it is the one that most teams underuse. The shape is simple: a queue or stream on one end, a pool of functions in the middle, and a sink on the other end. You get horizontal scaling for free, backpressure handling for free, retries for free, and a cost model that is almost perfectly aligned with the actual work.
We've used this pattern for document ingestion pipelines, log enrichment, CRM sync jobs that process tens of thousands of records in a window, and inbound email processing for marketing automation. The common thread is that the work is embarrassingly parallel, each unit of work is independent, and the total throughput matters more than any individual request latency.
The reason this pattern wins is that it plays to every serverless strength and sidesteps every serverless weakness. Cold starts do not matter because the queue absorbs them. Cost scales linearly with work done, not with capacity provisioned. Failures are isolated to a single message and retried automatically. And the operational surface area is tiny — there is no cluster to patch, no autoscaler to tune, no idle capacity to apologize for on the monthly review.
Use this pattern when: the work is batch-shaped but high volume, latency tolerances are in seconds or minutes rather than milliseconds, and each unit of work is independent.
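The fan-out worker can be sketched in the SQS partial-batch-response style: the queue service invokes the function with a batch of messages, and the handler reports back only the IDs that failed, so successful messages are not reprocessed on retry. The `process_message` body here is a hypothetical stub; in practice it is your document parser, log enricher, or CRM sync step.

```python
def process_message(body: str) -> None:
    """Unit of work: must be independent and safe to retry (hypothetical stub)."""
    if "poison" in body:
        raise ValueError("unprocessable message")

def handler(event, context=None):
    """Fan-out worker: process a batch, report only the failures back.

    Follows the SQS partial-batch-response shape: returning the failed
    messageIds lets the queue redrive just those, isolating each failure
    to a single message.
    """
    failures = []
    for record in event.get("Records", []):
        try:
            process_message(record["body"])
        except Exception:
            # Failure is isolated to this one message; the queue retries it
            # and eventually parks it on a dead-letter queue.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}

# Example batch: one good message, one poison message
batch = {"Records": [
    {"messageId": "m1", "body": "doc:42"},
    {"messageId": "m2", "body": "poison:oops"},
]}
print(handler(batch))  # {'batchItemFailures': [{'itemIdentifier': 'm2'}]}
```

Everything the pattern promises lives outside this code: the queue handles backpressure, the platform handles concurrency, and the handler only has to be correct for one message at a time.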
The One That Wins
If we had to pick one serverless pattern to defend in front of a skeptical CFO, it is the fan-out data pipeline. It is the pattern with the cleanest cost story, the smallest operational footprint, and the fewest traps. Event glue is a close second, but it tends to sprawl. Synchronous APIs should be approached with caution and a spreadsheet. Scheduled batch is a no-brainer wherever you have an idle VM running a cron job.
The thing to remember about serverless is that it is not a deployment model, it is a pricing model with unusual constraints. When the constraints fit the workload, the pricing is excellent. When they don't, you are paying a premium for an operational story you didn't need. The discipline is telling the difference before you ship, not after.
Three Takeaways
- Serverless is cheap when the work is bursty and expensive when the work is steady. The crossover is real and usually lower than people expect.
- Fan-out pipelines are the pattern most teams underuse. They are the clearest win and the least fashionable to talk about.
- If your function needs to remember something, you are building a distributed state machine. Use a workflow service or go back to a VM.