What Are Public Cloud Services? Three Proven Patterns
Public cloud is not one thing. It is a catalog of hundreds of services, and only a few patterns actually make sense for most businesses. Here are the three that consistently pay for themselves.

Public cloud services are the compute, storage, networking, database, and higher-level platform offerings you can rent by the minute from AWS, Azure, Google Cloud, and a smaller second tier of providers. That is the textbook definition and it is nearly useless, because "public cloud services" as a catalog runs to several hundred line items and most of them are irrelevant to most companies most of the time.
What is actually useful is knowing which patterns of public cloud consumption produce good outcomes and which ones produce cloud bills that keep the CFO awake. After 23 years of building and running infrastructure, here are the three public cloud patterns we consistently recommend and will stand behind. Everything else we treat with suspicion until a specific customer workload makes the case.
Pattern 1: Object storage as the universal backup and archive target
This is the single highest-value public cloud service for the average mid-market business, and it gets underused because it is boring.
S3, Azure Blob Storage, and Google Cloud Storage are all excellent object storage systems. They are durable — 11 nines of durability (99.999999999% annually), which means you will lose a file roughly once per civilization. They are cheap — in the cents-per-gigabyte-per-month range for standard storage, and literally tenths of a cent for archive tiers. They support immutability, which means ransomware cannot encrypt or delete the backups even if your primary domain is compromised. They replicate across regions automatically if you ask them to. And every reasonable backup product on the market writes to them natively.
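To put 11 nines in perspective, here is a back-of-the-envelope sketch. It assumes the commonly quoted 99.999999999% annual durability figure, i.e. a one-in-10^11 chance of losing any given object in a given year — an illustration of the published number, not a guarantee from any provider:

```python
# Back-of-the-envelope: expected object loss at 11 nines of durability.
# Assumes a 1e-11 annual probability of losing any single object.

annual_loss_probability = 1e-11
objects_stored = 10_000_000  # ten million backup objects

expected_losses_per_year = objects_stored * annual_loss_probability
years_per_expected_loss = 1 / expected_losses_per_year

print(f"Expected losses per year: {expected_losses_per_year}")
print(f"Years until one expected loss: {years_per_expected_loss:,.0f}")
```

Even with ten million objects stored, you expect to wait on the order of ten thousand years before losing one. That is why "once per civilization" is barely an exaggeration.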
We recommend that every customer, regardless of where their primary workloads live, use public cloud object storage as a backup and DR target. On-prem primary, backup to cloud. Private cloud primary, backup to cloud. Public cloud primary in region A, backup to region B. The economics are unbeatable at this specific layer because object storage has no compute overhead and the pricing model is pure pay-per-byte.
The honest caveat: watch egress fees. Hyperscaler pricing is designed so that storage is cheap and pulling the data back out costs money. For backup and DR this is usually fine because you rarely restore at full scale. If you find yourself routinely pulling terabytes back across the internet, the math gets uglier and you should look at providers like Backblaze or Wasabi, which charge zero or near-zero egress and have become legitimate options for this specific workload.
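A rough sketch of why egress is the trap. The prices below are ballpark assumptions in the neighborhood of published hyperscaler list prices, not quotes from any specific provider:

```python
# Illustrative only: storage is cheap, pulling data back out is not.
# Both prices are ballpark assumptions, not any provider's actual quote.

storage_price_per_gb_month = 0.023   # standard object storage, list-price ballpark
egress_price_per_gb = 0.09           # internet egress, list-price ballpark

backup_size_gb = 10_000              # a 10 TB backup set

monthly_storage_cost = backup_size_gb * storage_price_per_gb_month
full_restore_egress_cost = backup_size_gb * egress_price_per_gb

print(f"Storing 10 TB:          ${monthly_storage_cost:,.0f}/month")
print(f"One full 10 TB restore: ${full_restore_egress_cost:,.0f} in egress")
```

At these assumed rates, a single full restore costs roughly four months of storage. Fine as an insurance payout after a disaster; painful if it happens every quarter.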
Pattern 2: Global-edge workloads on managed serverless or CDN
The second public cloud pattern that earns its keep is anything user-facing that needs to be fast everywhere in the world. Marketing sites, web applications, APIs serving mobile apps, content distribution, streaming — workloads where the actual competitive feature is low latency to a geographically distributed user base.
For this pattern, the hyperscalers and the CDN specialists (Cloudflare and Fastly in particular) provide something you cannot replicate yourself at any reasonable cost: hundreds of points of presence, anycast routing, managed TLS termination, DDoS protection, and edge compute that runs your code within 50 milliseconds of nearly every human on the planet. Even a very large enterprise cannot build that. Renting it is the right call.
Where we see customers get this wrong is by buying the expensive version of the pattern when the cheap version would do. You do not need a multi-region Kubernetes deployment with active-active databases if you are a SaaS company with 2,000 customers in three cities. You need a CDN in front of a single-region web application. The cost difference is an order of magnitude, and the experience for the end user is identical.
The other honest caveat: serverless functions are a great idea that turns into a big bill faster than people expect. A Lambda or Azure Function that handles a webhook is nearly free. A fleet of serverless functions that process every event in your business costs more than a small VM doing the same work, once you factor in the per-invocation fees and the cold-start workarounds. Serverless is a pattern for low-volume, bursty, or integration workloads. It is not a replacement for a long-running service.
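The crossover is easy to sketch with per-invocation pricing. The numbers below are assumptions modeled on public serverless list prices (a per-request fee plus a GB-second compute fee) and a notional $30/month VM — illustrative only:

```python
# Illustrative only: when per-invocation pricing overtakes a small VM.
# All prices are ballpark assumptions, not any provider's actual rates.

request_price = 0.20 / 1_000_000    # dollars per invocation
gb_second_price = 0.0000166667      # dollars per GB-second of compute
memory_gb = 0.125                   # a 128 MB function
duration_s = 0.1                    # 100 ms average invocation

def monthly_serverless_cost(invocations_per_month: float) -> float:
    compute = invocations_per_month * duration_s * memory_gb * gb_second_price
    requests = invocations_per_month * request_price
    return compute + requests

small_vm_cost = 30.0  # a modest always-on VM, ballpark

print(f"1M invocations/month:   ${monthly_serverless_cost(1e6):.2f}")
print(f"100M invocations/month: ${monthly_serverless_cost(1e8):.2f}")
print(f"Small VM, any volume:   ${small_vm_cost:.2f}")
```

Under these assumptions, a million webhook invocations a month costs well under a dollar, while a hundred million event invocations already costs more than the always-on VM — which is exactly the "low-volume and bursty, yes; firehose, no" boundary.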
Pattern 3: Bursty, experimental, or short-lived compute
The third pattern where public cloud consistently pays off is any workload where you genuinely do not know the size, the shape, or the lifespan. Proof-of-concepts. Data science experiments. Marketing campaign landing pages. Seasonal batch jobs. Training environments for developers. Staging environments that you want to spin up and tear down on demand. Anything where "is this still running next month?" is an open question.
For these workloads, the ability to allocate capacity in minutes, pay for what you use, and walk away when the project is done is genuinely transformative. Buying hardware to run a six-week experiment is absurd. Running it on AWS for three hundred dollars and then turning it off is exactly the right answer.
The honest caveat that trips people up: short-lived workloads have a way of becoming long-lived. The proof-of-concept ships, becomes a product, grows into a revenue line, and suddenly you have been paying by the hour for compute that should have been amortized hardware two years ago. The discipline is to revisit every cloud workload every 12 to 18 months and ask whether the original rationale still applies. If it does not, repatriate or rearchitect. We have helped customers cut cloud bills in half just by running that review honestly once a year.
What we do not recommend as default public cloud
Steady-state production workloads. Heavy-duty database servers. VDI at scale, which is our specialty and where we have very strong opinions. Internal line-of-business applications with predictable load. File servers. Print servers. Domain controllers. None of these workloads benefit from the elasticity you are paying the cloud premium to get. They belong on private cloud or on-prem infrastructure where the per-unit cost is a fraction of what the hyperscalers charge and where you own the performance.
We also do not recommend treating the hyperscaler managed service catalog as a menu to order from. Every managed service you adopt is a lock-in decision. Some are worth it — a globally distributed NoSQL database you cannot realistically build yourself is worth paying for. Most are not. Managed PostgreSQL, managed Redis, managed file storage, and managed queues are all easy to run on commodity infrastructure for a lot less money, and doing so keeps your application portable.
The point of the three patterns
Public cloud services work best when you use them for the things they are actually good at: durable cheap storage at planet scale, global edge delivery, and elastic ephemeral compute. These three patterns consistently produce good outcomes and defensible ROI across every customer we have worked with.
Everything beyond those three patterns is a case-by-case decision that deserves a spreadsheet and a second opinion. The hyperscaler sales motion is optimized to make you feel behind the curve if you do not move everything to cloud. The engineering reality is that the companies running the leanest and most reliable infrastructure are the ones that picked their public cloud spots carefully and kept the rest of the workload somewhere they control. Boring, unfashionable, and repeatedly correct.
Talk with us about your infrastructure
Schedule a consultation with a solutions architect.
Schedule a Consultation