Why Hybrid Cloud Outperforms Pure Public for Most Mid-Market Orgs

Most 'innovative hybrid cloud' articles are a vendor's feature list in a trench coat. Here are three patterns we actually use with customers that deliver real value.

John Lane · 2026-05-21 · 6 min read

I want to be honest: I get a little tired of "innovative hybrid cloud" articles. Most of them describe whatever feature the author's vendor shipped this quarter as if it were a fundamental new pattern. The actual innovative uses of hybrid cloud are less glamorous, harder to sell on a slide, and much more valuable to the customer running them.

Here are three patterns we run for customers today that deliver results we couldn't have achieved on either side of the hybrid line alone. None of them require exotic software, all of them are in production somewhere right now, and none of them are new in the sense of being untested. They're new in the sense that most organizations still haven't adopted them.

Pattern One: Data Gravity Stays Home, Compute Travels to the Data

Most cloud failures we see are not technical. They're economic, and the economic gotcha is egress. You can rent compute in a hyperscaler for pennies an hour, but moving a terabyte of data out of the hyperscaler costs tens to hundreds of dollars every time. If your workload is "pull a terabyte of data, do some work, write a few megabytes back," you want the compute near the data, not the other way around.
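To make the economics concrete, here's the back-of-envelope math. The per-GB egress rate and the flat colo bandwidth figure are illustrative assumptions, not any provider's published pricing:

```python
# Back-of-envelope egress economics. Both rates are illustrative
# assumptions, not any specific provider's price list.
EGRESS_PER_GB = 0.09          # assumed hyperscaler egress rate, $/GB
COLO_BANDWIDTH_MONTHLY = 500  # assumed flat-rate colo bandwidth, $/month

def monthly_egress_cost(tb_per_run: float, runs_per_month: int) -> float:
    """Cost of repeatedly pulling a dataset *out* of the hyperscaler."""
    return tb_per_run * 1024 * EGRESS_PER_GB * runs_per_month

# A nightly job that pulls 1 TB out of the cloud before processing:
print(f"hyperscaler egress: ${monthly_egress_cost(1.0, 30):,.0f}/month")  # ~$2,765
print(f"colo flat bandwidth: ${COLO_BANDWIDTH_MONTHLY:,.0f}/month")
```

Run the job the other way around, with compute next to the data and only a few megabytes of results leaving, and the egress line effectively disappears.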

The innovative pattern here is inversion. Instead of moving data to the cloud for processing, keep the data on-prem or in a private cloud in a colocation facility with cheap bandwidth, and let the compute move to where the data is. Sometimes that compute comes from the hyperscaler — AWS Outposts, Azure Stack, Google Distributed Cloud — and sometimes it's a private cloud running the same orchestration platform (Kubernetes, OpenStack) that the developers would otherwise use in public cloud.

Why it works:

  • No egress fees for the terabytes of source data.
  • Lower latency and higher throughput between compute and data, which matters when you're scanning terabyte-scale datasets.
  • Data residency and sovereignty are handled naturally because the data never leaves.
  • The developers get a familiar API (Kubernetes, object storage, managed databases) without caring where it physically runs.

We use this pattern for customers with large analytics workloads, video and media archives, scientific datasets, and anything regulated. The result is the experience of cloud for the people consuming it and the economics of on-prem for the people paying for it.

The catch: you need to invest in operational maturity on the private-cloud side. The public-cloud provider will not fix things for you at 2 AM. If you don't have (or can't hire) the operations skill, a managed private cloud from a specialist provider is usually the right compromise.

Pattern Two: The Hyperscaler as a Disaster Recovery Target, Not a Primary

Disaster recovery was one of the original promises of cloud and one of the places cloud has genuinely delivered. But the common way people use it — replicating VM images into a cloud region and paying for warm-standby compute — is the expensive version. The innovative version is much cheaper and almost as good for most workloads.

The pattern: use cloud object storage (S3, Azure Blob, GCS) with immutability and geo-replication as your DR target. Your backup tooling on-prem or in your private cloud writes image-level and application-level backups directly to object storage. No warm-standby compute, no always-on replicas, no cross-region traffic unless you actually need to recover.
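Here's a minimal sketch of that write path, assuming S3 Object Lock via boto3. The bucket name is hypothetical, the bucket must be created with Object Lock enabled, and real backup tooling would use multipart uploads for image-sized files:

```python
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

def write_immutable_backup(local_path: str, key: str) -> None:
    """Upload a backup with a compliance-mode retention lock. Until the
    retention date passes, no credential -- not even the bucket owner's --
    can delete or overwrite this object version."""
    with open(local_path, "rb") as f:
        s3.put_object(
            Bucket="dr-backups-example",  # hypothetical; needs Object Lock enabled
            Key=key,
            Body=f,
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=35),
        )

write_immutable_backup("/backups/app-db-nightly.img", "app-db/2026-05-21.img")
```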

When you do need to recover, you spin up compute in the cloud on demand from the backups. The RPO depends on your backup frequency (usually under an hour for critical systems). The RTO is longer than a warm standby but much shorter than "find new hardware and restore from tape." And the bill is tiny compared to running warm infrastructure you hope to never use.
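When the day comes, the invocation can be a short driver script. Everything here is a placeholder: the Terraform module path, the target names, and restore-tool stands in for whatever your backup product's CLI actually is:

```python
# Hypothetical DR driver: stand up recovery compute, then rehydrate from
# the immutable backups. All paths and tool names are placeholders.
import subprocess

def invoke_dr(recovery_point: str) -> None:
    # 1. Provision the recovery environment in the cloud region.
    subprocess.run(
        ["terraform", "-chdir=dr/recovery-env", "apply", "-auto-approve"],
        check=True,  # abort if provisioning fails
    )
    # 2. Restore systems from object storage into the new environment.
    subprocess.run(
        ["restore-tool", "--from", f"s3://dr-backups-example/{recovery_point}",
         "--target", "recovery-env"],
        check=True,
    )

invoke_dr("app-db/2026-05-21.img")
```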

Why it works:

  • Object storage is the cheapest layer of the cloud and the most durable. Pay for what you store, pay almost nothing for idle.
  • Immutable storage is the best defense against ransomware destroying your backups. If an attacker compromises your production environment, they cannot delete or overwrite objects under a compliance-mode retention lock, no matter which credentials they steal.
  • Geo-replication is a line item in the storage service, not a separate system to manage.
  • Recovery is scripted and tested quarterly. The actual invocation is a Terraform apply plus a restore job.

This is how we recommend almost every mid-market customer build DR today. It's the best value-for-money pattern in hybrid cloud, and it's underused because it doesn't sound innovative. It is.

Pattern Three: Bursty Workloads in Cloud, Steady Workloads at Home

This is the oldest pattern in hybrid cloud and somehow still the most misapplied. Everybody agrees with it conceptually. Very few organizations have actually built the automation to make it work.

The pattern is simple to describe: workloads that spike — end-of-month reporting, marketing analytics, seasonal traffic, ML training jobs, CI/CD runners, pre-release load testing — live in public cloud and scale on demand. Workloads that run at steady state — production line-of-business applications, databases, file services, authentication, backup infrastructure — live in private cloud where the economics are predictable.

The innovative part is how it's orchestrated. The old version was "two separate environments, deploy manually, hope they stay in sync." The modern version is a single control plane — typically Kubernetes with a federation layer, or a GitOps pipeline with a resource-placement policy — that treats the two environments as one fabric and places workloads based on declarative rules.
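The essence of a placement policy is small enough to sketch. This toy version (labels, names, and targets are invented for illustration) shows the one property that matters: placement is a declared rule, and an unclassified workload is an error, not a guess:

```python
# Toy placement policy: every workload declares a profile, and the
# policy maps profiles to environments. Names are invented for
# illustration.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    profile: str  # "steady" or "bursty", set deliberately by the team

PLACEMENT = {
    "steady": "private-cloud",  # predictable cost, runs 24/7
    "bursty": "public-cloud",   # elastic scale, near-zero cost when idle
}

def place(w: Workload) -> str:
    if w.profile not in PLACEMENT:
        # Refusing to guess is the point: an unclassified workload is a
        # policy gap, not a scheduling decision.
        raise ValueError(f"{w.name} has no steady/bursty classification")
    return PLACEMENT[w.profile]

for w in (Workload("erp-db", "steady"), Workload("ml-training", "bursty")):
    print(f"{w.name} -> {place(w)}")
```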

Why it works:

  • The bursty workloads get true elastic scaling without sitting idle between bursts.
  • The steady workloads get predictable pricing and no surprise bills.
  • Developers write once and let the placement policy decide where things land.
  • Cost anomalies are much easier to spot because steady and bursty costs are separated by design.

The hard part is honestly the workload classification. Teams have to commit to categorizing each application as steady or bursty and to automating the resulting placement. A lot of "hybrid cloud" deployments fail this test — they end up with an arbitrary mix instead of a policy. If you do this pattern without the classification discipline, you get the cost of two clouds and the benefits of neither.

What These Three Patterns Have In Common

Look at all three and you'll notice the same theme: hybrid cloud done well is not a collection of tools. It's a set of placement policies. The technology is almost commoditized at this point — object storage, Kubernetes, backup tooling, Terraform, identity federation. What separates the customers who save money and improve resilience from the customers who don't is whether they made a deliberate choice about what lives where and why.

"We'll figure it out" is the failure mode. "We'll put data-heavy workloads local, disaster recovery in cold cloud storage, bursty workloads in elastic cloud" is a policy, and it will cost you a third to a half less than the alternative.

Why This Isn't More Common

Two reasons, and neither is technical.

First, the hyperscaler sales motion is all-in. Their reps are not incentivized to tell you that steady-state workloads should stay home. Your account manager will sell you more cloud, not less. That's the job they have.

Second, internal teams are not always incentivized to minimize cost. If a cost-plus consulting engagement is paying to run a workload, the contractor who runs it doesn't get a bonus for moving it. If the engineering team gets promotions for shipping features, they don't get promotions for trimming the monthly bill. Hybrid patterns require somebody with accountability for the total three-year cost to drive the decision.

If you're that person, these three patterns are where we'd start.

Three Takeaways

  1. Invert the data-gravity model. Bring compute to the data, not the other way around, and the egress problem goes away.
  2. Use cloud object storage as your DR target, not warm standby. Same protection, dramatically cheaper, and ransomware-resistant when you use immutability.
  3. Treat steady and bursty workloads as distinct categories with policy-driven placement. Without the discipline, "hybrid cloud" is just a polite name for sprawl.
