
Physical Data Centers: Five Reasons They Still Win in 2024

The cloud took a lot of workloads, and rightly so. But the idea that physical data centers are obsolete is a marketing fiction. Here are five reasons the colo is not going anywhere.

John Lane 2023-10-13 5 min read

Somewhere around 2018, the industry decided that physical data centers were over and everyone was going to the cloud. Six years later, the cloud bills got big enough that CFOs started asking uncomfortable questions, and all of a sudden "repatriation" stopped being a dirty word. We have been running customer workloads in colocation facilities for twenty-three years, and we have moved plenty of those workloads into Azure and AWS when it made sense. We have also moved plenty back out. Here is why the physical data center is not going anywhere.

1. Steady-State Workloads Are Brutally Cheap on Your Own Hardware

The math on cloud compute assumes you are paying for elasticity. If your workload runs 24/7 at roughly the same load — which describes most line-of-business applications, databases, VDI hosts, file servers, domain controllers, and the entire infrastructure layer of a typical enterprise — you are paying a 3x to 5x premium for elasticity you are not using.

A dual-socket server with 768 GB of RAM and 30 TB of NVMe runs about $15,000 to $20,000 today. Amortized over five years in a half rack of colocation at roughly $600 to $900 per month, that server delivers compute at something like $0.02 per vCPU hour. The equivalent Azure D32s v5 is around $1.30 per hour on-demand, or maybe $0.50 per hour with a three-year reserved instance plus hybrid benefit. The gap is real and it is not closing.
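The arithmetic above can be sketched in a few lines. The figures below are illustrative assumptions pulled from the rough ranges in this article (an $18,000 server, a $750/month half-rack share, 128 vCPUs from a dual-socket box), not vendor quotes:

```python
# Rough steady-state cost comparison: owned server in colo vs. cloud VM.
# All dollar figures are illustrative assumptions, not quoted prices.

HOURS_PER_YEAR = 8760
YEARS = 5

# Owned hardware: dual-socket server, ~128 vCPU (2 x 32 cores, SMT on)
server_capex = 18_000          # one-time purchase, USD
colo_monthly = 750             # half-rack share, power and cooling included
vcpus = 128

total_owned = server_capex + colo_monthly * 12 * YEARS
owned_per_vcpu_hour = total_owned / (vcpus * HOURS_PER_YEAR * YEARS)

# Cloud: 32-vCPU VM, on-demand vs. 3-year reserved (illustrative rates)
cloud_on_demand = 1.30 / 32    # USD per vCPU-hour
cloud_reserved = 0.50 / 32

print(f"owned:     ${owned_per_vcpu_hour:.4f} per vCPU-hour")
print(f"on-demand: ${cloud_on_demand:.4f} per vCPU-hour")
print(f"reserved:  ${cloud_reserved:.4f} per vCPU-hour")
```

Even against a three-year reservation, the owned hardware comes out ahead at full amortization, and the gap widens the longer you keep the box in service.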

Customers who repatriate steady-state workloads off the hyperscalers routinely cut their infrastructure spend by 50 to 70 percent. We know because we do these migrations every quarter.

2. Latency to Your Users Is Not Negotiable

If you have office workers, call center staff, or manufacturing systems that need to talk to an application a few hundred feet away, running that application in us-east-1 is absurd. The speed of light is a hard limit, and the closer your compute is to your users, the better their experience. This is especially true for VDI, voice, any real-time analytics workload, and anything talking to local hardware like scanners, PLCs, or medical devices.

A data center that is 5 ms from your users beats one that is 35 ms from your users every time, and both beat one that is 80 ms away across two cloud provider peering points. Latency is not a problem you can solve with faster CPUs. You either have it or you don't.
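The physics here is easy to sanity-check. Light in fiber covers roughly 200 km per millisecond, so distance sets a hard floor on round-trip time before any serialization, queueing, or routing hops are added. The distances below are illustrative, not measurements:

```python
# Back-of-envelope minimum round-trip time over a straight fiber run.
# Light in fiber travels at roughly 2/3 of c, about 200 km per millisecond.
# Real-world RTT is always higher once serialization, queueing, and
# routing hops are added -- these numbers are floors, not estimates.

KM_PER_MS_IN_FIBER = 200

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time for a given one-way distance."""
    return 2 * distance_km / KM_PER_MS_IN_FIBER

for label, km in [("same metro", 50),
                  ("regional data center", 500),
                  ("cross-country cloud region", 4000)]:
    print(f"{label:28s} {min_rtt_ms(km):6.2f} ms minimum RTT")
```

No instance type, network card, or CPU generation moves these floors; only moving the compute closer does.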

3. You Actually Own the Security Boundary

Compliance and security auditors love physical data centers because the boundary is unambiguous. Your cage has a lock. The lock has an access log. The ports on your switch are yours. The person holding the crash cart is on your payroll or on a contract you negotiated. When an auditor asks who has access to your data, you can answer the question definitively.

In the cloud, the answer is "the cloud provider's employees under the terms of their SOC 2 report, plus whoever your identity provider says." That is a fine answer for most use cases. It is not a fine answer for every use case. For regulated industries that still have paper-based controls, for customers with contractual restrictions that predate the cloud era, and for workloads where the cost of a breach is measured in lives rather than dollars, physical control is worth paying for.

4. Egress Bandwidth Is the Hidden Tax

The cloud providers charge you almost nothing to ingest data and a significant amount to get it back out. If your application moves data in bulk — a video platform, a backup service, a scientific computing workload, a data lake that actually gets queried by external consumers — you are paying egress fees that can dwarf your compute bill.

A colocation facility charges you for bandwidth commits, which are typically in the range of a dollar or two per megabit per month on a 95th-percentile burstable circuit. That works out to something like a penny or two per gigabyte for heavy users, versus seven to nine cents per gigabyte for the first ten terabytes of hyperscaler egress. If you move multiple terabytes a month out of your environment, your data wants to live on metal you own.

5. Hardware Diversity and the Right Tool for the Job

When you build in a physical data center, you pick the hardware. That sounds obvious until you realize how constrained the cloud makes you. Want 2 TB of RAM in a single host for an SAP HANA workload? You can do it in the cloud, but it is expensive and specific SKUs go in and out of availability. Want a specific NIC with SR-IOV enabled and tuned queue pairs for low-latency networking? Good luck finding that as an instance family. Want an NVMe drive with 5 GB per second sustained write bandwidth for a database workload? In your own rack, that is a $2,000 part you order on Tuesday.

Physical data centers let you pick the hardware that fits the workload instead of picking the workload that fits the available hardware. For specialized applications — HPC, media rendering, real-time trading, genomics, manufacturing telemetry — this is the whole ballgame.

What We'd Actually Do

For most of our customers, the sensible architecture is a hybrid: a physical data center footprint that carries the steady-state load, plus a cloud presence for bursty workloads, global reach, SaaS integrations, and disaster recovery. The physical footprint carries 70 to 90 percent of the compute and storage at 30 to 40 percent of the cost. The cloud footprint handles the things only the cloud is good at.
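The blended savings follow directly from those two ranges. Taking the midpoints as illustrative assumptions (80 percent of the workload on a physical footprint running at 35 percent of cloud cost, the rest staying in the cloud at full price):

```python
# Blended-spend sketch for the hybrid split described above, relative to
# an all-cloud baseline of 1.0. Fractions are illustrative midpoints of
# the ranges in this article, not measured customer data.

def blended_cost(on_prem_fraction: float, on_prem_cost_ratio: float = 0.35) -> float:
    """Total spend relative to running everything in the cloud."""
    return on_prem_fraction * on_prem_cost_ratio + (1 - on_prem_fraction) * 1.0

print(f"80% repatriated: {blended_cost(0.80):.2f}x of all-cloud spend")
```

That works out to roughly half of the all-cloud bill while keeping a cloud presence for everything the cloud genuinely does better.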

We run most of our customers on Proxmox or VMware in leased colocation space, with automated failover to a second facility for disaster recovery and Azure Blob as an immutable ransomware-resistant backup target. It is not glamorous, but it is cheap, fast, and it has not lost data for any of the customers who have been running on it for a decade or more.

Three Takeaways

  1. Steady-state workloads belong on hardware you own. The cloud premium is real and it does not shrink when your usage is predictable.
  2. Latency and egress are hidden cloud taxes that repatriation eliminates. If your application is bandwidth-heavy or latency-sensitive, do the math before you sign a three-year reserved instance.
  3. Hybrid is not a transitional state. It is the destination for most enterprises, and it has been for years. The marketing story that everything goes to the cloud was always oversold.

Talk with us about your infrastructure

Schedule a consultation with a solutions architect.
