Cloud-Based Video Storage: Four Real-World Considerations
Cloud video storage sounds simple until you run the numbers. Four considerations determine whether it's actually cheaper than on-prem — or quietly more expensive.

Video is the workload that breaks more cloud storage assumptions than any other. It is big, it is constant, and when it moves around it generates egress fees that have surprised more than one customer's finance team. Every few months I get a call from a customer who moved their surveillance, body camera, broadcast archive, or training video library to a cloud provider because it seemed like the modern thing to do, and now the monthly bill is twice what they budgeted. The call always starts the same way: "I don't understand what we're paying for."
Cloud-based video storage can work well. It can also quietly become the most expensive infrastructure decision a mid-market customer makes in a given year. The difference comes down to four considerations that get skipped during the initial evaluation. If you think them through up front, cloud video storage is often a good answer. If you skip them, the math is rarely kind.
One: Storage Tier Is the Entire Conversation
Cloud providers offer a spectrum of storage tiers, from hot object storage designed for frequent access down to cold archive tiers priced at a fiftieth to a hundredth of hot-tier rates, meant for data you rarely retrieve. For text and database data, the tier choice is usually straightforward. For video, the tier choice determines whether the bill is reasonable or absurd, and the decision is more subtle than it looks.
The mistake most customers make is defaulting everything to the hot tier "just in case." A year of body-worn camera footage at standard quality can run into the hundreds of terabytes. At hot storage prices, the annual bill lands somewhere between four and eight times what cold archive would cost for the same data. For footage that is almost never viewed but has to be retained for legal reasons, hot storage is the wrong tier.
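To make that concrete, here is the arithmetic as a minimal Python sketch. The per-gigabyte rates and the 300 TB figure are illustrative placeholders, not any provider's published pricing, so substitute your own quotes before drawing conclusions.

```python
# Back-of-envelope annual storage cost by tier. The rates below are
# illustrative placeholders, not published pricing.

GB_PER_TB = 1024

def annual_cost(size_tb: float, rate_per_gb_month: float) -> float:
    """Annual cost of holding size_tb terabytes at a flat monthly rate."""
    return size_tb * GB_PER_TB * rate_per_gb_month * 12

footage_tb = 300       # hypothetical: a year of body-worn camera footage
hot_rate = 0.023       # illustrative hot object storage, $/GB-month
archive_rate = 0.004   # illustrative cold archive, $/GB-month

print(f"Hot tier:     ${annual_cost(footage_tb, hot_rate):>10,.0f}/yr")
print(f"Cold archive: ${annual_cost(footage_tb, archive_rate):>10,.0f}/yr")
```

At those placeholder rates the hot tier runs roughly six times the archive cost, which is the gap the rest of this section is about closing.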
The better approach is to tier by age and access pattern. Footage from the last thirty days, which investigators and operators actually look at, lives on hot storage. Footage between thirty days and one year, which is occasionally retrieved for incident review, lives on a warm tier. Footage older than a year, which is kept only for retention policy, lives on cold archive. The lifecycle transitions happen automatically on a schedule. The total cost drops by half or more compared to a single-tier strategy, and nobody notices the difference in day-to-day use.
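On an S3-style object store, the tiering above is a single lifecycle configuration. Here is a minimal sketch using boto3; the bucket name, prefix, and storage-class choices are assumptions to adapt, and the other major providers have equivalent lifecycle mechanisms.

```python
import boto3

s3 = boto3.client("s3")

# Age-based tiering: hot for 30 days, warm until one year, archive after.
# The bucket and prefix are hypothetical. Note that this call replaces
# any lifecycle configuration already on the bucket.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-footage-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-based-video-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": "footage/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```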
The catch is that cold storage has retrieval delays and retrieval fees. Pulling a day of old footage out of a cold archive can take minutes to hours, and the retrieval can cost more than the month of storage did. If your workflow includes occasional bulk retrieval — for a discovery request, an audit, or a legal hold — the retrieval model matters as much as the storage model. Model it before you commit.
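A quick way to model it: compare a plausible bulk pull against a month of storage for the same data. A sketch, with hypothetical rates and a hypothetical pull size:

```python
# Compare a one-time cold-archive retrieval against a month of storage
# for the same data. Rates are hypothetical, not published pricing.

def month_of_storage(size_gb: float, storage_rate: float) -> float:
    """Cost of holding size_gb for one month at storage_rate $/GB-month."""
    return size_gb * storage_rate

def bulk_retrieval(size_gb: float, retrieval_rate: float) -> float:
    """One-time fee to restore size_gb from a cold archive tier."""
    return size_gb * retrieval_rate

pull_gb = 5_000        # hypothetical discovery request: ~5 TB of old footage
storage_rate = 0.004   # illustrative cold archive, $/GB-month
retrieval_rate = 0.02  # illustrative bulk retrieval fee, $/GB

print(f"Storing it for a month: ${month_of_storage(pull_gb, storage_rate):,.0f}")
print(f"Pulling it back out:    ${bulk_retrieval(pull_gb, retrieval_rate):,.0f}")
```

At those placeholder numbers, one pull costs five months of storage, which is exactly the kind of surprise this modeling exists to surface.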
Two: Egress Fees Are Where the Budget Dies
Every cloud provider charges for data leaving their environment. For most workloads this fee is negligible. For video, it is the line item that destroys budgets.
Here is the scenario I have seen more than once. A customer moves their surveillance video to cloud storage. The recording endpoints upload the footage, which costs nothing because ingress is free. Six months later, the customer wants to do a video analytics project and needs to pull a training set out of cold storage to run through a machine learning pipeline on their own hardware. Or they want to hand a month of footage to a third party for an investigation. Or they want to migrate to a different cloud provider because the pricing has changed. Whatever the reason, the egress fees on that operation hit like a dump truck. I have seen single retrieval operations cost more than a year of storage.
The fix is to design for egress from the beginning. Do your processing in the same cloud the storage lives in so the data doesn't leave. Negotiate egress discounts with the provider at contract time — they are available if you ask, especially for large commitments. Use compression and tiering to minimize the volume of data that ever needs to move. And most importantly, keep a running model of "what happens if we need to pull half of this data out in a month" so that the retrieval path is priced and tested, not hypothetical.
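The running model does not need to be sophisticated. A sketch, with a hypothetical archive size and an illustrative list-price egress rate; rerun it with your contract numbers whenever the archive grows or the pricing changes.

```python
# Standing egress-exposure model: what moving a given fraction of the
# archive out of the provider would cost. Numbers are hypothetical.

def egress_exposure(total_tb: float, fraction: float, rate_per_gb: float) -> float:
    """Dollar cost of moving `fraction` of a total_tb archive out."""
    return total_tb * 1024 * fraction * rate_per_gb

archive_tb = 300      # hypothetical archive size
egress_rate = 0.09    # illustrative list-price egress, $/GB

for fraction in (0.1, 0.5, 1.0):
    print(f"Pull {fraction:>4.0%}: ${egress_exposure(archive_tb, fraction, egress_rate):,.0f}")
```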
The customers who get this right treat egress as a first-class design constraint. The customers who treat it as an afterthought are the ones who call me about the bill.
Three: Retention Policy Is Not Optional
Every industry that stores video has retention rules, and most customers do not track them carefully enough. Law enforcement has state-specific retention schedules for body camera and dashcam footage. Healthcare has retention rules for procedure recordings. Schools have retention rules for campus surveillance. Broadcasters have content licensing windows. The rule sets differ, the penalties for getting it wrong are real, and the cost of storing everything forever "just to be safe" is often the hidden driver of an oversized cloud bill.
A good cloud video storage design enforces retention programmatically. Every piece of footage is tagged with a retention class at ingest, and lifecycle policies automatically delete or archive footage when it crosses the retention threshold. This is not hard to implement — the cloud providers all support object-level lifecycle policies — but it requires somebody on the customer side to actually own the retention policy and keep it updated.
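On S3, for example, this can be as little as a tag applied at ingest plus one expiration rule per retention class. A minimal sketch, with hypothetical class names and retention periods:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical retention classes and their periods in days.
RETENTION_DAYS = {"surveillance-routine": 365, "bodycam-evidence": 2555}

def ingest_footage(bucket: str, key: str, body: bytes, retention_class: str):
    """Upload footage with a retention-class tag the lifecycle rules match on."""
    s3.put_object(Bucket=bucket, Key=key, Body=body,
                  Tagging=f"retention-class={retention_class}")

def expiration_rule(retention_class: str) -> dict:
    """One lifecycle rule per class: delete once the threshold is crossed."""
    return {
        "ID": f"expire-{retention_class}",
        "Status": "Enabled",
        "Filter": {"Tag": {"Key": "retention-class", "Value": retention_class}},
        "Expiration": {"Days": RETENTION_DAYS[retention_class]},
    }

# This call replaces the bucket's entire lifecycle configuration, so fold
# these rules in with any tiering rules the bucket already has.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-footage-bucket",
    LifecycleConfiguration={"Rules": [expiration_rule(c) for c in RETENTION_DAYS]},
)
```

The code is the easy part. The owner who keeps those retention periods aligned with the actual legal schedules is the part that gets skipped.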
The failure mode I see most often is a customer who implemented the system years ago with a single blanket "keep everything for seven years" rule, never updated it, and is now storing five times the data they are legally required to keep. Cleaning that up is a one-time project that usually pays back in a single quarter. Not having the policy in place to begin with is the reason it got that bad.
There is also a legal hold dimension. When something becomes part of a litigation or investigation, the retention rules get suspended and the footage has to be preserved until the hold clears. Your storage design needs a legal hold mechanism that works at scale and that produces defensible audit trails. Don't wait for the first litigation to discover you don't have one.
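On S3 the building block is Object Lock's legal hold flag, which only works on buckets created with Object Lock enabled. A simplified sketch follows; the bucket and prefix are hypothetical, and at real scale you would reach for S3 Batch Operations rather than looping object by object.

```python
import boto3

s3 = boto3.client("s3")

def set_legal_hold(bucket: str, prefix: str, hold: bool) -> int:
    """Set or clear a legal hold on every object under prefix. Requires
    a bucket created with Object Lock enabled."""
    status = "ON" if hold else "OFF"
    count = 0
    for page in s3.get_paginator("list_objects_v2").paginate(
            Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            s3.put_object_legal_hold(Bucket=bucket, Key=obj["Key"],
                                     LegalHold={"Status": status})
            # Record every change; the audit trail is what makes the
            # hold defensible later.
            print(f"legal hold {status}: s3://{bucket}/{obj['Key']}")
            count += 1
    return count

# Hypothetical: preserve all footage from one site for a pending case.
set_legal_hold("example-footage-bucket", "footage/site-12/2024-06/", hold=True)
```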
Four: The Network Between the Camera and the Cloud
The fourth consideration is the one that looks like a networking problem but is actually a storage architecture problem. Cloud video storage assumes you can get the video to the cloud in real time, reliably, and without dropping frames. For organizations with fiber at their camera locations, this is fine. For organizations with cameras at remote sites, construction sites, vehicles, or facilities with flaky connectivity, this is not fine at all.
The common failure pattern is an organization that deploys cloud storage without thinking about the upload path. Cameras buffer locally when connectivity drops. The buffer fills up. Older footage gets overwritten before it can be uploaded. By the time anybody notices, weeks of footage are missing and nobody can reconstruct what happened.
The fix is to design the network path as carefully as you design the storage tier. Bandwidth planning for the steady-state upload rate, plus headroom for burst retries after outages. Local buffering with enough capacity to survive the longest realistic outage window. Health monitoring that alerts on upload lag, not just on connectivity loss. And for sites with truly unreliable connectivity, a local storage appliance that holds recent footage and uploads to the cloud asynchronously, so the cloud becomes the archive tier rather than the primary store.
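The buffer sizing, at least, is plain arithmetic. A sketch with hypothetical site parameters:

```python
# Local buffer sizing for the outage window described above. All
# parameters are hypothetical site numbers, not recommendations.

def buffer_size_gb(cameras: int, mbps_per_camera: float,
                   outage_hours: float, headroom: float = 1.5) -> float:
    """Local storage needed to ride out an outage without dropping footage."""
    gb_per_hour = cameras * mbps_per_camera * 3600 / 8 / 1000  # Mbit/s -> GB/h
    return gb_per_hour * outage_hours * headroom

def drain_hours(backlog_gb: float, uplink_mbps: float, live_mbps: float) -> float:
    """Hours to upload a backlog using whatever uplink capacity is left
    after carrying the live streams."""
    spare_mbps = uplink_mbps - live_mbps
    if spare_mbps <= 0:
        raise ValueError("uplink cannot even carry the live streams")
    return backlog_gb * 8 * 1000 / spare_mbps / 3600

# Hypothetical site: 16 cameras at 4 Mbit/s each, sized for a 72-hour outage.
need = buffer_size_gb(cameras=16, mbps_per_camera=4.0, outage_hours=72)
backlog = buffer_size_gb(16, 4.0, 72, headroom=1.0)  # footage actually accrued
print(f"Buffer needed: {need:,.0f} GB")
print(f"Drain time on a 100 Mbit/s uplink: {drain_hours(backlog, 100.0, 64.0):.0f} h")
```

The drain time is the number people miss: at these hypothetical figures, a three-day outage takes more than five days to catch up from, which is why the monitoring has to watch upload lag and not just link state.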
Done well, a hybrid model — local fast storage plus cloud archive — gives you the best of both worlds. Low-latency playback at the site where the cameras are, with cheap long-term retention in the cloud for compliance and analytics. Done badly, you have cameras with nowhere to send their footage, and a cloud bucket that slowly fills up with gaps.
Putting the Four Together
Cloud-based video storage is a legitimate option for most mid-market organizations, but it is not an "upload and forget" decision. The customers who end up happy with it treated the storage tier, the egress model, the retention policy, and the network path as four separate engineering decisions, each with a concrete answer before they migrated. The customers who ended up unhappy treated it as a line item on an IT plan and trusted the default configuration.
My bias, after 23 years of doing infrastructure work, is toward hybrid designs for video workloads. Cameras feed a local storage appliance, the appliance replicates to cloud cold storage on a schedule, and the cloud holds the long-term archive with clear retention policies and tested retrieval procedures. That design is often cheaper than pure cloud, more resilient to network problems, and easier to reason about during an incident. It is also less exciting to talk about in a vendor pitch, which is probably why more customers end up on pure-cloud designs they didn't fully understand. If you are planning a video storage project and the pitch you've heard so far hasn't mentioned tiering, egress, retention, or the network path, you are being sold the easy version of the story. Ask the harder questions before you sign.
Talk with us about your infrastructure
Schedule a consultation with a solutions architect.