Cloud Pipelines for Media & Entertainment: Three Things That Break First
Render farms, editorial, and review-and-approval on cloud look great in the vendor deck. Here is where they actually break, and what to do about it before you migrate.

Every post-production shop we have talked to in the last few years has been told the same story by a hyperscaler sales team: move your render farm to the cloud, put your editorial on cloud workstations, set up review-and-approval in a browser, and you will never buy hardware again. In practice, when shops actually try it, three things break before anything else. If you are planning a migration, plan for these first and the rest of the project gets a lot less painful.
Break Number One: Egress and Storage Tiering
The asset management problem in media is not like the asset management problem in enterprise IT. A single episodic TV production can generate 40 to 120 TB of camera original per week. A feature film can run into the petabytes before you count VFX plates. Cloud storage is cheap at the sticker price, but the moment you need to pull that data back out for a finishing pass, a color session, or a DI grade, egress charges turn into a line item that shows up on the finance call.
The pattern that works is tiered storage with an explicit promotion and demotion policy. Camera original and completed masters live in object storage — S3, Azure Blob, or an on-prem object store like MinIO or Cloudian. Active working sets live on high-performance NAS near the compute, either in a cloud region or in a colo next to your render nodes. Proxies live on whatever is cheap and close to the artists.
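To make "explicit promotion and demotion policy" concrete, here is a minimal sketch of how the tier decision can be driven from an asset's production status. The tier names and status strings are illustrative assumptions, not any particular MAM's schema; real pipelines usually drive this from the asset manager's job state.

```python
from enum import Enum

class Tier(Enum):
    OBJECT = "object storage"     # camera original and completed masters
    NAS = "high-performance NAS"  # active working set, near the compute
    PROXY = "proxy storage"       # cheap and close to the artists

def target_tier(status: str) -> Tier:
    """Map an asset's production status to where its full-res media should live."""
    if status in ("active", "finishing", "color"):
        return Tier.NAS      # promote: the job needs it next to compute
    if status in ("camera_original", "mastered", "archived"):
        return Tier.OBJECT   # demote: write once, read rarely
    return Tier.PROXY        # everything else: artists work against proxies
```

The point of writing the policy down as code, even this crudely, is that "who moves data, when, and why" stops being tribal knowledge and becomes something you can review before the egress bill arrives.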
The mistake we see most often is teams treating cloud object storage as a working filesystem. It is not. The latency is wrong, the throughput is wrong, and the cost model punishes small-file access patterns. If you are running an NLE against cloud storage, put a caching tier between them. LucidLink, Hammerspace, and the native cloud NAS offerings (Azure NetApp Files, FSx for NetApp ONTAP) all exist for exactly this reason. Use one of them — do not try to mount a bucket and hope.
The other mistake is forgetting that egress is asymmetric. Pulling a 2 TB project down once at the end of a job is survivable. Pulling 2 TB down every morning because your artists are working against cloud storage from an on-prem workstation is not. Model the data flow before you sign the contract.
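The asymmetry above is easy to put numbers on. A rough sketch, assuming a flat $0.09/GB internet egress rate (a placeholder; real hyperscaler rates vary by region, tier, and committed-use discounts, so plug in your own quote):

```python
EGRESS_RATE_PER_GB = 0.09  # assumed rate in USD; check your provider's pricing

def monthly_egress_cost(gb_per_pull: float, pulls_per_month: int) -> float:
    """Cost of pulling data out of cloud object storage over a month."""
    return gb_per_pull * pulls_per_month * EGRESS_RATE_PER_GB

# One 2 TB pull at the end of a job:
one_time = monthly_egress_cost(2000, 1)    # ~$180

# 2 TB pulled every working morning (22 working days):
daily = monthly_egress_cost(2000, 22)      # ~$3,960

print(f"one-time pull: ${one_time:,.0f}")
print(f"daily pulls:   ${daily:,.0f} per month")
```

Same data, same rate, a 22x difference in the bill. That is the model to build, per workflow, before you sign the contract.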
Break Number Two: Color-Accurate Remote Review
Review and approval on cloud is sold as a browser experience. Upload a cut, share a link, client opens it on a laptop, everyone nods, you move on. This works fine for internal dailies and offline cuts. It does not work for anything where color matters.
Browser-based playback handles color management inconsistently across browsers and operating systems, does not preserve HDR metadata reliably, and assumes the viewer's monitor is calibrated (it is not). If your client is a network doing a final color pass on an HDR Dolby Vision master, sending them a browser link is malpractice. You need a calibrated reference monitor on the client side, a managed endpoint (Sohonet ClearView Flex, Streambox, or equivalent), and a review session driven by a human operator who can confirm the pipeline.
The cloud-native version of this exists — Sohonet ClearView Flex and Evercast both run on cloud infrastructure — but the endpoint is not a browser. It is a calibrated box sitting next to a calibrated display. Plan the budget accordingly. Do not let a vendor tell you the web player is good enough for final color. It is not, and the redo when your client's DP catches the gamut mismatch will cost more than the managed endpoint would have.
Break Number Three: Render Farm Economics
Cloud render farms are the poster child for "burst to cloud" and they do work, but the economics are more nuanced than the sales pitch. A cloud render node at full hyperscaler on-demand pricing is 3x to 5x more expensive per frame than a well-utilized on-prem or colo render node. Spot and preemptible pricing closes the gap, but not all the way, and you have to build your queue manager to handle preemption gracefully.
The honest calculation looks like this. If your farm runs above roughly 60 percent utilization on a 12-month average, owning hardware in a colo beats cloud on a three-year TCO basis by a comfortable margin. If your farm runs below 30 percent, cloud wins outright. In between, a hybrid model — own the baseline, burst the peaks — is almost always the answer.
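That break-even is worth sketching explicitly. The dollar figures below are assumptions for illustration only (amortized colo cost per node per month, and an on-demand cloud rate per node-hour); replace them with your own quotes, and note the model deliberately ignores storage, networking, and ops labor on both sides:

```python
COLO_NODE_MONTHLY = 620.0  # assumed: amortized hardware + colo space + power, per node
CLOUD_NODE_HOURLY = 2.75   # assumed: on-demand render node rate
HOURS_PER_MONTH = 730

def monthly_cost(utilization: float) -> tuple[float, float]:
    """(colo, cloud) monthly cost for one node-equivalent at a given utilization."""
    colo = COLO_NODE_MONTHLY                                 # paid whether it renders or idles
    cloud = CLOUD_NODE_HOURLY * HOURS_PER_MONTH * utilization  # paid only for hours used
    return colo, cloud

for util in (0.30, 0.60, 0.90):
    colo, cloud = monthly_cost(util)
    winner = "colo" if colo < cloud else "cloud"
    print(f"{util:.0%} utilization: colo ${colo:,.0f} vs cloud ${cloud:,.0f} -> {winner}")
```

With these placeholder numbers the crossover lands right around 30 percent utilization, and by 60 percent the owned node is roughly half the price, which is the "comfortable margin" above. Your numbers will differ; the shape of the curve will not.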
The thing the sales deck skips is the queue manager work. Deadline, Tractor, and OpenCue all support cloud burst, but the integration work to make it seamless (spinning up nodes, attaching storage, licensing renderer seats, tearing down at the end of a job) is real engineering. Budget for it. Do not assume that AWS Thinkbox or an equivalent provider's marketing page means you click a button and it works.
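The preemption handling in particular deserves a sketch. This is not any queue manager's real API; Deadline, Tractor, and OpenCue each expose their own hooks. The point it illustrates is that every frame dispatched to a spot node needs a bounded retry path and a fallback to reliable capacity:

```python
import dataclasses

MAX_ATTEMPTS = 3  # assumed policy: after this, stop burning spot capacity

@dataclasses.dataclass
class FrameTask:
    frame: int
    attempts: int = 0

def run_on_spot(task: FrameTask, render, was_preempted) -> str:
    """Render one frame on a spot node, requeueing if the node is reclaimed.

    `render` and `was_preempted` are hypothetical callables standing in for
    the real renderer invocation and the provider's interruption signal.
    """
    while task.attempts < MAX_ATTEMPTS:
        task.attempts += 1
        render(task.frame)          # should write to a temp path, renamed only on success
        if not was_preempted():
            return "done"
        # Node reclaimed mid-frame: partial output is discarded, frame requeues.
    return "failed"                 # escalate to on-demand or the on-prem baseline
```

The two details that bite in production are encoded in the comments: output must be idempotent (temp path plus atomic rename, so a half-written frame never looks finished), and retries must be capped so a pathological frame does not loop on spot capacity forever.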
And license the renderer carefully. Arnold, V-Ray, Redshift, Houdini — every one of them has different cloud licensing rules, and some of them are expensive enough that the license cost eats the compute savings. Check the fine print before you commit to an architecture.
What We Would Actually Recommend
For a mid-size post house or VFX vendor coming to us for a cloud strategy, here is the pattern we recommend more often than not.
Keep editorial on-prem or in a colo near editorial. Avid, Premiere, and Resolve all work best against high-performance local shared storage. Remote editorial exists and works for specific workflows, but it is not the right default for a shop doing heavy cutting every day.
Put finishing, color, and VFX compositing on managed cloud workstations or cloud-adjacent colo workstations. Teradici/PCoIP Ultra or HP Anyware on a cloud GPU instance (Azure NVv4/NVadsA10 or AWS G5/G6) gives you a genuinely usable remote experience. Pair it with a caching tier — do not mount object storage directly.
Use object storage as the archive and the mastering substrate. Not as the working filesystem. Promote data into fast storage when you need it, demote it when you are done.
Build a hybrid render farm. Own the baseline, burst to spot on cloud for peaks. Use a queue manager that supports preemption cleanly. Model the cost monthly, not once.
For review and approval, use a managed endpoint for anything color-critical. Browser review is fine for dailies. It is not fine for a network color session.
The Takeaway
Media and entertainment workflows are genuinely unusual, and the cloud patterns that work for enterprise IT do not map cleanly onto them. The vendor decks all promise a pure-cloud future. The shops actually delivering shows are almost all running hybrid, and the ones who tried to go all-in on cloud usually pulled partway back within 18 months — either because the egress bills got ugly, or because the remote experience was not good enough for the finishing work, or because the render farm economics did not hold up under real utilization.
Plan for the three failure modes above before you migrate anything. If your architecture answers all three honestly, the rest of the project gets a lot easier.
Talk with us about your infrastructure
Schedule a consultation with a solutions architect.