
5 Container Benefits (Plus 3 Pitfalls Nobody Warns You About)

Real benefits of containers, real tradeoffs of Kubernetes, and honest guidance on when lighter options like App Runner or Cloud Run make more sense.

John Lane 2021-10-27 5 min read

Containers are genuinely transformative for how software gets packaged, shipped, and run. Kubernetes is genuinely the right tool for a certain class of problems. Both are also heavily overused for problems that don't need them, and the resulting operational overhead consumes engineering time that should be going to product development. Here's an honest take on when containers are the answer and when a lighter option wins.

Five Real Benefits

1. Environment Parity

"Works on my machine" is a class of bug that containers essentially eliminate. The same image runs on the developer's laptop, in CI, in staging, and in production. The OS-level dependencies, the runtime version, the system libraries — all identical. If it fails in production, it fails the same way in staging.

This alone justifies containerization for most applications, regardless of where they run.

2. Packaging and Distribution

A container image is a single artifact that describes exactly what's needed to run the application. No "install these packages, then run these scripts, then copy these files." The image is the deployable unit, and it's immutable.

For CI/CD, this means every build produces a single versioned artifact. For deployment, it means rollback is a config change (point the service back at the previous image tag), not a rebuild.

3. Isolation Without Full VMs

Containers isolate processes from each other with much less overhead than full virtualization. You can run 50 containers on a single host where you might previously have run 3 VMs. The memory and CPU overhead is minimal.

This matters for cost. It also matters for development — running a local stack of 10 services on your laptop is feasible with containers, impractical with VMs.

4. Deployment Primitives Become Consistent

Every modern deployment system knows how to handle containers. Kubernetes, ECS, Cloud Run, App Runner, App Service, Docker Swarm, Nomad — they all take an image and run it. The ops knowledge transfers across platforms.

5. Language-Agnostic Infrastructure

Your Python service, Go service, Java service, and Node service all deploy the same way, scale the same way, and are monitored the same way. The infrastructure stops caring what language your app is written in.

This is a significant productivity improvement for polyglot teams.

Three Pitfalls Nobody Warns You About

1. Kubernetes Is Not a Solved Problem

The biggest trap in containerization is assuming that Kubernetes is the natural next step. For most teams, it isn't. Kubernetes has hundreds of moving parts, a learning curve measured in months, and operational failures that are hard to debug without deep expertise.

What breaks in a typical Kubernetes deployment:

  • Ingress controllers with cert-manager that silently fail certificate renewal
  • Persistent volumes that don't get cleaned up, leading to storage exhaustion
  • Pod eviction policies that surprise teams during memory pressure
  • Network policies that block legitimate traffic in ways that are hard to diagnose
  • Node pool autoscaling that doesn't scale when you need it to
  • Helm chart version incompatibilities during upgrades
  • CSI drivers and CNI plugins that break during Kubernetes upgrades

A team of 3 engineers does not have the capacity to own Kubernetes well. A team of 30 might. Below that threshold, managed alternatives are almost always better.

2. Image Sprawl and Supply Chain Risk

Containers are only as secure as the base image and the packages inside them. Most teams don't track this well. The "latest" tag on Ubuntu or Debian pulls whatever was current when the build ran, which means two builds a week apart can have different security postures.

What to do:

  • Pin base images to specific versions or digests, not tags (see the sketch after this list)
  • Scan images in CI (Trivy, Grype, Snyk) and fail the build on high-severity findings
  • Use minimal base images (distroless, Alpine, Wolfi) to reduce attack surface
  • Sign images with cosign and verify at deploy time
  • Maintain a private registry with approved base images
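
To make the first item concrete, here is a minimal sketch (assuming a public image on Docker Hub and the standard Registry HTTP API v2; the repository and tag are placeholders) that resolves a tag to its content digest so a Dockerfile can pin "FROM python@sha256:..." instead of a floating tag:

```python
# Sketch: resolve an image tag to its immutable digest via the
# Docker Registry HTTP API v2, so a Dockerfile can pin
# "FROM python@sha256:..." instead of a tag that drifts over time.
# Assumes a public image on Docker Hub; private registries need different auth.
import requests

REPO = "library/python"   # placeholder repository
TAG = "3.12-slim"         # placeholder tag

def resolve_digest(repo: str, tag: str) -> str:
    # Docker Hub issues a short-lived pull token, even for public images.
    token = requests.get(
        "https://auth.docker.io/token",
        params={"service": "registry.docker.io",
                "scope": f"repository:{repo}:pull"},
        timeout=10,
    ).json()["token"]

    # HEAD the manifest; the registry returns the digest in a response header.
    resp = requests.head(
        f"https://registry-1.docker.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.docker.distribution.manifest.list.v2+json,"
                      "application/vnd.oci.image.index.v1+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.headers["Docker-Content-Digest"]

if __name__ == "__main__":
    digest = resolve_digest(REPO, TAG)
    # Record this in the Dockerfile: FROM python@<digest>
    print(f"{REPO}:{TAG} -> {digest}")
```

Pinning to the digest of the multi-arch index (which is what docker pull resolves) keeps the base image stable without tying it to one architecture; a scheduled job can re-run a script like this and open a change when the digest moves.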

The supply chain story for containers has gotten a lot better with sigstore, SLSA, and SBOM tooling. Teams that ignore it are accumulating risk.

3. Observability Gets Harder, Not Easier

Containers make it harder to use traditional monitoring because processes are ephemeral, logs don't live on disk, and "ssh into the box" is not a thing. Teams that skip building container-native observability end up blind.

What you need:

  • Structured logging to stdout/stderr (a minimal sketch follows this list)
  • Log aggregation that captures every container's output (Loki, CloudWatch Logs, Azure Log Analytics, Fluent Bit)
  • Metrics via Prometheus or OTel, instrumented in the application
  • Distributed tracing across services (Jaeger, Zipkin, Datadog APM, Honeycomb)
  • A way to investigate a failing pod before it gets replaced (kubectl debug, ephemeral containers)
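
To make the first and third items concrete, here is a minimal sketch (the logger, metric, and label names are illustrative) of structured JSON logging to stdout plus a Prometheus counter exposed with the prometheus_client library:

```python
# Sketch: container-native observability basics in a Python service.
# Structured JSON logs go to stdout (for Loki/CloudWatch/Fluent Bit to collect);
# metrics are exposed on a separate port for Prometheus to scrape.
import json
import logging
import sys
import time

from prometheus_client import Counter, start_http_server

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line so the log aggregator can index fields."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])
log = logging.getLogger("orders")  # illustrative logger name

# Illustrative metric name and label.
REQUESTS = Counter("orders_requests_total", "Requests handled", ["status"])

if __name__ == "__main__":
    start_http_server(9090)  # exposes metrics for Prometheus to scrape
    while True:
        log.info("handled request")
        REQUESTS.labels(status="ok").inc()
        time.sleep(5)
```

Because each log line is a self-contained JSON object on stdout, whatever aggregator the platform provides can pick it up without an in-container agent, and Prometheus (or an OTel collector) scrapes the port opened by start_http_server.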

The "I'll add monitoring later" approach fails immediately in a containerized environment. Budget for observability up front.

When Lighter Options Beat Kubernetes

For most teams, managed container services eliminate most of the Kubernetes pain while keeping the container benefits.

  • AWS App Runner / Fargate: Containers without cluster management. Good for stateless web services.
  • Google Cloud Run: Serverless containers. Pay per request. Excellent for bursty workloads.
  • Azure Container Apps: Built on Kubernetes but abstracts most of it away. Good middle ground.
  • Azure App Service / AWS Elastic Beanstalk: Even simpler. Not technically "containers" but give you the deployment model without the ops burden.
  • Docker Swarm: Simpler orchestration, still maintained, sufficient for many use cases.
  • HashiCorp Nomad: An orchestrator that's dramatically simpler than Kubernetes and handles many of the same workloads.

We recommend Kubernetes specifically when:

  • You have a team that can own the operational burden (rule of thumb: 1 dedicated platform engineer per 50-100 application engineers)
  • You have specific requirements that lighter platforms can't meet (custom CRDs, multi-tenant workload isolation, specific networking)
  • Your workload mix benefits from unified orchestration across many different service types
  • You need true portability across clouds

Otherwise, start with the simplest thing that works.

What We'd Actually Do

For a team containerizing for the first time:

  1. Containerize the application. Dockerfile, image registry, CI build pipeline. This alone is valuable.
  2. Deploy to a managed container service that fits your complexity — Cloud Run, App Runner, Container Apps. Get to production.
  3. Build observability from day one. Structured logs, metrics, tracing.
  4. Scan images in CI. Fail on high-severity findings (see the sketch after this list).
  5. Reconsider Kubernetes later if and only if the managed service hits a limit you can't work around.
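
As a sketch of step 4, a CI gate might look like the following (this assumes the Trivy CLI is available in the CI environment; the image name is a placeholder, and Trivy's JSON schema can shift between versions, so treat the parsing as illustrative):

```python
# Sketch: fail a CI job when Trivy finds HIGH/CRITICAL vulnerabilities
# in the image that was just built. Assumes the trivy CLI is installed
# in the CI environment; the image name is a placeholder.
import json
import subprocess
import sys

IMAGE = "registry.example.com/myapp:latest"  # placeholder
BLOCKING = {"HIGH", "CRITICAL"}

def scan(image: str) -> list:
    # Ask Trivy for a machine-readable report of the whole image.
    out = subprocess.run(
        ["trivy", "image", "--format", "json", "--quiet", image],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    findings = []
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") in BLOCKING:
                findings.append(vuln)
    return findings

if __name__ == "__main__":
    blocking = scan(IMAGE)
    for v in blocking:
        print(f'{v.get("VulnerabilityID")} {v.get("Severity")} {v.get("PkgName")}')
    if blocking:
        sys.exit(1)  # non-zero exit fails the CI step
```

Trivy's own --severity and --exit-code flags can enforce the same policy directly; a script like this mainly earns its keep once you want per-CVE allowlists or custom reporting.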

For a team already on Kubernetes:

  1. Invest in platform engineering. The cluster needs an owner.
  2. Pin and scan everything. Supply chain hygiene is table stakes.
  3. Don't over-customize. Every custom operator is technical debt.
  4. Plan for upgrades. Kubernetes upgrades are the thing everyone puts off and regrets.

Three Takeaways

  1. Containers are great. Kubernetes is a trap for many teams. Start with managed container services and only move to Kubernetes if you have a specific reason.
  2. Supply chain hygiene is non-optional. Pin, scan, sign, verify.
  3. Observability is harder with containers, not easier. Build it in from day one.

Talk with us about your infrastructure

Schedule a consultation with a solutions architect.
