Data Encryption in the Cloud: Three Tips Most Implementations Skip
Every cloud provider tells you your data is encrypted. True, technically. Here are three things most implementations still miss — and how to close the gaps before an auditor finds them for you.

"Your data is encrypted at rest and in transit." Every cloud provider says it. Every compliance questionnaire checks the box. And in 90 percent of the production environments we audit, the statement is technically true and materially misleading. The encryption exists. It isn't doing what most people assume it is doing. Here are the three gaps we see over and over again, and what to do about each one.
The Baseline Everyone Has Already
Before the tips, the baseline: yes, every major cloud provider encrypts data at rest by default. Azure Storage, S3, GCS, Azure SQL, RDS, Cosmos DB, DynamoDB — all of them write encrypted bytes to disk with provider-managed keys. Yes, TLS is available for all data in transit, and yes, most services require it by default now. If you just turned on a storage account and dropped files in it, your data is encrypted.
The problem is that this default encryption protects you against exactly one threat: someone physically stealing a disk out of a datacenter. Nothing else. The cloud provider holds the keys. Any service principal with permission to read the data can read it in the clear. An attacker who compromises an IAM role reads the data in the clear. A misconfigured public bucket serves the data in the clear. Encryption at rest with provider-managed keys is table stakes. It is not a security control against the threats that actually matter in cloud environments.
Tip One: Use Customer-Managed Keys for the Data That Matters
The first gap is that most implementations are still using provider-managed keys for data that warrants customer-managed keys. The difference matters for three reasons:
- Revocation. With a customer-managed key (CMK), you can disable the key and every piece of data encrypted under it becomes unreadable across every service principal, every service, every region — not instantly, since providers cache data keys briefly, but within minutes. It is the fastest way to stop a data breach in progress once credentials are compromised. You cannot do that with provider-managed keys.
- Audit. CMK usage is logged through Azure Key Vault, AWS KMS, or GCP Cloud KMS. Every decrypt operation shows up in the key log. You can answer the question "who read this data in the last 30 days" with actual evidence. Provider-managed keys give you no such log.
- Compliance scope. Most regulated frameworks — HIPAA, PCI DSS, FedRAMP, CJIS — either require or strongly prefer customer-managed keys for sensitive data. Auditors know the difference even if marketing materials blur it.
The implementation is not free. Customer-managed keys add latency, add cost (typically $1 to $3 per month per key plus per-operation charges), and add failure modes (if you mishandle the key, the data is gone). We recommend CMKs for exactly these categories: customer PII, payment data, health records, source code, and anything subject to a regulatory framework. For telemetry, logs under 90 days, and internal reference data, provider-managed keys are fine.
The implementation pattern we use: one CMK per data classification, not one per service. The key represents the blast radius of a compromise. If you have 30 keys and lose track of which one protects what, you have 30 keys and zero actual control. Keep it to four or five per environment, aligned to data sensitivity.
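The classification-to-key pattern above can be sketched as a small lookup table. The aliases here are hypothetical placeholders, not real key names; the point is that unclassified data fails loudly instead of silently falling back to a default key.

```python
# Illustrative sketch: one CMK per data classification, not per service.
# The alias names below are hypothetical; substitute your own
# KMS / Key Vault / Cloud KMS key identifiers.
DATA_CLASS_KEYS = {
    "pii": "alias/cmk-pii",
    "payment": "alias/cmk-payment",
    "health": "alias/cmk-health",
    "source-code": "alias/cmk-source-code",
}

def key_for(classification: str) -> str:
    """Return the CMK alias for a data classification.

    Raises KeyError for anything unclassified, so callers must make an
    explicit decision rather than quietly using a catch-all key.
    """
    return DATA_CLASS_KEYS[classification]
```

Four entries, four blast radii. When an auditor asks which key protects payment data, the answer is one line, not an inventory exercise.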
Tip Two: Encrypt the Metadata, Not Just the Payload
The second gap is that encryption implementations almost always protect the payload and leave the metadata wide open. A database table encrypted at rest still has column names, row counts, index structures, and query patterns visible to anyone with read access to the underlying engine. A storage bucket with server-side encryption still has object names, sizes, timestamps, and access logs visible in the control plane. A message queue with TLS in transit still has message envelopes, routing keys, and sender identities visible to anyone reading the queue metrics.
Metadata leakage is the real risk in most cloud compromises we investigate. An attacker who reads the names of your files in an S3 bucket often learns enough about your business to make the next step of the attack obvious: which bucket holds the customer data, which one holds the backups, which one holds the crown-jewel reports. The payload encryption didn't protect any of that.
Three concrete actions:
- Use opaque identifiers for object names. Not customers-2024-q1-pii.csv. Use a hash or a UUID and keep the human-readable name in a protected metadata database. The auditor will still find the data; the attacker browsing a misconfigured bucket will not.
- Minimize logs and metrics that contain user identifiers. Application logs, access logs, and metrics all leak information about who did what when. Keep the logs, but hash the identifiers before they go in, so an attacker who compromises your logging stack does not get a free list of your customers.
- Encrypt sensitive fields at the application layer before they hit the database. Column-level encryption in the application, not the database, means the DBA, the cloud provider, the backup system, and anyone with read-only database access all see ciphertext. Only the application holding the key sees cleartext. This is the single most effective control against insider threat on customer data.
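The first two actions fit in a few lines of standard-library Python. This is a sketch, not production code: the pepper below is a placeholder that would live in a key vault, and the name-to-object mapping would live in a protected database.

```python
import hashlib
import hmac
import uuid

# Hypothetical secret; in production, fetch this from a key vault.
LOG_PEPPER = b"replace-with-vault-held-secret"

def opaque_object_name() -> str:
    # Opaque bucket key. The human-readable name is stored separately,
    # so browsing the bucket listing reveals nothing about the contents.
    return uuid.uuid4().hex

def hash_identifier(user_id: str) -> str:
    # Keyed hash: the same user hashes to the same value, so log entries
    # stay correlatable, but without the pepper the hashes cannot be
    # reversed or brute-forced from a dictionary of known customers.
    return hmac.new(LOG_PEPPER, user_id.encode(), hashlib.sha256).hexdigest()
```

Note the keyed HMAC rather than a bare SHA-256: an unkeyed hash of a customer email is trivially reversed by hashing a list of candidate emails.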
The third action is the one most teams skip because it is real work. It is also the one that pays off the most when something goes wrong.
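For the third action, a minimal field-encryption sketch looks like the following. It assumes the third-party cryptography package (pip install cryptography); its Fernet recipe stands in for whatever authenticated-encryption scheme your stack uses, and the in-process key generation is for illustration only — a real deployment fetches the key from a vault.

```python
from cryptography.fernet import Fernet

# Hypothetical setup: in production this key comes from a key vault,
# never generated or stored alongside the application.
FIELD_KEY = Fernet.generate_key()
_fernet = Fernet(FIELD_KEY)

def encrypt_field(value: str) -> bytes:
    # This ciphertext is what the database, the backups, and anyone
    # with read-only database access actually see.
    return _fernet.encrypt(value.encode())

def decrypt_field(token: bytes) -> str:
    # Only the application holding FIELD_KEY can recover cleartext.
    return _fernet.decrypt(token).decode()
```

The payoff: a dump of the database table, a stolen backup, or a curious DBA all yield the same opaque tokens.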
Tip Three: Rotate Keys on a Schedule and Actually Test It
The third gap is that key rotation exists in the policy document and does not exist in practice. Every compliance framework requires key rotation. Every customer we audit can show us a policy that says keys rotate every 365 days. Very few of them can show us a key that has ever actually been rotated.
The reason is that rotation is scary. In a system with customer-managed keys protecting production data, a bad rotation is a self-inflicted outage. Teams write the policy, schedule the first rotation, realize they don't have a tested procedure, and quietly extend the deadline forever. Then an auditor finds it and the conversation gets uncomfortable.
How to actually do it:
- Automate rotation from day one. Azure Key Vault and AWS KMS both support automated rotation of the underlying key material while preserving the key identifier. Turn it on when the key is created, not later. If you wait until the system is in production, you will not turn it on.
- Rotate in test first, on every release. Your non-production environment should rotate its keys every week as part of the deployment pipeline. Anything that breaks will break in test, where it is cheap. If you've never rotated in test, don't rotate in production.
- Test revocation separately from rotation. Rotation generates a new key version. Revocation disables the old one. Rotation does not prove revocation works. Every quarter, spin up a sandbox, encrypt some data, disable the key, and verify the data is unreadable. This is the test that matters if you ever need to revoke for real.
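The rotation-versus-revocation distinction in the list above can be made concrete with a toy model. No real cryptography here — the reversed string is a placeholder transform — just the semantics your quarterly sandbox drill verifies against a real key.

```python
class ToyCMK:
    """Toy model of a customer-managed key. Illustration only."""

    def __init__(self) -> None:
        self.version = 1
        self.enabled = True

    def rotate(self) -> None:
        # New key material, same key identifier; old versions are
        # retained, so existing ciphertext still decrypts.
        self.version += 1

    def disable(self) -> None:
        # Revocation: every version goes dark at once.
        self.enabled = False

    def encrypt(self, plaintext: str) -> tuple:
        if not self.enabled:
            raise RuntimeError("key disabled")
        return (self.version, plaintext[::-1])  # placeholder transform

    def decrypt(self, blob: tuple) -> str:
        if not self.enabled:
            raise RuntimeError("key disabled")
        _version, ciphertext = blob
        return ciphertext[::-1]

key = ToyCMK()
blob = key.encrypt("customer-row")
key.rotate()                          # old ciphertext still readable
assert key.decrypt(blob) == "customer-row"
key.disable()                         # now nothing is readable
```

This is why rotation does not prove revocation works: the rotate step leaves every decrypt succeeding, and only the disable step exercises the path you will need in a real incident.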
What Encryption Isn't
One more thing worth saying: encryption is not a substitute for access control. Every encryption story we've seen fail has failed because the attacker had legitimate access to the decryption path. The IAM role was too broad, the service principal was reused, the key policy let too many services decrypt. The encryption was working exactly as designed — it was just decrypting for the attacker because the attacker was holding the right credentials.
If your access control is sloppy, encryption does not save you. Close the IAM gaps first, then worry about whether your keys are customer-managed. The order matters because the second fix is pointless without the first.
Talk with us about your infrastructure
Schedule a consultation with a solutions architect.
Schedule a Consultation