7 Backup Strategies That Survive Ransomware
The 3-2-1-1-0 rule, immutable storage, tested restores, and the specific backup patterns we implement for customers after a ransomware event.

Ransomware changed backups forever. A backup strategy that was considered best practice in 2015 is a liability in 2025. The reason is simple: modern ransomware targets backups specifically. It has administrative credentials, it knows where your Veeam repository lives, and it will encrypt your backups before it touches production so that you have no way back.
Every backup strategy in this article is designed to survive that attack. Each one has a specific role to play, and the right answer for most organizations is a combination.
1. The 3-2-1-1-0 Rule
The old 3-2-1 rule (three copies, two media types, one offsite) is not enough. The current standard is 3-2-1-1-0:
- 3 copies of your data
- 2 different media types
- 1 copy offsite
- 1 copy immutable or air-gapped
- 0 errors on the last restore test
The two additions matter. The immutable copy is the one ransomware cannot touch. The zero-error rule is what forces you to actually test.
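The rule is mechanical enough to encode as a checklist. A minimal sketch in Python — the `BackupCopy` fields and function name are illustrative, not from any particular backup product:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str        # e.g. "disk", "tape", "object-storage"
    offsite: bool
    immutable: bool   # Object Lock, WORM tape, or a true air gap

def check_3_2_1_1_0(copies: list[BackupCopy], last_restore_errors: int) -> list[str]:
    """Return every 3-2-1-1-0 violation; an empty list means compliant."""
    problems = []
    if len(copies) < 3:
        problems.append("fewer than 3 copies")
    if len({c.media for c in copies}) < 2:
        problems.append("fewer than 2 media types")
    if not any(c.offsite for c in copies):
        problems.append("no offsite copy")
    if not any(c.immutable for c in copies):
        problems.append("no immutable or air-gapped copy")
    if last_restore_errors != 0:
        problems.append(f"last restore test had {last_restore_errors} errors")
    return problems
```

A plan with three disk copies in one building, never restore-tested, fails most of the five checks at once — which is exactly the point of writing the rule down as code.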
2. Immutable Object Storage
The single highest-leverage control against ransomware. An immutable backup written to S3 Object Lock in Compliance mode or Azure Blob with an immutable policy cannot be deleted or modified for the duration of the retention lock — not by an attacker who has full admin credentials, not by the storage admin, not by the cloud vendor's support staff.
How to configure it right:
- Use Compliance mode on S3, not Governance mode. A Governance-mode lock can be lifted by any IAM principal granted the s3:BypassGovernanceRetention permission; a Compliance-mode lock cannot be removed by anyone until it expires.
- Use a separate AWS account or Azure subscription for the backup target. Different credentials, different blast radius.
- Set retention based on your recovery requirements, not on a vendor default. Most ransomware attackers dwell in a network for 30 to 60 days before detonating. Your retention needs to cover that window.
This is the single change we make most often for customers who've had a ransomware near-miss.
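As a concrete sketch of the advice above, here is the Object Lock configuration document you would apply to an S3 backup bucket with `aws s3api put-object-lock-configuration`. The 90-day default is an assumption sized to cover a 30-to-60-day dwell window, not a universal recommendation:

```python
import json

def object_lock_config(retention_days: int) -> dict:
    """Build the S3 Object Lock configuration for a backup bucket.

    COMPLIANCE mode means nobody -- not an attacker with admin
    credentials, not the root account -- can shorten or remove the
    lock before it expires.
    """
    if retention_days < 60:
        raise ValueError("retention should cover the 30-60 day attacker dwell window")
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": retention_days}},
    }

# Written to lock.json, this is what you pass to:
#   aws s3api put-object-lock-configuration \
#     --bucket <backup-bucket> --object-lock-configuration file://lock.json
print(json.dumps(object_lock_config(90), indent=2))
```

Note that Object Lock must be enabled when the bucket is created; you cannot bolt it onto an existing bucket without involving AWS support.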
3. Per-Application Backup Jobs with Application-Consistent Snapshots
A crash-consistent backup of a SQL Server database is a backup that probably won't restore. You need application-consistent snapshots that tell the database to flush and quiesce before the snapshot is taken.
For SQL Server: VSS writers plus native SQL backups to disk, then back up the backup files. Or use a vendor that speaks SQL natively (Veeam, Commvault, Rubrik).
For PostgreSQL and MySQL: pg_basebackup and mysqldump respectively, ideally combined with WAL/binlog shipping for point-in-time recovery.
For Exchange and SharePoint: Use backup software that understands the applications, not a file-level backup of the data directories.
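For the PostgreSQL case, a sketch of how the pg_basebackup invocation might be assembled — host, user, and destination are placeholders, and `-X stream` is the flag that captures the WAL needed for the backup to restore consistently:

```python
def pg_basebackup_cmd(dest_dir: str, host: str = "localhost",
                      user: str = "replication") -> list[str]:
    """Build an application-consistent PostgreSQL base backup command.

    -X stream ships the WAL generated during the backup alongside it,
    so the result restores cleanly; -c fast forces an immediate
    checkpoint instead of waiting for the next scheduled one.
    """
    return [
        "pg_basebackup",
        "-h", host,
        "-U", user,
        "-D", dest_dir,
        "-F", "t",        # tar format, one tarball per tablespace
        "-z",             # gzip the tarballs
        "-X", "stream",   # stream WAL into the backup
        "-c", "fast",     # checkpoint immediately
    ]

# e.g. from a nightly job:
#   subprocess.run(pg_basebackup_cmd("/backups/pg/nightly"), check=True)
```

Pair this with WAL archiving (archive_command or pg_receivewal) if you want point-in-time recovery between base backups.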
4. Geographic Separation
One copy needs to be far enough away that a regional event doesn't take both copies. "Far enough" depends on your threat model, but for most US customers we recommend at least 500 miles between primary and backup. For cloud backups, different region. For on-prem backups, different metropolitan area.
Flood, fire, power event, regional fiber outage — these still happen. Your secondary location is the insurance policy.
5. Cross-Cloud Backup Copies
The newest pattern we've been adding to customer environments: a copy of critical backups in a second cloud vendor. Azure primary, AWS secondary, or vice versa. Tools like rclone, Cloudberry, or native cross-cloud services can ship backups automatically.
Why: Because cloud vendors do occasionally have account-level events. Accidental account suspensions, billing holds, regional outages, compromised root credentials. The probability is low, but the impact is "every piece of data you have is unreachable." A second cloud is cheap insurance.
Budget guidance: A secondary copy of your top-tier backups in Wasabi or Backblaze B2 costs roughly a third of the primary cloud's object storage. For critical data, it's worth it.
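A sketch of the rclone pattern mentioned above — the remote names `azure-primary` and `b2-secondary` are placeholders you would define with `rclone config`:

```python
def rclone_copy_cmd(src_remote: str, dst_remote: str, path: str) -> list[str]:
    """Build an rclone command that mirrors one backup prefix cross-cloud.

    'copy' (unlike 'sync') never deletes on the destination, so a
    compromised source cannot propagate deletions to the second cloud;
    --checksum verifies content instead of trusting timestamps.
    """
    return [
        "rclone", "copy",
        f"{src_remote}:{path}",
        f"{dst_remote}:{path}",
        "--checksum",
        "--transfers", "8",
    ]

# e.g. run nightly from cron:
#   subprocess.run(rclone_copy_cmd("azure-primary", "b2-secondary",
#                                  "backups/tier1"), check=True)
```

Combine this with immutability on the destination bucket: copy-not-sync protects you from deletions, and the destination lock protects you from overwrites.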
6. Backup Catalog Protection
Here's the one people forget: the catalog is as important as the data. If your Veeam server gets wiped, you have backup files but no way to find what's in them. If the catalog is compromised, restores become a manual forensic exercise.
What we do:
- Run the backup server as a hardened, isolated host with no direct production trust
- Back up the catalog itself to immutable storage, separately from the data backups
- Document the process for rebuilding the catalog from raw backup files
- Test the rebuild at least once
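The rebuild step is worth sketching: even without the vendor catalog, you can reconstruct a minimal index from the raw backup files themselves. A toy version, assuming the backup files sit under one directory tree — real backup files are far too large for `read_bytes()`, so a production version would hash in chunks:

```python
import hashlib
from pathlib import Path

def rebuild_catalog(backup_root: Path) -> dict[str, dict]:
    """Walk raw backup files and rebuild a minimal catalog.

    This is the forensic fallback when the backup server is wiped:
    enough metadata to know what you have and verify it, before the
    vendor's own catalog is restored.
    """
    catalog = {}
    for p in sorted(backup_root.rglob("*")):
        if p.is_file():
            catalog[p.relative_to(backup_root).as_posix()] = {
                "bytes": p.stat().st_size,
                # Toy: hashes the whole file in memory; chunk this
                # for multi-gigabyte backup files.
                "sha256": hashlib.sha256(p.read_bytes()).hexdigest(),
            }
    return catalog

# Store the result alongside the backups, on immutable storage:
#   Path("catalog.json").write_text(json.dumps(rebuild_catalog(Path("/backups"))))
```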
7. Regular, Unannounced Restore Tests
The only real test is a restore. Everything else is theater. The cadence we recommend:
- Weekly: Restore at least one file from a random backup job to a scratch location. Automate this. Alert on failure.
- Monthly: Full restore of at least one tier-one workload to an isolated environment. Spin it up, verify it starts, verify the application works.
- Quarterly: Full restore drill across multiple workloads simultaneously, timed as if it were a real event.
- Annually: Restore drill that includes bringing up a working production copy in the DR location without any access to the primary environment.
The annual drill is the one that catches the hidden dependencies (DNS, AD, certs, shared secrets) that will bite you in a real event.
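The weekly spot check is the easiest of these to automate. A minimal sketch: pick a random file from the backup tree, copy it to scratch, and verify its checksum — the paths and the alerting hook are placeholders for whatever your environment uses:

```python
import hashlib
import random
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def spot_check_restore(backup_root: Path, scratch: Path) -> Path:
    """Restore one random backed-up file to scratch and verify it.

    Raises RuntimeError on any failure -- wire that into your
    alerting. This is the zero-errors part of 3-2-1-1-0.
    """
    candidates = [p for p in backup_root.rglob("*") if p.is_file()]
    if not candidates:
        raise RuntimeError(f"no files found under {backup_root}")
    victim = random.choice(candidates)
    restored = scratch / victim.name
    shutil.copy2(victim, restored)
    if sha256(victim) != sha256(restored):
        raise RuntimeError(f"restore of {victim} failed checksum verification")
    return restored
```

Run it from a scheduler, alert on a non-zero exit, and the weekly tier takes care of itself.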
What We'd Actually Do
For a mid-market organization with no current backup protection against ransomware:
- Week 1: Enable immutable storage on your existing backup target, or provision a new immutable target. Azure Blob immutable tier or S3 Object Lock Compliance mode.
- Week 2: Move the backup server credentials out of the production domain. Separate service accounts, separate management plane.
- Week 3: Verify application-consistent backups for your databases. Fix any that aren't.
- Week 4: Run a full restore drill of a tier-one workload. Document what went wrong. Fix it.
- Month 2: Add a secondary copy in a different cloud or region.
- Ongoing: Monthly restore drills, quarterly tabletop, annual full DR exercise.
This is maybe 40 hours of engineering work and a modest ongoing storage bill. It's the best money you'll spend on security this year.
Three Takeaways
- Immutable storage is the single most important backup control in 2025. If you only do one thing from this article, do this one.
- Your backup catalog is a target. Protect it separately.
- Restore tests are the only test that matters. Schedule them, automate them, and do not let them slip.
Talk with us about your infrastructure
Schedule a consultation with a solutions architect.