Prepare for ransomware before you’re hit • The Register

Sponsored Feature What sort of disaster would you rather prepare for? Hurricanes are destructive, but you know when one’s coming, giving you time to take defensive action. Earthquakes vary in their destructive power, but you never know when they’re going to hit, meaning your ability to recover after the impact is critical.

There’s a parallel with ransomware here, explains David Paquette, product marketing manager at HPE-owned continuous data protection company Zerto. Ransomware attackers are constantly probing your organization for vulnerabilities, and it is only a matter of time before they slip past your security defenses.

It’s your ability to recover that makes the difference to your survival. That’s why organizations need to start thinking of ransomware in disaster recovery terms, not just simple backup and recovery terms, he argues.

Five years ago, he explains, the threat was “baby ransomware” focused on encrypting file data. Beyond perimeter defenses and good security hygiene, protecting against ransomware was “a backup use case”, with file-level restore usually sufficient.

Now ransomware and its creators have become more advanced, says Paquette: “It is propagating into backups and encrypting them or deleting them. It’s encrypting entire applications, entire sites. And the consequences are far beyond a simple loss of data.”

It’s important to grasp that a large part of the threat is not that data and applications can’t be recovered or restored. Rather, it’s how long the recovery takes.

Research by insurance giant AIG shows business interruptions due to ransomware are typically seven to ten days long. But some companies can experience outages of as long as 21 days, and as Paquette says, “many organizations don’t survive that.”

This means a defense in depth approach is essential when facing ransomware. It’s at the core of recommendations and playbooks from organizations such as the US Cybersecurity and Infrastructure Security Agency (CISA), the UK’s National Cyber Security Centre (NCSC), and the European Union Agency for Cybersecurity (ENISA).

Traditionally, defense in depth equates to multiple security layers, with the aim of thwarting an attacker at multiple points. So, as well as perimeter defenses and ensuring patches are up to date, organizations should focus on access control, scanning their networks and software, and of course ensuring their data is backed up.

Who’s innovating fastest?

The problem, says Paquette, is that cybersecurity is “the only industry where the scale of innovation is on the side of the criminal.” So, we have to face the fact that even the best defended organization will see some attacks hit home.

Once we accept this, it becomes clear that true defense in depth means those multiple security layers need to be complemented with multiple recovery layers.

“You have to be prepared to be able to recover quickly and with near-zero data loss from multiple different copies of data, hopefully across different platforms, different locations, and with immutability,” Paquette says.

This clearly means traditional backup practices, relying only on off-site and offline tape as the ultimate destination, are not going to cut it. But it also means that snapshot-based methods – often presented as the most efficient way to preserve VM-based application architectures – also fall short.

“Traditional backup techniques, even those based on snapshot technologies, haven’t improved enough over the last 40 years to provide the frequency or recovery points or the speed of recovery needed for today’s digital world,” Paquette explains.

While snapshots may be more efficient than traditional bulk backup solutions, he continues, they are still disruptive to production systems, which is why many organizations still choose to snapshot their VMs at night – which inevitably means the potential for hours of data loss.

But the real problem comes with big applications spanning multiple VMs. “You’re snapshotting one VM at a time, and then archiving it somewhere.” But recovery means “You have to put them all together like a jigsaw puzzle to get that application sorted. The problem is that it takes forever and isn’t consistent. That’s a big reason why organizations take 10 to 12 days to recover from ransomware.”

He cites the example of Dutch textile giant Tencate Protective Fabrics, whose previous solution captured its data before it was hit by ransomware. But it was still left with 12 hours of data loss and a two-week lag before it could get back up and running, “because they were rebuilding file directories trying to get these applications back.” A subsequent attack, once it had installed Zerto’s continuous data protection technology, resulted in just 10 seconds of data loss and a ten-minute recovery.

That difference is because Zerto’s approach focuses on replicating at the hypervisor level. It centers on two components, the lightweight Virtual Replication Appliance and the Zerto Virtual Manager, which enable and manage “near synchronous, block-level replication”. The total installation is around 16MB. Changes – or checkpoints in Zerto’s parlance – are stored in a journal for up to 30 days, meaning admins can rewind back to any point during that time period.
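The journal idea can be illustrated with a toy sketch. This is not Zerto’s actual implementation or API – the class and method names are invented – but it shows the principle: every block-level change is recorded with a timestamp, changes age out of the retention window, and the state as of any retained point in time can be reconstructed.

```python
from dataclasses import dataclass, field

@dataclass
class Journal:
    """Toy journal of block-level changes (hypothetical, for illustration)."""
    retention: int = 30  # retention window, e.g. days
    entries: list = field(default_factory=list)  # (timestamp, block, value)

    def record(self, ts, block, value):
        """Record one change; discard changes older than the retention window."""
        self.entries.append((ts, block, value))
        newest = max(e[0] for e in self.entries)
        self.entries = [e for e in self.entries
                        if newest - e[0] <= self.retention]

    def rewind(self, ts):
        """Reconstruct block state as of timestamp ts."""
        state = {}
        for t, block, value in self.entries:
            if t <= ts:
                state[block] = value
        return state

j = Journal()
j.record(1, "blk0", "A")
j.record(2, "blk0", "B")  # later overwrite, e.g. ransomware encryption
clean_state = j.rewind(1)  # state from before the bad write
```

The key property is that an overwrite never destroys history within the retention window, so an admin can pick any checkpoint before the attack and recover from there.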

Being lightweight aids recovery

Crucially, this has virtually no impact on the resources of the primary datacenter, according to Paquette. “You’re reading the IO directly and then replicating it to a secondary site. This doesn’t touch the network side, it doesn’t touch the storage side, you’re using the resources of a secondary site.”

That secondary site – and additional copies – may be on-premises, or in the cloud. “It’s what we call one-to-many replication. You’re just streaming the same data twice over to different places, just in case.”
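One-to-many replication can be sketched in a few lines. Again this is purely illustrative, not Zerto’s code: each change captured at the source is streamed to every configured target, so the secondary site and any additional copies stay in step.

```python
class Replicator:
    """Toy one-to-many replicator: fan each change out to all targets."""
    def __init__(self, *targets):
        self.targets = targets  # each target is a dict standing in for a replica

    def write(self, block, value):
        # The same change is streamed to every copy, "just in case"
        for target in self.targets:
            target[block] = value

secondary_site, cloud_copy = {}, {}
r = Replicator(secondary_site, cloud_copy)
r.write("blk0", "payload")
```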

At the same time, data can be copied to a cloud repository, such as Azure or AWS: “You’re using immutable copies that are unable to be changed, they’re using object lock and burn. So even if that secondary site is compromised, you have a third copy available, that is definitely not compromised, because it’s immutable.”
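The effect of an immutable, object-locked copy can be mimicked with a minimal write-once (WORM) store. This sketch is an invented illustration, not the S3 or Azure object-lock API: once a key is written, later writes to it are rejected, so a compromised process cannot encrypt or delete the retained copy.

```python
class ImmutableStore:
    """Toy write-once store: existing objects cannot be overwritten."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        if key in self._objects:
            # Mimics object lock: the retained copy is unchangeable
            raise PermissionError(f"{key!r} is locked and cannot be overwritten")
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

store = ImmutableStore()
store.put("backup-001", b"clean data")
try:
    store.put("backup-001", b"ENCRYPTED")  # attacker's overwrite attempt
    overwritten = True
except PermissionError:
    overwritten = False
```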

When Zerto is set up, an admin can designate the VMs that make up a given application as a Virtual Protection Group and they are then replicated together. “So, when the moment comes to recover, you just select the checkpoint and then all of those VMs come back at the same point in time.”
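The value of grouping an application’s VMs can be shown with another hypothetical sketch (names invented): because all VMs in the group are recovered to the same checkpoint timestamp, the restored application is internally consistent rather than a jigsaw of snapshots taken at different times.

```python
class ProtectionGroup:
    """Toy consistency group: VMs protected and recovered together."""
    def __init__(self, vm_names):
        # Per-VM journal of (timestamp, state) entries
        self.journals = {vm: [] for vm in vm_names}

    def record(self, vm, ts, state):
        self.journals[vm].append((ts, state))

    def recover(self, checkpoint_ts):
        """Restore every VM to its latest state at or before checkpoint_ts."""
        recovered = {}
        for vm, journal in self.journals.items():
            candidates = [state for ts, state in journal if ts <= checkpoint_ts]
            recovered[vm] = candidates[-1] if candidates else None
        return recovered

vpg = ProtectionGroup(["web", "db"])
vpg.record("web", 1, "web-v1"); vpg.record("db", 1, "db-v1")
vpg.record("web", 2, "web-v2"); vpg.record("db", 2, "db-ENCRYPTED")
app = vpg.recover(1)  # both VMs come back from the same point in time
```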

Beyond that, Zerto offers multiple ways of retaining and recovering data. “Zerto DR is your first few lines of recovery, to get your fastest recovery possible,” Paquette explains. For a basic crypto lock or file-level recovery, “you use Zerto local replication to restore in seconds.”

If VMs or applications are affected, recovery can be local, or from a secondary site. If applications or entire sites are encrypted, “you recover from a secondary site, maybe a third site.” Finally, there is the option of recovering from the immutable copy of data.

Backup for SaaS and cloud natives

While VMs still account for the overwhelming majority of enterprise workloads, they do not constitute the entirety. Zerto also offers backup for SaaS, along with continuous data protection for cloud-native workloads such as Kubernetes applications, and for Amazon EC2 workloads.

SaaS data has become increasingly appealing to hackers, not least because users often let their guard down, assuming the cloud vendor is responsible for data protection – a responsibility they rarely assume.

More immediately, since its acquisition by HPE last year, Zerto’s offering is being integrated into HPE GreenLake, meaning users will be able to spin up Zerto’s DR services alongside HPE Recovery Service through a single console and under the same cloud-like consumption model.

That said, Paquette highlights one important factor that can’t be completely handed over to automation – testing your recovery procedures. This is essential to ensure not just that the tooling does what it should in an emergency, but that the human team does what it should too. In traditional scenarios, this generally means choosing an off-peak time. For a retailer, running a recovery test over a weekend in February might be very different from grappling with a real disaster during peak shopping season in November and December.

Zerto’s answer is an on-demand sandbox environment that allows organizations to mimic the production environment. This means teams can run recovery drills at peak times and at higher frequencies.

The sandbox also underpins post attack testing of a live recovery, to ensure that recovered data doesn’t repropagate malware. “No-one in their right minds should just recover without testing that data… the last thing you want is to have the same problem immediately or reintroduce malware and have it detonate again in a couple of weeks.”
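That gate between recovery and production can be sketched as a simple promote-only-if-clean check. The scanning logic and signatures here are invented placeholders; real verification would use proper malware scanning inside the isolated sandbox.

```python
# Hypothetical known-bad signatures, standing in for a real malware scanner
KNOWN_BAD = [b"EICAR", b"ransom_note"]

def scan_clean(data: bytes) -> bool:
    """Return True if no known-bad signature is found in the recovered data."""
    return not any(sig in data for sig in KNOWN_BAD)

def promote(recovered: bytes, production: list) -> bool:
    """Promote recovered data to production only after it passes the scan."""
    if not scan_clean(recovered):
        return False  # quarantine rather than re-detonate the malware
    production.append(recovered)
    return True

prod = []
ok = promote(b"clean backup payload", prod)
blocked = promote(b"...ransom_note...", prod)  # infected restore is rejected
```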

But that is the nature of ransomware. It’s a disaster that can and will occur at any time. Even the best defended organization is unlikely to be completely immune to every attack, and a flawed restore process can compound the damage. Potentially fatally. So, to have any hope of getting back on your feet, you need to understand your recovery position before you’re knocked out.

Sponsored by HPE.
