
Traditional vs. Cloud Assets for Your Business Continuity Plan

Skyline Technologies  |  
Mar 12, 2019
Let's consider how we have historically implemented IT assets to satisfy our business continuity plan.

Traditional Cold Site

A traditional cold site lives in a second owned or rented location, often one of your business's existing secondary offices, and it's typically not as well equipped as the primary site. For instance, Skyline has three offices in Wisconsin. We have a primary data center at one of them, and we could utilize one of our other sites as a secondary data center. If we didn't have other locations, then we'd be considering either buying another location or renting space from a co-location provider.
At a cold site, we’re either renting physical hardware or virtual infrastructure. Along with that cold site, we’re replicating data, applications, and virtual machines in some way, whether that replication is native to the hardware or handled by an application or system that manages and performs it for us. We typically don't have any applications running in the secondary location.
We may have a domain controller active and running so that if servers come online, they can talk to that domain right away, but that's not always the case. The main thing with the traditional cold site is that we typically have very low confidence in our recovery because we’re not testing our business continuity plan. Cold sites typically have a larger recovery time objective (RTO): nothing is running, so our first step is to hydrate those machines and get everything up and running.
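To see why hydrating a cold site stretches the RTO, it can help to add up the sequential steps a recovery runs through. The sketch below is purely illustrative (the step names and timings are hypothetical, not Skyline's actual runbook), but it shows how serialized "bring everything up first" work accumulates into hours of downtime:

```python
# Illustrative sketch: estimating cold-site RTO by summing the time to
# hydrate and boot each machine in dependency order. Every step name and
# duration below is a hypothetical example, not a real recovery plan.

RECOVERY_STEPS = [
    # (step, minutes) -- each step must finish before the next begins
    ("provision virtual infrastructure", 30),
    ("hydrate VM disks from replicas", 90),
    ("boot domain controller", 10),
    ("boot database servers", 20),
    ("boot application servers", 15),
    ("validate critical applications", 30),
]

def estimated_rto_minutes(steps):
    """Worst-case RTO when every recovery step runs sequentially."""
    return sum(minutes for _, minutes in steps)

total = estimated_rto_minutes(RECOVERY_STEPS)
print(f"Estimated cold-site RTO: {total} minutes (~{total / 60:.1f} hours)")
```

Even with optimistic numbers like these, a fully cold site lands at multiple hours of recovery time, which is exactly why untested cold sites inspire so little confidence.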

Case Study

We were working on a business continuity strategy for a company, and their IT manager defined success as, "We need to get to a point where we no longer choose to endure a 2-hour production outage before we finally pull the trigger on the recovery plan." That’s even after they had invested in the secondary site, were replicating their data, virtual machines, and SQL data, and were utilizing both storage replication and application replication. Because of their lack of testing, they had so little confidence in their secondary site that they would literally endure a production outage anywhere within their critical business application for two hours before they would implement their business continuity plan. That is a long time to be down.

Traditional Hot Site

Traditional hot sites use active or passive owned/rented secondary locations and hardware. We have our mission-critical services and applications running in our secondary site, which requires some form of data replication. The responsibility for data replication (as well as primary/secondary orchestration) typically comes from within the running application, such as our database services with some form of sync process running. Then, because our critical systems are running in the secondary site, we typically have higher recovery confidence, especially if it's active-active. Whether the hot site is active-active or active-passive, still make sure you’re testing.
With traditional business continuity approaches, patching is a major consideration and critically important to the success of a recovery plan. Both traditional hot and cold sites require ongoing patching. Since a hot site is running VMs, the IT department is usually aware of them and keeps them updated as patches are released.
The other thing about our traditional approach is that there's usually only one geographic region involved for the secondary location. Most of the time it’s even in the same state as the primary location. This is a huge amount of exposure when it comes to investing in a business continuity plan. Are we truly protecting our organizations the way we should?

Azure Cloud – Cold Site

This is where we're going to get the most cost value out of an Azure cloud-based recovery. Microsoft has a service called Azure Site Recovery (part of Azure Recovery Services) that’s essentially a recovery vault with storage, and we replicate the volumes of our running on-premises servers to the vault in the Azure cloud. This is strictly an Infrastructure-as-a-Service (IaaS) approach, so we're replicating our running on-premises servers to cloud disks. As the volumes appear within Windows or Linux, they get replicated to Azure. We can integrate that with VMware and Hyper-V, and we can also replicate our physical servers.
Because this is disk replication, there is an agent either in VMware, Hyper-V, or on the physical server that is tracking the changes occurring on the disks and replicating those to the vault in Azure. We do get RPO-based alerting. So, when you set up the site recovery vault and start replicating servers, you create replication policies that set a group of servers to, say, a 15-minute recovery point objective (RPO).
The agents then replicate your data to ensure you can maintain that recovery point. If it can't keep up with the changes, then you get an alert saying, "We're no longer within our 15-minute RPO. If a failure were to occur right now, you would not be able to recover within that 15-minute point in time that you were hoping for." One of the reasons you may not be hitting your RPO is bandwidth to the Azure data center.
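The alerting logic described above boils down to a simple check: how old is the latest replicated recovery point compared to the policy's RPO target? Here is a minimal sketch of that check (the timestamps, threshold, and function names are illustrative assumptions, not Azure Site Recovery's actual implementation):

```python
# Illustrative sketch of RPO-based alerting: compare the age of the latest
# replicated recovery point against the policy's RPO target. All names,
# timestamps, and the 15-minute threshold are hypothetical examples.

from datetime import datetime, timedelta

RPO_TARGET = timedelta(minutes=15)

def rpo_breached(last_recovery_point, now, target=RPO_TARGET):
    """True when replication lag exceeds the policy's RPO target."""
    return (now - last_recovery_point) > target

now = datetime(2019, 3, 12, 12, 0)
healthy = datetime(2019, 3, 12, 11, 50)   # 10 minutes behind -> within RPO
lagging = datetime(2019, 3, 12, 11, 40)   # 20 minutes behind -> alert

print(rpo_breached(healthy, now))  # False
print(rpo_breached(lagging, now))  # True
```

When the check returns true, that corresponds to the alert described above: a failure right now would leave you unable to recover to the 15-minute point in time you were counting on.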

Azure Cloud – Hot Site

Here we can utilize Infrastructure-as-a-Service and/or Platform-as-a-Service. We can mix and match as we see fit within the Azure cloud. If we have SQL servers that need to stay in our secondary site in Azure, then we can set up those servers in IaaS, and they can be running and replicating with our on-premises servers. If you can use Azure Web Apps, then the PaaS offering works just fine as our secondary location, even though the primary runs on a web server on-premises.
We can also mix and match with Azure Recovery Services. Maybe not everything that's critical to our business needs to be active and running in the Azure cloud, right? Maybe we have things that are critical, but their acceptable recovery time is 15 minutes rather than less than 5 minutes.
With that, we can have an entire recovery plan that mixes and matches active servers and cold servers within the same recovery. Azure Traffic Manager is essentially the front end for our client traffic that manages whether that traffic is routed to our on-premises environment or to our cloud environment.
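The routing behavior described above can be pictured as priority-based failover, which is one of the routing modes Azure Traffic Manager offers: client traffic goes to the highest-priority endpoint that is passing health probes. The sketch below is a simplified illustration of that idea (the endpoint names and structure are hypothetical, not Traffic Manager's API):

```python
# Illustrative sketch of priority-based failover routing, the behavior
# Traffic Manager's "Priority" routing mode provides: send traffic to the
# healthy endpoint with the best (lowest) priority number. Endpoint names
# below are hypothetical examples.

def select_endpoint(endpoints):
    """Return the healthy endpoint with the lowest priority number, or None."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        return None
    return min(healthy, key=lambda e: e["priority"])

endpoints = [
    {"name": "on-premises", "priority": 1, "healthy": False},  # primary down
    {"name": "azure-hot",   "priority": 2, "healthy": True},   # failover target
    {"name": "azure-cold",  "priority": 3, "healthy": False},  # not yet hydrated
]

print(select_endpoint(endpoints)["name"])  # azure-hot
```

In a mixed hot/cold recovery plan, the cold servers would only become healthy endpoints after they've been hydrated, so traffic naturally lands on the hot site first.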

Get the free eBook

This is the second of 5 blogs on Where Should the Cloud Fit Into Your Business Continuity Plan? To read them all right now, download our free eBook.



