Active Architectures for Backup Solutions

Frank Yue, Director of Solution Marketing, Application and Delivery, Radware

When we build backup solutions for our applications and data, it is important to consider more than just the replication and storage of the critical data that we accumulate. It is good that a redundant copy of the data is saved, but how are you going to access the information? In the disaster recovery scenario that you have imagined and written your contingency plans for, do operational processes change because the backup solution provides a different framework to access the applications and data?

"Dynamic network and application delivery technologies within the private and cloud infrastructures are key to a successful backup solution"

Traditionally, when we design backup solutions, we are focused on the replication and storage of data that our business accumulates. Backup network and application delivery architectures ensure that the data we collect is available if something happens to our active business operations infrastructure. For optimal operations, multiple redundant and active environments provide the most cost-efficient and operationally effective solution.

Changing End-user Processes is Nuts

This is like the squirrel that lives in an oak forest. She (or he) eats the ripe acorns that the trees produce. At the same time, the squirrel is taking some of the nuts and burying them. When the trees stop producing acorns, the squirrel will find caches of buried nuts to survive. Inevitably, though, the squirrel misses some of the acorns she buried, and those nuts are lost.

The squirrel’s problem is that her method of finding acorns differs between the two seasons. We avoid this problem by developing backup models that provide seamless application and data access through the transition to emergency backup procedures. When the backup strategy is designed and implemented properly, the end users should not perceive the change in state. This results in no loss of business continuity and productivity even though major changes may be occurring in the background.

We need to make sure that the nuts (data) we are accessing are all accounted for and available. We also need to make sure that the processes (network and applications) we use to access those nuts are consistent under all of the operational scenarios. There should be little to no impact on the typical end-user for the majority of disaster recovery scenarios.

Offline Backup: Only for Worst Case Scenarios

Many backup strategies look towards the creation of an offline data storage strategy. Offline backups are useful because they provide physical and logical separation from the live production network, but ultimately they are not the proper primary solution for businesses to implement.

The nature of being offline means that there is a time gap between backups and a significant mean time to repair (MTTR) to restore the data from the offline copy. Often, access to the data is not addressed in this strategy at all. What is the state of the application that utilizes the data? How will the end-users be able to connect to the applications?

Active-standby: Functional, but Cost-Effective?

Businesses can augment the offline data storage strategy by implementing an active-standby network architecture. In this model, they build redundant network infrastructure. Applications reside in this standby network and have access to the backup data. If a disaster affects the primary and active IT infrastructure, a metaphorical switch can be flipped to transition all of the end-users to the redundant network and applications.

Usually, network technologies serve as the switch that moves end-users to the backup solution. DNS records can be changed to point to backup servers. Network routing protocols can be manipulated to steer traffic to different datacenters. Ideally, these processes are automated through dynamic technologies like global server load balancing (GSLB) for DNS, and BGP or OSPF for network routing.
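The GSLB-style DNS steering described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the datacenter names, IP addresses, and health flags are hypothetical, and a real deployment would rely on a GSLB appliance or a DNS provider's health-check API rather than an in-process table.

```python
# Hypothetical datacenter inventory; "healthy" would normally come from
# automated health checks against each site.
DATACENTERS = {
    "dc-primary":   {"ip": "198.51.100.10", "healthy": True},
    "dc-secondary": {"ip": "203.0.113.10",  "healthy": True},
}

def resolve(datacenters):
    """Return the IPs DNS should answer with: every healthy site,
    so load is distributed; failed sites are silently dropped."""
    ips = [dc["ip"] for dc in datacenters.values() if dc["healthy"]]
    if not ips:
        # Worst case -- every site is marked down; answer with all of
        # them rather than returning nothing.
        ips = [dc["ip"] for dc in datacenters.values()]
    return ips

# Normal operation: both sites answer and share the load.
print(resolve(DATACENTERS))

# Primary fails: end-users are steered to the secondary automatically.
DATACENTERS["dc-primary"]["healthy"] = False
print(resolve(DATACENTERS))
```

The same decision logic applies whether the "switch" is DNS answers, BGP route advertisements, or an OSPF metric change: detect the failure, then stop offering the failed site to end-users.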

The downside to an active-standby model is that significant resources need to be spent to create and maintain standby infrastructure that will only be used during disaster scenarios. The applications need to be kept up to date along with the network infrastructure. The components need to be tested regularly to ensure that they will work. When the real event arrives, the backup solution needs to become active, quickly and reliably.

Active-active: Use what you pay for

If the infrastructure is built and maintained, does it make sense to keep it in a standby mode? Redundancy and availability can be achieved as a backup solution while being active at the same time. How can a solution be a backup and also active concurrently? Consider a brick-and-mortar business with a successful grocery store that decides to add an identical store in a new location. If something happens to one of the stores, the business can easily direct all of its customers to the other, as long as each has the capacity to support everyone.

We can build network infrastructures that follow a similar model. As long as the data is current and accessible by both facilities, then we can distribute the load across both datacenters. If one of the datacenters has an outage, then the dynamic networking and application delivery technologies can automatically detect and divert end-users to the available infrastructure.

For full business continuity, it is necessary for each facility to have the ability to support 100 percent of the load. When one facility goes down, the other must absorb the additional load without affecting end-user productivity. This can become cumbersome and expensive since the business needs to build out the network and application infrastructure to 200 percent capacity in this model.

Active-active-active: Cloud to the Rescue

We can reduce the overall investment needed by increasing the number of redundant facilities a business builds to support their IT environment. With two facilities, if one fails, the remaining facility must provide 100 percent capacity. If we increase the number of facilities to three, then the two remaining active facilities must only provide 50 percent capacity each. All facilities being equal, this means one only needs to invest in 50 percent capacity × 3 sites, for a total of 150 percent capacity. This is a 25 percent savings in capital expenditure and a probable significant operational savings as well.
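The capacity arithmetic above generalizes to any number of sites: when any single facility can fail, each survivor must carry an equal share of the full load. A quick sketch of that calculation (the function names here are illustrative, not from any product):

```python
def capacity_per_site(n_sites):
    """Capacity each site needs, as a percent of total load, so that
    the surviving n_sites - 1 facilities can absorb a single failure."""
    return 100.0 / (n_sites - 1)

def total_capacity(n_sites):
    """Total capacity the business must build out across all sites."""
    return n_sites * capacity_per_site(n_sites)

for n in (2, 3, 4):
    print(n, "sites:", capacity_per_site(n), "% each,",
          total_capacity(n), "% total")
# Two sites require 100% each (200% total); three sites require only
# 50% each (150% total) -- the 25% savings described above.
```

Adding sites keeps shrinking the overbuild, but with diminishing returns, and each additional facility adds its own operational overhead, which is why a third "site" in the cloud is an attractive stopping point.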

Backup solutions and disaster recovery scenarios improve with diversity. Diversity is important to prevent any single flaw from affecting multiple facilities. We can leverage public and private cloud technologies to create scalable instances of our application and data delivery infrastructures easily and quickly. The cloud becomes our third active instance, as long as there is consistency with the end-user experience and usability.

Dynamic network and application delivery technologies within the private and cloud infrastructures are key to a successful backup solution. They are necessary for seamless business continuity no matter what disaster impacts the IT architecture.
