Runbook Template

We’ve prepared this complimentary runbook template to help ensure your organization is prepared before the next service disruption.

The Disaster Recovery Imperative

Nearly all organizations today rely on information technology and the data it manages to operate. Keeping computers and networks running, and data accessible, is imperative. Without this information technology, customers cannot be serviced, orders taken, transactions completed, patients treated, and on and on.

Disasters that create IT downtime are numerous and common, spanning the physical and logical, the man-made and natural. Organizations must be resilient to these disasters and able to operate through a disruption of any type, whether it is a security incident, human error, device failure, or power failure.

State of Preparedness

Most organizations know the importance of disaster recovery, and firms of all sizes are investing to drive greater uptime. An IDC study on business continuity and disaster recovery (DR) showed that unplanned events of most concern were power, telecom, and data center failures (physical infrastructure) – more so than natural events such as fire or weather. Security was considered the second most critical and extreme threat to business resiliency.

Seventy-one percent of those surveyed had as many as 10 hours of unplanned downtime over a 12-month period. This underscores the importance of greater uptime and DR, which is driving firms to conduct DR tests more frequently. Approximately one in four firms are conducting DR testing quarterly or monthly, while another 45% are testing semi-annually or annually.

This is a marked increase from IDC research conducted three years earlier, when firms were testing annually at best. However, 25% of firms are still not doing any DR testing.

Advice

By Laura DuBois, Program Vice President, Storage, IDC:

DR planning is complex and spans three key areas: technology, people, and process. From an IT perspective, planning starts with a business impact analysis (BIA) by application/workload. Natural tiers or stages of DR begin at phase 1 – infrastructure (networking, AD, DHCP, etc.) – then extend to recovery by application tiers. Each application tier should have an established recovery time objective (RTO) and recovery point objective (RPO) based on business risk.
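As an illustration of how these objectives can be recorded, the sketch below captures application tiers with their RTO and RPO values. The tier names, workloads, and durations are hypothetical placeholders, not recommendations:

    from dataclasses import dataclass
    from datetime import timedelta

    @dataclass
    class ApplicationTier:
        """One recovery tier from the business impact analysis (BIA)."""
        name: str
        examples: list          # workloads in this tier (hypothetical)
        rto: timedelta          # recovery time objective
        rpo: timedelta          # recovery point objective

    # Hypothetical tiers and objectives -- replace with the output of your own BIA.
    TIERS = [
        ApplicationTier("Phase 1 - Infrastructure", ["networking", "AD", "DHCP", "DNS"],
                        rto=timedelta(hours=1), rpo=timedelta(minutes=15)),
        ApplicationTier("Tier 1 - Revenue critical", ["order entry", "payments"],
                        rto=timedelta(hours=4), rpo=timedelta(hours=1)),
        ApplicationTier("Tier 2 - Internal", ["reporting", "intranet"],
                        rto=timedelta(hours=24), rpo=timedelta(hours=12)),
    ]

    # Tiers are recovered in order, infrastructure first.
    for tier in TIERS:
        print(f"{tier.name}: RTO {tier.rto}, RPO {tier.rpo}")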

DR testing is essential not only to adequate recovery of systems and data, but also to uncovering events or conditions encountered during real disaster scenarios that were not previously accounted for. Examples include change management, such as the needed reconfiguration of applications or systems. Recovering systems in the right sequence is also important. To ensure that DR testing, planning, and recovery are organized and effective, many organizations use a disaster recovery “runbook.”

A DR runbook is a working document, unique to every organization, which outlines the necessary steps to recover from a disaster or service interruption. It provides an instruction set for personnel in the event of a disaster, including both infrastructure and process information. Runbooks, or updates to runbooks, are the outputs of every DR test.

However, a runbook is only useful if it is up-to-date. If documented properly, it can take the confusion and uncertainty out of the recovery environment, which, during an actual disaster, is often in a state of panic. Using the runbook template provided here by Evolve IP can make the difference for an organization between two extremes: being prepared for an unexpected event and efficiently recovering, or never recovering at all.


Your Runbook Template

A disaster recovery runbook is a working document unique to every organization that outlines the necessary steps to recover from a disaster or service interruption.

Runbooks should be updated as part of your organization’s change management practice. For instance, once a production change has been committed, runbook restoration instructions should be reviewed for accuracy. In addition to synchronizing runbooks with corporate change management, the outcomes and action plans of each DR test should also be incorporated into runbook update cycles.

How to Use this Template

This template outlines the critical components of your disaster recovery and business continuity practices. Disaster recovery tests should be regularly conducted, reviewed, and plans subsequently updated.

Use this template as a guide for documenting your disaster recovery test efforts. It includes sections to specify contact information, roles and responsibilities, disaster scenarios likely to affect your business, and recovery priorities for your IT assets.

Keep in mind that your runbook may need additional sections depending on your deployment model; this template serves as a baseline whose sections are applicable (and necessary) to any disaster recovery testing procedure. Similarly, your runbook may look different if you are working with a managed service provider that handles most or all aspects of your disaster recovery tests.

This template supports a “do-it-yourself” approach to disaster recovery and business continuity. Evolve IP provides technology and services that support this approach. Additionally, Evolve IP offers Managed Disaster Recovery as a Service (DRaaS) which enables clients to outsource the setup, configuration, testing and management of their Evolve IP DRaaS solution. In addition to ongoing monitoring and scheduled testing, recovery engineers are available 24x7x365 to manage the failover process when a disaster is declared.

To learn more about Managed DRaaS, view our Data Sheet or contact us for more information.

Recovery Scenarios

Though not part of the runbook itself, we’re providing this section to list some common events that would cause DR scenarios. These threats are general and could affect any business, so you might also want to list those which would threaten your business specifically.

Evolve IP annually surveys IT professionals and C-level executives. The 2018 Disaster Recovery Technologies Survey included the feedback of 1,000+ IT professionals and C-level executives who shared how their company is handling disaster recovery, and what concerns and issues they have.

One of the subject areas included in the study is the cause of outages. The 2018 findings show that your business should not just be prepared for the news-making types of disaster threats (hurricanes or tornadoes, for example). Instead, consider all of these potential causes of disaster:

[Survey results chart: causes of outages]
Source: https://www.evolveip.net/resources-library/2018-draas-survey

It is wise to also list disaster scenarios that are unique to, or are more likely to affect, your business. For each possibility, include details on the scenario, methods for data restoration on the part of the provider and your company, and procedures by which DR events will be initiated.

For example:

Scenario #1:

List your first disaster scenario or business continuity threat here. Examples might include significant loss of hardware, a power outage of significant length, an infrastructure outage, disk corruption, or loss of most or all systems due to unavoidable natural disaster. Identify and address those disaster scenarios that are most relevant and likely to affect your business.

For each scenario, include:

  • Overview of the associated scenario and systems most likely to be affected by the threat
  • Time frame of potential outages, based on the likely elements of the specific scenario
  • Systems that may be brought up locally via on-premise failover equipment or premise-based cloud enablement technology
  • Procedures for initiation of system failover to external data centers
  • Priority schedule for system restoration
  • Procedures for contacting your hosting provider (if applicable) to initiate critical support

Continue listing disaster scenarios with all important details. Do not feel limited to only a few disaster recovery scenarios; list all those that could realistically impact your business along with the associated recovery procedures. The table below may be an effective tool for listing your potential DR scenarios:

Event | Plan of Action | Owner
Power failure | Enact affected system runbook plans | Application business owner
Data center failure | Enact total failover plan | Disaster Recovery Coordinator (DRC)
Pending weather event (winter storm, hurricane, etc.) | Review all DR plans, notify DRC, put key employees on standby | Disaster Recovery Coordinator (DRC), Business Owner
 | | 
 | | 
Distribution List

This section is also critical to the development of your runbook. You must keep a clearly defined distribution list for the runbook, ensuring that all key stakeholders have access to the document. Use the chart below to indicate the stakeholders to whom this runbook will be distributed.

Role | Name | Email | Phone
Owner | | | 
Approver | | | 
Auditor | | | 
Contributor (Technical) | | | 
Contributor (DBA) | | | 
Contributor (Network) | | | 
Contributor (Vendor) | | | 
Location

Specify the location(s) where this document may be found in electronic and/or hard copy. You may wish to include it on your company’s shared drive or portal.

If located on a shared drive or company portal, consider providing a link here so the most recent version is readily accessible.

If this runbook is also stored as a hard copy in one or multiple locations, list those locations here (along with who has access to them). We recommend making your runbook available outside of shared networks, as the document must be readily accessible at the time of a disaster, when primary systems like email may not be available to employees. In other words, ensure your runbook is accessible under any circumstances!

Document Control

Document creation and edit records should be maintained by your company’s disaster recovery coordinator (DRC) or business continuity manager (BCM). If your organization does not have a DRC, consider creating that role to manage all future disaster recovery activities.

Document Name | Disaster Recovery Runbook for [Your Company’s Name Here]
Version | 
Date Created | 
Date Last Modified | 
Last Modified By | 

Keep the most up-to-date information on your disaster recovery plan in this section, including the most recent dates your plan was accessed, used and modified. Keep a running log, with as many lines as necessary, on document changes and document reviews, as well.

Document Change History
Version | Date | Description | Approval
v1.0 | 01/20/2018 | Initial version | Business Owner / DRC
v1.1 | 07/06/2018 | End-of-year DR test action plan updates to runbook | Test Manager / DRC
 | | | 
 | | | 
 | | | 

Contact Information

This section will list your service provider’s contacts (if applicable) along with those from your IT department. This is the team that will conduct ongoing disaster recovery operations and respond in the case of a true emergency. Specific roles listed below are examples of those that might comprise your team.

All of these roles need to be in communication when in a disaster recovery mode of operation. For pending events, this same distribution list should be used to provide advance notice of potential incidents. Customer support teams should also not be overlooked, as they are the first line of communication to your customer base. Forgetting this step will create extra work for your primary recovery team as they take time to explain what is going on.

Your Company's Contacts | Title | Phone | Email
Name | Disaster Recovery Coordinator | Primary Phone / Secondary Phone | Email
Name | Chief Information Officer | Primary Phone / Secondary Phone | Email
Name | Network Systems Administrator | Primary Phone / Secondary Phone | Email
Name | Database Systems Administrator | Primary Phone / Secondary Phone | Email
Name | Chief Security Officer | Primary Phone / Secondary Phone | Email
Name | Chief Technology Officer | Primary Phone / Secondary Phone | Email
Name | Business Owner | Primary Phone / Secondary Phone | Email
Name | Application Development Lead (as applicable) | Primary Phone / Secondary Phone | Email
Name | Data Center Manager | Primary Phone / Secondary Phone | Email
Name | Customer Support Manager | Primary Phone / Secondary Phone | Email
Name | Call Center Manager | Primary Phone / Secondary Phone | Email

Service Provider Contacts (if applicable) | Role | Phone | Email
Name | Disaster Recovery Coordinator | Primary Phone / Secondary Phone | Email
Name | Customer Service | Primary Phone / Secondary Phone | Email
Name | Emergency Support | Primary Phone / Secondary Phone | Email
Name | Sr. System Engineer | Primary Phone / Secondary Phone | Email
Name | Director – Service Delivery | Primary Phone / Secondary Phone | Email

If you are working with a service provider, some of these positions might alternatively be filled by an account manager or test manager.

Data Center Access Control List

Maintain an up-to-date access control list (ACL) specifying who, in both your company and your IT service provider (if applicable), has access to your data center and resources therein.

Also specify which individuals can introduce guests to the data center. This will be useful for determining, in the event of an emergency scenario, who may be designated a point person for facilitating access to critical infrastructure. During a recovery event your primary operations team is going to be busy recovering systems, so be sure you know whom to contact and how to gain access to your data center.

Examples are provided in the table below. Remove, replace and add individuals to this list as appropriate for your organization and infrastructure.

Name | Role | Contact Info | Access Level
Name | Chief Technology Officer | Phone / Email | General access. Can authorize guest access.
Name | Director of Service Delivery | Phone / Email | General access. Can authorize guest access.
Name | Service Delivery Engineer | Phone / Email | Server room, cage/cabinet, and NOC access. Cannot authorize guest access.
Name | Systems Engineer | Phone / Email | Server room access. Cannot authorize guest access.
Name | Network Engineer | Phone / Email | Server room access. Cannot authorize guest access.
Name | Chief Security Officer | Phone / Email | General access. Can authorize guest access.
Name | Chief Information Officer | Phone / Email | General access. Can authorize guest access.

Communication Structure of Plan


During any disaster event there should be a defined call tree specifying the exact roles and procedures for each member of your IT organization to communicate with key stakeholders (both inside and outside of your company). When defining the call structure, limit your tree and branches to a 1:10 ratio of caller to call recipient.

As a first step, for example, your Disaster Recovery Coordinator might call both the company CEO and head of operations, both of whom would then inform the appropriate contacts in their teams along with key customers, service providers, and other stakeholders responsible for correcting the service outage and restoring data and operations.

For the situation described above, an example call tree and general progression of calls might be as follows:

  • Disaster Recovery Coordinator
    • Head of Operations
      • Director of Service Delivery
        • Sr. Systems Engineer
        • Network Engineer
        • Systems Administrator
    • CEO
      • Director of Business Development
        • Sales Contact
        • PR Representative
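The call tree can also be kept in a structured form so the 1:10 fan-out limit is easy to check. A minimal sketch using the example roles above; all names and the tree shape are placeholders:

    # Hypothetical call tree: each caller notifies the people listed for them.
    CALL_TREE = {
        "Disaster Recovery Coordinator": ["Head of Operations", "CEO"],
        "Head of Operations": ["Director of Service Delivery"],
        "Director of Service Delivery": ["Sr. Systems Engineer", "Network Engineer",
                                         "Systems Administrator"],
        "CEO": ["Director of Business Development"],
        "Director of Business Development": ["Sales Contact", "PR Representative"],
    }

    MAX_FANOUT = 10  # keep the 1:10 ratio of caller to call recipients

    def validate(tree):
        """Raise an error if any caller exceeds the fan-out limit."""
        for caller, recipients in tree.items():
            if len(recipients) > MAX_FANOUT:
                raise ValueError(f"{caller} calls {len(recipients)}; limit is {MAX_FANOUT}")

    def call_order(tree, root):
        """Breadth-first order in which contacts are notified, starting at the root."""
        order, queue = [], [root]
        while queue:
            person = queue.pop(0)
            order.append(person)
            queue.extend(tree.get(person, []))
        return order

    validate(CALL_TREE)
    print(call_order(CALL_TREE, "Disaster Recovery Coordinator"))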

Declaration Guidelines

As you create your runbook, you must consider guidelines for declaring a disaster scenario. Guidelines that we recommend are specified in the chart below:

Situation | Action | Owner
No workaround can be implemented quickly enough to avoid affecting customer SLAs | Declare application-level failover and enact failover to secondary site | 
Restoration procedures cannot be completed in your production environment | Declare application-level failover and enact failover to secondary site | 
A production environment no longer exists or cannot be accessed | Declare a data center failure and enact a total failover plan from primary to secondary data center | 
Service provider issues cannot be resolved | Notify service provider and have them enact DR plans | 

Monitoring technology can be incorporated into the declaration steps of a DR plan. Do not declare on the first instance of an event unless it is clearly understood that further instances will cause increased damage to your customers or business systems. The table below details some standard practices for mitigating premature declarations. SLAs should be structured to allow for some troubleshooting and system restoration before a disaster needs to be declared.

Also use this section to outline standard monitoring procedures along with associated thresholds. List all system monitors, what they do, their associated thresholds, associated alerts when those thresholds are met or exceeded, the individual(s) who receive the alerts, and the remediation steps for each monitor.

List event monitoring standards by defining thresholds for event types, durations, corrective actions to be taken once the threshold is met, and event criticality level. Use the following chart (or a derivative thereof for your monitoring standards) to specify event monitoring standards.

The first few rows have been filled in with examples:

Event Type | Duration of Event | Corrective Action | Event Criticality
Performance monitoring status = warning alert level | > 2 minutes | Isolate problem device / recycle device | Critical Level
Memory usage > 80% | > 5 minutes | Isolate physical device / virtual machine; configure memory pool increase; clear memory cache; clear memory buffer | Critical Level
CPU usage > 90% | > 3 minutes | Increase compute allocation (virtual); add additional compute resources into application pool | Critical Level
Memory | > 15 minutes | Check memory queue; clear memory cache of affected system; increase memory allocation (virtual) | 
Storage | | | 
Network | | | 
Ping check | | | 
IP check | | | 

These event types (memory, storage, network, ping check and IP check) are categories of events for which you should list specific examples in this chart.
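As a rough illustration of how a duration-qualified threshold such as "memory usage > 80% for more than 5 minutes" can be evaluated, the sketch below polls a placeholder metric source. The get_memory_usage function, polling interval, and thresholds are hypothetical stand-ins for whatever your monitoring tooling actually provides:

    import random
    import time

    def get_memory_usage():
        """Placeholder for your monitoring agent; returns memory usage as a percentage."""
        return random.uniform(70, 95)

    THRESHOLD_PCT = 80.0      # threshold from the chart above
    DURATION_SECS = 5 * 60    # alert only if the condition persists for 5 minutes
    POLL_SECS = 30            # sampling interval (hypothetical)

    def watch_memory():
        breach_started = None
        while True:
            usage = get_memory_usage()
            if usage > THRESHOLD_PCT:
                breach_started = breach_started or time.monotonic()
                if time.monotonic() - breach_started >= DURATION_SECS:
                    print(f"CRITICAL: memory at {usage:.0f}% has exceeded "
                          f"{THRESHOLD_PCT}% for 5+ minutes -- begin corrective action")
                    break
            else:
                breach_started = None  # condition cleared; reset the timer
            time.sleep(POLL_SECS)

    # watch_memory()  # run under your monitoring scheduler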

Alert Response Procedures

List your step-by-step procedures for responding to service issue alerts in this section. As an example, Evolve IP’s ticket submission and response procedures follow this general outline (a scripted sketch of the sequence follows the list):

Service interruption identified > Service Delivery Manager contacted

  1. Ticket is opened with support team (either in-house or third party provider’s ticket creation system).
  2. Contact key stakeholders to ensure they are aware of the alert and determine if any current activity or recent changes may be responsible for the service interruption.
  3. Verify that alert is legitimate and not an isolated single user issue or monitoring time out.
  4. Notify end users of ticket creation.
  5. Contact the appropriate member(s) of your operations or engineering teams to notify them of the alert and assign investigation and data restoration procedures.
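A minimal sketch of the five-step sequence above, expressed as a script. Every helper function (open_ticket, notify_stakeholders, and so on) is a hypothetical placeholder for your actual ticketing and notification systems:

    def respond_to_alert(alert):
        """Walk the five response steps in order; every helper below is a placeholder."""
        ticket_id = open_ticket(alert)                      # 1. open a support ticket
        notify_stakeholders(alert, ticket_id)               # 2. check for recent changes
        if not verify_alert(alert):                         # 3. rule out false positives
            close_ticket(ticket_id, "isolated user issue or monitoring timeout")
            return
        notify_end_users(ticket_id)                         # 4. tell end users a ticket exists
        assign_investigation(ticket_id, team="operations")  # 5. hand off to ops/engineering

    # Placeholder implementations so the sketch runs end to end.
    def open_ticket(alert): print(f"Ticket opened: {alert['summary']}"); return "T-1001"
    def notify_stakeholders(alert, tid): print(f"Stakeholders notified about {tid}")
    def verify_alert(alert): return alert.get("confirmed", True)
    def close_ticket(tid, reason): print(f"{tid} closed: {reason}")
    def notify_end_users(tid): print(f"End users notified of {tid}")
    def assign_investigation(tid, team): print(f"{tid} assigned to {team}")

    respond_to_alert({"summary": "web tier unreachable", "confirmed": True})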

Issue Management and Escalation

This section should list detailed procedures for issue management and escalation, when necessary, in the case of an unmet service objective.

Escalation procedures will vary by levels of operation and severity of the associated activities. At Evolve IP, for example, we categorize standard operating procedure interruptions into five levels (5 being the lowest severity, 1 the highest). Of course, these can and will differ among organizations. The following serves only as an example:

  1. Fatal – Functionality has ceased completely with no known workaround for all users. Impact is highest.
  2. Critical – Functionality is critically impaired but still operational for some users. Impact is high.
  3. Serious – Functionality is impaired but workarounds still exist for all users. Impact is moderate.
  4. Minor – Some functionality is impaired but there is a reasonable workaround for some users. Impact is low.
  5. Request – This is an enhancement-related service request that does not at all impact current operations or functionality.

Depending on the severity of the service interruption, your escalation procedures will vary by parties involved, response chain, response time and target resolution.
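If you automate alert routing, the severity scale and its escalation targets can be encoded directly. The notification targets and response times below are hypothetical examples, not recommendations:

    from enum import IntEnum

    class Severity(IntEnum):
        """Example scale: 1 is the highest impact, 5 the lowest."""
        FATAL = 1      # functionality ceased for all users, no workaround
        CRITICAL = 2   # critically impaired, still operational for some users
        SERIOUS = 3    # impaired, workarounds exist for all users
        MINOR = 4      # some impairment, reasonable workaround for some users
        REQUEST = 5    # enhancement request, no operational impact

    # Hypothetical escalation targets -- substitute your own response chain and times.
    ESCALATION = {
        Severity.FATAL:    {"notify": "DRC and CIO", "response": "15 minutes"},
        Severity.CRITICAL: {"notify": "DRC", "response": "30 minutes"},
        Severity.SERIOUS:  {"notify": "Service Delivery Manager", "response": "2 hours"},
        Severity.MINOR:    {"notify": "Support queue", "response": "next business day"},
        Severity.REQUEST:  {"notify": "Support queue", "response": "as scheduled"},
    }

    print(ESCALATION[Severity.CRITICAL])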

Changes to SOP During Recovery

Recovery events require that restoring data and business processes take priority. At times, other non-critical standard operating procedures (SOPs) must be suspended.

During a recovery event, recovery operations should take precedence over inbound queries or tickets. Monitors and alerts should also be reviewed for suspension until recovery is complete. This is a best practice to avoid flooding your network operations center (NOC) and support teams with spurious alarms.

Change management policies should also be altered to expedite recovery procedures. For example, adding a new server or firewall rule in a standard environment might take a day once all necessary reviews and approvals are complete; during recovery operations, that same firewall change should be expedited to support recovery.

Ticketing of work during recovery operations should be reviewed to ensure the necessity of any requested tasks. Non-critical tickets should be deferred and addressed once recovery procedures are complete.

Remember, the number one rule in recovery is: Recover! Get things back up and running whether in a workaround, failover or full restore state.

With that in mind, use this section to identify which standard operating procedures will be suspended in the event of a true emergency scenario (one that would fall under your critical or fatal service interruption classifications). List specifications for change management, monitors and alerts, and problem and issue resolution during recovery procedures. Certain non-critical standard operating procedures may be suspended, as in the following situation:

A user submits a call/ticket to your service desk stating they cannot access the company website. The ticket would receive a response stating that your organization is currently in a recovery operations cycle and that the ticket will be addressed as soon as technicians have completed the restoration work.

System Level Procedures

Your runbook content, up to this point, has addressed organizational points of concern. At this stage in your runbook you should have fully documented procedures in your company for issue management and escalation, criteria for evaluating and declaring an emergency scenario, and procedures for ensuring all key stakeholders and responsible parties are in communication and are ready and able to take the necessary steps to begin disaster recovery procedures.

From this point forward, the runbook will shift focus to system level procedures to address infrastructure and network level configurations, restoration steps, and system level responsibilities while in disaster recovery mode.

Infrastructure Overview

Provide a detailed overview of your IT environment in this section, including the location(s) of all data center(s), nature of use of those facilities (e.g. colocation, tape storage, cloud hosting), security features of your infrastructure and the hosting facilities, and procedures for access to those facilities.

Network Diagram

Specify the location of all facilities in which your company’s data is stored. Include an address and directions to each location.

Example of a Network Diagram:

Your network and data center diagrams need to be detailed enough to give a backup recovery team member the information necessary to perform his or her responsibilities if called upon.

Access to Facilities

Data centers and colocation facilities typically maintain strict entry protocol. Certain members of your organization will typically hold the appropriate credentials to enter the facility. Detail members of your team (and/or your IT service provider’s team) who have access to all data facilities along with any requirements for access.

Order of Restoration

This section will include instructions for recovery personnel that lay out which infrastructure components to restore and in which order. It should take into account application dependencies, authentication, middleware, database, and third-party elements, and should list restoration items by system or application type.

Ensure that this order of restoration is understood before engaging in restore work. An example is provided below. The rest of the table should be filled out in the exact order that restoration procedures are to be completed.

Order of Restoration Table:
Order | System / Application | Dependencies | Restoration Notes
1 | Core infrastructure (networking, AD, DHCP, DNS) | None | Restore before all application tiers
2 | | | 
3 | | | 
4 | | | 
5 | | | 
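Because restoration order is driven by dependencies (infrastructure before applications, databases before the applications that use them), some teams derive the sequence programmatically. A minimal sketch using Python's standard topological sort; the system names and dependencies are hypothetical:

    from graphlib import TopologicalSorter  # Python 3.9+

    # Hypothetical systems: each key lists what must be restored before it.
    DEPENDENCIES = {
        "Active Directory / DNS / DHCP": [],
        "Core network": [],
        "Database cluster": ["Core network", "Active Directory / DNS / DHCP"],
        "Middleware": ["Database cluster"],
        "Payroll application": ["Middleware"],
        "Public web site": ["Core network"],
    }

    # static_order() yields a restoration sequence that respects every dependency.
    for position, system in enumerate(TopologicalSorter(DEPENDENCIES).static_order(), start=1):
        print(f"{position}. {system}")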

System Configuration

This section should include system- and application-specific topology diagrams and an inventory of the elements that comprise your overall system. Include networking, web application middleware, database, and storage elements, along with third-party systems that connect to and share data with this system.

You should lay out each of your systems separately and include a table for your network, server layout and storage layout.

Network Table:
Device Type | Name | Primary | OS Level | Gateway | Subnet Mask
Firewall | | | | | 
Load Balancer | | | | | 
Switch | | | | | 
Router | | | | | 
 | | | | | 
 | | | | | 
Server Table:
Server Name / Priority | OS | Patch | IP Address | Subnet | Gateway | DNS | Alternate DNS | Secondary IPs | Production MAC Address
 | | | | | | | | | 
 | | | | | | | | | 
 | | | | | | | | | 
 | | | | | | | | | 
 | | | | | | | | | 
 | | | | | | | | | 
Storage Table:
Name | LUN | Address | RAID Configuration | Host Name
 | | | | 
 | | | | 
 | | | | 
 | | | | 
 | | | | 
 | | | | 

Backup Configuration

Use this section to list instructions specifying the servers, directories, and files from (and to) which backup procedures will be run. The backup target should be the location of your last known good copy of production data.
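If it helps to keep this inventory in machine-readable form alongside the table below, here is a minimal sketch; the server names, software, paths, and cycles are hypothetical placeholders:

    from dataclasses import dataclass

    @dataclass
    class BackupJob:
        """One row of the backup configuration table."""
        server: str
        software: str
        version: str
        cycle: str    # e.g. "daily 02:00", "hourly"
        source: str   # directories/volumes being backed up
        target: str   # where the last known good copy lands

    # Hypothetical entries -- replace with your own inventory.
    BACKUP_JOBS = [
        BackupJob("db01", "ExampleBackupTool", "9.2", "daily 02:00",
                  source="/var/lib/db", target="backup-site-b:/vol/db01"),
        BackupJob("file01", "ExampleBackupTool", "9.2", "hourly",
                  source="/srv/shares", target="backup-site-b:/vol/file01"),
    ]

    for job in BACKUP_JOBS:
        print(f"{job.server}: {job.source} -> {job.target} ({job.cycle})")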

Server | Software | Version | Backup Cycle | Backup Source | Backup Target
 | | | | | 
 | | | | | 
 | | | | | 
 | | | | | 
 | | | | | 
 | | | | | 
 | | | | | 
 | | | | | 

Monitors

Listed by server, be sure that these monitors are put in place and activated as part of your restore activities. Restoring from a disaster should result in a mirror of your production environment (even if scaled). Monitors and alerts are a critical element of your production system.

Server Name | Monitor | Cycle | Alert
 | | | 
 | | | 
 | | | 
 | | | 
 | | | 
 | | | 
 | | | 
 | | | 

Roles and Responsibilities

Service Delivery Responsibility Assignment Matrix

Table Key:

Code | Description
R | Responsible Party: Those who do the work to achieve the task
A | Accountable Party: The party ultimately answerable for the correct and thorough completion of the deliverable or task, and the one who delegates the work to the responsible party
C | Consulted Party: Those whose opinions are sought, typically subject matter experts, and with whom there is two-way communication
I | Informed Party: Those who are kept up to date on progress, often only on completion of the task or deliverable, and with whom there is one-way communication

This matrix describes the participation by various roles to complete DR tasks or deliverables. It clarifies roles and responsibilities for IT stakeholders in your organization as well as any service providers involved with your business’ disaster recovery program. Fill in the matrix below, specifying the roles for your company, your service provider (if applicable) and any other 3rd parties that will be involved in your disaster recovery tests.

Positions that will fill these roles and responsibilities will often include your DR coordinator, network engineer, database engineer, systems engineer, application owner, data center service coordinator, and your service provider. Identify the responsibilities of each of these roles in a disaster event, then map them onto a matrix of all activities associated with recovery procedures, as in the example table provided below.

Activity | R | A | C | I
Maintain situational management of recovery events | DRC | DRC | DRC | All
React to server outage alerts | | | | 
React to file system alerts | | | | 
React to host outage alerts | | | | 
React to network outage alerts | | | | 
Document technical landscape | | | | 
Configure network for system access | | | | 
Configure VPN and acceleration between your business and service provider network (if applicable) | | | | 
Maintain DNS or host file | | | | 
Monitor service provider network availability (if applicable) | | | | 
Diagnose service provider network errors (if applicable) | | | | 
Create named users at OS level | | | | 
Create domain users | | | | 
Manage OS privileges | | | | 
Create virtual machines | | | | 
Convert physical servers to virtual servers | | | | 
Install base operating system | | | | 
Configure operating system | | | | 
Configure OS disks | | | | 
Diagnose OS errors | | | | 
Start/stop the virtual machine | | | | 
Windows OS licensing (or your operating system) | | | | 
Security hardening of the OS | | | | 
Daily server-level backup | | | | 
Patch management for Windows servers (or your operating system) | | | | 
Provide a project manager | | | | 
Provide a key technical contact for OS, network, and SAN | | | | 
Coordinate deployment schedule | | | | 
Support, management, and update of Protection Software | | | | 
Install, support, management, and update of Terminal Server | | | | 
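If you also want the matrix in machine-readable form (for example, to look up the accountable party during an event), here is a minimal sketch; the assignments shown are hypothetical apart from the first row above:

    # Hypothetical RACI assignments for a few of the activities listed above.
    RACI = {
        "Maintain situational management of recovery events":
            {"R": "DRC", "A": "DRC", "C": "DRC", "I": "All"},
        "React to server outage alerts":
            {"R": "Systems Engineer", "A": "DRC", "C": "Service Provider", "I": "Business Owner"},
        "Maintain DNS or host file":
            {"R": "Network Engineer", "A": "DRC", "C": "Service Provider", "I": "All"},
    }

    def accountable_for(activity):
        """Return the single accountable (A) party for an activity."""
        return RACI[activity]["A"]

    print(accountable_for("React to server outage alerts"))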

Data Restoration Processes

Use this section to outline the steps necessary to respond to outage alerts and, subsequently, restore data from backup records. Include your order of backup operations, including data dependencies (based on the organization of your data backups) and troubleshooting steps.

These processes will be followed whenever data recovery is necessary, including scenarios in which systems are still running but a restore is needed, data must be restored after a disaster event, or data must be restored from a backup volume.

In this section you should identify the order of operations for a data restore, the location of your backup, and step-by-step procedures to re-establish your data volumes into your production environment.

Restoration Procedures

Though your order of operations should stay relatively consistent, list steps taken for each and every backup system. For example:

Payroll system backup:

  • System “XYZ” – Payroll
  • Start Db server – vm2345-qa1
  • Start Application server – vm354-r1
  • Start web server – vm6_ws4
  • Terminal server to Ws1_Vf1_Payroll (This is only an example of what procedures for one system restoration may look like. For each of your actual systems, similarly list step-by-step instructions for full system backup.)
  • Login to backup archive – url: backup.archive.payroll
    • Create temp target folder for backup file
    • Login: user1
    • Password: 1resu
  • Navigate to most recent backup file
  • Select file
  • Select restore target Ws1_Vf1_PayrollProd1
  • Initiate restore
    • Select overwrite options
  • Confirm dialog box warning “Are you sure?”
  • Complete restore backup file
  • Login to Ws1_Vf1_PayrollProd1
  • Start Payroll App local\temp\dirs\payrollprod1.exe
  • Navigate via Explorer to temp backup folder
  • Select file
  • Open payrollprod application console
  • Select data source > temp\backup\payrollWs1bckup
  • Import
  • Validate through report test 1 run

Use the rest of this section to similarly list restoration procedures for each of your backup systems.
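If you later script any of these procedures, a simple step runner keeps the order explicit and stops on the first failure. The sketch below mirrors a few of the hypothetical payroll steps above; each lambda is a placeholder for the real command or API call in your environment:

    # Each step pairs a description with a callable; the lambdas below stand in
    # for the real start/restore commands in your environment.
    PAYROLL_RESTORE_STEPS = [
        ("Start DB server vm2345-qa1",        lambda: print("db server started")),
        ("Start application server vm354-r1", lambda: print("app server started")),
        ("Start web server vm6_ws4",          lambda: print("web server started")),
        ("Restore latest backup to Ws1_Vf1_PayrollProd1",
                                              lambda: print("backup restored")),
        ("Validate with report test 1 run",   lambda: print("validation report OK")),
    ]

    def run_restore(steps):
        """Run steps in order and stop immediately if one fails."""
        for number, (description, action) in enumerate(steps, start=1):
            print(f"Step {number}: {description}")
            try:
                action()
            except Exception as exc:
                print(f"Step {number} failed: {exc} -- halt and escalate per the runbook")
                raise

    run_restore(PAYROLL_RESTORE_STEPS)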