Top 10 Things You Want in Your Backup Contract (Part 4)
This post's fourth point, "Automatic Off-Siting of Data," builds directly on the previous blog, "Rate of Change."
Disclaimer: Take these as basic starting templates and get local legal advice, as local jurisdictions may require specific changes.
What do we mean by Automatic Off-Siting of Data? Hopefully you are no longer manually taking data off site for a customer as part of your managed service, but have moved to the automatic push of data to a disaster recovery site. This process is a critical step in ensuring that your customer's data is off site when a disaster occurs, and it is very dependent upon the customer's bandwidth.
But I don't have control of the bandwidth! This is exactly why you want this next section in your contract. It goes hand in hand with the Rate of Change clause. The amount of bandwidth available for replicating data off site is critical to getting the data transmitted before the next backup cycle occurs. As a rule of thumb, you can transmit about 10GB per day for every 1 Mb/s of 100 percent dedicated bandwidth.
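If you want to sanity-check that rule of thumb, the arithmetic is simple enough to put in a few lines of code. The sketch below is my own illustration, not part of the contract language, and the function name is hypothetical:

```python
# Rough illustration of the rule of thumb above: 1 Mb/s of fully
# dedicated bandwidth moves roughly 10 GB in a 24-hour day.

def gb_per_day(dedicated_mbps: float) -> float:
    """Approximate GB that can be replicated off site in 24 hours."""
    seconds_per_day = 24 * 60 * 60            # 86,400 seconds
    megabits = dedicated_mbps * seconds_per_day
    return megabits / 8 / 1000                # 8 bits per byte, 1,000 MB per GB

print(gb_per_day(1))   # ~10.8 GB/day on 1 Mb/s
print(gb_per_day(4))   # ~43.2 GB/day on 4 Mb/s
```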
Therein lies the rub: You will rarely have the bandwidth dedicated to you. (Remember: you are concerned about the speed UP, not DOWN.) So you need to include the following text in your contract:
During the hours of 5 PM to 8 AM every day, the minimum available bandwidth provided to ABC VAR from client shall be 4 Mb/s.
During the hours of 8 AM to 5 PM every day, the minimum available bandwidth provided to ABC VAR from client shall be 2 Mb/s.
You can adjust the hours and the amounts, but don't get carried away with weekends. This section requires the customer to provide you with enough bandwidth to get the average rate of change of data off site. If you run into issues with a customer, and you see that your data is getting queued up and not getting off site in time, then you need to compare the rate of change of the data against the amount of bandwidth actually available to you, as sketched below. You may need specialized bandwidth analysis tools, such as Blue Coat's, that can pinpoint how the bandwidth is being used.
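Here is a minimal sketch, assuming the two contract windows above (4 Mb/s from 5 PM to 8 AM, 2 Mb/s from 8 AM to 5 PM), of how you might check whether a customer's daily rate of change can clear the link before the next backup cycle. The rate-of-change figure is a hypothetical number you would pull from your own monitoring:

```python
# Sanity check: does the daily rate of change fit within the contracted
# bandwidth minimums? (My own sketch, not a vendor tool.)

def window_capacity_gb(hours: float, mbps: float) -> float:
    """GB that can be transferred in a window of the given length and speed."""
    return hours * 3600 * mbps / 8 / 1000

def daily_capacity_gb() -> float:
    off_hours = window_capacity_gb(hours=15, mbps=4)       # 5 PM - 8 AM
    business_hours = window_capacity_gb(hours=9, mbps=2)   # 8 AM - 5 PM
    return off_hours + business_hours                      # ~35 GB/day total

daily_rate_of_change_gb = 30   # hypothetical figure from your monitoring
capacity = daily_capacity_gb()
if daily_rate_of_change_gb > capacity:
    print("Data will queue up: revisit the clause or the rate of change.")
else:
    print(f"OK: {daily_rate_of_change_gb} GB fits in ~{capacity:.1f} GB/day.")
```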
The hidden clause: Finally, you should have the following clause that protects you WHILE the data is being transferred:
ABC VAR will be responsible for maintaining only the data that is contained in or successfully transferred to the remote vault site.
This covers you in case the bandwidth is insufficient, the changed data is too large, or the disaster occurs in the middle of a transfer. That last scenario matters because it can take up to 24 hours to transfer the last data file.
Bottom Line: Now that we have the data backed up and off site, how do we address restores? Next week, we'll get into the what, why, how and when of the restore business.
If you are interested in finding out more about Zenith's TigerCloud with built-in business continuity, click HERE.
Rich Reiffer is VP of Cloud Practice at Zenith Infotech. Rich has been in the business of technology since the dark ages, starting with Burroughs Corp. and spending time with Steve Jobs (NeXT) and Ray Noorda (Novell). Rich has been in the VAR channel since the mid-'80s with companies like Inacomp and Businessland, finally forming his own company, Trivalent, in 1991. After 20 years of building data centers, Rich has come on board with Zenith to head up the Cloud group. Monthly guest blogs such as this one are part of Talkin' Cloud's annual platinum sponsorship.