By Rob Mackle, Sales and Service Director at www.Backup-Technology.com
May 30, 2016

Backup Technology Cloud Backup Expert Tips: Does Data De-Duplication Save Storage Space and Money?

Like every disruptive technology, the cloud is absorbing and assimilating a number of smaller innovations and utilities. Data de-duplication is one such innovation that has been successfully integrated with cloud technologies to deliver value.

Technology experts are quick to point out that data de-duplication is not really a technology; it is a methodology. It is a software-driven process that identifies and removes duplicates in a given data set. A single copy of the data is retained in the store, while all duplicates are removed and replaced with references to the retained copy. Every file that initially contained a copy of the data now contains a reference to the data item retained in the store. Whenever a file containing de-duplicated data is called for, an instance of the data is inserted in the right place and a fully functional file is generated for the user. This method of compressing data reduces the amount of disk space used for storage and, in turn, reduces storage costs.
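The store-one-copy-and-reference idea described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual implementation: it assumes fixed-size chunks and uses SHA-256 hashes as chunk references (real products typically use variable-size chunking and more elaborate indexing).

```python
import hashlib

CHUNK_SIZE = 4096  # bytes; illustrative — real systems often chunk adaptively

store = {}  # hash -> chunk bytes: each unique chunk is kept exactly once


def dedupe(data: bytes) -> list[str]:
    """Split data into chunks; store each unique chunk once and
    return a list of references (hashes) that stands in for the file."""
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        ref = hashlib.sha256(chunk).hexdigest()
        store.setdefault(ref, chunk)  # keep only the first copy seen
        refs.append(ref)
    return refs


def restore(refs: list[str]) -> bytes:
    """Rebuild a fully functional file by resolving each reference."""
    return b"".join(store[ref] for ref in refs)
```

If two files share identical chunks, the shared bytes occupy the store only once; each file is just a list of references, which is where the space saving comes from.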

The growing importance of de-duplication can be traced to the growing volumes of data being generated by businesses. As businesses continue to generate data, space becomes a major constraint and financial resources may have to be allocated for acquiring larger storage capacities. Consequently, any technology that lets them “have their cake and eat it too”, retaining all of their data while consuming less storage, is welcome!

Data de-duplication can be “in-line” or “post-process”.

In-line de-duplication removes duplicates before the data is sent to the storage server. This saves bandwidth and shortens time-to-backup, as the amount of data transmitted over the Internet is reduced and only the “clean” data reaches the storage server. However, de-duplicating at the client end of the system is itself time-consuming and can be extremely resource-intensive.

Post-process de-duplication removes duplicates from data that has already been uploaded to the storage server. There is no saving of time or bandwidth during transmission, but there is certainly a saving of processing time and client hardware resources at the point of transmission, since all de-duplication happens on the cloud vendor’s server. Modern-day backup companies use a combination of the two methods to gain the advantages of both.
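The trade-off between the two modes can be sketched as follows. This is a hedged illustration with made-up class and method names, not a real backup API: in in-line mode the client asks the server which chunk hashes it already holds and transmits only the missing chunks, while in post-process mode everything is uploaded raw and de-duplicated on the server afterwards.

```python
import hashlib


def chunk_hash(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()


class Server:
    """Toy storage server supporting both de-duplication modes."""

    def __init__(self):
        self.chunks = {}       # de-duplicated store: hash -> bytes
        self.raw_uploads = []  # landing area used by post-process mode

    # --- in-line mode: the client asks first, then uploads only new chunks ---
    def known(self, refs):
        return {r for r in refs if r in self.chunks}

    def upload_chunk(self, chunk: bytes):
        self.chunks[chunk_hash(chunk)] = chunk

    # --- post-process mode: everything lands raw, de-duplicated later ---
    def upload_raw(self, chunk: bytes):
        self.raw_uploads.append(chunk)

    def post_process(self):
        for chunk in self.raw_uploads:
            self.chunks.setdefault(chunk_hash(chunk), chunk)
        self.raw_uploads.clear()


def inline_backup(server: Server, chunks) -> int:
    """Send only chunks the server does not already hold;
    return the number of bytes actually transmitted."""
    already = server.known(chunk_hash(c) for c in chunks)
    sent = 0
    for c in chunks:
        ref = chunk_hash(c)
        if ref not in already:
            server.upload_chunk(c)
            already.add(ref)  # a repeat within this batch is also skipped
            sent += len(c)
    return sent
```

The `sent` counter makes the bandwidth saving of the in-line path explicit; the post-process path transmits every byte but spares the client the hashing work.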

Backup Technology has integrated data de-duplication with its cloud backup and recovery solutions. The all-in-one suites for cloud computing and online backup automatically provide de-duplication services to subscribing clients. The software automatically detects and removes duplicate data and creates the appropriate references during the backup process. This saves time and money and results in faster backup and recovery. The extensive versioning used in tandem adds to the strength of the software, as older versions of any backed-up file can be recovered, even if the file has been deleted from the source computer. For these and similar reasons, we invite you to try our award-winning cloud backup, disaster recovery and business continuity services, powered by Asigra. We are confident that you will be completely satisfied with what we have to offer!

About the Author: Rob Mackle is Sales and Service Director at Backup-Technology (an iomart company), an Asigra powered cloud backup and disaster recovery solutions provider. Rob blogs here.

 

 
