Data deduplication for backup can generally take place at the file or block level. This level defines the smallest data fragment the system checks to avoid redundancy. A hash algorithm generates a unique identifier (hash value) for each analyzed chunk of data; these identifiers are stored in an index and used to detect duplicates, since identical fragments produce the same hash value.
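As a rough illustration of the idea (not ITbrain's actual implementation), the sketch below hashes each chunk with SHA-256 and stores only chunks whose hash is not already in the index; the function name and index structure are assumptions for this example.

```python
import hashlib


def dedup_store(chunks, index=None):
    """Store only chunks whose content hash is not yet in the index.

    `index` maps hash value -> chunk data; duplicate chunks are
    recognized because they produce the same hash value.
    """
    if index is None:
        index = {}
    stored = []  # hashes of the chunks actually written this run
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in index:
            index[digest] = chunk
            stored.append(digest)
    return index, stored
```

Feeding the same chunk twice results in a single stored copy, which is exactly the redundancy the hash index is there to catch.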
In ITbrain Backup we use file-level deduplication, which eliminates duplicate files rather than duplicate blocks. Deduplication occurs at the account level: each account maintains its own index of file hashes. File-level deduplication requires fewer resources and can be deployed over larger amounts of physical storage; only files that have changed since the last backup cycle are backed up, which avoids duplication and optimizes backup performance.
Thank you for your interest in ITbrain Backup. Please do not hesitate to let us know if you have any further questions about this topic!