Veeam v7.0 – Tape parallelization

Today I realized that a single backup-to-tape job won’t cut it for larger environments. I had configured a single tape job containing all regular backup jobs, and this is how it turned out:


Maybe that’s because Veeam customers (probably due to the lack of tape support in older versions) tend to deploy the solution without backup to tape, or have at least used other tools for the job.

However, this customer had a full backup size of about 19 TB (Veeam .vbk files). The job was running at 65 MB/s and would have taken about three and a half days to complete. Depending on the backup frequency this could be okay, but it would also be a waste of resources because the remaining drives would be idling, doing nothing.

After consulting the Veeam community forum I can summarize:

  • If you have two or more tape drives but just a single job, only a single drive will be used
  • To speed things up you need to create multiple jobs pointing to different media pools, so each job can control one of the drives
  • In essence – a single job can’t control multiple drives in parallel

So creating four media pools with identical retention policies and pointing each job to “its own” media pool would reduce the time required to less than 24 hours.
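The rough math behind these numbers can be sketched in PowerShell. The size and throughput figures are the ones from this environment; the four-drive estimate assumes the data is split evenly across the jobs:

```powershell
# Back-of-the-envelope tape throughput estimate for this environment.
$fullBackupTB = 19      # total size of the .vbk files
$driveMBs     = 65      # observed throughput of a single drive
$drives       = 4       # drives available in the library

$totalMB         = $fullBackupTB * 1TB / 1MB      # total data in MB
$singleDriveDays = $totalMB / $driveMBs / 86400   # roughly 3.5 days
$parallelDays    = $singleDriveDays / $drives     # under one day, assuming an even split

"{0:N1} days on one drive, {1:N1} days on {2} drives" -f $singleDriveDays, $parallelDays, $drives
```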


I hope this helps to save some time and to speed up your backup-to-tape jobs.

Veeam v7.0 – Backup Copy and its potential

This time I want to pick up an idea I first saw over on Timo’s blog and add some more details that customers, and of course I personally, really like.

For now I assume the basic understanding of Veeam’s “Backup Copy” feature is clear; if not, you may want to check out this post.

So what’s the point of this post?

I just supported a customer in setting up the following backup infrastructure and want to share the experiences we made. The customer has two backup servers: a primary server running the main instance of Veeam Backup & Replication v7.0 R2 to protect a vSphere environment, and a second one which acts as the target for the Veeam Backup Copy. Both servers run Windows Server 2012 to leverage the built-in deduplication in addition to Veeam’s deduplication, achieving repository-wide deduplication. Of course, the secondary server is placed in another fire section.


The primary backup server has four repositories configured in total:

  • Local1
  • Local2
  • Remote1
  • Remote2

Only the two local repositories are being used as target for regular backup jobs and the backup copy job picks up those restore points and “copies” them over to the remote repositories on the second server.

To achieve this I added all repositories as “Windows Server”, which installs the Veeam Transport Service on the target system. This was pretty easy to set up and everything worked fine.

By using direct SAN access via Fibre Channel we could avoid a potential bottleneck. If there is a chance to directly attach your backup server to the SAN, I would go for it.

But what if you have to restore a VM or a file while your primary backup server is down? Unlikely, you think? During this very setup we had massive issues with faulty hardware on the primary server and had to replace parts multiple times.

So the plan was (as Timo already described) to install the Veeam Backup & Replication (management) server on the second server as well, and to add the “remote” repositories, which actually act as targets for the backup copy, as local repositories.


When you add the repositories for the first time you can import the already copied restore points, and the server is ready to perform restore operations. But the backup copy is an ongoing process, so to keep the restore points within VBR up to date you will need a small PowerShell script. To get a copy of a working script, check out Timo’s blog post.
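For reference, a minimal sketch of such a script might look like the following. This is an assumption based on the Veeam PowerShell snap-in, not Timo’s original script, and cmdlet availability may differ between versions, so check his post for a tested version:

```powershell
# Sketch: rescan the copy-target repositories on the standby server so
# that newly copied restore points become visible for restores.
# The repository names ("Remote1", "Remote2") are the ones from this setup.
asnp VeeamPSSnapin

Get-VBRBackupRepository |
    Where-Object { $_.Name -in @("Remote1", "Remote2") } |
    ForEach-Object { Sync-VBRBackupRepository -Repository $_ }
```

Scheduled regularly, this keeps the standby server’s view of the copied restore points current.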

Then I unfortunately ran into some problems. As soon as I installed VBR on the second server, the primary server lost its connection to the second server and the backup copy jobs stopped.

The error message from the backup copy job indicated that it could be a firewall-related issue:

“Error: Client error: Failed to connect to the port [Backup02:2502]

Maximum retry count reached (5 out of 5)

Incremental copy was not processed during the copy interval”


As soon as I disabled the local firewall on the second server the connection came back online.

So I did a quick comparison between the firewall rules on the primary server

to those on the secondary server (no management server installed):

As you can see, the “Veeam Backup Transport Service (In)” rule was missing. But just adding this rule (or a custom rule allowing ports 2500-3000) didn’t fix it, so I enabled the Windows Firewall logging to see what else got blocked:

netsh advfirewall set allprofiles logging droppedconnections enable

You may need to create the log file first; it is usually located at C:\Windows\System32\LogFiles\Firewall\pfirewall.log.

The log showed the following entries when I tried to connect to the second server:

2014-01-30 14:03:33 DROP TCP <Source IP Backup01> <Dest. IP Backup02> 49662 6160 52 S 623441555 0 8192 – – – RECEIVE
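To find entries like this one without scrolling through the whole file, the log can be filtered from PowerShell; the path below is the default log location mentioned above:

```powershell
# Show only dropped packets in the Windows Firewall log.
# Add the primary server's IP to the pattern to narrow it further.
Select-String -Path "C:\Windows\System32\LogFiles\Firewall\pfirewall.log" `
    -Pattern "DROP"
```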

So I manually added another rule to allow port 6160, and that fixed it! Unfortunately I couldn’t test which default Veeam rule would enable this communication, because the server still suffers from massive hardware issues, but now the solution works as expected.
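For reference, both rules can also be added from an elevated prompt instead of the firewall GUI. The rule names below are made up; the ports (2500-3000 for the transport services, 6160 as seen in the firewall log) are the ones from this case:

```powershell
# Allow the Veeam transport data ports (normally covered by the
# "Veeam Backup Transport Service (In)" rule):
netsh advfirewall firewall add rule name="Veeam Transport 2500-3000" `
    dir=in action=allow protocol=TCP localport=2500-3000

# Allow port 6160, which showed up as DROP in the firewall log:
netsh advfirewall firewall add rule name="Veeam Port 6160" `
    dir=in action=allow protocol=TCP localport=6160
```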

A detailed list of all required ports can be found in Veeam KB1518.


However, let’s take a final step back to the conceptual part of this post, because this setup allows a bunch of tweaks to further customize the solution to your needs:

  • Use different deduplication settings on the local and remote repositories to get fast restores and even more restore points.
  • Combine the Backup Copy with the GFS option to keep full backups based on the GFS principle.
  • Add a tape library to bring the backups off-site.
  • Back up the VBR configuration of the primary server on the second server and vice versa.
  • Some love for the Hyper-V fans out here: of course this also applies to your preferred platform. You could even speed things up by installing the Transport Service on all Hyper-V nodes.
  • Copy your backups to the cloud, accelerated by the Veeam WAN acceleration. And maybe restore them in the cloud?



Please keep in mind that the second server only has a “passive” connection to the vCenter Server and doesn’t actively perform backups, so it doesn’t violate any license terms; it is only used for restore operations when the primary server is not available. I hope this helps!

Veeam v7.0 – My favorites – Tape support 1

This week I supported a customer moving from Backup Exec 2012 to Veeam v7.0 for their backup-to-tape solution, and because everybody was happy in the end, I decided to write a little bit about the tape support.

We started with an upgrade from 6.5 to 7.0 R2 which I already described in this post. Then we simply disabled all Backup Exec services and we were ready to go.

Veeam recognized the Fibre Channel attached Dell TL2000 tape library, which was equipped with two drives, without any issues. The native Dell (IBM) Windows drivers had already been installed and everything worked out of the box.

Then we performed a “Library Inventorying” to discover all available/free tapes.

This process created two default pools: one for usable (free) tapes and one for media that had been used by Backup Exec before. For the latter, Veeam displays the error “Unknown MTF writer”. I’m not sure if this was just caused by the write protection on the tapes, because Backup Exec actually uses the Microsoft Tape Format as well.

Once discovered, you can perform several operations on those tapes.

One thing we missed was an option to schedule an inventory task to re-scan for new or removed tapes. A quick search revealed that this is currently only possible via PowerShell, and thanks to the Veeam community it didn’t take long to solve this problem:


asnp VeeamPSSnapin

$Library = Get-VBRTapeLibrary -Name "Name of the Library"

$Library | Start-VBRTapeInventory
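Since the GUI offers no schedule for the inventory, these lines can be saved to a .ps1 file and run via Windows Task Scheduler. The script path, task name, and time below are just placeholders:

```powershell
# Register a daily task that runs the inventory script at 06:00.
# C:\Scripts\TapeInventory.ps1 is a placeholder path for the saved snippet.
schtasks /create /tn "Veeam Tape Inventory" `
    /tr "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\TapeInventory.ps1" `
    /sc daily /st 06:00
```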


Then we were ready to create our own media pools and jobs. For all of you looking for a way to implement a Grandfather-Father-Son (GFS) scheme: you will need to prepare the media pools accordingly, or leverage the “Backup Copy” feature in addition, because there is no GFS option within the backup-to-tape job itself. However, this isn’t a huge problem because you can simply create multiple pools, for example:

  • Weekly Pool – Up to 5 weeks write protection
  • Monthly Pool – Up to 12 months write protection
  • Yearly Pool – Up to 3 years write protection

Matching these pools, you can create multiple B2T jobs, each using the proper media pool.

An alternative would be to use the Backup Copy feature to “copy” your restore points to a second repository/location and keep certain restore points according to the GFS principle. Those can easily be backed up via a “File to Tape” job. Note that a backup copy job cannot be used as the source of a backup-to-tape job.

This customer really liked the “Reverse Incremental” backup method, which in combination with the backup to tape turned out to be a good fit.

The reverse incremental method keeps just a single full backup file (*.vbk), which moves forward from day to day as long as you don’t perform an active full backup.

Source: veeam_backup_7_userguide_vmware.pdf

The plan was to use this backup method to keep at least 14 restore points on disk and to write just a single full backup file to tape once a week. The customer really liked this combination because they didn’t need to worry about the number of fulls on disk or on tape.

But I can’t recommend this method for customers who want to keep a whole backup chain on tape, simply because this isn’t supported in combination with the reverse incremental method (*.vrb files).

If you enable the processing of incrementals, which also processes *.vrb files, you will end up with multiple full backups on tape. This is how the backup repository looked:

What happened is that for every *.vrb on disk, a full backup *.vbk is written to tape! So in this case you would end up with three full backups on tape.

If you disable the incremental backup processing as depicted in the screenshot above, Veeam will only copy the full backup file(s).

So far so good, but what about those of you who prefer the forward incremental method, which is often used in combination with synthetic fulls?

Source: veeam_backup_7_userguide_vmware.pdf

This enables you to keep a complete backup chain on tape, because VBR also supports copying the incremental files (*.vib) to tape. This is how the repository looked:


And so VBR copied the complete backup chain to tape.

But in my opinion this method has a little drawback, at least at the beginning. Please feel free to correct me if I’m wrong here.

Let’s say we also want to keep 14 restore points and we set up the job on Monday. In addition, we would like to perform a synthetic full and the backup to tape every Saturday.

The following numbers represent restore points, not dates!

In the first week Veeam will create a standard and a synthetic full backup, which will be kept on disk until they are automatically deleted according to the backup retention policy. So two full backups will be written to tape in that week. This is because VBR simply backs up all restore points associated with a backup job to tape, no matter how many full backups the backup chain contains. At least I wasn’t able to find a way to limit this in any way. Of course, in the third week the number of full backups is reduced to two on disk and to a single file written to tape.

 *The full backup 6* (previously 13) is already on tape and doesn’t need to be backed up again.

However, now back to my customer, where we created multiple media pools according to their needs, as well as multiple B2T jobs.


As the source for a backup-to-tape job you can add a bunch of backup jobs or even whole repositories. In addition to the scheduling options and the selection of the media pool, you can also enable hardware compression and some automation options.

Then we started the first B2T job and, as I mentioned earlier, everything just worked.


At this point I should mention that we had to replace the drivers at one point because we occasionally had issues with drive locks. By installing the so-called “non-exclusive” drivers we were able to easily fix that.

We were kinda surprised how fast the job started to push data to tape. All of us were used to a delay for backup-to-tape jobs in other tools, but not with VBR v7.0, which started almost instantaneously.

But this wasn’t the end, we had two drives to use and we wanted to use those simultaneously to speed things up.

To be able to use two drives at the same time we had to use at least two jobs as well as two different media pools, because a job locks the target media pool when the backup starts.

I mentioned it at the beginning of the post, and I don’t want to neglect the possibility to also back up plain files to tape. We used this feature to back up Windows & Acronis bare-metal backups/images to tape.


In the end everyone was happy with the easy migration to VBR v7.0, but there are some things we would like to see in future releases:

  • Statistic for tapes, like error counters
  • Refined scheduling to better support GFS usage. For example, a weekly and a monthly backup-to-tape job on the last weekend of the month, with different media pools, would back up the same data twice: once to the weekly tapes and once to the monthly tapes, while usually the last weekly media becomes the monthly tape.
  • Schedules for a library inventory within the GUI

But don’t get me wrong: they managed to implement really solid backup-to-tape support that is really easy to use, and I’m sure other customers will also be happy to get rid of their current backup-to-tape solution.

Veeam v7.0 – My favorites – Backup Copy 4

In this post I want to check out another of my favorite features in Veeam v7. I guess I’m not the only one who desperately wanted this feature: a solid remote “copy” of the backup data without scripting and such.

Veeam finally delivered this feature with their current release, and in a really awesome and simple way, even if it differs from what most people expect when they think about a “backup copy”.

So what do we need to get started?

All we need is to add a new repository; this is actually the exact same process as for a primary repository, and we have all done that before. As always, we are free to choose between all supported types of backup repositories.


Allow me to skip a complete walkthrough of adding a repository, because all you need is a CIFS/NFS share or a Windows machine and proper access rights/login credentials to get that done.

Needless to say, this repository should be off-site, or at least in a different server room/fire section.

Once the repository has been added, we will find it in our repository list.

Now we are good to go to add a new “Backup Copy” job.

The first step is to select a proper name & description, as well as the interval in which the job will check for new restore points. More about that in a second; for now it just defines how often data will be copied if new data is available at the time of the check. Each run will copy the most recent restore point over to the remote repository.

Then we can add all virtual machines which should be protected by copying their backup data to the remote repository.

In this step we select the remote repository created earlier and the number of restore points to keep at the remote repository. Most of you will be very familiar with this principle because it works the same way as in regular backup jobs.

In addition, it’s possible to make use of the so-called “Grandfather-Father-Son (GFS)” principle. We can select the number of weekly, monthly, etc. backups we want to keep for archiving purposes, so that these don’t get overwritten over time.

If we want to define which day and time should represent our weekly or monthly backup, we use the “Schedule” button to set these details as needed, for example according to our corporate policies. Veeam will keep the restore point closest to the specified day and time so that it represents our weekly, monthly, etc. backup.


Sorry for the German days in the screenshot, the OS is installed in my native language!

The next step is to select the transport mode Veeam uses for the data transfer. The default “Direct” mode will copy all new blocks of a restore point and will use as much bandwidth as it can get.

The new Enterprise Plus feature “WAN accelerator” is able to speed up data transfer across slower WAN connections. This mode not only leverages a cache on the paired WAN accelerator nodes, it can also use the complete remote backup repository as an “advanced or infinite” cache to verify whether a block really needs to be transferred or not.

Anton Gostev explained the WAN acceleration in a deep dive at TFD9 which can be found here:

As the last step we can set a time frame in which the job is allowed to copy data. For example, we could hold copy jobs during our working hours.


But why am I always talking about restore points when the feature is called “backup copy”!?

To answer that, let’s take a look at the repositories I used in my lab.


We can see that they actually have nothing in common.

  • The number of restore points / files differs
  • The naming of the backup files is different

Even inside Veeam, when I trigger a restore, the number of restore points differs.


So what’s going on here?

The most important fact is that the backup copy doesn’t simply copy files from A to B; it uses its own, more intelligent logic.

To be able to combine multiple items within a single job, the files are named after the job, actually the same principle we already know from regular backup jobs.

But what about the number of restore points?

The backup copy feature works like an “incremental forever” backup job. As soon as the job is set up, it will copy the most recent restore point to the remote repository, no matter how many restore points are already available on the primary repository.

And even if the most recent backup is an incremental on the primary repository, the first copy will be a full backup on the remote repository. So in my case the last incremental of my vCenter VM was about 400 MB, while the first backup copy resulted in a 38 GB full backup .vbk file.

From then on, every backup copy job run checks for new restore points on the primary repository and transfers the blocks required to represent the corresponding incremental restore point on the remote repository. Even if the most recent restore point is a synthetic or active full, Veeam will only create an incremental at the remote site. Veeam makes use of the complete backup chain at the remote repository to represent the exact same restore point as the full on the primary repository.


Veeam offers a great user guide which depicts step by step how backup files are transformed once the maximum number of restore points is exceeded, and how in GFS mode the archiving feature respects the need for permanent backup files. I can only recommend getting a copy of the user guide, which can be found HERE.


My conclusion?

I really love this new feature, which led to this post, and I can only recommend using it. Veeam once more proved that they have the magic key to combine good technical solutions with a simple, easy-to-use interface.