Veeam


Veeam v7.0 – Tape parallelization

Today I realized that a single backup-to-tape job won’t cut it for larger environments. I had configured a single tape job containing all regular backup jobs, and this is how it turned out:

[Screenshot: VeeamDrivesOverview]

Maybe this hasn’t come up more often because Veeam customers (probably due to the lack of tape support in older versions) tend to deploy the solution without backup to tape, or have at least used other tools to do the job.

However, this customer had a full backup size of about 19 TB (Veeam .vbk files). The job was running at 65 MB/s, so it would have taken about three and a half days to complete. Depending on the frequency this could be okay, but it would also be a waste of resources because the remaining drives would be sitting idle.

After consulting the Veeam community forum I can summarize:

  • If you have two or more tape drives but just a single job, only a single drive will be used
  • To speed things up you would need to create multiple jobs pointing to different media pools, so each job can control one of the drives
  • In essence – a single job can’t control multiple drives in parallel

So creating four media pools with identical retention policies and pointing each job to “its own” media pool would reduce the time required to less than 24 hours.
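For a quick sanity check of those numbers, here is the back-of-the-envelope math in PowerShell (assuming the 19 TB full backup size and 65 MB/s per-drive throughput mentioned above, and a perfectly even split across four drives):

# Rough estimate: 19 TB of .vbk files at 65 MB/s per tape drive
$totalMB = 19 * 1TB / 1MB                 # ~19,922,944 MB of data
$oneDriveSeconds = $totalMB / 65          # single drive
$fourDriveSeconds = $oneDriveSeconds / 4  # ideal split across four drives
"{0:N1} days with one drive" -f ($oneDriveSeconds / 86400)    # ~3.5 days
"{0:N1} hours with four drives" -f ($fourDriveSeconds / 3600) # ~21 hours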

[Screenshot: VeeamBackupToTapeJob2]

I hope this helps to save some time and to speed up your backup to tape jobs.


Quantum DXi (V1000) – My first impression

Some days ago I discovered that Quantum is offering a free virtual appliance of their DXi Backup to Disk series, and I thought this would be a cool thing to have in my lab. I deployed two of those appliances right away to learn about Quantum’s technology and to see whether it could be a product for future projects. This post should reflect my thoughts, findings and some general information about deduplication appliances.

Backup to Disk with inline Deduplication and Compression

These appliances, whether virtual or physical, are designed to provide storage capacity primarily for backup or archiving purposes. This capacity can be presented via CIFS, NFS, a Virtual Tape Library (VTL) or the OpenStorage (OST) API. So basically these appliances act as a backup repository/target.

The DXi series comes with a simple web interface which enables easy setup and management, but I admit the design needs some improvement. The following screenshot shows how easy it is to create a file share.

[Screenshot: DXiAddShare]

I used my Veeam B&R installation and attached a CIFS share, using Active Directory authentication, as a new backup repository.
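In case you prefer scripting, attaching such a CIFS share as a repository can also be done with the Veeam PowerShell snap-in. This is only a rough sketch; the share path, credential record and repository name are made up for illustration, and the exact parameters may differ between versions:

Add-PSSnapin VeeamPSSnapin
# Stored AD credentials and DXi share path are hypothetical examples
$cred = Get-VBRCredentials -Name "LAB\svc_veeam"
Add-VBRBackupRepository -Name "DXi-CIFS" -Type CifsShare -Folder "\\dxi01\backup" -Credentials $cred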

As data comes in it gets deduplicated and compressed inline, which allows you to write far more logical data to the appliance than is physically available. So for those of you looking for a way to store as much data on disk as possible, such an appliance would probably be a good choice.

As many of you probably know, Veeam also offers pretty solid deduplication, which works on a per-job basis. So having global deduplication across all jobs can help to save even more space. But the benefit of a deduplication engine with a variable block size already kicks in with just a single job. (In this test I had both dedupe and compression enabled in Veeam; when writing to a deduplication appliance, compression should be disabled.) There is a best practice guide available for Veeam & DXi right here.
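If you want to set this per job from PowerShell instead of the GUI, something along these lines should do it (a sketch only; the job name is hypothetical and the cmdlet parameters may vary between Veeam versions):

Add-PSSnapin VeeamPSSnapin
# Keep Veeam dedupe enabled, set compression to 0 (none) for the dedupe appliance
$job = Get-VBRJob -Name "Backup Job DXi"
$job | Set-VBRJobAdvancedStorageOptions -EnableDeduplication $true -CompressionLevel 0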

[Screenshots: VeeamFilesOnDisk, DXiDataReduction]

Replication

Now, having the data written to the appliance is great, but most companies need to get the data off site. With the built-in replication it’s quite easy to replicate the data between two DXi appliances or to set up a many-to-one replication for remote and branch offices. The DXi replication only sends unique blocks, which reduces the amount of transferred data.

[Screenshot: DXiReplication]

The replication can easily be scheduled using the “Scheduler”, which also allows you to enable throttling to avoid bandwidth contention during working hours.

[Screenshot: DXiScheduler]

The replication also offers a file-based replication mode, which keeps the files on two file shares in sync.

Security & Integrity

To protect your data you can enable AES 128/256 in-flight encryption and/or use Self-Encrypting Drives (SED) with AES 256-bit at-rest encryption. Also built in is the “Secure shred” feature, which wipes out data by simply overwriting it with zeroes. To ensure data integrity, a healthcheck can be used to verify the health of data as well as metadata.

Networking

To separate management, replication and data traffic, even the virtual appliance comes with multiple network interfaces which can be configured accordingly, including support for VLAN tagging. It would also be possible to create a bond across those interfaces to increase bandwidth if multiple servers send data to the appliance.

[Screenshot: DXiInterfaces]

VTL & OST

The physical appliances also offer VTL functionality via Fibre Channel. This makes it possible to emulate a couple of tape libraries and drives to integrate the appliance into existing backup infrastructures that depend on tape. With support for the OST protocol, backup administrators can offload tasks like the replication of backup data to the appliances without losing the meta-information on the media server.

[Screenshot: DXiOST]

Combined with the DXi Accent protocol, the deduplication can be extended to the media server, so that only unique blocks are sent to the appliance. Another cool feature is the ability to attach a tape library directly to the DXi appliance and use the Direct to Tape (or Path to Tape) feature to copy the backup data onto real tapes. This way the data doesn’t need to be moved back through the media server.

Reporting

An advanced reporting feature can help to analyze what’s going on and to charge clients based on their actual usage.

[Screenshot: DXiAdvReporting]

Basically those reports can be generated on a per-share or per-replication basis, for example, but it would be nice to have some sort of multi-tenancy support to be able to create logical groups/tenants. I couldn’t find a way to automate those usage reports to get a monthly mail per tenant.

Maybe I’ve missed it, but a central management tool for cloud providers to manage their own as well as client appliances would be nice to have.

Alternative – LTFS

Even though it’s not directly related to the DXi series, it is worth mentioning that for customers who are already using a Scalar tape library, Quantum offers the so-called Scalar LTFS appliance. This appliance offers NAS-like access to your tape drives, which makes it easy to archive lots of data in a comfortable way.

Summary

Overall the DXi V1000 made a good impression. It is easy to use and it provides solid performance even on minimal lab hardware. But no matter how good or sometimes bad a product may be, it often comes down to the pricing. Currently I have no prices to compare against major competitors like EMC, HP or Dell. However, I will definitely consider the DXi next time.


Veeam – Error when launching the console – SQL server is not available

This is a really short post about a problem which is actually not a problem. Today I worked at a customer site where the Veeam Backup & Replication (v7.0) database was located on a remote SQL server.

When I was logged in as local Administrator, I was always prompted with the following error message when launching the console:

[Screenshot: DatabaseUnavailable]

The next step was to check the Veeam registry values which seemed to be fine:

HKEY_LOCAL_MACHINE\SOFTWARE\VeeaM\Veeam Backup and Replication\SqlDatabaseName
HKEY_LOCAL_MACHINE\SOFTWARE\VeeaM\Veeam Backup and Replication\SqlInstanceName
HKEY_LOCAL_MACHINE\SOFTWARE\VeeaM\Veeam Backup and Replication\SqlServerName
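A quick way to read those values without browsing the registry (plain PowerShell, nothing Veeam-specific):

# Show the SQL connection settings used by the console and services
Get-ItemProperty 'HKLM:\SOFTWARE\VeeaM\Veeam Backup and Replication' |
    Select-Object SqlServerName, SqlInstanceName, SqlDatabaseName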

Then I saw that the user configured to start the Veeam services was an Active Directory service account which actually had proper privileges on the remote DB. After checking the application event log on the SQL server, I realized that every time I tried to launch the console, the local Administrator of the backup server attempted to log in, which of course got denied.
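To quickly check which account the Veeam services run under, a simple WMI query does the trick:

# List the Veeam services and their logon accounts
Get-WmiObject Win32_Service -Filter "Name LIKE 'Veeam%'" |
    Select-Object Name, StartName, State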

After logging in (on the backup server) with the service account, the console started as usual. So it seems that the console opens a SQL connection to retrieve all the configuration information, and since this happens within the context of the logged-in user, that account requires proper privileges on the Veeam DB.

I hope this helps you to save some time!


Veeam v7.0 – Backup Copy and its potential

This time I want to pick up an idea which I first saw over on Timo’s blog and add some more details which customers, and of course I personally, really like.

For now I assume a basic understanding of Veeam’s “Backup Copy” feature; if not, you may want to check out this post.

So what’s the point of this post?

I just supported a customer in setting up the following backup infrastructure and I want to share the experience we gained. The customer has two backup servers: one used as the primary server running the main instance of Veeam Backup & Replication v7.0 R2 to protect a vSphere environment, and a second one which acts as the target for the Veeam Backup Copy. Both servers are installed with Windows Server 2012 to leverage the built-in deduplication in addition to Veeam’s deduplication, achieving repository-wide deduplication. Of course the secondary server is placed in another fire section.
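For completeness, enabling the built-in Windows Server 2012 deduplication on a repository volume only takes a few lines. The volume letter below is just an example; whether dedup fits into your backup window is something to test for yourself:

# Install the dedup role service and enable it on the repository volume (D: is an example)
Install-WindowsFeature -Name FS-Data-Deduplication
Import-Module Deduplication
Enable-DedupVolume -Volume "D:"
# Process files regardless of age so fresh backup files are optimized as well
Set-DedupVolume -Volume "D:" -MinimumFileAgeDays 0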

[Screenshot: VeeamBackupCopy1]

The primary backup server has four repositories in total configured:

  • Local1
  • Local2
  • Remote1
  • Remote2

Only the two local repositories are used as targets for regular backup jobs; the backup copy job picks up those restore points and “copies” them over to the remote repositories on the second server.

To achieve this I added all repositories as “Windows Server”, which installs the Veeam Transport Service on the target system. This was pretty easy to set up and everything worked fine.

By using direct SAN access via Fibre Channel we could avoid a potential bottleneck. If you have the chance to directly attach your backup server to the SAN, I would go for it.

But what if you have to restore a VM or file while your primary backup server is down? Unlikely, you think? Just during this setup we had massive issues with faulty hardware on the primary server and had to replace hardware multiple times.

So the plan was (as Timo already described) to install the Veeam Backup & Replication (management) server on the second server as well and to add the “remote” repositories, which actually act as the targets for the backup copy, as local repositories.

[Screenshot: VeeamBackupCopy2]

When you add the repositories for the first time you can import the already copied restore points, and the server is ready to perform restore operations. But the backup copy is an ongoing process, so to keep the restore points within VBR up to date you will need to use a small PowerShell script. To get a copy of a working script, check out Timo’s blog post.
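Timo’s post contains the full working script; the core idea is simply to rescan the repositories on a schedule so newly copied restore points show up on the second server. A minimal sketch of that idea, assuming the Rescan-VBREntity cmdlet behaves this way in your version:

Add-PSSnapin VeeamPSSnapin
# Rescan the backup copy target repositories so new restore points are imported
Get-VBRBackupRepository -Name "Remote1", "Remote2" | Rescan-VBREntity -Wait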

Then I unfortunately ran into some problems. As soon as I installed VBR on the second server, the primary server lost its connection to the second server and the backup copy jobs stopped.

The error message from the backup copy job indicated that it could be a firewall-related issue:

“Error: Client error: Failed to connect to the port [Backup02:2502]

Maximum retry count reached (5 out of 5)

Incremental copy was not processed during the copy interval”


As soon as I disabled the local firewall on the second server the connection came back online.

So I did a quick comparison between the firewall rules on the primary server

[Screenshot: FirewallRulesPrimary]

to those on the secondary server (no Management Server installed):

[Screenshot: FirewallRulesSecondary]

As you can see, the “Veeam Backup Transport Service (In)” rule was missing. But just adding this rule (or a custom rule which allows ports 2500-3000) didn’t fix it, so I enabled the Windows firewall logging to see what else got blocked:

netsh firewall set logging droppedpackets = enable

You may need to create the log first, which is usually located in: C:\Windows\System32\LogFiles\Firewall\pfirewall.log
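To watch the dropped packets live while reproducing the issue, you can simply tail that log (standard PowerShell, nothing Veeam-specific):

# Follow the firewall log while launching the backup copy job again
Get-Content 'C:\Windows\System32\LogFiles\Firewall\pfirewall.log' -Tail 20 -Wait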

The log showed the following entries when I tried to connect to the second server:

2014-01-30 14:03:33 DROP TCP <Source IP Backup01> <Dest. IP Backup02> 49662 6160 52 S 623441555 0 8192 - - - RECEIVE

So I manually added another rule to allow port 6160, and that fixed it! Unfortunately I couldn’t test which default Veeam rule would enable this communication because the server still suffers from massive hardware issues, but now the solution works as expected.
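If you prefer to script the missing rule instead of clicking through the firewall console, something like this should work on Server 2012 (the rule name is my own; according to KB1518, TCP 6160 belongs to the Veeam Installer Service):

# Allow inbound TCP 6160, which the log above showed being dropped
New-NetFirewallRule -DisplayName "Veeam TCP 6160 (In)" -Direction Inbound -Protocol TCP -LocalPort 6160 -Action Allow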

A detailed list of all required ports can be found in KB1518


However, let’s take a final step back to the conventional part of this post, because this setup allows a bunch of tweaks to further customize the solution to your needs:

  • Different deduplication settings on the local and remote repositories to get fast restores and even more restore points.
  • You can combine the Backup Copy with the GFS option to keep full backups based on the GFS principle
  • Add a tape library to bring the backups off site.
  • Or you may want to back up the VBR configuration of the primary server on the second server and vice versa.
  • Also, some love for the Hyper-V fans out there: of course this also applies to your preferred platform. You could even speed things up by installing the Transport Service on all Hyper-V nodes.
  • Copy your backups to the cloud, accelerated by the Veeam WAN acceleration. And maybe restore them in the Cloud?

[Screenshot: VeeamBackupCopy3]


Please keep in mind that the second server has only a “passive” connection to the vCenter and does not actively perform backups, so as not to violate any license terms; it is only used for restore operations in case the primary server is not available. I hope this helps!