Saturday, 30 April 2016

VDP Deduplication Process

Saving the same data after every iteration of a backup is not ideal, because the space consumed on your storage increases rapidly. Deduplication technology is used to provide better storage utilization for backups. During the initial full backup, the entire contents of the virtual machine are backed up. Subsequent backups, however, save only the new data, that is, the changes that have occurred since the previous backup iteration. This is called an incremental backup. The changed data is processed by VDP and saved, whereas pointer files are created for the unchanged data that was already present in the previous backup. This saves storage space and also increases backup efficiency.

Before we dive deeper into deduplication, let us look at the two types of deduplication at hand: fixed-length (also called fixed-block) and variable-length (variable-block) deduplication.

This is the raw data that I have at hand right now:
"Welcome to virtuallypeculiar read abot vmware technology"

Fixed length deduplication: I am going to segment this raw data into data-sets defined by a block length of 8, that is, 8 characters per data-set. The output will look something like this:
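The segmentation itself is easy to sketch in Python (a toy illustration of the idea, not VDP's actual algorithm):

```python
raw = "Welcome to virtuallypeculiar read abot vmware technology"

# Fixed-length deduplication: cut the raw data into 8-character blocks,
# regardless of what the data contains
block_size = 8
blocks = [raw[i:i + block_size] for i in range(0, len(raw), block_size)]

print(blocks)
# ['Welcome ', 'to virtu', 'allypecu', 'liar rea', 'd abot v', 'mware te', 'chnology']
```

Note that the block boundaries fall at fixed offsets (0, 8, 16, ...) with no regard to the content of the data.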

Variable length deduplication: Here, we do not have a constant deduplication block length. The algorithm looks at the data set and sets logical boundaries for the chunk length. The output will look something like this:

Now, at a high level, my backup software took a backup of the raw data saved in my notepad file. Since this is the first backup, the entire text data is saved to storage using deduplication technology.

Next, the raw data has a spelling error for the word abot (about). Upon noticing this, I re-open the notepad file, make the necessary change, and save the file again. When the next iteration of backup runs, it scans for changes in the blocks.

How does fixed-length deduplication deal with this?

When the new character is added, the data after it is shifted to the right by one. The output would be something like this:

Now, there are scenarios where this shifting of data causes the shifted data to spill into a new 8-character data-set, occupying a new storage block just for one character. This reduces storage efficiency when compared to variable-length deduplication.
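Continuing the toy Python sketch, fixing the typo (abot to about) inserts one character, and every fixed block from the change onward looks like new data to the backup:

```python
def split_fixed(data, size=8):
    """Split data into fixed-length blocks of `size` characters."""
    return [data[i:i + size] for i in range(0, len(data), size)]

old = split_fixed("Welcome to virtuallypeculiar read abot vmware technology")
new = split_fixed("Welcome to virtuallypeculiar read about vmware technology")

# Blocks before the insertion point still match; everything after has shifted
unchanged = [b for b in new if b in old]
print(unchanged)  # ['Welcome ', 'to virtu', 'allypecu', 'liar rea']
print(len(new))   # 8 -- one extra block now exists, just for the single added character
```

Only 4 of the 8 new blocks deduplicate against the previous backup; the other 4 (including a block holding just the final "y") must be stored again.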

How does variable-length deduplication deal with this?

When changed data is detected, variable-length deduplication makes sure that the chunk boundaries of the changed data set re-align with the chunk boundaries of the previous backup iteration. The output is something like this:

Here the red box shows the changed data, and you can see that the change is limited to a single block, whereas in fixed-length deduplication it rippled through to the end of the data set.
VDP is based on variable-length deduplication and uses an algorithm to set the logical boundaries for the raw data.
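A toy way to see this behaviour in Python is to let the data itself pick the boundaries; here chunks are simply cut after every space (real content-defined chunking algorithms are far more sophisticated, but the effect is the same): the corrected word changes only its own chunk.

```python
def variable_chunks(data):
    """Toy content-defined chunking: cut a boundary after every space.
    Boundaries depend on the data itself, not on fixed offsets."""
    chunks, start = [], 0
    for i, ch in enumerate(data):
        if ch == " ":
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

old = variable_chunks("Welcome to virtuallypeculiar read abot vmware technology")
new = variable_chunks("Welcome to virtuallypeculiar read about vmware technology")

# Only the chunk containing the typo differs; every other chunk still matches
changed = [c for c in new if c not in old]
print(changed)  # ['about ']
```

Because the boundaries move with the content, the insertion does not ripple into the chunks that follow it, and everything except 'about ' deduplicates against the previous backup.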

As a final note, variable-length deduplication provides better storage density than fixed-length, because a change affects far fewer data-sets.

How does VDP deduplication work?

Now that you have a fair understanding of deduplication, we can look into how VDP handles it. Please note that throughout the process, VDP uses only variable-block deduplication.

Have a look at the flow chart below for the basic flow of deduplication process:

Before we get to the working of the flow chart, let's get familiar with the various daemons and processes involved in a backup.

MCS (Management Console Server): This is responsible for managing all your backup requests and the VDRDB database.

There are 8 internal proxies on the VDP appliance. Each proxy runs a process called avAgent. These query the MCS every 15 seconds for incoming job requests. Once the avAgent receives a backup request, it in turn calls avVcbImage.

The avVcbImage process is responsible for enabling, browsing, backing up, and restoring the disks.

The avVcbImage in turn calls avTar, which is the primary process for backup and restore.

The entire deduplication process occurs inside the VDP appliance. When a backup request comes in, the first check is done on the client side, where the appliance determines whether this virtual machine has been backed up before. The .ctk file, created by the Changed Block Tracking (CBT) feature when a backup is taken, records all the changed-sector information since the previous backup. The appliance scans this file, and if the .ctk file shows changes, only that changed data is sent on to Sticky Byte Factoring. Data already present from an older backup gets pointer files created for it and is excluded from Sticky Byte Factoring.

In Sticky Byte Factoring:

The avTar running here is responsible for breaking down the raw data input we received earlier into data chunks. The resulting chunks will be anywhere from 1 KB to 64 KB in size, averaging around 24 KB.

Let's take the data set from the earlier variable-block deduplication example, represent it in terms of KB of data, and re-review the deduplication process.

So here, in the first full backup, the raw data is divided into variable-length blocks using the VDP algorithm, producing a set of data chunks anywhere between 1 KB and 64 KB. Say the data in the first two blocks has changed after that backup was performed. In the next iteration of backup, sticky byte factoring re-syncs the blocks so that the boundaries of the new data set match the chunks of the previous data set. So, no matter where the data has changed, avTar creates chunks to match the previous chunk boundaries.


Once sticky byte factoring divides the raw data into chunks, these chunks are compressed. The compression ratio will be anywhere between 30 and 50 percent, and data that does not compress well is skipped to prevent a performance impact.
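As a rough sketch of this stage, here is a toy illustration using Python's zlib (not the codec VDP actually uses): a repetitive chunk shrinks dramatically, while random data barely shrinks at all, which is why such chunks are better stored uncompressed. The 0.7 threshold below is an arbitrary value for illustration.

```python
import os
import zlib

def maybe_compress(chunk, threshold=0.7):
    """Compress a chunk, but keep the original if compression buys too little."""
    compressed = zlib.compress(chunk)
    ratio = len(compressed) / len(chunk)
    # Store the compressed copy only when it is meaningfully smaller
    return (compressed, ratio) if ratio < threshold else (chunk, ratio)

repetitive = b"virtuallypeculiar " * 500  # highly compressible data
random_like = os.urandom(9000)            # effectively incompressible data

stored_r, ratio_r = maybe_compress(repetitive)
stored_x, ratio_x = maybe_compress(random_like)

print(len(stored_r) < len(repetitive))  # True  -- compressed copy is stored
print(stored_x == random_like)          # True  -- stored as-is, not worth compressing
```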


The compressed data is then hashed using the SHA-1 algorithm, which always outputs a 20-byte string. This hash is unique to each block and serves as a reference for comparison, to check whether the previous backup has a matching hash. If it does, the matching data blocks are excluded from the backup. Hashing does not convert data into hashes; rather, it creates a hash for each data block. So at the end of hashing, you have your data chunks and the hashes corresponding to them. If a hash is not found in the hash cache, the cache is updated with the new hash.
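The idea can be sketched with Python's hashlib (a simplified model of the logic, with made-up chunk contents): each chunk gets a 20-byte SHA-1 digest, and a chunk whose digest is already in the cache is not stored again.

```python
import hashlib

hash_cache = set()  # digests already seen in previous backups
stored = []         # chunks actually written to storage

def deduplicate(chunks):
    for chunk in chunks:
        digest = hashlib.sha1(chunk).digest()  # always 20 bytes
        if digest not in hash_cache:
            hash_cache.add(digest)             # new hash: update the cache...
            stored.append(chunk)               # ...and store the chunk itself
        # otherwise only a reference to the already-stored chunk is kept

deduplicate([b"Welcome ", b"to ", b"Welcome "])  # third chunk is a duplicate
print(len(stored))                               # 2 chunks stored, not 3
print(len(hashlib.sha1(b"Welcome ").digest()))   # 20
```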

The above hashes are called atomic hashes. The atomic hashes are then combined to form composite hashes, and the composites are combined again, and so on, until one single root hash is created.
So in the end, we have the actual data stripes, the atomic hashes, the composite hashes, and the root hash, all stored in their own locations on the VDP storage disks.
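A minimal sketch of that hierarchy in Python (a toy model, not Avamar's actual tree layout): hash the chunks to get atomic hashes, hash pairs of those to get composites, and repeat until a single root hash remains.

```python
import hashlib

def sha1(data):
    return hashlib.sha1(data).digest()

def root_hash(chunks):
    """Combine atomic hashes pairwise into composites until one root remains."""
    level = [sha1(c) for c in chunks]  # atomic hashes, one per chunk
    while len(level) > 1:
        # pair up hashes; an odd one out is carried up unchanged
        level = [sha1(level[i] + level[i + 1]) if i + 1 < len(level) else level[i]
                 for i in range(0, len(level), 2)]
    return level[0]  # the single root hash

root = root_hash([b"Welcome ", b"to ", b"virtuallypeculiar"])
print(len(root))  # 20 -- the root is a 20-byte hash like every other node
```

A nice property of such a tree is that changing any single chunk changes the root hash, so the root acts as a fingerprint for the whole backup.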

Hope this was helpful. If you have any questions, please feel free to reply.


Friday, 22 April 2016

Cannot Perform A Backup With VDP 5.1

While performing a backup of any virtual machine in my environment with VDP, the backup task would fail. So I created one backup job with one virtual machine in it and ran the backup task. The create snapshot task completes successfully, and you can see the VDP snapshot in the snapshot manager of the virtual machine. Once the snapshot is taken, the next task initiated is the VDP backup task, and this fails immediately (no percentage completed), stating that it failed with miscellaneous errors.

When you open an SSH session to the VDP appliance and browse the backup-job log in the directory /usr/local/avamarclient/var-proxy-N, you see the following logging:

2016-04-20 08:28:50 avvcbimage FATAL <5889>: Fatal signal 6 in pid 553
2016/04/20-15:28:50.11148 []  FATAL ERROR: <0001> Fatal signal 6
2016/04/20-15:28:50.11161 []  | 00000000005f39f1
2016/04/20-15:28:50.11162 []  | 00000000005f46b7
2016/04/20-15:28:50.11163 []  | 00000000005f5bdb
2016/04/20-15:28:50.11164 []  | 00000000005f5cce
2016/04/20-15:28:50.11168 []  | 000000000058b800
2016/04/20-15:28:50.11169 []  | 00007ff23c7c16a0
2016/04/20-15:28:50.11170 []  | 00007ff23b0797a9
2016/04/20-15:28:50.11170 []  | 00007ff2308a8900
2016/04/20-15:28:50.11171 []  | 00007ff23087b9ac
2016/04/20-15:28:50.11172 []  | 00007ff2306bf442
2016/04/20-15:28:50.11173 []  | 00007ff2362c472e
2016/04/20-15:28:50.11174 []  | 00007ff2362cff4c
2016/04/20-15:28:50.11174 []  | 00007ff230898d17
2016/04/20-15:28:50.11175 []  | 00007ff230894883
2016/04/20-15:28:50.11176 []  | 00007ff23c7b9696
2016/04/20-15:28:50.11177 []  | 00007ff23b07cd7d
2016/04/20-15:28:50.11184 []  ERROR: <0316> handlefatal exiting thread pid=553, sig=6
2016-04-20 08:28:50 avvcbimage Error <5891>: handlefatal: exiting thread pid=553, sig=6
2016-04-20 08:28:50 avvcbimage Info <16041>: VDDK:2016-04-20T08:28:50.113-07:00 [7FF22EE0D700 panic 'Default']

2016-04-20 08:28:50 avvcbimage Info <16041>: VDDK:-->

2016-04-20 08:28:50 avvcbimage Info <16041>: VDDK:--> Panic: Assert Failed: "_lockToken != __null" @ /build/mts/release/bora-774844/bora/lib/vcbLib/hotAdd.cpp:638

Here the appliance is failing to lock the VMDK it has to back up and is entering a panic state, causing the backup to fail.

VDP 5.1 ships with VDDK version 5.1. VDDK is a set of C++ libraries used to access and create virtual disks. The VDDK version can be found in the backup-job log (just search for VDDK and the first few lines will show the version).

You will see the entry for the VDDK version as follows (this was on my 6.1.2 VDP):
2016-04-18T20:00:15.978+07:00 avvcbimage Info <16041>: VDDK:VMware VixDiskLib (6.0) Release build-2498720

Now, back to the failed backup. This is a known issue with the 5.1 VDDK version, and it can be verified from the 5.5.1 (5.5 Update 1) VDDK release notes (search for the section: Assert failure causes exit when HotAdd cannot acquire lock).

Upgrade your VDP to 5.8 and we should be good to go, as the upgrade updates the underlying VDDK version. Re-run a backup after the upgrade, check the backup logs again, and you will see the new VDDK release and its corresponding build number.

Tuesday, 19 April 2016

Changing DNS for VDP 6.1.2 post deployment.

A short article for quick reference on updating the DNS name of a VDP appliance. While working on a case today, I had to change the DNS name from vdp.vapp.local to vdpnew.vapp.local.

From the below screenshots, you can see that the "hostname" from VDP appliance console shows the original DNS for the appliance.

The ping to the DNS resolves to the current IP of the appliance.

Now, to update this record:

1. Update the DNS record from your DNS manager. 

2. Now log in to the vdp-configure page, which can be found at https://<VDP_IP>:8543/vdp-configure.

3. Here, under the host-name section, you will see the old VDP appliance DNS entry. This has to be updated. Click the gear icon and select Network Settings.

4. You will come across the Network Settings screen; update the host-name section with the new host-name given to the appliance, then click Next and Finish.

5. Restart the guest OS of the VDP appliance and we should be good to go from there.

You can now see the updated DNS for VDP.

That's it!

Wednesday, 13 April 2016

VDP 6.1.2 Plugin Not Available After A Fresh Deployment

So with 6.1.2, and even a few older versions, there is an issue seen with some deployments: once the new VDP appliance is deployed, we do not see the respective plugin in the web client. These are the observations I came across during troubleshooting:

Couple of things to note in my test:
1. vCenter is an appliance: Version 6.0 U2
2. VDP is 6.1.2
3. Tested with embedded and external PSC deployment
4. Tested with standalone and linked mode deployment. 

Web Client logs: 
Appliance Location: /var/log/vmware/vsphere-client/logs


[2016-04-06T19:27:13.272Z] [ERROR] -extensionmanager-pool-98735 70049419 100449 200340 com.vmware.vise.vim.extension.VcExtensionManager Package com.vmware.vdp2 was not installed! Error expanding zip file found at Make sure it is a valid zip file then logout/login to force another download. error in opening zip file
at Method)
at<init>(Unknown Source)
at<init>(Unknown Source)
at<init>(Unknown Source)
at com.vmware.vise.vim.extension.VcExtensionManager.downloadPackage(
at com.vmware.vise.vim.extension.VcExtensionManager$
at com.vmware.vise.vim.extension.VcExtensionManager$
at Source)
at com.vmware.vise.util.concurrent.QueuingCachedThreadPool$
at java.util.concurrent.Executors$ Source)
at Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$ Source)

So here it is indicating that the com.vmware.vdp2 package was not installed, as it failed during the unzip operation.

MOB entry for VDP plugins: 
Location: https://vCenter_IP/mob

Plugin report:

Both the required plugins for 6.x are present. There is no need to invoke them.

vCenter Serenity Report:
Location: /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity

For 5.5
Location: /var/lib/vmware/vsphere-client/vc-packages/vsphere-client-serenity

C:\ProgramData\VMware\vSphere Web Client\vc-packages\vsphere-client-serenity

This folder was empty and there was no reference of the VDP plugin here. 

What has to be done next: 
1. Go to the following URL 

2. This downloads an archive. Extract it and you will get two files.
3. In the vsphere-client-serenity directory, create a folder named com.vmware.vdp2-<vdp_version> and paste these two files there.

Find your VDP plugin version number by clicking the com.vmware.vdp2 plugin on the MOB page and observing the version string.

**Now I have seen a case where the "vc-packages" and "vsphere-client-serenity" is not available at all. In this case, you can manually create these two directories**

4. Restart the web client service:
# service-control --stop vsphere-client
# service-control --start vsphere-client
For Windows vCenter, restart the web client service from services.msc

5. Log in to the web client.

The plugin was available and connected successfully. 

I have also come across a situation where, once the plugin is available, clicking the Connect option leaves the screen grayed out forever. I am currently testing this further on my 6.0 U2 setup and will post a new article soon.

Update 15-April-2016

If your VDP appliance is residing on a distributed switch, then the web client screen always stays grayed out when you click the "Connect" button.

Migrate the VDP appliance to standard switch and the appliance connects instantaneously.

Update 16-April-2016

If the VMs are residing on distributed switch, then the web client screen grays out when submitting a new backup job.

Workaround: Migrate VMs to standard switch until a fix is released or use a lower version of VDP.

Update 26-April-2016

Hot fix is available in this KB article here.

Saturday, 9 April 2016

Backup VDP Data To An External Drive

Well, you have got your new vSphere Data Protection set up and have kicked off a few successful backups. The backup data is stored safely on your VDP datastore, and you can restore it any time you want. But say the VMFS volume hosting your VDP data drives becomes corrupted; that means all your precious backed-up data is lost. So most enterprises look for a means to copy those VDP backup disks to an external drive.

Now, there are two ways of achieving this. 

1. Using a script to make a copy of the VDP data disks to a tape drive or your local machine. The disadvantage of this is that you will have to power off your backup appliance for the disks to be copied over. So, if you have only one VDP appliance in your environment, your backups are down until the copy completes.

2. Deploying another VDP appliance and configuring replication between the primary and the secondary appliance. The primary performs your scheduled backups and then replicates this data to the secondary appliance. So, daily backups occur on the primary, replication to the secondary occurs on the weekends, and the script runs on the secondary appliance for the rest of the week; since the secondary is neither receiving replication traffic nor running backup jobs then, it can afford the downtime.

The script:

#Connects to the vCenter Server hosting the backup appliance
Connect-VIServer -Server <vCenter_IP> -User administrator -Password <Password>
#Shuts down the VDP appliance
Shutdown-VMGuest -VM "<VDP_Appliance_Name>" -Confirm:$false
#Sleep time after which the copy script kicks in. Value is in seconds
Start-Sleep -s 600
#Copy script
Copy-DataStoreItem vmstore:\<Data_centername>\<VDP_Datastore_Name>\"<VDP_Appliance_Name> 6.1"\*.vmdk D:\VDP\vdp01 -Force
#Starts the appliance after the copy is done
Start-VM -VM "vSphere Data Protection 6.1" -Confirm:$false

Here, I am copying only the VDP .vmdk files to the D drive on my local computer. If you want to copy the remaining files, simply replicate the #Copy script line and change *.vmdk to the other file extensions (*.vmx, *.nvram, etc.).

Sample output during the Power Off:

Sample output during the copy:

Once the copy completes, you can verify the contents of the VM in the local drive/tape connected to your machine. 

**Try at your own risk if you are running a production environment. All the tests for the script was done in my lab**

Friday, 8 April 2016

Cannot Connect VDP To Web Client. SSO Server Could Not Be Found

You have set up a new VDP appliance in your environment, or have an existing one, and suddenly you run into issues connecting this appliance to the Web Client. Every time you choose the appliance from the drop-down and hit Connect, you run into the error:

"Could not connect to the requested VDP appliance. The SSO Server could not be found. Please make sure the SSO and DNS servers are available on the network and all settings are configured properly"

Pre-checks and solutions:

1. DNS should be working correctly. Check that you are able to resolve the IP and FQDN of the VDP appliance, as well as your vCenter Server, from the machine where you are trying to connect. If DNS resolution is running into issues, fix the DNS issue first and then retry the connect.

2. Time on the VDP appliance, the ESXi host, and your SSO server must be in sync. If time is not synced, you will run into an NTP error message during the connect, so please make sure the time is synced across all three components. Use the command "date" on the VDP to list the current time.

3. You can try adding the VDP appliance and vCenter IP/FQDN entry in the /etc/hosts file of the VDP appliance and then try a reconnect. 

For me, all of the above pointers were working good, and yet I was unable to connect the appliance to the Web Client. 

Restarting the tomcat service on the VDP appliance allowed me to successfully connect the appliance to the web client. To restart it:
#: --restart

That's pretty much it!

You can refer to this blog here for more information, and this KB here for further troubleshooting, if these steps do not help.

Thursday, 7 April 2016

VDP CLI Commands

Here is a set of commands for various operations that you can perform from the SSH of the VDP appliance. I will update this article as I come across new commands.

dpnctl status all
//Shows output of all the services of the VDP appliance.

//Sample output:

dpnctl: INFO: gsan status: up
dpnctl: INFO: MCS status: up.
dpnctl: INFO: emt status: up.
dpnctl: INFO: Backup scheduler status: up.
dpnctl: INFO: axionfs status: down.
dpnctl: INFO: Maintenance windows scheduler status: enabled.
dpnctl: INFO: Unattended startup status: enabled.
dpnctl: INFO: avinstaller status: up.
dpnctl: INFO: [see log file "/usr/local/avamar/var/log/dpnctl.log"]

//Shows appliance access, status, last checkpoint, last hfs check and deduplication space reports.

//Sample output

Thu Apr  7 20:02:17 IST 2016  [vdp.vcloud.local] Thu Apr  7 14:32:17 2016 UTC (Initialized Mon Apr  4 18:47:03 2016 UTC)
Node   IP Address     Version   State   Runlevel  Srvr+Root+User Dis Suspend Load UsedMB Errlen  %Full   Percent Full and Stripe Status by Disk
0.0  7.2.80-94  ONLINE fullaccess mhpu+0hpu+0hpu   1 false   0.38 3420   102774   0.6%   0%(onl:13 )  0%(onl:12 )  0%(onl:12 )
Srvr+Root+User Modes = migrate + hfswriteable + persistwriteable + useraccntwriteable

System ID: 1459795623@00:50:56:B9:6B:3C

All reported states=(ONLINE), runlevels=(fullaccess), modes=(mhpu+0hpu+0hpu)
System-Status: ok
Access-Status: full

Last checkpoint: cp.20160407033314 finished Thu Apr  7 09:03:33 2016 after 00m 19s (OK)
Last GC: finished Thu Apr  7 08:00:39 2016 after 00m 16s >> recovered 64.05 MB (OK)
Last hfscheck: finished Thu Apr  7 09:03:05 2016 after 02m 01s >> checked 32 of 32 stripes (OK)

Maintenance windows scheduler capacity profile is active.
  The backup window is currently running.
  Next backup window start time: Fri Apr  8 20:00:00 2016 IST
  Next maintenance window start time: Fri Apr  8 08:00:00 2016 IST
//Shows change of data for the list of backed up VMs and a list of top 3 high change VMs in backup jobs.

//Sample output

Date          New Data #BU       Removed #GC    Net Change
----------  ---------- -----  ---------- -----  ----------
2016-04-04     1047 mb 1            0 mb           1047 mb
2016-04-05     3311 mb 4            0 mb 1         3311 mb
----------  ---------- -----  ---------- -----  ----------
Average        2179 mb              0 mb           2179 mb

Top 3 High Change Clients:
Total for all clients                     4358 mb      100.0%
  Windows_2008_UDmaxWmHw0pC0PB5bXgAKg       3306 mb       75.8%
  CentOS7_UDlmW0iZnAAWleQvAdTtog          1052 mb       24.2%

mccli activity show
//Shows status of running backup jobs.

//Sample output

ID               Status  Error Code Start Time           Elapsed     End Time             Type             Progress Bytes New Bytes Client       Domain
---------------- ------- ---------- -------------------- ----------- -------------------- ---------------- -------------- --------- ------------ ----------------------------
9146003975305509 Running 0          2016-04-07 20:05 IST 00h:00m:16s 2016-04-08 20:05 IST On-Demand Backup 0 bytes        0%        Windows 2008 /

mccli activity get-log --id=XXXXXXXXXX
//Shows the logs for the backup job for the backup ID. Backup ID can be found from the mccli activity show

mccli activity cancel --id=XXXXXXXXXX
//To cancel the actively running backup job.

//Sample output

0,22205,Backup cancelled via console
Attribute   Value
----------- ----------------
activity-id 9146003975305509

mccli client show --recursive=true
//Show the registered clients with the VDP.

//Sample output
Client           Domain                       Client Type
---------------- ---------------------------- ------------------------------------
Windows 2008     /                            Virtual Machine
                 /                            vCenter
vdp.vcloud.local /clients                     VMware Image Proxy with Guest Backup

mccli backup show --name=/vCenter/VirtualMachines/VirtualMachine_Name --recursive=true
//Shows the successful backup and the restore point created for the same.

//Sample Command
 mccli backup show --name=/ --recursive=true

//Sample Output
Created                 LabelNum Size    Retention Hostname         Location
----------------------- -------- ------- --------- ---------------- --------
2016-04-07 20:31:34 IST 1        16.0 GB DWMY      vdp.vcloud.local Local

mccli client backup-dataset --name=VM_NAME --domain=/VM/DOMAIN
//To start backup of a VM from command line.

//Sample Command:
mccli client backup-dataset --name="Windows 2008" --domain=/

//Sample output

Attribute   Value
----------- -----------------------------------------
client      / 2008
activity-id 9146004043408409
dataset     /VMware Image Dataset

//Shows the checkpoint for the appliance. Valid and Not validated checkpoint

//Sample output

cp.20160407033040 Thu Apr  7 09:00:40 2016   valid rol ---  nodes   1/1 stripes     37
cp.20160407033314 Thu Apr  7 09:03:14 2016   valid --- ---  nodes   1/1 stripes     37

mccli server show-prop
//Shows VDP details. Useful if you cannot connect to VDP from the web client

//Sample output

Attribute                        Value
-------------------------------- ----------------------------
State                            Full Access
Active sessions                  1
Total capacity                   575.9 GB
Capacity used                    5.3 GB
Server utilization               0.9%
Bytes protected                  0 bytes
Bytes protected quota            Not configured
License expiration               Never
Time since Server initialization 2 days 20h:04m
Last checkpoint                  2016-04-07 09:03:14 IST
Last validated checkpoint        2016-04-07 09:00:40 IST
System Name                      vdp.vcloud.local

mccli server show-services
//Another command to check service status

//Sample output

Name                               Status
---------------------------------- ---------------------------
Hostname                           vdp.vcloud.local
IP Address               
Load Average                       0.31
Last Administrator Datastore Flush 2016-04-07 19:45:00 IST
PostgreSQL database                Running
Web Services                       Error
Web Restore Disk Space Available   266,182,888K
Login Manager                      Running
snmp sub-agent                     Disabled
ConnectEMC                         Disabled
snmp daemon                        Disabled
ssh daemon                         Running
Data Domain SNMP Manager           Not Running
Remote Backup Manager Service      Running
RabbitMQ                           Not Running
Replication cron job               Not Running
/                       All vCenter connections OK.

 --restart / --stop / --start
//Command to restart tomcat service for VDP. If this service is stopped you cannot connect VDP to Web Client or access VDP GUI page on 8543.

//Sample successful start

INFO: Copying CST libs to tomcat
Administrator Server is not running.
Starting the database server.
Waiting for postmaster to start ...Started
No script specified

Database server is still running...
Starting tomcat
Using CATALINA_BASE:   /usr/local/avamar-tomcat
Using CATALINA_HOME:   /usr/local/avamar-tomcat
Using CATALINA_TMPDIR: /usr/local/avamar-tomcat/temp
Using JRE_HOME:        /usr/java/latest
Using CLASSPATH:       /usr/local/avamar-tomcat/bin/bootstrap.jar:/usr/local/avamar-tomcat/bin/tomcat-juli.jar
Using CATALINA_PID:    /usr/local/avamar-tomcat/
Tomcat started.

dpnctl start gsan / mcs / axionfs
//Start each of the services individually

avmaint hfscheck --ava --full
//To run a full integrity check on your VDP appliance

//Sample successful output 


avmaint hfscheckstatus --ava
//To check hfscheck status. This tells the last cp that was created during the integrity check and the status.

//Sample output


Re-register VDP 6.1 to vCenter Server

Sometimes, there might be a need to re-register your VDP appliance to your vCenter Server, maybe to use a different user account for registration, or due to some issue with the vCenter. The registration process is quite easy and will not affect any of your backup jobs or the backup data present in your deduplication store.

To Re-register your VDP appliance to vCenter, follow the below steps:

From the below screenshot you can see the backup job that is already present on my appliance prior to the re-registration. 

Next, you need to go to your vdp-configure page, which is available at https://<VDP_IP>:8543/vdp-configure.


Login to your appliance with your root credentials and you will come across the below page. Click the gear icon and select vCenter registration. 

Please read the below message. Do not make any changes to the vCenter host-name, IP, or port number; doing so will cause your backup jobs to be lost.

However, re-registering with a different user should not cause any issues. 

Provide the new user details and keep your vCenter details the same. Click Next, review the changes and click Finish. 

The below task will be started during the re-registration process. Once the task is done, it will reconnect to web client and log you out of your vdp-configure session. 

Now, log back in to your web client and go to vSphere Data Protection. Connect the required appliance to the web client, go to the Backup tab, and you will notice your backup job is still retained.

That's it!

Tuesday, 5 April 2016

Cannot Open vdp-configure Page Or Check Status of VDP Services

So when you try to open the https://<VDP_IP>:8543/vdp-configure page, you will receive the message:

"This site can't be reached. ERR_CONNECTION_REFUSED"

When you open SSH to this VDP appliance and check the status of the services, this will also throw up an error. Run the below command to check the VDP service status:
root@vdp:~/#: dpnctl status
The error you will receive is:

mkdtemp: private socket dir: No space left on device.

I tried to run the command to start the webservices, which is:
root@vdp:~/#: --start
Which also failed with an error:

Waiting for postmaster to start ...........Failed to connect DBI:Pg:dbname=postgres;port=5558.ERROR: Failed to start the database.

I ran the below command to check the space on the VDP appliance partition.
root@vdp:~/#: df -h
In the output, I noticed that the partition /dev/sda2 was at 100 percent used.

Run the below command to list which directories under a given path are consuming the most space:
root@vdp:~/#: du -h --max-depth=1 <directory>
Upon performing this, I found the below directory occupying nearly 40 percent of the space on sda2:
Removed all the old log files from this directory.

Also, if the space does not change even after the logs from the above directory are removed, then you need to check the following directory:
You will see a file which is a manually generated log file. You can go ahead and remove this file. Do not remove any other file in this directory.

Prior to removing the logs, stop the VDP services using the below command:
root@vdp:~/#: dpnctl stop
However, this command also failed due to the unavailable space. If this occurs, go ahead and remove the files without stopping the services. I risked this, but the log files cleared out, space was freed, and I was able to start the web services for VDP and log in to the GUI of the appliance.


Monday, 4 April 2016

Unable To Register VDP to vCenter in the vdp-configure Page

After deploying the OVF template for a new VDP appliance, we will have to go to the vdp-configure page to get the appliance configured to the vCenter.

Here, on the vCenter registration page, after entering the username and vCenter details and clicking test connection, you run into the below error:

"Unable to verify vCenter listening on specified HTTP Port. Please re-check values and try again"

So, here I was trying to configure the VDP appliance with port 80 for HTTP and 443 for HTTPS.

However, the vCenter was running on custom ports 81 and 444.

Log in to your vCenter (6.0), select the Administration tab > vCenter Server Settings > Advanced Settings.

Here there are two parameters that define your vCenter ports:

config.vpxd.rhttpproxy.httpsport 443
config.vpxd.rhttpproxy.httpport 80

443 and 80 are the default ports. If they are different, then custom ports are in use, and we need to open those ports on the VDP appliance firewall.

You can use telnet to check the connection between the appliance and vCenter.
Run the command from the SSH of the appliance
telnet <server_IP> <port_number>

To perform this:
1. Open a SSH to the VDP appliance.
2. Change your directory to:
#: cd /etc/
3. Open the file "firewall.base" in a vi editor
4. Locate the line:
exec_rule -A OUTPUT -p tcp -m multiport --dport 53,80
5. Add your custom http and https port value here and save the file.
6. Restart the firewall service using the following commands:
#: service avfirewall stop
#: service avfirewall start

Register the appliance again and make sure you give the custom ports in the http and https field during configuration.

That's it!

Saturday, 2 April 2016

Migrate Networking From Distributed Switch To Standard Switch

Written by Suhas Savkoor

In the previous article here, we saw how to migrate ESXi networking from Standard Switch to Distributed Switch. In this one, we will perform the reverse of this.

Step 1:
This is the setup that I have for my vDS after I had it migrated.

Here I have two port-groups: one for my virtual machines and one for my vmk management port-group. Both of these are connected to two uplinks, vmnic0 and vmnic1.

Step 2:
Before creating a standard switch, I will be removing one of the vmnics (physical adapters) from the vDS, as I do not have any free uplinks to add to the standard switch. Select Manage Physical Adapters and remove the required uplink.
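If the host is unreachable through vCenter, the same uplink can be freed from the ESXi shell with `esxcfg-vswitch`. A sketch only: the uplink name, DVPort ID, and switch name below are examples you would first read from the list output:

```shell
esxcfg-vswitch -l                          # list switches; note the DVPort ID holding vmnic1
esxcfg-vswitch -Q vmnic1 -V 256 dvSwitch   # unlink vmnic1 from DVPort 256 on the vDS
```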

Step 3:
Now let's go ahead and create a new Standard Switch. Select the vSphere Standard Switch and click Add Networking

Step 4:
Choose Virtual Machine as the port-group type.

Step 5:
Select the available uplink that needs to be connected to this standard switch and click Next

Step 6:
Provide a Network Label to the virtual machine port-group on the standard switch.

Review the settings and complete the wizard. You now have one standard switch with one virtual machine port-group connected to an uplink. It's now time to begin the migration.
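The same switch build (steps 3 through 6) can be scripted with `esxcli` from the host's shell. A hedged equivalent, where the switch name, uplink, and port-group label are this article's example values:

```shell
# create the standard switch, attach the freed uplink, and add the VM port-group
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name="VSS VM Portgroup" --vswitch-name=vSwitch1
```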

Step 7:
Go back to the distributed switch section and select Manage Virtual Adapters.

Step 8:
Select the required vmk and click Migrate

Step 9:
Select the vSwitch you want to migrate this port-group to.

Step 10:
Provide a network label for this vmk port-group on the standard switch. If the vDS port-group for this vmk uses a VLAN, specify the same VLAN ID in the VLAN section to replicate it on the standard switch; otherwise the migration fails.

Review and complete, and the management vmk is migrated off the distributed switch to the standard switch.
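Matching the VLAN on the standard-switch side can also be checked or set from the host's shell. A hedged example; the port-group name and VLAN ID 10 are placeholders for whatever the vDS port-group actually uses:

```shell
# tag the standard-switch port-group with the same VLAN as its vDS counterpart
esxcli network vswitch standard portgroup set --portgroup-name="VSS VM Portgroup" --vlan-id=10
esxcli network vswitch standard portgroup list   # confirm the VLAN column matches the vDS
```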

Step 11:
To migrate virtual machine's networking, go to Home > Networking > Right click the vDS and select Migrate Virtual Machine Networking

Step 12:
The source is the VM port-group on the vDS, which in my case is dvPortGroup, and the destination is the standard switch port-group we created recently, VSS VM Portgroup.

Step 13:
Select the virtual machines that you want to migrate.

Review and finish. Once the migration completes, check the standard switch configuration to verify everything migrated successfully.

Well, that's it!

Migrate Networking From Standard Switch To Distributed Switch

In this article, let's see how to migrate your ESXi host networking from vSphere Standard Switch to vSphere Distributed Switch.

Step 1:
Here we see the current configuration of the standard switch on one of my ESXi hosts.

I have one standard switch with two port-groups: one for virtual machines and the other for the management network. I have simplified the networking by eliminating additional vmkernel port-groups for vMotion, FT, and iSCSI, as the process to move them is the same. This standard switch has one uplink, vmnic1.

Step 2:
Let's go ahead and create a distributed switch. Go to Home and select Networking. Right click the required Datacenter and click New vSphere Distributed Switch.

Step 3:
Select the version of the distributed switch that you are going to create.

Step 4:
Provide a name for this distributed switch. If you want to alter the number of uplink ports on this switch, you can do so in the Number of Uplink Ports section.

Step 5:
I am going to add hosts later as I like to review and make sure I got the setup right before I start moving anything off my standard switch. 

Review your settings and Finish in the Ready to Complete section. 

Step 6:
Navigate back to the Networking section; you can now see your distributed switch under the specified Datacenter. Right-click this switch and select Add Host.

Step 7:
Select the host that you want to add. You can see that I have two uplinks available for this host: vmnic0 and vmnic1. Make sure you have one free uplink when you add the host to the distributed switch; if you migrate your port-groups off the standard switch while the vDS has no uplinks, your networks will be disconnected.
Here, I will choose the free, unused adapter, vmnic0, to be added to the vDS.

Step 8:
As seen in the standard switch configuration, I had one vmkernel port-group, vmk0. I am not going to migrate this port-group right now. You could do it at this stage by simply using the drop-down under Destination Port Group and selecting the vDS port-group your management network should migrate to.

Step 9:
I am not moving any virtual machine networking either, because I will be doing both of these steps later. Review your settings and complete adding the host to the vDS.

Step 10:
Now, we will migrate the VMkernel from standard switch to the vDS. Select the Host and click the Configuration tab and browse to Networking > vSphere Distributed Switch. Click Manage Virtual Adapters.

Step 11:
Click Add and select the required vmk.

Step 12:
Select Migrate existing virtual adapter as we already have the vmk in the standard switch.

Step 13:
Select the required vmk and the destination port-group on the vDS under the "Port Group" section.

Review the settings and complete the migration; it takes a couple of seconds to finish. You can also run a continuous ping to the host to check network connectivity. Once migrated, review your vDS diagram.

Step 14:
Next, we will migrate the virtual machine networking from standard switch to the vDS. Go back to Home and select Networking. Right click the respective distributed switch and select Migrate Virtual Machine Networking.

Step 15:
The source network is on your standard switch; from the drop-down, select the standard switch port-group where the virtual machines reside, which in my case is called VM Network. The destination is the port-group on the vDS I want to migrate the VMs to, called dvPortGroup.

Step 16:
Select the virtual machines that you want to migrate to this port-group in the next section.

Review the changes and finish the migration. Once migrated, go back to your distributed switch under the Hosts and Clusters section and review the final configuration.

That's pretty much it. If you have additional port-groups, you will have to repeat the process. If your port-groups have VLAN IDs, you will have to create port-groups on the vDS with the same VLAN IDs, else the migration will fail.
If you are migrating iSCSI with port binding, you will have to remove the port binding, migrate the iSCSI vmkernel, and then reconfigure port binding post-migration.