Saturday, 29 October 2016

Migrating VDP From 5.8 and 6.0 To 6.1.x With Data Domain

You cannot upgrade a vSphere Data Protection appliance from 5.8.x or 6.0.x to 6.1.x due to the difference in the underlying SUSE Linux version. Since the earlier versions of vSphere Data Protection used SLES 11 SP1 and 6.1.x uses SLES 11 SP3, we will be performing a migration instead.

This article only discusses migrating a VDP appliance from 5.8.x or 6.0.x with a Data Domain attached. If you had a VDP appliance without a Data Domain, you would choose the "Migrate" option in the vdp-configure wizard during the setup of the new 6.1.x appliance. However, this is not the path we will follow when the destination storage is an EMC Data Domain. A VDP appliance with Data Domain is migrated by a process called checkpoint restore. Let's discuss these steps below...

For this instance let's consider the following setup:
1. A vSphere Data Protection 5.8 appliance
2. A virtual edition of the EMC Data Domain appliance (the process is the same for a physical appliance as well)
3. The 5.8 VDP was deployed as a 512GB deployment.
4. The IP address of this VDP appliance was 192.168.1.203
5. The IP address of the Data Domain appliance is 192.168.1.200

Pre-requisites:
1. In point (3) above you saw that the 5.8 VDP appliance was set up with 512 GB of local drives. The first question that comes up here is: why have a local drive when the backups reside on the Data Domain?
A vSphere Data Protection appliance with a Data Domain still has local VMDKs to store the metadata of the client backups. The actual client data is deduplicated and stored on the DD appliance, while the metadata of each backup is stored under the /data0?/cur directory on the VDP appliance. So, if your source appliance was a 512 GB deployment, then the destination has to be either equal to or greater than the source deployment.

2. The IP address, DNS name, domain and all other networking configuration of the destination appliance should be the same as the source.

3. It is best to keep the same password on the destination appliance during the initial setup process.

4. On the source appliance make sure Checkpoint Copy is enabled. To verify this, go to the https://vdp-ip:8543/vdp-configure page, select the Storage tab, click the gear icon and click Edit Data Domain. The first page displays this option. If this is not checked, the checkpoints on the source appliance will not be copied over to the Data Domain, and you will not be able to perform a checkpoint restore.
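The sizing rule from prerequisite (1) can be made concrete with a trivial sketch (the function name and GB values here are illustrative, not from any VDP tooling):

```python
def destination_size_ok(source_gb: int, destination_gb: int) -> bool:
    """The new appliance's local (metadata) storage must be at least
    as large as the source appliance's deployment."""
    return destination_gb >= source_gb

print(destination_size_ok(512, 512))  # equal size is allowed
print(destination_size_ok(512, 256))  # too small: the checkpoint restore will not fit
```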

The migration process:
1. Open an SSH session to the source VDP appliance and run the below command to get the checkpoint list:
# cplist

The output would be similar to:
cp.20161011033032 Tue Oct 11 09:00:32 2016   valid rol ---  nodes   1/1 stripes     25
cp.20161011033312 Tue Oct 11 09:03:12 2016   valid --- ---  nodes   1/1 stripes     25

Make a note of this output.
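If you want to pick out the checkpoint tag, its validity and its timestamp programmatically, the cplist rows are easy to parse. A sketch, assuming the whitespace-separated format shown above (the function name is mine):

```python
from datetime import datetime

def parse_cplist_line(line):
    """Parse one line of cplist output into (tag, is_valid, flag, utc_time)."""
    fields = line.split()
    tag, is_valid, flag = fields[0], fields[6] == "valid", fields[7]
    # the checkpoint tag itself encodes its UTC creation time: cp.YYYYMMDDHHMMSS
    when = datetime.strptime(tag, "cp.%Y%m%d%H%M%S")
    return tag, is_valid, flag, when

print(parse_cplist_line(
    "cp.20161011033032 Tue Oct 11 09:00:32 2016   valid rol ---  nodes   1/1 stripes     25"))
```

Note that cplist prints the appliance's local time (09:00:32 here), while the tag encodes UTC (03:30:32); the UTC value is what the rollback menu later displays.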

2. Run the below command to obtain the Avamar System ID:
# avmaint config --ava | grep -i "system"
The output would be similar to:
  systemname="vdp58.vcloud.local"
  systemcreatetime="1476126720"
  systemcreateaddr="00:50:56:B9:3E:6D"

Make a note of this output as well. 1476126720 is the Avamar System ID. This is used to determine which mTree this VDP appliance corresponds to on the Data Domain.
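The system ID is just a Unix epoch timestamp (the appliance's creation time), and the mTree name on the Data Domain is derived from it. A quick sketch to decode it:

```python
from datetime import datetime, timezone

system_id = 1476126720  # systemcreatetime from the avmaint output above
mtree = f"avamar-{system_id}"  # mTree/LSU name as it appears on the Data Domain
created = datetime.fromtimestamp(system_id, tz=timezone.utc)
print(mtree, created.strftime("%Y-%m-%d %H:%M:%S UTC"))
```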

3. Run the below command to obtain the hashed Avamar root password. This is used to test the GSAN login if the migration fails. Since it is only needed by VMware Support, you can skip this step.
# grep ap /usr/local/avamar/etc/usersettings.cfg
The output would be similar to:
password=6cbd70a95847fc58beb381e72600a4cb33d322cc3d9a262fdc17acdbeee80860a285534ab1427048

4. Power off the source appliance

5. Deploy the VDP 6.1.x appliance via the OVF template, provide the same networking details during the ova deployment, and power on the 6.1.x appliance once the deployment completes successfully.

6. Go to the https://vdp-ip:8543/vdp-configure page and complete the configuration process for the new appliance. As mentioned above, during the "Create Storage" section of the wizard, specify local storage space either equal to or greater than that of the source VDP appliance. Once the appliance configuration completes, it will reboot the new 6.1.x system.

7. Once the reboot is complete, open an SSH session to the 6.1.x appliance and run the below command to list the available checkpoints on the Data Domain.
# ddrmaint cp-backup-list --full --ddr-server=<data-domain-IP> --ddr-user=<ddboost-user-name> --ddr-password=<ddboost-password>

Sample command from my lab:
# ddrmaint cp-backup-list --full --ddr-server=192.168.1.200 --ddr-user=ddboost-user --ddr-password=VMware123!
The output would be similar to:
================== Checkpoint ==================
 Avamar Server Name           : vdp58.vcloud.local
 Avamar Server MTree/LSU      : avamar-1476126720
 Data Domain System Name      : 192.168.1.200
 Avamar Client Path           : /MC_SYSTEM/avamar-1476126720
 Avamar Client ID             : 200e7808ddcde518fe08b6778567fa4f397e97fc
 Checkpoint Name              : cp.20161011033032
 Checkpoint Backup Date       : 2016-10-11 09:02:07
 Data Partitions              : 3
 Attached Data Domain systems : 192.168.1.200

The fields we need are the Avamar Server MTree/LSU and the Checkpoint Name. The avamar-1476126720 is the Avamar mTree on the Data Domain; we obtained this system ID earlier in this article. The checkpoint cp.20161011033032 is a checkpoint from the source VDP appliance that was copied over to the Data Domain.

8. Now, we will perform a cprestore to this checkpoint. The command to perform the cprestore is:
# /usr/local/avamar/bin/cprestore --hfscreatetime=<avamar-ID> --ddr-server=<data-domain-IP> --ddr-user=<ddboost-user-name> --cptag=<checkpoint-name>

Sample command from my lab:
# /usr/local/avamar/bin/cprestore --hfscreatetime=1476126720 --ddr-server=192.168.1.200 --ddr-user=ddboost-user --cptag=cp.20161011033032
Where 1476126720 is the Avamar System ID and cp.20161011033032 is a valid checkpoint. Do not roll back if the checkpoint is not valid. If the checkpoint is not validated, you will have to run an integrity check on the source VDP appliance to generate a valid checkpoint and copy it over to the Data Domain system.
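To avoid typos in this long command line, the invocation can be assembled from the values gathered in steps 1 and 2. A minimal sketch (the helper name is mine):

```python
def cprestore_cmd(avamar_id, ddr_server, ddr_user, cptag):
    """Build the cprestore command line from the values gathered earlier."""
    return ("/usr/local/avamar/bin/cprestore"
            f" --hfscreatetime={avamar_id}"
            f" --ddr-server={ddr_server}"
            f" --ddr-user={ddr_user}"
            f" --cptag={cptag}")

print(cprestore_cmd(1476126720, "192.168.1.200", "ddboost-user", "cp.20161011033032"))
```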

The output would be:
Version: 1.11.1
Current working directory: /space/avamar/var
Log file: cprestore-cp.20161011033032.log
Checking node type.
Node type: single-node server
Create DD NFS Export: data/col1/avamar-1476126720/GSAN
ssh ddboost-user@192.168.1.200 nfs add /data/col1/avamar-1476126720/GSAN 192.168.1.203 "(ro,no_root_squash,no_all_squash,secure)"
Execute: ssh ddboost-user@192.168.1.200 nfs add /data/col1/avamar-1476126720/GSAN 192.168.1.203 "(ro,no_root_squash,no_all_squash,secure)"
Warning: Permanently added '192.168.1.200' (RSA) to the list of known hosts.
Data Domain OS
Password:

Enter the Data Domain password when prompted. Once the password is authenticated, the cprestore will start. It copies the metadata of the backups for the displayed checkpoint onto the 6.1.x appliance.

The output would be similar to:
[Thu Oct  6 08:24:44 2016] (22497) 'ddnfs_gsan/cp.20161011033032/data01/0000000000000015.chd' -> '/data01/cp.20161011033032/0000000000000015.chd'
[Thu Oct  6 08:24:44 2016] (22498) 'ddnfs_gsan/cp.20161011033032/data02/0000000000000019.wlg' -> '/data02/cp.20161011033032/0000000000000019.wlg'
[Thu Oct  6 08:24:44 2016] (22497) 'ddnfs_gsan/cp.20161011033032/data01/0000000000000015.wlg' -> '/data01/cp.20161011033032/0000000000000015.wlg'
[Thu Oct  6 08:24:44 2016] (22499) 'ddnfs_gsan/cp.20161011033032/data03/0000000000000014.wlg' -> '/data03/cp.20161011033032/0000000000000014.wlg'
[Thu Oct  6 08:24:44 2016] (22498) 'ddnfs_gsan/cp.20161011033032/data02/checkpoint-complete' -> '/data02/cp.20161011033032/checkpoint-complete'
[Thu Oct  6 08:24:44 2016] (22499) 'ddnfs_gsan/cp.20161011033032/data03/0000000000000016.chd' -> '/data03/cp.20161011033032/0000000000000016.chd'

This keeps going until all the metadata is copied over. The length of the cprestore process depends on the amount of backup data. Once the process is complete you will see the below message.

Restore data01 finished.
Cleanup restore for data01
Changing owner/group and permissions: /data01/cp.20161011033032
PID 22497 returned with exit code 0
Restore data03 finished.
Cleanup restore for data03
Changing owner/group and permissions: /data03/cp.20161011033032
PID 22499 returned with exit code 0
Finished restoring files in 00:00:04.
Restoring ddr_info.
Copy: 'ddnfs_gsan/cp.20161011033032/ddr_info' -> '/usr/local/avamar/var/ddr_info'
Unmount NFS path 'ddnfs_gsan' in 3 seconds
Execute: sudo umount "ddnfs_gsan"
Remove DD NFS Export: data/col1/avamar-1476126720/GSAN
ssh ddboost-user@192.168.1.200 nfs del /data/col1/avamar-1476126720/GSAN 192.168.1.203
Execute: ssh ddboost-user@192.168.1.200 nfs del /data/col1/avamar-1476126720/GSAN 192.168.1.203
Data Domain OS
Password:
kthxbye

Once the Data Domain password is entered, the cprestore process completes with a kthxbye message.

9. Run the # cplist command on the 6.1.x appliance and you should notice that the checkpoint displayed in the cp-backup-list output is now listed under the 6.1.x checkpoints:

cp.20161006013247 Thu Oct  6 07:02:47 2016   valid hfs ---  nodes   1/1 stripes     25
cp.20161011033032 Tue Oct 11 09:00:32 2016   valid rol ---  nodes   1/1 stripes     25

The cp.20161006013247 is the 6.1.x appliance's local checkpoint and cp.20161011033032 is the checkpoint of the source appliance which was copied over from the Data Domain during the cprestore.

10. Once the restore is complete, we need to perform a rollback to this checkpoint. First, stop all core services on the 6.1.x appliance using the below command:
# dpnctl stop
11. Initiate the force rollback using the below command:
# dpnctl start --force_rollback

You will see the following output:
Identity added: /home/dpn/.ssh/dpnid (/home/dpn/.ssh/dpnid)
-  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
Action: starting all
Have you contacted Avamar Technical Support to ensure that this
  is the right thing to do?
Answering y(es) proceeds with starting all;
          n(o) or q(uit) exits
y(es), n(o), q(uit/exit):

Select yes (y) to initiate the rollback. The next set of output you will see is:

dpnctl: INFO: Checking that gsan was shut down cleanly...
-  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
Here is the most recent available checkpoint:
  Tue Oct 11 03:30:32 2016 UTC Validated(type=rolling)
A rollback was requested.
The gsan was shut down cleanly.

The choices are as follows:
  1   roll back to the most recent checkpoint, whether or not validated
  2   roll back to the most recent validated checkpoint
  3   select a specific checkpoint to which to roll back
  4   restart, but do not roll back
  5   do not restart
  q   quit/exit

Choose option 3 and the next set of output you will see is:

Here is the list of available checkpoints:

     2   Thu Oct  6 01:32:47 2016 UTC Validated(type=full)
     1   Tue Oct 11 03:30:32 2016 UTC Validated(type=rolling)

Please select the number of a checkpoint to which to roll back.

Alternatively:
     q   return to previous menu without selecting a checkpoint
(Entering an empty (blank) line twice quits/exits.)

In the earlier cplist command you will notice that cp.20161011033032 had a timestamp of Oct 11. So choose option (1) and the next output you will see is:
-  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
You have selected this checkpoint:
  name:       cp.20161011033032
  date:       Tue Oct 11 03:30:32 2016 UTC
  validated:  yes
  age:        -7229 minutes

Roll back to this checkpoint?
Answering y(es)  accepts this checkpoint and initiates rollback
          n(o)   rejects this checkpoint and returns to the main menu
          q(uit) exits

Verify that this is indeed the correct checkpoint and answer yes (y) to confirm. The GSAN and MCS rollback begins and you will notice this in the console:

dpnctl: INFO: rolling back to checkpoint "cp.20161011033032" and restarting the gsan succeeded.
dpnctl: INFO: gsan started.
dpnctl: INFO: Restoring MCS data...
dpnctl: INFO: MCS data restored.
dpnctl: INFO: Starting MCS...
dpnctl: INFO: To monitor progress, run in another window: tail -f /tmp/dpnctl-mcs-start-output-24536
dpnctl: WARNING: 1 warning seen in output of "[ -r /etc/profile ] && . /etc/profile ; /usr/local/avamar/bin/mcserver.sh --start"
dpnctl: INFO: MCS started.

**If this process fails, open a ticket with VMware Support. I cannot provide the troubleshooting steps here as they are confidential. Add a note in your support ticket asking the assigned engineer to contact me if needed so I can run a check on the case**

If the rollback goes through successfully, you might be presented with an option to restore the Tomcat database.

Do you wish to do a restore of the local EMS data?

Answering y(es) will restore the local EMS data
          n(o) will leave the existing EMS data alone
          q(uit) exits with no further action.

Please consult with Avamar Technical Support before answering y(es).

Answer n(o) here unless you have a special need to restore
  the EMS data, e.g., you are restoring this node from scratch,
  or you know for a fact that you are having EMS database problems
  that require restoring the database.

y(es), n(o), q(uit/exit):

I would choose no if my database is not causing issues in my environment. After this, the remaining services will be started. The output:

dpnctl: INFO: EM Tomcat started.
dpnctl: INFO: Resuming backup scheduler...
dpnctl: INFO: Backup scheduler resumed.
dpnctl: INFO: AvInstaller is already running.
dpnctl: INFO: [see log file "/usr/local/avamar/var/log/dpnctl.log"]

That should be pretty much it. When you log in to the https://vdp-ip:8543/vdp-configure page, you should see the Data Domain automatically in the Storage tab. If not, open a support ticket with VMware.

There are a couple of post-migration steps:
1. If you are using the internal proxy, unregister the proxy and re-register it from the vdp-configure page.
2. External proxies (if used) will be orphaned, so you will have to delete the external proxies, change the VDP root password and re-add the external proxies.
3. If you are using guest-level backups, the agents for SQL, Exchange and SharePoint have to be reinstalled.
4. If this appliance is replicating to another VDP appliance, then the replication agent needs to be re-registered. Run the below four commands in this exact order:
# service avagent-replicate stop
# service avagent-replicate unregister 127.0.0.1 /MC_SYSTEM
# service avagent-replicate register 127.0.0.1 /MC_SYSTEM
# service avagent-replicate start
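The four commands above can be wrapped in a small script so they always run in the right order and stop on the first failure. This is only a sketch; with DRY_RUN=1 (the default here) it merely prints what it would run, so it is safe to try outside a VDP appliance:

```shell
#!/bin/sh
# Re-register the replication agent in the required order.
# DRY_RUN=1 prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
ran=""
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@" || { echo "failed: $*" >&2; exit 1; }
  fi
  ran="$ran$*;"
}
run service avagent-replicate stop
run service avagent-replicate unregister 127.0.0.1 /MC_SYSTEM
run service avagent-replicate register 127.0.0.1 /MC_SYSTEM
run service avagent-replicate start
```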

And that should be it...

Friday, 28 October 2016

VDP Stuck In A Configuration Loop

There have been a few cases logged with VMware where the newly deployed VDP appliance gets stuck in a configuration loop. Not to worry, there is now a fix for this. 

A little insight into what this is: we deploy a VDP (6.1.2 in my case) as an ova template. The deployment goes through successfully, and then we power on the VDP appliance, which also completes successfully. Then we go to the https://vdp-ip:8543/vdp-configure page and run through the configuration wizard. Everything goes well here too; the configuration wizard completes and requests you to reboot the appliance. Once the appliance is rebooted, it makes certain changes to the appliance, configures alarms and initializes core services. A task called "VDP: Configure Appliance" is initiated. Here, this task gets stuck somewhere around 45 to 70 percent. The appliance boots up completely; however, when you go back to the vdp-configure page, you will notice that it takes you through the configuration wizard again. You can proceed up to the Create Storage section, after which you will receive an error, as the appliance is already configured with storage. No matter which browser you use or how many times you access the vdp-configure page, you will be taken back to the configuration wizard. This ends up as an infinite loop.

This issue is almost exclusively seen on the vCenter 5.5 U3e release. This is because VDP uses the JSAFE/BSAFE Java libraries, and these do not go well with the vCenter SSL ciphers in 5.5 U3e. To fix this, we switch from JSAFE to the Java JCE libraries on the VDP appliance.

Before we get to the fix, you can check the vdr-server.log from the time of the issue (/usr/local/avamar/var/vdr/server_logs) to verify the following:

2016-10-29 01:15:40,676 INFO  [Thread-7]-vi.ViJavaServiceInstanceProviderImpl: vcenter-ignore-cert ? true
2016-10-29 01:15:40,714 WARN  [Thread-7]-vi.VCenterServiceImpl: No VCenter found in MC root domain
2016-10-29 01:15:40,714 INFO  [Thread-7]-vi.ViJavaServiceInstanceProviderImpl: visdkUrl = https:/sdk
2016-10-29 01:15:40,715 ERROR [Thread-7]-vi.ViJavaServiceInstanceProviderImpl: Failed To Create ViJava ServiceInstance owing to Remote VCenter connection error
java.rmi.RemoteException: VI SDK invoke exception:java.lang.IllegalArgumentException: protocol = https host = null; nested exception is:
        java.lang.IllegalArgumentException: protocol = https host = null
        at com.vmware.vim25.ws.WSClient.invoke(WSClient.java:139)
        at com.vmware.vim25.ws.VimStub.retrieveServiceContent(VimStub.java:2114)
        at com.vmware.vim25.mo.ServiceInstance.<init>(ServiceInstance.java:117)
        at com.vmware.vim25.mo.ServiceInstance.<init>(ServiceInstance.java:95)
        at com.emc.vdp2.common.vi.ViJavaServiceInstanceProviderImpl.createViJavaServiceInstance(ViJavaServiceInstanceProviderImpl.java:297)
        at com.emc.vdp2.common.vi.ViJavaServiceInstanceProviderImpl.createViJavaServiceInstance(ViJavaServiceInstanceProviderImpl.java:159)
        at com.emc.vdp2.common.vi.ViJavaServiceInstanceProviderImpl.createViJavaServiceInstance(ViJavaServiceInstanceProviderImpl.java:104)
        at com.emc.vdp2.common.vi.ViJavaServiceInstanceProviderImpl.createViJavaServiceInstance(ViJavaServiceInstanceProviderImpl.java:96)
        at com.emc.vdp2.common.vi.ViJavaServiceInstanceProviderImpl.getViJavaServiceInstance(ViJavaServiceInstanceProviderImpl.java:74)
        at com.emc.vdp2.common.vi.ViJavaServiceInstanceProviderImpl.waitForViJavaServiceInstance(ViJavaServiceInstanceProviderImpl.java:212)
        at com.emc.vdp2.server.VDRServletLifeCycleListener$1.run(VDRServletLifeCycleListener.java:71)
        at java.lang.Thread.run(Unknown Source)

Caused by: java.lang.IllegalArgumentException: protocol = https host = null
        at sun.net.spi.DefaultProxySelector.select(Unknown Source)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(Unknown Source)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source)
        at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(Unknown Source)
        at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(Unknown Source)
        at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(Unknown Source)
        at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(Unknown Source)
        at com.vmware.vim25.ws.WSClient.post(WSClient.java:216)
        at com.vmware.vim25.ws.WSClient.invoke(WSClient.java:133)
        ... 11 more

2016-10-29 01:15:40,715 INFO  [Thread-7]-vi.ViJavaServiceInstanceProviderImpl: Retry ViJava ServiceInstance Acquisition In 5 Seconds...
2016-10-29 01:15:45,716 INFO  [Thread-7]-vi.ViJavaServiceInstanceProviderImpl: vcenter-ignore-cert ? true
2016-10-29 01:15:45,819 WARN  [Thread-7]-vi.VCenterServiceImpl: No VCenter found in MC root domain

The mcserver.out log file should show the below:

Caught Exception : Exception : org.apache.axis.AxisFault Message : ; nested exception is:
javax.net.ssl.SSLHandshakeException: Unsupported curve: 1.2.840.10045.3.1.7 StackTrace : AxisFault
faultCode: {http://schemas.xmlsoap.org/soap/envelope/}Server.userException faultSubcode:
faultString: javax.net.ssl.SSLHandshakeException: Unsupported curve: 1.2.840.10045.3.1.7 faultActor:
faultNode:
faultDetail:
{http://xml.apache.org/axis/}stackTrace:javax.net.ssl.SSLHandshakeException: Unsupported curve: 1.2.840.10045.3.1.7

To fix this:

1. Discard the newly deployed appliance completely.
2. Deploy the VDP appliance again. Go through the ova deployment and power on the appliance. Stop here; do not go to the vdp-configure page.

3. To enable the Java JCE library we need to add a particular line to the mcsutils.pm file under the $prefs variable. The line is exactly as below:

. "-Dsecurity.provider.rsa.JsafeJCE.position=last "

4. Open the following file in vi:
# vi  /usr/local/avamar/lib/mcsutils.pm
The original content would look like:

my $rmidef = "-Djava.rmi.server.hostname=$rmihost ";
   my $prefs = "-Djava.util.logging.config.file=$mcsvar::lib_dir/mcserver_logging.properties "
             . "-Djava.security.egd=file:/dev/./urandom "
             . "-Djava.io.tmpdir=$mcsvar::tmp_dir "
             . "-Djava.util.prefs.PreferencesFactory=com.avamar.mc.util.MCServerPreferencesFactory "
             . "-Djavax.xml.parsers.DocumentBuilderFactory=org.apache.xerces.jaxp.DocumentBuilderFactoryImpl "
             . "-Djavax.net.ssl.keyStore=" . MCServer::get( "rmi_ssl_keystore" ) ." "
             . "-Djavax.net.ssl.trustStore=" . MCServer::get( "rmi_ssl_keystore" ) ." "
             . "-Dfile.encoding=UTF-8 "
             . "-Dlog4j.configuration=file://$mcsvar::lib_dir/log4j.properties ";  # vmware/axis

After editing it would look like:

 my $rmidef = "-Djava.rmi.server.hostname=$rmihost ";
   my $prefs = "-Djava.util.logging.config.file=$mcsvar::lib_dir/mcserver_logging.properties "
             . "-Djava.security.egd=file:/dev/./urandom "
             . "-Djava.io.tmpdir=$mcsvar::tmp_dir "
             . "-Djava.util.prefs.PreferencesFactory=com.avamar.mc.util.MCServerPreferencesFactory "
             . "-Djavax.xml.parsers.DocumentBuilderFactory=org.apache.xerces.jaxp.DocumentBuilderFactoryImpl "
             . "-Djavax.net.ssl.keyStore=" . MCServer::get( "rmi_ssl_keystore" ) ." "
             . "-Djavax.net.ssl.trustStore=" . MCServer::get( "rmi_ssl_keystore" ) ." "
             . "-Dfile.encoding=UTF-8 "
             . "-Dsecurity.provider.rsa.JsafeJCE.position=last "
             . "-Dlog4j.configuration=file://$mcsvar::lib_dir/log4j.properties ";  # vmware/axis

5. Save the file
6. There is no point restarting the MCS using mcserver.sh --restart, as the VDP appliance is not yet configured and hence the core services are not yet initialized.
7. Reboot the appliance.
8. Once the appliance is booted up, go to the vdp-configure page and begin the configuration; this should avoid the configuration loop issue.
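The manual edit to mcsutils.pm can also be scripted. Below is a hedged sketch that works on a scratch copy (GNU sed assumed); on a real appliance you would point FILE at /usr/local/avamar/lib/mcsutils.pm and re-check the line's indentation afterwards, since sed's `i\` form strips leading whitespace from the inserted text:

```shell
#!/bin/sh
# Insert the JCE flag just before the log4j.configuration line, idempotently.
FILE=$(mktemp)
cat > "$FILE" <<'EOF'
             . "-Dfile.encoding=UTF-8 "
             . "-Dlog4j.configuration=file://$mcsvar::lib_dir/log4j.properties ";  # vmware/axis
EOF
# only add the flag if it is not already present
grep -q 'JsafeJCE.position=last' "$FILE" || \
  sed -i '/-Dlog4j.configuration/i\. "-Dsecurity.provider.rsa.JsafeJCE.position=last "' "$FILE"
grep -c 'JsafeJCE.position=last' "$FILE"
```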

If the VDP was already deployed and the vCenter was upgraded later, you can follow the same steps up to step 6. Instead of rebooting the VDP this time, it should be enough to restart the MCS using the mcserver.sh --restart --verbose command.

That's it. A permanent fix is being discussed with engineering for a future VDP release.

Update:
A permanent fix is in 6.1.3 version of VDP.

Tuesday, 25 October 2016

MCS Fails To Start On VDP. ERROR: gsan rollbacktime: xxxxxxx does not match stored rollbacktime: xxxxxxxx

Recently, while working on a case, I came across the following issue. The MCS service was not coming up on a newly deployed VDP with existing drives. If I tried to start the MCS manually, the error received was:

root@vdp58:#: dpnctl start mcs

Identity added: /home/dpn/.ssh/dpnid (/home/dpn/.ssh/dpnid)
dpnctl: INFO: Starting MCS...
dpnctl: INFO: To monitor progress, run in another window: tail -f /tmp/dpnctl-mcs-start-output-26291
dpnctl: ERROR: error return from "[ -r /etc/profile ] && . /etc/profile ; /usr/local/avamar/bin/mcserver.sh --start" - exit status 1
dpnctl: ERROR: 1 error seen in output of "[ -r /etc/profile ] && . /etc/profile ; /usr/local/avamar/bin/mcserver.sh --start"
dpnctl: INFO: [see log file "/usr/local/avamar/var/log/dpnctl.log"]

And if I tailed the log that was displayed during the start attempt:
tail -f /tmp/dpnctl-mcs-start-output-26291

The actual error message was displayed:
ERROR: gsan rollbacktime: 1475722913 does not match stored rollbacktime: 1475722911

This occurs when the GSAN has rolled back to a particular checkpoint but the MCS has not.
Since these are not on the same rollbacktime, the MCS service will not start.
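The two numbers in the error are plain epoch timestamps, so you can decode them to see exactly how far apart the GSAN and MCS are. A small sketch (the parsing is mine, based on the error format above):

```python
import re
from datetime import datetime, timezone

err = ("ERROR: gsan rollbacktime: 1475722913 "
       "does not match stored rollbacktime: 1475722911")
# first number is GSAN's rollbacktime, second is the value stored by MCS
gsan, stored = (int(x) for x in re.findall(r"rollbacktime: (\d+)", err))
print("GSAN :", datetime.fromtimestamp(gsan, tz=timezone.utc))
print("MCS  :", datetime.fromtimestamp(stored, tz=timezone.utc))
print("delta:", gsan - stored, "seconds")
```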

There are a couple of fixes available for this, and I recommend trying them in the following order.

Fix 1:
Restore MCS

Run the below command to begin the MCS restore:
# dpnctl start mcs --force_mcs_restore
In most cases, this too fails. For me, it did, with the error:

root@vdp58:#: dpnctl start mcs --force_mcs_restore

Identity added: /home/dpn/.ssh/dpnid (/home/dpn/.ssh/dpnid)
dpnctl: INFO: Restoring MCS data...
dpnctl: ERROR: 1 error seen in output of "[ -r /etc/profile ] && . /etc/profile ; echo 'Y' | /usr/local/avamar/bin/mcserver.sh --restore --id='root' --hfsport='27000' --hfsaddr='192.168.1.203' --password='*************'"
dpnctl: ERROR: MCS restore did not succeed, so not restarting MCS
dpnctl: INFO: [see log file "/usr/local/avamar/var/log/dpnctl.log"]

If this worked for you and the MCS restored and started successfully, then stop here. Else, move on.

Fix 2:
Restore MCS to an older Flush

Basically, your MCS data is constantly backed up; this is what is called an MCS flush. It protects the MCS from server or hardware failures.
The MCS flushes its data to the Avamar server every 60 minutes as part of system checkpoints. This is why I recommend rolling back to an MCS flush that has a valid local checkpoint on that VDP server. The older the MCS flush you roll back to, the more MCS data is lost.

The local checkpoints in my case were:

root@vdp58:#: cplist

cp.20161020033059 Thu Oct 20 09:00:59 2016   valid rol ---  nodes   1/1 stripes     25
cp.20161020033339 Thu Oct 20 09:03:39 2016   valid --- ---  nodes   1/1 stripes     25

To list your MCS flushes, run the below command:
avtar --archives --path=MC_BACKUPS
The output is similar to:

   Date      Time    Seq       Label           Size     Plugin    Working directory         Targets
 ---------- -------- ----- ----------------- ---------- -------- --------------------- -------------------
 2016-10-20 15:25:20   372                      369201K Linux    /usr/local/avamar     var/mc/server_data
 2016-10-20 14:45:20   371                      368582K Linux    /usr/local/avamar     var/mc/server_data
 2016-10-20 13:45:18   370                      367645K Linux    /usr/local/avamar     var/mc/server_data
 2016-10-20 12:45:17   369                      366716K Linux    /usr/local/avamar     var/mc/server_data
 2016-10-20 11:45:19   368                      365779K Linux    /usr/local/avamar     var/mc/server_data
 2016-10-20 10:45:17   367                      364842K Linux    /usr/local/avamar     var/mc/server_data
 2016-10-20 09:45:17   366                      363762K Linux    /usr/local/avamar     var/mc/server_data

Here the numbers 372, 371... are the MCS flush labels. This list goes back to the day the VDP appliance was deployed.
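If you want to pick a flush label programmatically (for example, the newest one older than a given checkpoint), the avtar rows can be parsed. A sketch assuming the column layout shown above (the function name is mine):

```python
import re
from datetime import datetime

def parse_flush_line(line):
    """Extract (label, flush_time, size_kb) from one `avtar --archives` row."""
    m = re.match(r"\s*(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(\d+)\s+(\d+)K", line)
    when = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
    return int(m.group(2)), when, int(m.group(3))

row = " 2016-10-20 09:45:17   366                      363762K Linux    /usr/local/avamar     var/mc/server_data"
print(parse_flush_line(row))
```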

I will roll back my appliance to label 366.

The command would be:
mcserver.sh --restore --labelnum=<flush_ID>
In my case:
mcserver.sh --restore --labelnum=366
This starts a small interactive script, where you need to accept the restore and provide the VDP IP to proceed further. Sample output:

root@vdp58:#: mcserver.sh --restore --labelnum=366

mcserver.sh must be run as admin, please login as admin and retry
root@vdp58:/usr/local/avamar/var/log/#: su admin
admin@vdp58:/usr/local/avamar/var/log/#: mcserver.sh --restore --labelnum=366
=== BEGIN === check.mcs (prerestore)
check.mcs                        passed
=== PASS === check.mcs PASSED OVERALL (prerestore)
--restore will modify your Administrator Server database and preferences.
Do you want to proceed with the restore Y/N? [Y]: y
Enter the Avamar Server IP address or fully qualified domain name to
restore from (i.e. dpn.your_company.com): 192.168.1.203
Enter the Avamar Server IP port to restore from [27000]:

The port defaults to 27000. After this, you will see lengthy logging from the mcsrestore task.
This makes certain changes to your MCS database.

If the restore to an older flush completes successfully, then start the MCS using:
mcserver.sh --start --verbose
This started the MCS successfully for me.

Now, I have also worked on a case where the mcserver restore to an older flush completed with errors/warnings, causing mcserver.sh --start to fail with the same error:

ERROR: gsan rollbacktime: 1475722913 does not match stored rollbacktime: 1475722911

You can try rolling back to an even older MCS flush and see how that goes, but the chances are slim that the MCS will ever come up.

So if this fails, move to the next step:

Fix 3:
Update the MCS Database Manually. 

The last fix for this is to manually update the MCS database with the correct rollbacktime.

**This is a very tricky fix, and is not a best practice or a recommended method. If you are running a lab environment, then go ahead and try this. If you have production data at stake, stop! Involve EMC to check for other alternatives**

With that out of the way, here is the final fix, in order.

1. Connect to the MCS database. 

VDP is a SUSE box, and it runs a PostgreSQL database. The command to connect is the same as for any psql database:
psql -p 5555 -U admin mcdb

The port for the MCS database is 5555.
We connect as the admin user because we want to make changes to the MCS database. If you only want a read-only view, connect to "mcdb" as the "viewuser".

2. Once you connect, you see the following message:

admin@vdp58:#: psql -p 5555 -U admin mcdb

Welcome to psql 8.3.23, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

3. Run \d to list the MCS tables. The one we are interested in is "property_value"

4. Run the below query to list all the contents of this table:
select * from property_value;
The output is similar to:

      property       |            value
---------------------+------------------------------
 morning_cron_start  | -1
 evening_cron_start  | -1
 mcsnmp_cron_start   | 1
 clean_db_cron_start | 3
 rollbacktime        | 1475722911
 systemid            | 1476126720@00:50:56:B9:3E:6D
 hfscreatetime       | 1476126720
 systemname          | vdp58.vcloud.local
 restoredFlushTime   | 2016-10-10 19:45:00 PDT
 license_period_day  | 14
 license_buffer_pct  | 10
(11 rows)

The row we are interested in is rollbacktime. Here the rollbacktime is 1475722911, which does not match the GSAN rollbacktime of 1475722913.

5. To update this, run the below query:
update property_value set value = <GSAN_rollbacktime> where property = 'rollbacktime';
So my query would look like:
update property_value set value = 1475722913 where property = 'rollbacktime'; 
Verify that the rollbacktime parameter is now updated with the correct GSAN rollbacktime.

6. Switch to the admin user on the VDP appliance (su admin) and then start the MCS using:
mcserver.sh --start --verbose
This should start the MCS, as we have force-synced the MCS rollbacktime with the GSAN.


If this does not work either, then I do not know what else will.

Saturday, 22 October 2016

VDP Reports Incorrect Information About Protected Clients

When you connect to vSphere Data Protection in your web client, switch to the Reports tab and select Unprotected Clients, you will see a list of VMs that are not protected by VDP. By not protected by VDP, I mean they are not added to any backup job on that particular appliance.

In some cases, you will see the virtual machine is still listed under the Unprotected Client section when the VM is already added in the backup job. This mostly occurs when a rename operation is done on the virtual machine. When a rename is done on the virtual machine, the backup job picks up the new name. The Unprotected Clients under Restore tab will not pick this up. 

Here is the result of a small test.

1. I have a backup job called "Windows" and a VM called "Windows_With_OS" is added under it. 


2. In the Unprotected Client section, you can see that this "Windows_With_OS" VM is not listed as it is already protected. 


3. Now, I will rename this virtual machine in my vSphere Client to "Windows_New"


4. Back in vSphere Data Protection, you can see the name is updated in the backup job list, but not in the Reports tab.


You can see that Windows_New is now coming up under Unprotected Clients even though it is already protected. (Ignore the vmx file name as this is renamed for other purposes)


This is an incorrect report; the VDP appliance should sync these naming changes automatically from vCenter. Restarting the services, the proxy, or even the entire appliance will not fix this report. 

This can also be confirmed from the client name records in the MCS and GSAN. To check this:

1. Open a SSH / Putty to the VDP appliance. Login as admin and elevate to root.
2. Run the below command:
# mccli client show --recursive=true


So if you observe here, the MCS still picks up the old virtual machine name. (mccli is only for MCS-related information)

3. To check what the GSAN shows, run the below command:
# avmgr getl --path=/vCenter-IP/Virtual-Machine-Domain
The vCenter IP and VM domain can be found from the above mccli command, which in my case gives /192.168.1.1/VirtualMachines. The output is:


The avmgr command is only for GSAN-related information, and it also shows that the Client ID belongs to the VM with the older name. 

So the client naming on your VDP server is out of sync between vCenter, the MCS, and the GSAN. 

The solution:

You will have to force sync the naming changes between the Avamar server and the vCenter Server. To do this, you will need the proxycp.jar file which can be downloaded from here

A brief note about proxycp.jar: it is a Java archive containing a set of built-in commands that can be used to automate or run specific tasks from the command line. Some fixes would otherwise require changes across multiple locations and numerous files, and proxycp.jar does these things for you when you run the required commands.

1. So once you download the proxycp.jar file, open a WinSCP to the VDP appliance and copy this file into your /root or preferably /tmp  folder. 

2. Then SSH into your VDP appliance and change directory to where the proxycp.jar file is and run the following command
# java -jar proxycp.jar --syncvmnames
The output:


The In Sync column was false for the renamed virtual machine, and the "syncvmnames" switch updated this value.

3. Now, if I go back to the Unprotected Clients list, this VM is no longer listed, and running the mccli and avmgr commands mentioned earlier will show the updated name.

If something is a bit off for you in this case, feel free to comment.

Wednesday, 12 October 2016

vSphere Data Protection /data0? Partitions Are At 100 Percent

VDP can be connected to a Data Domain or use a local deduplication store to hold all the backup data. This article specifically discusses the case where VDP is connected to a Data Domain. As far as the deployment process goes, a VDP with a Data Domain attached still has local data partitions. The sda mount holds your OS partitions, and sdb, sdc, and so on are your data partitions (Hard Disk 1, 2, 3, and so on).

These partitions, data01, data02... (grouped as data0?), contain the metadata of the backups stored on the Data Domain. So, if you cd to /data01/cur and run "ls", you will see the metadata stripes.

0000000000000000.tab 
0000000000000008.wlg 
0000000000000012.cdt 
0000000000000017.chd

Before we get into the cause of this issue, let's have a quick look at what a retention policy is. When you create a backup job for a client or group of clients, you define a retention policy for the restore points. The retention policy determines how long your restore points are kept after a backup. The default is 60 days and can be adjusted as needed. 

Once a restore point reaches its expiration date, it is deleted. Then, during the maintenance window, Garbage Collection (GC) is executed, which performs the space reclamation. If you run status.dpn, you will notice "Last GC" and the amount of space that was reclaimed. 
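The expiration itself is just simple date arithmetic: creation time plus the retention period. A minimal sketch, assuming a hypothetical backup date and the 60-day default:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical backup creation time, with the default 60-day retention
created = datetime(2016, 10, 2, tzinfo=timezone.utc)
retention = timedelta(days=60)

expires = created + retention
# After this date the restore point is expired and its metadata under
# /data0?/cur becomes eligible for the next GC run.
print("expires:", expires.date())
```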

Space reclamation by GC is done only on the data0? partitions. So, if your data0? partitions are at 100 percent, there are a few possible explanations. 

1. The retention period for all backups is set to "Never Expire", which is not recommended.
2. The GC was not executed at all during the maintenance window. 

If you have set backups to never expire, go ahead and set an expiration date for them; otherwise your data0? partitions will keep hitting 100 percent space usage. 

To check if your GC was executed successfully or not, run the below command:
# status.dpn
In the output, look at the "Last GC" field. You will either see an error here, such as DDR_ERROR, or a Last GC that was executed weeks ago. 

Also, if you log in to the vdp-configure page, you should notice that your maintenance services are not running. If this is the case, your space reclamation task will not run, and if the space reclamation task is not running, the metadata for expired backups is not cleared. 

To understand why this happens, let's have a basic look at how the MCS talks to the Data Domain. The MCS runs on your VDP appliance. If there is a Data Domain attached to the appliance, the MCS queries the Data Domain via the DD SSH keys.

This means we have a private-public key combination on the VDP appliance and the Data Domain system. With this key pair in place, there is no need for password authentication when the MCS connects to the Data Domain. The MCS uses its private key and the Data Domain's public key, and similarly, the Data Domain uses its private key and VDP's public key to communicate. 

You can do a simple test to see if this is working by performing the below steps:

1. On the VDP appliance load and add the private key. 
# ssh-agent bash 
# ssh-add ~admin/.ssh/ddr_key
2. Once the key is added, you can login to Data Domain from the VDP SSH directly without a password. This is how the MCS works too. 
# ssh sysadmin@192.168.1.200
Two outcomes here: 

1. If there is no prompt to enter a password, it will directly connect you to the Data Domain console, and we are good to go. 

2. It will prompt you to enter a passphrase and/or a password to login to Data Domain. If you run into this issue, then it means that the SSH public keys for VDP are not loaded / unavailable on the Data Domain end.

For this issue, we will be most likely running into Outcome (2)

How to verify public key availability on data domain end:

1. On the VDP appliance run the following command to list the public key:
# cat ~admin/.ssh/ddr_key.pub
The output would be similar to:

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAw7XWjEK0jVPrT0z6JDmdKUDLfvvoizdzTpWPoCWNhJ/LerUs9L4UkNr0Q0mTK6U1tnlzlQlqeezIsWvhYJHTcU8rh
yufw1/YZLoGeA0tsHl6ruFAeCIYuf5+mmLXluPhYrjGMdsDa6czjIAtoA4RMY9WjAtSOPX3L2B73Wf3BScigzC/D83aX8GnaldwQU88qkfmhN+dpy2IdxiFm4
hnK+2m4XMtveBTq/8/7medeBTMXYYe7j7DVffViU4DizeEpGj2TBxHIe2dGe0epFDDc9wpa8W5a/XPOeiz4WelHfKtqS1hYUpFEQWXUOngwjDPpqG+6k1t
1HoOp/+OVC3lGw== admin@vmrel-ts2014-vdp

2. On the Data Domain end, run the following command:
# adminaccess show ssh-keys user <ddboost-user>
You can enter your custom ddboost user, or sysadmin if it was itself promoted to a ddboost user. 

In our case, we will not see the above-mentioned public key in the list. 
The DD has its private key and the VDP has its private key, but the public key of the VDP is not available on the Data Domain end, which leads to a password prompt when connecting over SSH from the VDP to the DD. Because of this, the GC will not run, as the MCS will be waiting for a manual password entry. 

To fix this:

1. Copy the public key of the VDP appliance obtained from the "cat" command mentioned earlier. Copy the entire key, starting from and including ssh-rsa, all the way to the end of the trailing comment (ending in -vdp here).
Make sure no extra spaces or line breaks are introduced, or the key will not be accepted. 

2. Login to DD with sysadmin and run the following command:
# adminaccess add ssh-keys user <ddboost-user>
You will see a prompt like below:

Enter the key and then press Control-D, or press Control-C to cancel.

Then, enter the copied key and Press Ctrl+D (You will see the "key accepted" message)

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAw7XWjEK0jVPrT0z6JDmdKUDLfvvoizdzTpWPoCWNhJ/LerUs9L4UkNr0Q0mTK6U1tnlzlQlqeezIsWvhYJHTcU8rh
yufw1/YZLoGeA0tsHl6ruFAeCIYuf5+mmLXluPhYrjGMdsDa6czjIAtoA4RMY9WjAtSOPX3L2B73Wf3BScigzC/D83aX8GnaldwQU88qkfmhN+dpy2IdxiFm4
hnK+2m4XMtveBTq/8/7medeBTMXYYe7j7DVffViU4DizeEpGj2TBxHIe2dGe0epFDDc9wpa8W5a/XPOeiz4WelHfKtqS1hYUpFEQWXUOngwjDPpqG+6k1t
1HoOp/+OVC3lGw== admin@vmrel-ts2014-vdp
SSH key accepted.

3. Now test the login from VDP to DD using ssh sysadmin@192.168.1.200 and you should be directly connected to the data domain.
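Since a stray space or line break in the copied key is the usual reason the paste in step 2 is rejected, you can pre-check the copied text before entering it on the DD. This is a generic OpenSSH public-key sanity check, not a VDP or DD tool:

```python
def check_pubkey(key_text):
    """Sanity-check a copied OpenSSH public key before pasting it.

    A valid OpenSSH public key is a single line: key type, base64
    blob, and an optional comment, separated by single spaces.
    """
    key = key_text.strip()
    if "\n" in key or "\r" in key:
        return False, "key spans multiple lines; re-join it first"
    parts = key.split(" ")
    if len(parts) not in (2, 3):
        return False, "expected 'type base64 [comment]'"
    if parts[0] != "ssh-rsa":
        return False, "key type is not ssh-rsa"
    return True, "looks ok"

# Hypothetical truncated key, for illustration only
ok, why = check_pubkey("ssh-rsa AAAAB3NzaC1yc2EAAAAB admin@vdp")
print(ok, why)
```

A key that was wrapped across terminal lines and re-joined with spaces will fail the parts check, which is exactly the failure mode the warning above is about.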

Even though we have re-established the MCS connectivity to the DD, we now have to manually run a garbage collection to force-clear the expired metadata. 

You have to first stop the backup scheduler and maintenance services, or else you will receive the below error when trying to run GC:
ERROR: avmaint: garbagecollect: server_exception(MSG_ERR_SCHEDULER_RUNNING)

To stop the backup scheduler and maintenance service:
# dpnctl stop maint
# dpnctl stop sched
Then, run the below command to force start a GC:
# avmaint garbagecollect --timeout=<how many seconds should GC run> --ava
4. Run df -h again; the space usage should be considerably reduced, provided all the backups have a proper retention policy set.


**If you are unsure about this process, open a ticket with VMware to drive this further**

Wednesday, 5 October 2016

Understanding VDP Backups In A Data Domain Mtree

This is going to be a high-level view of how to find out which backups on the Data Domain relate to which clients on the VDP. Previously, we saw how to deploy and configure a Data Domain and connect it to a vSphere Data Protection 5.8 appliance. I am going to discuss what an Mtree is and some details about it, as this is needed for the next article, which will be the migration of VDP from 5.8/6.0 to 6.1.

In a Data Domain file system, an Mtree is created under the avamar ID node of the respective appliance to store the backup files, VDP checkpoint data, and Data Domain snapshots. 

Now, on the data domain appliance, the below command needs to be executed to display the mtree list. 
# mtree list
The output is seen as:

Name                           Pre-Comp (GiB)   Status
----------------------------   --------------   ------
/data/col1/avamar-1475309625             32.0   RW
/data/col1/backup                         0.0   RW
----------------------------   --------------   ------

So the avamar node ID is 1475309625. To confirm this is the same mtree node created for your VDP appliance, run the following on the VDP appliance:
# avmaint config --ava | grep -i "system"
The output is:
  systemname="vdp58.vcloud.local"
  systemcreatetime="1475309625"
  systemcreateaddr="00:50:56:B9:54:56"

The system create time is nothing but the avamar ID. Using these two commands you can confirm which VDP appliance corresponds to which mtree on the data domain. 
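Since the avamar ID is just the system create time in Unix epoch seconds, you can also decode it to see when the appliance was initialized. A small sketch using the mtree name from this example:

```python
from datetime import datetime, timezone

mtree_name = "avamar-1475309625"

# The numeric suffix of the mtree name is the VDP systemcreatetime,
# i.e. seconds since the Unix epoch.
avamar_id = int(mtree_name.split("-", 1)[1])
created = datetime.fromtimestamp(avamar_id, tz=timezone.utc)
print(mtree_name, "was created at", created)
```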

Now, I mentioned earlier that the mtree is where your backup data, VDP checkpoints, and other related files reside. So the next question is: how do we see the directories under this mtree? To check, run the following command on the Data Domain appliance.
 # ddboost storage-unit show avamar-<ID>
The output is:

Name                Pre-Comp (GiB)   Status   User
-----------------   --------------   ------   ------------
avamar-1475309625             32.0   RW       ddboost-user
-----------------   --------------   ------   ------------
cur
ddrid
VALIDATED
GSAN
STAGING
  • The cur directory is also called the "current" directory, where all your backed-up data is stored. 
  • The VALIDATED folder is where the validated checkpoints reside.
  • The GSAN folder contains the checkpoints that were copied over from the VDP appliance. Remember, in the previous article we checked the option "Enable Checkpoint Copy". This is what copies the daily generated checkpoints on the VDP to the Data Domain. Why is this required? We will look into this in much more detail during the migrate operation. 
  • The STAGING folder is where all your in-progress backup jobs are saved. Once a backup job completes successfully, its data is moved to the cur directory. If the backup job fails, the data remains in the STAGING folder and is cleared out during the next GC on the Data Domain.
Now, as mentioned before, DD OS does not provide the complete command set available in Linux, which is why you will have to enter SE (System Engineering) mode and enable the bash shell to obtain the full set of commands to browse and modify directories. 

Please note: This is meant to be handled by an EMC technician only. All the information I am displaying here is purely from my lab. Try this at your own risk. If you are uncomfortable, stop now and involve EMC support. 

To enable the bash shell, we first have to enter SE mode. For this, we need the SE password, which is your system serial number. It can be obtained with the below command:
# system show serialno
The output is similar to:
Serial number: XXXXXXXXXXX

Enable SE mode using the below command and enter the serial number as the password when prompted:
# priv set se
Once the password is provided, you will see the prompt change from sysadmin@data-domain to SE@data-domain.

Now, we need to enable the bash shell for your data domain. Run these commands in the same order:

1. Display the OS information using:
# uname
You will see:
Data Domain OS 5.5.0.4-430231

2. Check that the file system is running using:
# fi st
You will see:
The filesystem is enabled and running.

3. Run the below command to show the filesystem space:
 # filesys show space
You will see

Active Tier:
Resource           Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB*
----------------   --------   --------   ---------   ----   --------------
/data: pre-comp           -       32.0           -      -                -
/data: post-comp      404.8        2.0       402.8     0%              0.0
/ddvar                 49.2        2.3        44.4     5%                -
----------------   --------   --------   ---------   ----   --------------
 * Estimated based on last cleaning of 2016/10/04 06:00:58.

4. Press "Ctrl+C" three times and then type shell-escape
This drops you into the bash shell, and you will see the following screen.

*************************************************************************
****                            WARNING                              ****
*************************************************************************
****   Unlocking 'shell-escape' may compromise your data integrity   ****
****                and void your support contract.                  ****
*************************************************************************
!!!! datadomain YOUR DATA IS IN DANGER !!!! #

Again, proceed at your own risk and 100^10 percent, involve EMC when you do this. 

You saw that the mtree is located at /data/col1/avamar-ID. The data partition is not mounted by default and needs to be mounted and unmounted manually. 

To mount the data partition run the below command:
# mount localhost:/data /data
This command returns to the next line without showing any output. Once the partition has been mounted successfully, you can use your regular Linux commands to browse the mtree. 

So, a cd to the /data/col1/avamar-ID will show the following:

drwxrwxrwx  3 ddboost-user users 167 Oct  1 01:46 GSAN
drwxrwxrwx  3 ddboost-user users 190 Oct  2 02:40 STAGING
drwxrwxrwx  9 ddboost-user users 563 Oct  3 20:33 VALIDATED
drwxrwxrwx  4 ddboost-user users 279 Oct  2 02:40 cur
-rw-rw-rw-  1 ddboost-user users  40 Oct  1 01:43 ddrid

As mentioned before, the "cur" directory holds all your successfully backed-up data. If you change into cur and run "ls", you will find the following:

drwxrwxrwx  4 ddboost-user users 229 Oct  2 07:30 5890c0677a03211b49a9cf08bf1dcebd2d7cd77d

Now, this is the Client ID (CID) of the client (VM) that was successfully backed up by VDP.
To find which client on VDP corresponds to which CID on the Data Domain, we have two simple commands. 

To understand this, I presume you have a fair idea of what the MCS and GSAN are on vSphere Data Protection. The GSAN node is responsible for storing all the actual backup data when you have local VMDK storage. If your VDP is connected to a Data Domain, the GSAN only holds the metadata of the backups, not the actual backup data (as that resides on the Data Domain). 
The MCS, in brief, waits for work orders and calls avagent and avtar to perform the backup. If the MCS detects a Data Domain attached, it uses the DD public-private key combination (also called the SSH keys) to talk to the DD and perform the regular maintenance tasks. 

So, first, we will run the avmgr command (avmgr command is only for GSAN and will not work if GSAN is not running), to display the client ID on the GSAN node. The command would be:
# avmgr getl --path=/VC-IP/VirtualMachines
The output is:

1  Request succeeded
1  RHEL_UDlVr74uB7JdXN8jgjRLlQ  location: 5890c0677a03211b49a9cf08bf1dcebd2d7cd77d      pswd: 0d0d7c6b09f2a2234c108e4f0647c277e8bf2562

The location value is nothing but the Client ID on the GSAN for the client RHEL (a virtual machine).

Then, we will run the mccli command (mccli command is only for MCS and needs MCS to be up and running) to display the client ID on the MCS server. The command would be:
# mccli client show --domain=/VC-IP/VirtualMachines --name="Client_name"
For example,
# mccli client show --domain=/192.168.1.1/VirtualMachines --name="RHEL_UDlVr74uB7JdXN8jgjRLlQ"
The output is a pretty detailed one; the line we are interested in is:
CID                      5890c0677a03211b49a9cf08bf1dcebd2d7cd77d

So, we see the client ID on data domain = client ID on the GSAN = client ID on the MCS

Here, if your Client ID on the GSAN does not match the Client ID on the MCS, your full VM restores and File Level Restores will not work. The CID has to be corrected in case of a mismatch to get the restores working. 

Now, back to the data domain end, we were under the cur directory, right? Next, I will change directory to the CID

# cd 5890c0677a03211b49a9cf08bf1dcebd2d7cd77d

I will then do another "ls" to list the sub directories under it, and you may or may not notice the following:

drwxrwxrwx  2 ddboost-user users 1.2K Oct  2 02:55 1D21C9327C2E4C6
drwxrwxrwx  2 ddboost-user users 1.4K Oct  2 07:30 1D21CB99431214C

If you have one folder with a sub Client ID, it means only one backup has been executed and completed successfully for the virtual machine. If you see multiple folders, multiple backups have completed for this VM. 

To find out which backup was done first and which were the subsequent ones, we have to query the GSAN since, as you know, the GSAN holds the metadata of the backups. 

Hence, on the VDP appliance, run the below command:
# avmgr getb --path=/VC-IP/VirtualMachines/Client-Name --format=xml
For example:
# avmgr getb --path=/192.168.1.1/VirtualMachines/RHEL_UDlVr74uB7JdXN8jgjRLlQ --format=xml
The output will be:

<backuplist version="3.0">

  <backuplistrec flags="32768001" labelnum="2" label="RHEL-DD-Job-RHEL-DD-Job-1475418600010" created="1475418652" roothash="505f1aba07f19d64df74670afa59ed39a3ece85d" totalbytes="17180938240.00" ispresentbytes="0.00" pidnum="1016" percentnew="0" expires="1476282600" created_prectime="0x1d21cb99431214c" partial="0" retentiontype="daily,weekly,monthly,yearly" backuptype="Full" ddrindex="1" locked="1"/>
  
  <backuplistrec flags="16777217" labelnum="1" label="RHEL-DD-Job-1475401181065" created="1475402150" roothash="22dc0dddea797d909a2587291e0e33916c35d7a2" totalbytes="17180938240.00" ispresentbytes="0.00" pidnum="1016" percentnew="0" expires="1476265181" created_prectime="0x1d21c9327c2e4c6" partial="0" retentiontype="none" backuptype="Full" ddrindex="1" locked="0"/>
</backuplist>

Looks confusing? Maybe. Let's look at the specific fields:

The labelnum field shows the order of the backups. 
labelnum=1 means the first backup, 2 the second, and so on.

roothash is the hash value of the backup job. The next time an incremental backup runs, it checks for the existing hashes, and DD Boost backs up only the new ones. The atomic hashes are then combined to form one unique root hash, so the root hash for each backup is unique. 

created_prectime is the main field we need. This is what we earlier called the sub client ID. 
For labelnum=1, the sub CID is 0x1d21c9327c2e4c6
For labelnum=2, the sub CID is 0x1d21cb99431214c
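Incidentally, created_prectime looks like a FILETIME-style value (100-nanosecond ticks since 1601-01-01 UTC): converting it lands exactly on the created epoch value in the same record. Treat that as an observation from this output rather than documented behavior; a sketch:

```python
# created_prectime in the backuplist output appears to be a FILETIME-
# style stamp: 100-nanosecond ticks since 1601-01-01 UTC. This is an
# observation from the sample output, not documented behavior.
FILETIME_EPOCH_OFFSET = 11644473600  # seconds between 1601 and 1970

def prectime_to_epoch(prectime_hex):
    """Convert a created_prectime hex string to Unix epoch seconds."""
    ticks = int(prectime_hex, 16)
    return ticks // 10_000_000 - FILETIME_EPOCH_OFFSET

# Sub CIDs from the avmgr getb output above
print(prectime_to_epoch("0x1d21c9327c2e4c6"))  # matches created=1475402150
print(prectime_to_epoch("0x1d21cb99431214c"))  # matches created=1475418652
```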

Now, let's go further into the sub CID. For example, if I cd into 1D21C9327C2E4C6 and run "ls", I see the following:

-rw-rw-rw-  1 ddboost-user users  485 Oct  2 02:40 1188BE924964359A5C8F5EAEF552E523FBA83566
-rw-rw-rw-  1 ddboost-user users 1.1K Oct  2 02:40 140A189746A6EC3C49D24EA43A7811205345F1F4
-rw-rw-rw-  1 ddboost-user users 3.8K Oct  2 02:40 2CE724F2760C46CB67F679B76657C23606C06869
-rw-rw-rw-  1 ddboost-user users 2.5K Oct  2 02:40 400206DF07A942C066971D84F0CF063D2DE50F08
-rw-rw-rw-  1 ddboost-user users 1.0M Oct  2 02:55 4F50E1E506477801D0A566DEE50E5364B0F04BF0
-rw-rw-rw-  1 ddboost-user users  451 Oct  2 02:55 79DDA236EEEF192EED66CF605CD710B720A41E1F
-rw-rw-rw-  1 ddboost-user users 1.1K Oct  2 02:55 AFB6C8621EB6FA86DD8590841F80C7C78AC7BEEC
-rw-rw-rw-  1 ddboost-user users 1.9K Oct  2 02:40 B17DD9B7E8B2B6EE68294248D8FA42A955539C4C
-rw-rw-rw-  1 ddboost-user users  16G Oct  2 02:55 B212DB46684FFD5AFA41B87FD71A44469B04A38C
-rw-rw-rw-  1 ddboost-user users   15 Oct  2 02:40 D2CFFD87930DAEABB63EAEAA3C8C2AA9554286B5
-rw-rw-rw-  1 ddboost-user users 9.4K Oct  2 02:40 E2FF0829A0F02C1C6FA4A38324A5D9C23B07719B
-rw-rw-rw-  1 ddboost-user users 3.6K Oct  2 02:55 ddr_files.xml

Now, there is a main record file called ddr_files.xml. This file holds the information about what the other files in this directory are for.

So if I take the first hex name and grep for it in ddr_files.xml, I see the following:
# grep -i 1188BE924964359A5C8F5EAEF552E523FBA83566 ddr_files.xml
The line of interest is:
clientfile="virtdisk-descriptor.vmdk"

So this is the VMDK descriptor file that was backed up.

Similarly,
# grep -i 400206DF07A942C066971D84F0CF063D2DE50F08 ddr_files.xml
The line of interest is:
clientfile="vm.nvram"

And one more example:
# grep -i 4F50E1E506477801D0A566DEE50E5364B0F04BF0 ddr_files.xml
The line of interest is:
clientfile="virtdisk-flat.vmdk"
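The same hash-to-clientfile lookup can be scripted. The sketch below assumes only what the grep output shows, namely that a file's hash and its clientfile="..." attribute appear on the same line of ddr_files.xml; the sample content and element layout here are hypothetical:

```python
import re

# Hypothetical excerpt mimicking the grep results above: the element
# layout of ddr_files.xml is assumed, but the grep output shows that a
# file's hash and its clientfile="..." attribute share one line.
ddr_files = '''
<file hash="1188BE924964359A5C8F5EAEF552E523FBA83566" clientfile="virtdisk-descriptor.vmdk"/>
<file hash="400206DF07A942C066971D84F0CF063D2DE50F08" clientfile="vm.nvram"/>
<file hash="4F50E1E506477801D0A566DEE50E5364B0F04BF0" clientfile="virtdisk-flat.vmdk"/>
'''

def clientfile_for(hash_id, xml_text):
    """Return the clientfile name on the line that mentions hash_id."""
    for line in xml_text.splitlines():
        if hash_id in line:
            m = re.search(r'clientfile="([^"]+)"', line)
            if m:
                return m.group(1)
    return None

print(clientfile_for("400206DF07A942C066971D84F0CF063D2DE50F08", ddr_files))  # vm.nvram
```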

So if your VM file IDs are not populated correctly in ddr_files.xml, then again your restores will not work. Engage EMC to get this corrected because, I am stressing again, you should not fiddle with this in your production environment.

That's pretty much it for this. If you have questions feel free to comment or in-mail. The next article is going to be about Migrating VDP 5.8/6.0 to 6.1 with a data domain.