Wednesday, 28 June 2017

Shell Script To Create VMs From Command Line

While I agree PowerCLI APIs are the right way to deploy multiple VMs on ESXi, I had to fall back to a shell script for a project that I have been working on lately. The script is pretty simple and is divided into six functions.

Function 1 is for the VMX file template, which takes variable input from the remaining functions
Function 2 is for creating a VMDK of the required size and provisioning type
Function 3 is for MAC address generation
Function 4 is for VM UUID and VC UUID generation
Function 5 is for VM registration
Function 6 is for VM power on

Let's have a look at this:

Since ESXi does not ship with bash, I went with the #!/bin/sh shebang to define the interpreter.
The VMX file function uses a pre-created template whose options take variable input, which the user supplies while executing the script:

Create_VM ()
read -p "Enter the VM name: " VM_name
read -p "Enter the path of the datastore. /vmfs/volumes/<storage-name>/: " datastore_name
cd /vmfs/volumes/$datastore_name
mkdir $VM_name && cd $VM_name && touch $VM_name.vmx
read -p "Enter the Hardware version for the VM: " HW_version
read -p "Enter the Memory required for the VM: " Memory
read -p "Enter the network type, e1000 / VMXNET3: " Net_type
read -p "Enter the VM Port group name: " Port_group
# VMX File Entries
cat << EOF > $VM_name.vmx
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "$HW_version"
nvram = "$VM_name.nvram"
pciBridge0.present = "TRUE"
svga.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
vmci0.present = "TRUE"
hpet0.present = "TRUE"
memSize = "$Memory"
scsi0.virtualDev = "lsisas1068"
scsi0.present = "TRUE"
ide1:0.startConnected = "FALSE"
ide1:0.deviceType = "cdrom-raw"
ide1:0.clientDevice = "TRUE"
ide1:0.fileName = "emptyBackingString"
ide1:0.present = "TRUE"
floppy0.startConnected = "FALSE"
floppy0.clientDevice = "TRUE"
floppy0.fileName = "vmware-null-remote-floppy"
ethernet0.virtualDev = "$Net_type"
ethernet0.networkName = "$Port_group"
ethernet0.checkMACAddress = "false"
ethernet0.addressType = "static"
ethernet0.Address = "$final_mac"
ethernet0.present = "TRUE"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:0.fileName = "$VM_name.vmdk"
scsi0:0.present = "TRUE"
displayName = "$VM_name"
guestOS = "windows8srv-64"
disk.EnableUUID = "TRUE"
toolScripts.afterPowerOn = "TRUE"
toolScripts.afterResume = "TRUE"
toolScripts.beforeSuspend = "TRUE"
toolScripts.beforePowerOff = "TRUE"
uuid.bios = "$uuid"
vc.uuid = "$vcid"
ctkEnabled = "TRUE"
scsi0:0.ctkEnabled = "TRUE"
EOF

The Create_VMDK function is simple: it uses vmkfstools -c to get the job done.

Create_VMDK ()
read -p "Enter disk format. thin / zeroedthick / eagerzeroedthick: " format
read -p "Enter size: " size
vmkfstools -c "$size"G -d $format $VM_name.vmdk

The MAC address function assigns a static MAC (the VMX sets ethernet0.addressType to "static") by combining the constant VMware-defined prefix with randomly generated digits for the last octets.

MAC_address ()
mac=$(awk -v min=1000 -v max=9000 'BEGIN{srand(); print int(min+rand()*(max-min+1))}' | sed -e 's/.\{2\}/&:/g;s/.$//')
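The snippet above only produces the last two octets; the assembly of the full address is not shown. A self-contained sketch follows, assuming VMware's OUI (00:50:56) as the prefix and a fixed fourth octet of 00 (both the fourth octet and the variable name final_mac, taken from the VMX template, are assumptions):

```shell
# Random 4-digit number with a colon inserted every two digits, e.g. 4821 -> 48:21
mac=$(awk -v min=1000 -v max=9000 'BEGIN{srand(); print int(min+rand()*(max-min+1))}' | sed -e 's/.\{2\}/&:/g;s/.$//')

# Assemble the full six-octet address behind VMware's OUI (00:50:56).
# The fourth octet (00) is an assumption made for this sketch.
final_mac="00:50:56:00:$mac"
echo "$final_mac"
```

Since ethernet0.checkMACAddress is set to "false" in the VMX, ESXi will accept the address even if it falls outside the usual static range.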

A similar approach is applied for the BIOS UUID and VC UUID generation, where the trailing digits are constant and only the leading octet is randomized.

UUID_generate ()
uuid_postfix="1a c2 4e fe 1a 8c d2-db 90 02 81 ce d8 31 15"
vcid_postfix="1a c9 91 4b 4a b9 93-79 23 12 1f b2 c5 37 f8"
uuid_prefix=$(awk -v min=10 -v max=99 'BEGIN{srand(); print int(min+rand()*(max-min+1))}')
vcid_prefix=$(awk -v min=10 -v max=99 'BEGIN{srand(); print int(min+rand()*(max-min+1))}')
uuid="$uuid_prefix $uuid_postfix"
vcid="$vcid_prefix $vcid_postfix"
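Functions 5 and 6 (registration and power on) are not shown above. On ESXi they can be sketched with vim-cmd, whose solo/registervm command returns the new VM's inventory ID for vmsvc/power.on to use. The function names and echo message here are assumptions; the variables follow the earlier functions:

```shell
# Sketch of VM registration and power on via ESXi's vim-cmd (assumed names)
Register_VM ()
{
    # solo/registervm prints the inventory ID of the newly registered VM
    VM_ID=$(vim-cmd solo/registervm /vmfs/volumes/$datastore_name/$VM_name/$VM_name.vmx)
    echo "Registered $VM_name with VM ID $VM_ID"
}

Power_On ()
{
    # Power on the VM by its inventory ID
    vim-cmd vmsvc/power.on $VM_ID
}
```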

The complete source code can be accessed here:

A while loop is defined so that a user can deploy multiple VMs.
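The loop can be sketched as below. In the real script the answer comes from a read prompt; it is pre-seeded here so the sketch runs non-interactively:

```shell
# Sketch of the multi-VM deploy loop (answer pre-seeded for the sketch)
choice="n"   # would normally be: read -p "Deploy another VM? (y/n): " choice
while true
do
    # MAC/UUID generation, Create_VM, Create_VMDK, registration, power on
    echo "one VM deployed"
    [ "$choice" = "y" ] || break
done
```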

Well, that's pretty much it.

Monday, 19 June 2017

Unable To Configure ESXi Syslog In Log Insight 4.x: Details: Client received SOAP Fault from server

When you try to configure syslog for an ESXi host under /admin > vSphere (Integration), you might run into the below error:

Syslog configuration failed. See for manual configuration. (Details: Client received SOAP Fault from server: A general system error occurred: Internal error Please see the server log to find more detail regarding exact cause of the failure)

If you look at the ESXi host's Syslog.global.logHost field, under host > Configuration > Advanced Settings > Syslog, you will notice this field is either empty or incorrectly configured. Populate it with the IP of your Log Insight machine; it should look something like the below. Click OK to save the settings.

If it is UDP, it should be: udp://<loginsight-ip>:514

For TCP, it should be: tcp://<loginsight-ip>:514

Save the settings, and also make sure the syslog rule is enabled in the firewall under Security Profile. Once confirmed, you can then proceed to reconfigure the syslog via Log Insight, and it should now complete successfully.
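The same settings can be applied from the ESXi shell with esxcli; a small sketch, wrapped in a function you could drop into the host's shell (<loginsight-ip> is a placeholder for your Log Insight appliance):

```shell
# Sketch: set the syslog target, reload syslog, and open the firewall rule
configure_syslog ()
{
    esxcli system syslog config set --loghost="udp://<loginsight-ip>:514"
    esxcli system syslog reload
    # Enable the syslog ruleset in the ESXi firewall (Security Profile)
    esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
}
```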

You should be then able to view events under your Log Insight Dashboards.

Hope this helps!

Saturday, 17 June 2017

Configuring Log Insight 4.3 For A Fresh Deployment

VMware vRealize Log Insight is a product that collects logs from various solutions and helps administrators filter and analyze them. It helps with monitoring environments and performing security audits for each configured solution. Log Insight is deployed as a virtual appliance from an OVA template.

I will skip the OVA deployment part, as most of you are familiar with how it goes. Once the OVA deployment completes and the appliance is powered on, it will perform certain initialization tasks and then restart once again. Once the restart completes, you are all set to configure the appliance.

Log Insight has an HTML5-based client to configure and administer the solution. To access this page for configuration, go to https://<loginsight-ip-or-fqdn> in a browser.

This will bring you to the following page:

Click Next to start the configuration.

We will be configuring a new deployment of Log Insight, so click Start New Deployment.

Provide the admin password and an optional administrator email for notifications.

Enter the license key for the product and click Add License. If the license is valid, you will get a table confirming the same. Click Save And Continue. If you do not have a license, click Skip.

In the General Configuration, provide an email ID for system alerts and notifications. Click Save And Continue.

Configure a time server for your Log Insight appliance. If you have an NTP server, drop down Sync Server Time With, select NTP, and provide the NTP server address. You will have to click Test to validate the NTP server. If you do not have an NTP server, you can sync the time with the ESXi host. Click Save and Continue.

For system notifications to be forwarded, SMTP has to be configured. Enter the SMTP server and the email address you would like notifications sent to, and click Send Test Email. Once you confirm the test email was sent successfully, click Save And Continue.

And with that, the basic setup of your Log Insight is complete. Click Finish to proceed further.
Next, you can integrate solutions like vCenter Server to forward their logs to this Log Insight appliance.

Hope this was helpful.

Bash Script To List Number Of Backups In VDP For Each Client

There might be situations where you have a handful of VMs with multiple restore points, and you want to know how many restore points are available for each of these clients. Perhaps you would like to determine whether you could perform some maintenance and get rid of a few of the clients that have a large number of backups, to free up some space.

If you connect to VDP from the regular Web Client plugin, you will have to select each VM and then scroll through and count manually; the GUI does not display a total count. This would be a tedious task if you have 20+ VMs, each with backups in a varying range of 10-20+.

You can use the simple bash script below to get this done. It will list all the clients protected by VDP and the number of backups present for each client.


#!/bin/bash

# Print purpose of script
echo -e "\nAvailable number of backups for each client protected by VDP\n"

# Save vCenter hostname to a variable
vcenter_name=$(grep vcenter-hostname /usr/local/vdr/etc/vcenterinfo.cfg | cut -d '=' -f 2)

# List clients in GSAN and save to variable
client_list=$(avmgr getl --path=/$vcenter_name/VirtualMachines | awk '{print $2}' | tail -n+2)

# List backups for each registered client
count=1 # For Sl. No. increment

for i in $client_list
do
        number_of_backup=$(avmgr getb --path=/$vcenter_name/VirtualMachines/$i | tail -n+2 | wc -l)
        printf "\n$count. For $(echo $i | cut -d '_' -f 1) the number of backups available are: $number_of_backup"
        ((count++)) # Increment Sl. No.
done


You should see an output similar to:

Available number of backups for each client protected by VDP

1. For VM-A the number of backups available are: 3
2. For VM-B the number of backups available are: 3
3. For VM-C the number of backups available are: 3
4. For window the number of backups available are: 8

If you would like some more information along with this, then leave a comment. I will develop this script further as needed. Hope this helps.

Friday, 16 June 2017

Deploy vSphere Data Protection 6.1 Using OVF Tool + Automating The Deployment With Bash

I am going to cover two things in this post rather than splitting it up, because it is fairly simple. Before we get started: we have previously seen deploying VDP with a tool called govc CLI (thanks to William Lam for a good article on this tool).

Govc CLI VDP deployment:

Govc CLI:

Another way to deploy this is using our good old OVF Tool, which is available for Windows and for Linux (for which we will be bash scripting).

Deploying using OVF tool for Windows:

1. Download OVF tool 4.2 from the below link:

2. Install the tool on a Windows box

3. Open a command prompt and change your directory to the install location of OVF Tool.

The syntax for deploying VDP would be:

ovftool.exe --acceptAllEulas -ds="<Datastore-name>" --net:"Isolated Network"="<vm-portgroup-name>" --prop:"vami.gateway.vSphere_Data_Protection_6.1"="<gateway-address>" --prop:"vami.DNS.vSphere_Data_Protection_6.1"="<dns-address>" --prop:"vami.ip0.vSphere_Data_Protection_6.1"="<ip-address>" --prop:"vami.netmask0.vSphere_Data_Protection_6.1"="<subnet-mask>" <Location-of-the-file>\vSphereDataProtection-6.1.3.ova vi://<vcenter-ip/fqdn>/<data-center-name>/host/<cluster-name>

So a sample command would look like:

ovftool.exe --acceptAllEulas -ds="Local_Storage_1_39" --net:"Isolated Network"="VM Network 2" --prop:"vami.gateway.vSphere_Data_Protection_6.1"="10.x.x.x" --prop:"vami.DNS.vSphere_Data_Protection_6.1"="10.x.x.x" --prop:"vami.ip0.vSphere_Data_Protection_6.1"="10.x.x.x" --prop:"vami.netmask0.vSphere_Data_Protection_6.1"="255.x.x.x" C:\Users\Administrator\Downloads\vSphereDataProtection-6.1.3.ova vi://vcenter-prod.happycow.local/HappyCow-DC/host/HappyCow-Cluster

This will then prompt you to enter the SSO username and password, and once authenticated it will proceed with the deployment.

Deploying using a bash script on Linux:

If you have a Linux based environment, then I have automated this process using bash and the script can be downloaded from the below link:

Please go through the ReadMe before performing any deployments.

A copy of ReadMe:

1. Place the VDP OVF in the /root folder of the Linux appliance
2. Download the script and place it in the /root folder
3. If the Linux VM has internet access, the script will download and install OVF Tool. If not, it will exit, and you will have to manually install OVF Tool on Linux and then run the script again
4. If the script detects that OVF Tool is already installed, it will proceed with the deployment of VDP, where user inputs are requested.
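The detection logic in steps 3 and 4 can be sketched as below (the download step itself is elided, and the ovftool_ok flag is just for this sketch):

```shell
# Sketch of the ReadMe logic: use ovftool if already installed, otherwise
# require internet access before attempting the download.
if command -v ovftool >/dev/null 2>&1
then
    echo "ovftool found, proceeding with VDP deployment"
    ovftool_ok=1
elif ping -c 1 -W 2 vmware.com >/dev/null 2>&1
then
    echo "internet access available, ovftool would be downloaded and installed here"
    ovftool_ok=1
else
    echo "no ovftool and no internet access: install ovftool manually, then re-run"
    ovftool_ok=0
fi
```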

Note that the script and deployment are only for VDP 6.1. Hope this helps!

Wednesday, 14 June 2017

Automating vSphere Replication Deployment Using Bash Script

If you ever run into issues with vSphere Replication deployment in 6.5 or any other version, you can use OVF Tool to perform this deployment. OVF Tool might look a little complex for this, as the commands are quite long.

For the Windows version, refer to the below link:

If you have a Linux-based environment, you can use this shell script I wrote to automate the process. (User intervention is needed to enter environment details, duh!)

To download the script and the ReadMe (Before running anything) refer:

I will be making a few changes to this in the coming days, but the base functionality will remain more or less the same.

ReadMe along with Change Log for 2017-06-16

1. Download the script
2. Place it in the /root directory of the Linux machine
3. Provide execute permissions for the script:
chmod a+x <script-name>
4. Mount the VR ISO to the Linux VM using vSphere / Web Client
5. Execute the script

The script tests for network connectivity.
If successful, it will download version 4.2 of OVF Tool, install it, and then prompt the user for details.

If unsuccessful, the script will exit, and OVF Tool has to be installed manually on the Linux machine; alternatively, use the Windows method to deploy.

If you do not have OVF Tool on Linux and you also do not have internet access from the Linux machine, then download OVF Tool 4.2 manually from the VMware website (the .bundle file for Linux).
Place this file in the /root directory of the Linux machine.
Install the OVF tool using:
# sudo /bin/sh VMware-ovftool-4.2.0-4586971-lin.x86_64.bundle

Once installed, run the script. The script checks whether OVF Tool is present; if present, it will continue further.
This was added because some Linux boxes will not have internet access.

Hope this helps.

VDP 6.1.4 Backup Failures For ESXi 5.1

When you back up a VM residing on ESXi 5.1 that does not share storage with the ESXi host running the VDP appliance, the backup will use the nbd / nbdssl transport. With VDP 6.1.4 these backups will fail.

In the backup log you will notice the following:

2017-06-14T09:21:07.869+05:00 avvcbimage Info <14700>: submitting pax container in-use block extents:
2017-06-14T09:21:07.899+05:00 avvcbimage Warning <16041>: VDDK:[NFC ERROR] NfcFssrvrProcessErrorMsg: received NFC error 2 from server: Illegal message during fssrvr session, id = 46
 2017-06-14T09:21:07.900+05:00 avvcbimage Info <16041>: VDDK:DISKLIB-LIB   : RWv failed ioId: #1 (290) (34) .
 2017-06-14T09:21:07.900+05:00 avvcbimage Info <16041>: VDDK:VixDiskLib: Detected DiskLib error 290 (NBD_ERR_GENERIC).
2017-06-14T09:21:07.900+05:00 avvcbimage Info <16041>: VDDK:VixDiskLib: VixDiskLib_Read: Read 128 sectors at 0 failed. Error 1 (Unknown error) (DiskLib error 290: NBD_ERR_GENERIC) at 5178.

2017-06-14T09:21:07.900+05:00 avvcbimage Error <0000>: [IMG0008] VixDiskLib_Read() at offset 0 length 128 returned 1 (1) Unknown error
2017-06-14T09:21:07.900+05:00 avvcbimage Error <0000>: [IMG0008] VixDiskLib_Read() ([Prod-Datastore] VM-Prod/VM-Prod.vmdk) at offset 0 length 128 sectors returned (1) (1) Unknown error
2017-06-14T09:21:07.900+05:00 avvcbimage Info <9772>: Starting graceful (staged) termination, VixDiskLib_Read returned an error (wrap-up stage)
2017-06-14T09:21:07.901+05:00 avvcbimage Info <40654>: isExitOK()=157
2017-06-14T09:21:07.901+05:00 avvcbimage Info <16022>: Cancel detected(miscellaneous error), isExitOK(0).
2017-06-14T09:21:07.901+05:00 avvcbimage Info <9746>: bytes submitted was 0

You might also notice a backtrace:

2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  INTERNAL ERROR: <0001> assert error (uapp::staging().isExitCancel() || list == __null || skipped_ntfs_chunks), /local/jenkins/workspace/client_CuttyS
ark_SLES11-64/abs2/work/src/avclient/filepipe.cpp line 101
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00000000006b93b1
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00000000006ba127
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00000000006ba292
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00000000006ba3fe
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00000000006ba87d
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 000000000060cb74
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 000000000060d67d
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00000000006d9379
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00000000006dc9a1
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00000000006dc200
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00000000006d9379
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00000000005e5af1
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 000000000056455a
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 0000000000442c07
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00000000004844d5
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00000000004844f9
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 000000000049a0bd
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 000000000070ac4c
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00007f8a8d282806
2017-06-14T09:21:08.322+05:00 [VcbImageBackupAssistThread]  | 00007f8a8bec79bd

This is because the newer release of VDP added a couple of new compression algorithms (Zlib, FastLZ, SkipZ) to the VDDK module for backups over the network, and ESXi 5.1 does not support them.
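A quick way to confirm you are hitting this is to grep the avvcbimage backup log for the NFC/NBD signatures shown above. A self-contained sketch (the log path is a placeholder, and the sample lines are taken from the output above):

```shell
# Sketch: scan a backup log for the NFC/NBD error signatures
log="/tmp/sample-backup.log"
cat > "$log" << 'EOF'
2017-06-14T09:21:07.899+05:00 avvcbimage Warning <16041>: VDDK:[NFC ERROR] NfcFssrvrProcessErrorMsg: received NFC error 2 from server
2017-06-14T09:21:07.900+05:00 avvcbimage Info <16041>: VDDK:VixDiskLib: Detected DiskLib error 290 (NBD_ERR_GENERIC).
EOF
matches=$(grep -c -E 'NFC ERROR|NBD_ERR_GENERIC' "$log")
echo "error lines found: $matches"
```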

You could try making the VM's storage on ESXi 5.1 available to the VDP appliance's ESXi host so that the hotadd transport can be used.

But the bottom line is: please upgrade your ESXi from 5.1. (ESXi 5.1 is officially unsupported with VDP 6.1.3 and 6.1.4.)

Hope this helps.