Starting with VMware ESXi 5.0, VMware introduced the SCSI UNMAP primitive (VAAI Thin Provisioning Block Reclaim) into its VAAI feature set for thin-provisioned LUNs. In ESXi 5.0 the SCSI UNMAP process was even automated; starting with ESXi 5.0 U1, however, SCSI UNMAP became a manual process. SCSI UNMAP also needs to be supported by your underlying SAN array. Nimble Storage supports SCSI UNMAP with Nimble OS version 188.8.131.52 and later.
What is the problem?
When you delete a file from a thin-provisioned VMFS5 datastore, the space usage reported by the datastore and by the underlying Nimble Storage volume will no longer match, because the Nimble Storage volume is not aware of any space freed within the VMFS5 datastore. The freed space could come from a single file, such as an ISO, but also from the deletion of an entire virtual machine.
What version of VMFS is supported?
You can run SCSI UNMAP against both native VMFS5 datastores and VMFS3 datastores that were upgraded to VMFS5.
What needs to be done on the Nimble Storage array?
SCSI UNMAP is supported by Nimble Storage arrays running Nimble OS version 184.108.40.206 and later. Apart from running a supported Nimble OS version, there is nothing to be done on the array.
How do I run SCSI UNMAP on VMware ESXi 5.x?
- Establish an SSH session to the ESXi host that has the datastore mounted.
- Run esxcli storage core path list | grep -e 'Device Display Name' -e 'Target Transport Details' to get a list of volumes, including their EUI identifiers.
- Run esxcli storage core device vaai status get to verify that SCSI UNMAP (Delete Status) is supported for the volume, for example:
esxcli storage core device vaai status get -d eui.e5f46fe18c8acb036c9ce900c48a7f60
VAAI Plugin Name:
ATS Status: supported
Clone Status: unsupported
Zero Status: supported
Delete Status: supported
- On ESXi 5.0 and 5.1: change to the datastore directory (cd /vmfs/volumes/<datastore_name>) and run vmkfstools -y <percentage> to trigger SCSI UNMAP. Note: the value for the percentage has to be between 0 and 100. Generally, I recommend using 60 to start with.
- On ESXi 5.5: run esxcli storage vmfs unmap -l <datastore_label> instead; there is no need to change into the datastore directory first.
- Wait until the ESXi host returns “Done”.
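The manual steps above can be sketched as a small helper script for ESXi 5.0/5.1. This is only an illustration, not a VMware or Nimble tool: supports_unmap and valid_percent are hypothetical helper names, the device ID and datastore name are placeholders you must replace, and the parsing assumes the vaai status get output format shown above.

```shell
#!/bin/sh
# Sketch of the reclaim workflow described above (hypothetical helpers).

# supports_unmap: reads "esxcli storage core device vaai status get" output
# on stdin and succeeds only if the Delete Status line reports "supported".
supports_unmap() {
  grep -q 'Delete Status: supported'
}

# valid_percent: succeeds if $1 is an integer between 0 and 100, the range
# vmkfstools -y accepts on ESXi 5.0/5.1.
valid_percent() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;
  esac
  [ "$1" -ge 0 ] && [ "$1" -le 100 ]
}

# On the ESXi host itself you would then run something like (placeholders
# <device_id> and <datastore_name> must be filled in):
#   esxcli storage core device vaai status get -d eui.<device_id> | supports_unmap \
#     && cd /vmfs/volumes/<datastore_name> \
#     && valid_percent 60 \
#     && vmkfstools -y 60
```

The actual esxcli and vmkfstools calls are left as comments because they only work on an ESXi host; the two helper functions simply guard against running UNMAP on an unsupported volume or with an out-of-range percentage.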
Further details for ESXi 5.0/5.1 and for ESXi 5.5 can be found in the corresponding VMware Knowledge Base articles.