SCSI UNMAP – VMware ESXi and Nimble Storage Array

Starting with VMware ESXi 5.0, VMware introduced the SCSI UNMAP primitive (VAAI Thin Provisioning Block Reclaim) to its VAAI feature set for thin provisioned LUNs. In ESXi 5.0 GA the UNMAP process was even automated; however, starting with ESXi 5.0 Update 1, SCSI UNMAP became a manual process. SCSI UNMAP also needs to be supported by your underlying SAN array. Nimble Storage supports SCSI UNMAP with Nimble OS version 1.4.3.0 and later.


What is the problem?

When you delete a file from a thin provisioned VMFS5 datastore, the usage reported on the datastore and on the underlying Nimble Storage volume will no longer match. The Nimble Storage volume is not aware of any space freed within the VMFS5 datastore. This can be caused by something as small as a single ISO file or as large as the deletion of a whole virtual machine.
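
To see the mismatch for yourself, compare what the ESXi host reports for the datastore with what the array reports for the volume. A minimal sketch from the ESXi shell could look like this:

  # Space usage as seen by the host (VMFS view)
  df -h
  esxcli storage filesystem list

After deleting data, the used space on the VMFS side drops, while the volume usage shown in the Nimble management GUI stays the same until an UNMAP has been run.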

What version of VMFS is supported?

You can run SCSI UNMAP against native VMFS5 datastores as well as VMFS3 datastores that have been upgraded to VMFS5.
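
If you are unsure which VMFS version a datastore is running, vmkfstools can tell you; a quick check (the datastore name is a placeholder) looks like this:

  vmkfstools -Ph /vmfs/volumes/<datastore_name>
  # The first line of the output shows the file system version, e.g. "VMFS-5.58 file system spanning 1 partitions."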

What needs to be done on the Nimble Storage array?

SCSI UNMAP is supported by Nimble Storage arrays running Nimble OS version 1.4.3.0 or later.
There is nothing to be done on the array.

How do I run SCSI UNMAP on VMware ESXi 5.x?

  1. Establish an SSH session to the ESXi host which has the datastore mounted.
  2. Run esxcli storage core path list | grep -e 'Device Display Name' -e 'Target Transport Details' to get a list of volumes including their EUI identifiers.
  3. Run VAAI status get to verify if SCSI UNMAP (Delete Status) is supported for the volume.
    esxcli storage core device vaai status get -d eui.e5f46fe18c8acb036c9ce900c48a7f60
    eui.e5f46fe18c8acb036c9ce900c48a7f60
    VAAI Plugin Name:
    ATS Status: supported
    Clone Status: unsupported
    Zero Status: supported
    Delete Status: supported
  4. Change to the datastore directory.
    cd /vmfs/volumes/<datastore_name>
  5. Run vmkfstools to trigger SCSI UNMAPs.
    vmkfstools -y <percentage>
    Note: the value for the percentage has to be between 0 and 100. Generally, I recommend using 60 to start with.
    For ESXi 5.5, use the following command instead (see the full example after this list):
    esxcli storage vmfs unmap -l <datastore_name>
  6. Wait until the ESXi host returns “Done”.
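
For reference, a complete ESXi 5.5 invocation could look like the following sketch; the datastore name NimbleDS01 and the block count of 200 are illustrative values, not ones from the original post.

  # ESXi 5.5: reclaim free space on the datastore, 200 VMFS blocks per iteration
  esxcli storage vmfs unmap -l NimbleDS01 -n 200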


Further details for ESXi 5.0 and 5.1 can be found here, and for ESXi 5.5 here.


Change The OpenStack Glance Image Store

Today I ran into an issue where I ran out of space on my root partition due to multiple ISOs which I had stored in OpenStack Glance. After some testing, I decided to change the Glance image store to an iSCSI volume attached to my controller node.

Let’s get started with the basic iSCSI setup (no MPIO). I assume you’ve already created a volume on your storage array and set the ACL accordingly:
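
The rest of the original walkthrough is not reproduced here, but the workflow typically looks like the sketch below. The portal IP, target IQN, device name, and paths are placeholders I chose for illustration; adjust them to your environment.

  # Discover and log in to the iSCSI target (open-iscsi)
  iscsiadm -m discovery -t sendtargets -p 192.168.1.100
  iscsiadm -m node -T iqn.2007-11.com.nimblestorage:glance-vol -p 192.168.1.100 --login

  # Create a filesystem on the new device and mount it over the Glance image directory
  mkfs.ext4 /dev/sdb
  service openstack-glance-api stop
  mv /var/lib/glance/images /var/lib/glance/images.old
  mkdir /var/lib/glance/images
  mount /dev/sdb /var/lib/glance/images
  cp -a /var/lib/glance/images.old/. /var/lib/glance/images/
  chown -R glance:glance /var/lib/glance/images
  service openstack-glance-api start

Alternatively, you can mount the volume somewhere else and point filesystem_store_datadir in /etc/glance/glance-api.conf at that location. Either way, add the mount to /etc/fstab (with the _netdev option) so it comes back after a reboot.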

OpenStack – Icehouse Deployment Via Packstack

Today I decided to set up a new OpenStack environment to run some tests and provide training on it.
This blog post will cover “OpenStack – Icehouse Deployment Via Packstack”.

There are several ways to deploy an OpenStack environment, either single-node or multi-node:

  1. Packstack – Quickest and easiest way to deploy a single-node or multi-node OpenStack lab on any RHEL-based distribution
  2. Devstack – Mainly used for development, requires more time than Packstack
  3. Juju – Very time-consuming setup but very stable, Ubuntu only.
  4. The manual way – Most time-consuming, recommended for production environments.
    Details can be found here.

In my scenario I deployed four CentOS 6.4 64-bit VMs, each with 2 x 2 vCPUs, 4 GB of memory, and two NICs (one for management, one for iSCSI – no MPIO).
After you have completed the CentOS 6.4 installation, follow the steps below:
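
The detailed steps are not reproduced here, but a typical Packstack deployment on CentOS 6.x follows the pattern sketched below; the repository RPM placeholder and answer-file path are my own choices, not the author's.

  # Install the RDO Icehouse repository and Packstack (replace the placeholder with the Icehouse rdo-release RPM)
  yum install -y <rdo-release-icehouse-rpm>
  yum install -y openstack-packstack

  # Generate an answer file, adjust it for your controller/compute/network nodes, then deploy
  packstack --gen-answer-file=/root/icehouse-answers.txt
  vi /root/icehouse-answers.txt    # e.g. CONFIG_COMPUTE_HOSTS, CONFIG_NTP_SERVERS, passwords
  packstack --answer-file=/root/icehouse-answers.txt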


The initial install of OpenStack via Packstack is now complete, and you can start to configure it via the CLI or via Horizon.
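
For the CLI route, Packstack places a keystonerc_admin credentials file on the controller; a quick sanity check (a sketch, assuming the default location under /root) looks like this:

  source /root/keystonerc_admin
  keystone service-list    # core services should be registered
  nova service-list        # compute and scheduler services should report as up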

How I Got Started With Surfing

When I started this blog, I decided not to post just about virtualization but also about my personal life and the things that belong to it. As some of you might already know, I am a big fan of surfing and try to watch every tournament and practice whenever possible. I started surfing in October 2013 when my friend Jeremy Sallee, a UI/UX designer, introduced me to the sport. Since then I have been out in the water almost every weekend. My first session ever was at Linda Mar Beach in Pacifica, CA with 6ft waves. I can tell you, those are not the best conditions for a noob who doesn’t know what he’s doing out there in the water.

Currently I ride an 8ft longboard with a 3-fin setup. The length of the board provides the stability of a typical longboard, and the 3-fin setup allows easier and quicker turns, much like on a shortboard. After almost one year of surfing, I am starting to feel comfortable taking 5-6ft waves with my board. However, later this year I plan to transition to a shorter, egg-shaped board.

Two weeks ago, I went out for another session at Linda Mar Beach in Pacifica, CA. The day started out perfectly with a great breakfast and then a good 2.5h session in the water. I caught some nice 2-5ft waves and, luckily, recorded some of my experience with a GoPro.


Crucial Data In Your VMware ESXi 5 Log Files

As an Escalation Engineer, part of my daily work is reviewing log files of various systems and vendors. In my first blog post, I would like to show which VMware ESXi 5 log files are most relevant for troubleshooting storage- and networking-related problems.

All current ESXi 5 logs are located under /var/log, and as they rotate, older log files are available under /scratch/logs.
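
For live troubleshooting it is often enough to follow a log directly on the host, for example (a generic sketch, not tied to a specific issue):

  # Follow the VMkernel log in real time and filter for SCSI-related messages
  tail -f /var/log/vmkernel.log | grep -i scsi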


/var/log/vmkernel.log:

  • VMkernel related activities, such as:
    • Rescan and unmount of storage devices and datastores
    • Discovery of new storage like iSCSI and FCP LUNs
    • Networking (vmnic and vmk interface connectivity)

/var/log/vmkwarning.log:

  • Extracted warning and alert messages from the vmkernel.log

/var/log/hostd.log:

  • Logs related to the host management service
  • SDK connections
  • vCenter tasks and events
  • Connectivity to the vpxa service, which is the vCenter agent on the ESXi server

/var/log/vobd.log:

  • VMkernel observations
  • Useful for network and performance issues

Also, if a particular VM is affected, it might be worth looking into the vmware.log, which is stored alongside the virtual machine. You can find the log under /vmfs/volumes/datastore_name/VM_name/vmware.log.
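
If you are not sure where a VM’s files live, the host can tell you; a small sketch (the VM and datastore names are placeholders):

  # List all registered VMs together with the datastore path of their .vmx files
  vim-cmd vmsvc/getallvms
  # Then follow the log of the VM in question
  tail -f /vmfs/volumes/<datastore_name>/<VM_name>/vmware.log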

The location of the ESXi 3.5 and 4.x log files can be found here.