Jumbo Frames – Do It Right

Configuring jumbo frames can be a real pain if it is not done properly. Over the last couple of years, I have seen many customers with mismatched MTUs caused by improperly configured jumbo frames. Done right, jumbo frames can increase the overall network performance between your hosts and your storage array, and I recommend using them if you have a 10GbE connection to your storage device. Done wrong, however, jumbo frames quickly become your worst nightmare: I have seen them cause performance issues, dropped connections, and even ESXi hosts losing their storage devices.

Now that we have covered both the benefits and the risks, let’s discuss some details about jumbo frames:

  • A jumbo frame is any Ethernet frame with a payload larger than 1500 bytes
  • Many devices support an MTU of up to 9216 bytes
    • Refer to your switch manual for the proper setting
  • Most people use “jumbo frames” to mean an MTU of 9000 bytes
  • Misconfigured jumbo frames frequently cause an MTU mismatch between devices

The steps below offer guidance on how to set up jumbo frames properly:

Note: I recommend scheduling a maintenance window for this change!

On your Cisco Switch:

Cisco provides a page which lists the jumbo frame syntax for most of their switches; please consult it for your specific model.
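For illustration, here is a minimal sketch of what this typically looks like on a Catalyst switch running IOS. The interface name below is a placeholder and the exact commands vary by platform, so verify them against Cisco’s documentation for your model:

configure terminal
system mtu jumbo 9216
end
! on these Catalyst models, a reload is required before the new system MTU takes effect

On platforms that support a per-interface MTU (many Nexus models, for example), the setting is applied on the interface instead:

configure terminal
interface Ethernet1/10
mtu 9216
end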
Once the switch ports have been configured properly, we can go ahead and change the networking settings on the storage device.

On Nimble OS 1.4.x:

  1. Go to Manage -> Array -> Edit Network Addresses
  2. Change the MTU of your data interfaces from 1500 to jumbo

[Screenshot: Nimble OS 1.4.x jumbo frame setting]

On Nimble OS 2.x:

  1. Go to Administration -> Network Configuration -> Active Settings -> Subnets
  2. Select your data subnet and click Edit. Change the MTU of your data interfaces from 1500 to jumbo.

[Screenshot: Nimble OS 2.x jumbo frame setting]

On ESXi 5.x:

  1. Connect to your vCenter using the vSphere Client.
  2. Go to Home -> Inventory -> Hosts and Clusters.
  3. Select your ESXi host and click on Configuration -> Networking.
  4. Click on Properties of the vSwitch which you want to configure for jumbo frames.
  5. Select the vSwitch and click on Edit.
  6. Under “Advanced Properties”, change the MTU from 1500 to 9000 and click OK.
  7. Next, select your vmkernel port and click on Edit.
  8. Under “NIC Settings”, change the MTU to 9000.
  9. Repeat steps 7 and 8 for all vmkernel ports within this vSwitch.
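If you prefer the command line, the same change can be made with esxcli. This is a sketch in which vSwitch0 and vmk1 are placeholder names, so substitute your own vSwitch and vmkernel interface:

esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000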

After you have changed the settings on your storage device, switch, and ESXi host, log in to your ESXi host via SSH and run the following command to verify that jumbo frames are working from end to end:

vmkping -d -s 8972 -I vmkport_with_MTU_9000 storage_data_ip

The -d flag sets the “do not fragment” bit, and 8972 bytes is the largest ICMP payload that fits into a 9000-byte MTU once the 28 bytes of IP and ICMP headers are added. If the ping succeeds, you’ve configured jumbo frames correctly.
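If the ping fails instead, confirm what each layer is actually configured with before digging deeper. These two standard esxcli listings show the MTU of every vmkernel interface and standard vSwitch on the host:

esxcli network ip interface list
esxcli network vswitch standard list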

SCSI UNMAP – VMware ESXi and Nimble Storage Array

Starting with ESXi 5.0, VMware introduced the SCSI UNMAP primitive (VAAI Thin Provisioning Block Reclaim) to their VAAI feature collection for thin provisioned LUNs. VMware initially automated the SCSI UNMAP process; starting with ESXi 5.0 U1, however, it became a manual process. SCSI UNMAP also needs to be supported by your underlying SAN array; Nimble Storage supports it as of Nimble OS 1.4.3.0.


What is the problem?

When you delete a file from a thin provisioned VMFS5 datastore, the usage reported by the datastore and by the underlying Nimble Storage volume will no longer match, because the Nimble Storage volume is not aware of any space reclaimed within the VMFS5 datastore. The unreclaimed space could come from a single file, such as an ISO, or from the deletion of a whole virtual machine.

What version of VMFS is supported?

You can run SCSI UNMAP against native VMFS5 datastores as well as VMFS3 datastores that were upgraded to VMFS5.

What needs to be done on the Nimble Storage array?

SCSI UNMAP is supported by Nimble Storage arrays running Nimble OS 1.4.3.0 or later; there is nothing to configure on the array itself.

How do I run SCSI UNMAP on VMware ESXi 5.x?

  1. Establish an SSH session to the ESXi host which has the datastore mounted.
  2. Get a list of volumes including their EUI identifiers:
    esxcli storage core path list | grep -e 'Device Display Name' -e 'Target Transport Details'
  3. Run esxcli storage core device vaai status get to verify that SCSI UNMAP (Delete Status) is supported for the volume:
    esxcli storage core device vaai status get -d eui.e5f46fe18c8acb036c9ce900c48a7f60
    eui.e5f46fe18c8acb036c9ce900c48a7f60
    VAAI Plugin Name:
    ATS Status: supported
    Clone Status: unsupported
    Zero Status: supported
    Delete Status: supported
  4. Change to the datastore directory.
    cd /vmfs/volumes/
  5. Run vmkfstools to trigger SCSI UNMAPs:
    vmkfstools -y <percentage>
    Note: the percentage has to be between 0 and 100. Generally, I recommend starting with 60.
    On ESXi 5.5, use the following command instead:
    esxcli storage vmfs unmap -l <datastore_name>
  6. Wait until the ESXi host returns “Done”.
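Putting the steps together, here is a minimal end-to-end sketch; the datastore name DS01 and the EUI shown are placeholders for your own values:

# confirm that Delete Status is reported as "supported"
esxcli storage core device vaai status get -d eui.e5f46fe18c8acb036c9ce900c48a7f60

# ESXi 5.0/5.1: reclaim from within the datastore, using 60% of its free space
cd /vmfs/volumes/DS01
vmkfstools -y 60

# ESXi 5.5: a single command, run from anywhere, replaces the two lines above
esxcli storage vmfs unmap -l DS01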

Further details for ESXi 5.0 and 5.1, as well as for ESXi 5.5, can be found in the corresponding VMware knowledge base articles.

InfoSight – Manage Case Creation Efficiently

Nimble Storage’s InfoSight changes how storage administrators manage and monitor their arrays. It includes many great features for free: the Assets tab, for example, provides a basic overview of your array’s storage and cache utilization as well as its configured pro-active health mechanisms, while the Capacity tab shows the current usage as well as the projected usage for the upcoming weeks.

Today, we’ll cover how to manage case creation through InfoSight’s Wellness tab.

By default, Nimble Storage pro-actively creates cases for any condition on the array which causes an issue or could potentially cause headaches for the storage administrator. However, not all pro-active cases might be important to you. If you want to see the full list of pro-active case conditions available on InfoSight, please follow the steps shown below.

Note: Unchecking a condition is the same as disabling it.

Log in to Nimble Storage’s InfoSight and go to the Wellness tab.

[Screenshot: InfoSight Wellness tab]

When you click on Case Creation Options, you’ll get an overview of all case creation conditions and can either set a snooze period for each one or disable it.

Note: the Snooze Period defines after how many days a new case will be created for an existing problem. If the Snooze Period is set to 1, a new case will be created every day until the actual problem has been resolved.

[Screenshot: Manage Case Creation]

All in all, InfoSight is a great all-in-one tool which even allows you to manage Nimble Storage’s pro-active case creation more efficiently. My next post will cover common log files on your ESXi host and how you can use them to your benefit while troubleshooting.