Nimble Storage Fibre Channel & VMware ESXi Setup

This post will cover the integration of a Nimble Storage Fibre Channel array in a VMware ESXi environment. The steps are fairly similar to integrating a Nimble iSCSI array, but there are some additional FC-specific settings which need to be configured.

First, go ahead and create a new volume on your array. Go to Manage -> Volumes and click on New Volume. Specify the Volume Name and Description, and select the appropriate Performance Policy to ensure proper block alignment. Next, select the initiator group which contains your ESXi host's initiators. If you don't have an initiator group yet, click on New Initiator Group.

Create Volume FC

Name your new initiator group and specify the WWNs of your ESXi hosts. This will allow your hosts to connect to the newly created volume.
Also, specify a unique LUN ID. In this case, I have assigned LUN ID 87.
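WWPNs are typically entered as 16 hex digits, with or without colon separators. As a small illustrative helper (not part of any Nimble or VMware tooling, and the example WWPN below is made up), the following Python sketch normalizes either form to the colon-separated display format arrays usually show:

```python
import re

def normalize_wwpn(wwpn: str) -> str:
    """Normalize a WWPN to colon-separated form, e.g. '10:00:00:90:fa:13:6b:72'.

    Illustrative helper only; a WWPN is 8 bytes, i.e. 16 hex digits.
    """
    # Strip any separators (colons, dashes, spaces) and lowercase the digits.
    hexdigits = re.sub(r"[^0-9a-fA-F]", "", wwpn).lower()
    if len(hexdigits) != 16:
        raise ValueError("a WWPN is 8 bytes / 16 hex digits")
    # Re-join as two-digit groups separated by colons.
    return ":".join(hexdigits[i:i + 2] for i in range(0, 16, 2))

print(normalize_wwpn("10000090FA136B72"))  # -> 10:00:00:90:fa:13:6b:72
```

This can be handy when copying WWPNs between the HBA BIOS, switch zoning, and the array UI, since each tends to display them slightly differently.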

Screen Shot 2014-11-20 at 8.58.41 PM


Screen Shot 2014-11-20 at 9.41.54 PM

Next, specify the size and reservation settings for the volume.

Screen Shot 2014-11-20 at 8.59.45 PM

Specify any protection schedule if required and click on Finish to create the volume.

Screen Shot 2014-11-20 at 9.31.13 PM

Now, the volume is created on the array and your initiator group is configured to allow your host's FC HBAs to connect.
After a rescan of the FC HBAs, I can see my LUN with ID 87.

Screen Shot 2014-11-20 at 9.40.31 PM


Looking at the path details for LUN 87, you can see 8 paths (2 HBAs x 4 target ports). The PSP should be set to NIMBLE_PSP_DIRECTED.
I have 4 Active (I/O) paths and 4 Standby paths. The Active (I/O) paths go to the active controller and the Standby paths to the standby controller.
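The path arithmetic can be sketched out as follows (illustrative only, assuming 2 FC ports per controller across 2 controllers, which matches the fc9/fc10 ports shown later in the WebUI):

```python
# Illustrative path arithmetic only -- not Nimble or VMware code.
hbas = 2                    # FC HBAs in the ESXi host
ports_per_controller = 2    # FC target ports per controller (e.g. fc9, fc10)
controllers = 2             # one active, one standby

total_paths = hbas * ports_per_controller * controllers   # every HBA sees every port
active_paths = hbas * ports_per_controller                # paths to the active controller
standby_paths = total_paths - active_paths                # paths to the standby controller

print(total_paths, active_paths, standby_paths)  # -> 8 4 4
```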

Screen Shot 2014-11-20 at 9.49.51 PM


On the array I can now see all 8 paths under Manage -> Connections.

Screen Shot 2014-11-20 at 9.53.17 PM

The volume can now be used as a Raw Device Mapping or a datastore. Those were all the steps required to connect your FC array to an ESXi host, once the zones on your FC switches are configured.


Some of the images have been provided by Rich Fenton, one of Nimble's Sales Engineers from the UK.

Nimble Storage Fibre Channel Array Setup

Since Nimble Storage introduced Fibre Channel, I’m sure that many of our customers and prospects want to use their new FC array.
In this post, I will cover how to setup your new FC array and indicate what has changed in the setup manager as well as in the WebUI of the array.

All Fibre Channel arrays ship with Nimble OS version 2.2.2.0, as it is the first release that supports FC.
Once you have unpacked, racked and cabled your new array, power it on. For the initial setup, you will need the Nimble Setup Manager on your local machine. The Nimble Setup Manager is part of the Nimble Windows Toolkit and can be downloaded from InfoSight. If you do not have an InfoSight login yet, please register as a new user.

Note: You will need your array serial number to register successfully.

After you start the Nimble Setup Manager, it will discover your storage array and ask you to accept the EULA.
Next, you will be asked whether you want to add this array to an existing group or set it up as a standalone array.

FCInstall1

In this setup, we decided not to join an existing group. Specify the array and group names, plus some additional management settings, and hit Next.


FCInstall2

In the next screen you have to specify your subnet labels. Since this is a Fibre Channel array, you do not need to specify a data subnet. However, we have chosen to create a data subnet dedicated for replication.

FCInstall3

Finally, we can see the actual FC ports, and as you hover over each FC port, you can see its operational speed.
By the way, don't forget to set your diagnostic IPs. Those come in handy if you ever need to engage Support.

FCInstall4

The next screen should look familiar again as it is the same for every Nimble Storage array. Specify the domain name and your DNS server.

FCInstall5

Also, this screen should look familiar. Nothing has changed here. Specify your time zone and an NTP server.

FCInstall6

This is the final step of the initial setup. Make sure to set up an unauthenticated SMTP relay on your mail server for your new array.
Also, please check the box for Send event data to Nimble Storage Support. A lot of Nimble's case automation and proactive wellness relies on email alerts.
If you think you don't need email alerts and all this proactive wellness stuff, watch this video and see what you'll miss out on. I highly recommend enabling those alerts!

Additionally, make sure Autosupport is enabled and working. Autosupport data also plays a big role in Nimble's proactive wellness & InfoSight.
Once you are done, hit Finish and your array is ready for some action.

FCInstall7

Go to Manage -> Arrays and select your array name. It will open this part of the WebUI and you can see your Ethernet and FC ports as well as the usual details.

FCInstall8

Head over to Administration -> Network Configuration, select the active configuration, and open the Interfaces tab. Here, you can see all your FC ports, including their WWPNs and the WWNN.
For those new to FC: WWPN = World Wide Port Name, and WWNN = World Wide Node Name.

FCInstall9

In addition to the new Interfaces tab, Nimble Storage also changed the Initiator Group UI in order to accommodate FC initiators/WWPNs.

FCInstall10

All images have been provided by Rich Fenton, one of Nimble's Sales Engineers from the UK.

Nimble Storage Fibre Channel

On Monday, November 17th, Nimble Storage announced official Fibre Channel support. This is another big milestone achieved by the team at Nimble. Fibre Channel will be available for the CS300, CS500 and CS700. The CS210 and CS215 will not be supported.

FC-Array


Below are some screenshots of what’s new in the WebUI:

FC_array

As you can see above, fc9 and fc10 are the FC ports on both controllers. Hovering over those ports will show their location.

FC_array


Additionally, the Initiator Groups have been modified to accommodate WWPNs.

FC_array

Adding new WWPNs is just as easy as adding iSCSI initiators.

FC_array

Overall, nothing special has been added to the WebUI. Everything has been kept simple, as we expected and as we love about Nimble devices. This most certainly opens up a bigger market for Nimble Storage.

Over the next days, I'll provide more details about Nimble's FC integration.

OpenStack & Nimble Storage ITO feature

Nimble Storage’s Cinder Driver includes a new feature called ITO – Image Transfer Optimization.

With most Cinder backends, every time you deploy a new instance from an image, a new volume/LUN gets created on the backend storage.
This means you can potentially use up a lot of space for redundant data.
To avoid this unnecessary duplication, Nimble Storage introduced ITO – Image Transfer Optimization.

ITO is helpful in cases where you want to create, say, 20 instances at a time from the same ISO.
With ITO, only one volume containing the ISO is created, and zero-copy clones are then used to boot the other 19 instances.
This is arguably the most space-efficient way to deploy instances.

The benefits are simple:

  • Instant Copy
  • No duplicated data
  • Shared Cache
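To make the space savings concrete, here is a toy Python model (not Nimble code; the 4 GB image size is an arbitrary example) of the 20-instance scenario above, with and without ITO:

```python
# Toy model of why zero-copy clones save space when booting many
# instances from the same image. Illustrative numbers only.
ISO_GB = 4        # assumed size of the image volume
instances = 20    # instances deployed from the same image

# Without ITO: every instance gets its own full copy of the image volume.
space_without_ito = instances * ISO_GB

# With ITO: one image volume is created, and the remaining instances boot
# from zero-copy clones that share its blocks (no data blocks consumed up front).
clone_data_gb = 0
space_with_ito = ISO_GB + (instances - 1) * clone_data_gb

print(space_without_ito, space_with_ito)  # -> 80 4
```

The clones of course start consuming space as the instances write new data, but the redundant copies of the image itself are never materialized.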

Below, you can see the workflow for deploying instances without ITO enabled:
no_ito

And here with ITO enabled:

ito_enabled


Thanks to @wensteryu for the images.

OpenStack & Nimble Storage – Cinder multi backend

This post describes how to set up a Nimble Storage array within a Cinder multi-backend configuration, running OpenStack Icehouse.
If you are new to OpenStack or Cinder, you might be asking why you should run single-backend vs. multi-backend.

Basically, single-backend means you are using a single storage array (or a single group of arrays) as your backend storage. In a multi-backend configuration, you might have storage arrays from multiple vendors, or you might have different Nimble Storage arrays providing different levels of performance. For example, you might want to use your CS700 as high-performance storage and your CS220 for less performance-intensive workloads.

  1. Upload your Cinder driver to /usr/lib/python2.6/site-packages/cinder/volume/drivers
  2. Add the Nimble Cinder parameters to /etc/cinder/cinder.conf as a new section
  3. Add [nimble-cinder] to enabled_backends. If enabled_backends does not yet exist in your cinder.conf file, add it as a new line.
  4. Create a new volume type for the nimble-cinder backend
  5. Next, link the backend name to the volume type
  6. Restart cinder-api, cinder-scheduler and cinder-volume
  7. Create a volume either via Horizon or the CLI

  8. Verify the volume has been created successfully
  9. Verify the creation of the volume on your storage array. Go to Manage -> Volumes
    openstack_array
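For reference, a minimal cinder.conf fragment covering the backend section and enabled_backends might look like the sketch below. The section name matches the post; the driver class path, IP and credentials are placeholders, so use the values documented for the driver version shipped with your array:

```ini
# /etc/cinder/cinder.conf -- illustrative multi-backend fragment
[DEFAULT]
enabled_backends = nimble-cinder

[nimble-cinder]
volume_backend_name = nimble-cinder
# Placeholder driver class path -- check the README of your Nimble Cinder driver.
volume_driver = cinder.volume.drivers.nimble.NimbleISCSIDriver
san_ip = 10.0.0.50
san_login = admin
san_password = secret
```

After restarting the Cinder services, the standard Icehouse-era commands `cinder type-create nimble` and `cinder type-key nimble set volume_backend_name=nimble-cinder` create the volume type and link it to this backend (steps 4 and 5 above).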