
Wednesday, November 20, 2013

Using VAAI UNMAP on vSphere 5.5 & NetApp Storage

Welcome: To stay updated with all my Blog posts follow me on Twitter @arunpande !!


In my previous blog vStorage APIs for Array Integration (VAAI) & NetApp – How to set it right? I shared the steps to use VAAI. In this blog I will cover the steps required to use the VAAI UNMAP primitive in vSphere 5.5. The UNMAP primitive is used by the ESXi host to tell the storage array which storage blocks can be reclaimed after a VM is deleted or migrated to another datastore using Storage vMotion. In vSphere 5.5 the # esxcli storage vmfs unmap command is used, whereas earlier versions used the vmkfstools -y command. You can now specify the number of blocks to be reclaimed using the -n option, whereas with vmkfstools -y you had to specify the percentage of free blocks that you wanted to reclaim. It is advisable to perform this step after business hours or when there is no active I/O on the datastore; however, I have not tested the impact of running it against a datastore with active I/O.
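
For reference, here is how the two approaches compare; this is only a sketch, using the iscsi_2 datastore that appears later in this post:

Earlier releases (run from inside the datastore directory, reclaiming 60% of the free blocks):
# cd /vmfs/volumes/iscsi_2
# vmkfstools -y 60

vSphere 5.5 (reclaim free blocks in batches of 200 VMFS blocks per iteration):
# esxcli storage vmfs unmap -l iscsi_2 -n 200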


In this scenario I am using a thin provisioned LUN from NetApp Storage, and to demonstrate space reclamation I will walk through two scenarios: (i) deleting a thick-provisioned virtual disk and (ii) migrating VMs to another datastore using Storage vMotion. I will also share the storage capacity as reported by NetApp Virtual Storage Console (VSC), which gives a view of the available space not only on the VMFS datastore but also on the underlying LUN/Volume/Aggregate.


Scenario 1 – Deleting a thick disk from the virtual machine
Here is an overview of the capacity of the Datastore/LUN/Volume/Aggregate as reported by VSC.

             


Capacity of the datastore as per ESXi Shell


# du -h /vmfs/volumes/iscsi_2/
1.0M    /vmfs/volumes/iscsi_2/.sdd.sf
8.0K    /vmfs/volumes/iscsi_2/ntap_rcu1374646447227
8.0K    /vmfs/volumes/iscsi_2/ntap_rcu1374459789333
8.0K    /vmfs/volumes/iscsi_2/.naa.600a09802d6474573924384a79717958
194.1G  /vmfs/volumes/iscsi_2/Win2k8-1
194.9G  /vmfs/volumes/iscsi_2/


This indicates that the total used capacity on the datastore is 194.9 GB.
We will now delete the 150 GB Eager Zeroed Thick disk. After deleting this virtual disk the ESXi shell reports the following capacity:


# du -h
1.0M    ./.sdd.sf
8.0K    ./ntap_rcu1374646447227
8.0K    ./ntap_rcu1374459789333
8.0K    ./.naa.600a09802d6474573924384a79717958
44.1G   ./Win2k8-1
44.9G   .


The free space on the datastore is now about 205 GB and the used space is approximately 44.9 GB. However, NetApp Storage does not yet detect this free space on the LUN; here is the output of the lun show command executed from the clustered Data ONTAP CLI.


clus-1::> lun show -v /vol/iscsi_2/iscsi_2
              Vserver Name: vmwaretest
                  LUN Path: /vol/iscsi_2/iscsi_2
               Volume Name: iscsi_2
                Qtree Name: ""
                  LUN Name: iscsi_2
                  LUN Size: 250.3GB
                   OS Type: vmware
         Space Reservation: disabled
             Serial Number: -dtW9$8JyqyX
                   Comment: The Provisioning and Cloning capability created this lun at the request of Administrator
Space Reservations Honored: false
          Space Allocation: enabled
                     State: online
                  LUN UUID: 7fe6d24a-f782-476d-827e-a4d20f371abb
                    Mapped: mapped
                Block Size: 512
          Device Legacy ID: -
          Device Binary ID: -
            Device Text ID: -
                 Read Only: false
Inaccessible Due to Restore: false
                 Used Size: 237.9GB
       Maximum Resize Size: 2.50TB
             Creation Time: 12/16/2010 03:27:26
                     Class: regular
                     Clone: false
  Clone Autodelete Enabled: false
          QoS Policy Group: -


VSC also reports the same capacity for this LUN.

            

We will now trigger the UNMAP primitive from the ESXi shell using the following command:
# esxcli storage vmfs unmap -l iscsi_2


NOTE: You can also specify the number of blocks to be reclaimed per iteration using the -n option. If you specify 500, then 500 x 1 MB blocks (1 MB being the default block size in VMFS-5) are unmapped in each pass.
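
For example, a sketch of the same command with the optional block count added (500 blocks per iteration):
# esxcli storage vmfs unmap -l iscsi_2 -n 500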


While monitoring esxtop I observed that the DELETE statistic increased to 52527.
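
If you want to watch these counters yourself, the VAAI statistics (including DELETE, which counts the UNMAP commands issued) can be displayed in esxtop. A rough sketch of the key sequence on ESXi 5.x (the exact field letter may differ between builds):
# esxtop
(press u for the disk device view, then press f and toggle the VAAISTATS field to display the VAAI counters)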


VSC now reports the following capacity, where we can see that the free space has been updated for both the LUN and the Volume.



Scenario 2 – Test UNMAP after relocating VMs using Storage vMotion.


NetApp VSC reports the following storage usage.

            


Datastore Usage according to ESXi Shell


~ # df -h
Filesystem   Size   Used Available Use% Mounted on
VMFS-5       1.0T 881.5G    143.0G  86% /vmfs/volumes/FC-Infra


Datastore Usage per VM is given below


~ # du -h /vmfs/volumes/FC-Infra/
74.5G   /vmfs/volumes/FC-Infra/VC
78.3G   /vmfs/volumes/FC-Infra/DB
15.4G   /vmfs/volumes/FC-Infra/Oncommand-Proxy
8.0K    /vmfs/volumes/FC-Infra/.vSphere-HA
1.3M    /vmfs/volumes/FC-Infra/.dvsData/7a 4c 23 50 26 82 38 5d-d9 e5 e2 78 4f 7d af 26
32.0K   /vmfs/volumes/FC-Infra/.dvsData/3e 55 23 50 21 27 03 84-e3 f4 4a 7f de 48 08 32
1.3M    /vmfs/volumes/FC-Infra/.dvsData
29.4G   /vmfs/volumes/FC-Infra/AD
64.1G   /vmfs/volumes/FC-Infra/VASA
23.5G   /vmfs/volumes/FC-Infra/VSI Launcher-9
23.5G   /vmfs/volumes/FC-Infra/VSI Launcher-7
12.0G   /vmfs/volumes/FC-Infra/OnCommand Balance
32.7G   /vmfs/volumes/FC-Infra/ViewComposer
63.3G   /vmfs/volumes/FC-Infra/View Connection Server
19.5G   /vmfs/volumes/FC-Infra/VSIShare
19.5G   /vmfs/volumes/FC-Infra/VSI Launcher-10
21.4G   /vmfs/volumes/FC-Infra/UM-6.0
20.6G   /vmfs/volumes/FC-Infra/VSI Launcher
15.3G   /vmfs/volumes/FC-Infra/VSI Launcher-Template
24.4G   /vmfs/volumes/FC-Infra/VSI Launcher-2
23.5G   /vmfs/volumes/FC-Infra/VSI Launcher-4
24.7G   /vmfs/volumes/FC-Infra/VSI Launcher-3
23.5G   /vmfs/volumes/FC-Infra/VSI Launcher-5
25.5G   /vmfs/volumes/FC-Infra/VSI Launcher-6
25.5G   /vmfs/volumes/FC-Infra/VSI Launcher-8
34.4G   /vmfs/volumes/FC-Infra/UI VM
181.5G  /vmfs/volumes/FC-Infra/Analytics VM
1.5G    /vmfs/volumes/FC-Infra/vmkdump
881.3G  /vmfs/volumes/FC-Infra/


To free up some space on the storage, I Storage vMotioned the following VMs to another datastore:
29.4G   /vmfs/volumes/FC-Infra/AD
78.3G   /vmfs/volumes/FC-Infra/DB


After the above VMs were migrated to other datastores, the following datastore usage was reported:


From the Filer, notice that the LUN Used Size remains the same.


veo-f3270::> lun show -v /vol/infra_services/infra


             Vserver Name: Infra_Vserver
                 LUN Path: /vol/infra_services/infra
              Volume Name: infra_services
               Qtree Name: ""
                 LUN Name: infra
                 LUN Size: 1TB
                  OS Type: vmware
        Space Reservation: disabled
            Serial Number: 7T-iK+3/2TGu
                  Comment:
Space Reservations Honored: false
         Space Allocation: disabled
                    State: online
                 LUN UUID: ceaf5e6e-5a6a-11dc-8751-123478563412
                   Mapped: mapped
               Block Size: 512B
         Device Legacy ID: -
         Device Binary ID: -
           Device Text ID: -
                Read Only: false
                Used Size: 848.9GB
            Creation Time: 9/3/2007 18:12:49


NetApp VSC does not report any changes in LUN Usage either.

            


ESXi Shell reports the updated free space.
~ # df -h
Filesystem   Size   Used Available Use% Mounted on
VMFS-5       1.0T 773.8G    250.7G  76% /vmfs/volumes/FC-Infra


I have now performed the reclaim operation from the ESXi shell using the command below:
# esxcli storage vmfs unmap -l FC-Infra



VSC now reports free space in the LUN Usage.

            



The filer also reports the updated Storage Capacity.
veo-f3270::> lun show -v /vol/infra_services/infra


             Vserver Name: Infra_Vserver
                 LUN Path: /vol/infra_services/infra
              Volume Name: infra_services
               Qtree Name: ""
                 LUN Name: infra
                 LUN Size: 1TB
                  OS Type: vmware
        Space Reservation: disabled
            Serial Number: 7T-iK+3/2TGu
                  Comment:
Space Reservations Honored: false
         Space Allocation: disabled
                    State: online
                 LUN UUID: ceaf5e6e-5a6a-11dc-8751-123478563412
                   Mapped: mapped
               Block Size: 512B
         Device Legacy ID: -
         Device Binary ID: -
           Device Text ID: -
                Read Only: false
                Used Size: 742.6GB
            Creation Time: 9/3/2007 18:12:49

Wednesday, October 30, 2013

vStorage APIs for Array Integration (VAAI) & NetApp – How to set it right?



Welcome: To stay updated with all my Blog posts follow me on Twitter @arunpande !!

This blog provides steps and points to consider while using vSphere VAAI (block & file) with NetApp Storage. vSphere VAAI is used to offload certain tasks from the ESXi hosts to the underlying storage, resulting in faster provisioning/deployment and improved performance of the ESXi hosts. To ensure that this configuration is set up correctly, consider the following steps:


Compatibility:
It’s important that the setup you are using is compatible, i.e. that the ONTAP version and the corresponding ESXi version support the VAAI primitives that you are going to use. To understand the compatibility between ONTAP and ESXi, review the NetApp KB How to determine if VAAI features are being used in a vSphere and NetApp environment? (you need to register with http://support.netapp.com in order to view the KB).


In addition to the above NetApp KB, also check the VMware Compatibility Guide to confirm the compatibility of the storage array with your version of vSphere. Here is an example of how the supported VAAI primitives are reported:




Enable VAAI on ESXi host:


Once you have confirmed that the VAAI primitives are supported by the ESXi and storage array (ONTAP) versions, you have to ensure that VAAI is enabled. For block-level storage (FC/iSCSI) VAAI is enabled by default on the ESXi hosts. To confirm this, use the following commands on the ESXi host:


# esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
Value of HardwareAcceleratedMove is 1
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
Value of HardwareAcceleratedInit is 1
# esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
Value of HardwareAcceleratedLocking is 1


From the above commands we can identify the VAAI primitives: HardwareAcceleratedMove corresponds to Full Copy, HardwareAcceleratedInit corresponds to Block Zeroing, and HardwareAcceleratedLocking corresponds to ATS.


To disable a VAAI primitive use the following command (the same syntax applies to all three settings):
# esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
Value of HardwareAcceleratedMove is 0


To enable a VAAI primitive use the following command (again, the same syntax applies to all three settings):
# esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove
Value of HardwareAcceleratedMove is 1
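
On ESXi 5.x the same values can also be queried and set through esxcli; a minimal sketch, shown here for HardwareAcceleratedMove only:
# esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
# esxcli system settings advanced set -i 1 -o /DataMover/HardwareAcceleratedMove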


Alternatively, you can also use the GUI to enable VAAI on the ESXi host by navigating to Configuration > Advanced Settings (under Software) > DataMover and ensuring that the values are set to 1.


For the third primitive (HardwareAcceleratedLocking) you have to navigate to VMFS3.




NOTE: VAAI is enabled by default on ESXi hosts, so if you are going to use block-level storage no additional changes are required on the ESXi host.


If you want to use VAAI for NetApp NFS storage, then you need to install the NFS VAAI plug-in. To download the plug-in, navigate to http://support.netapp.com/ and click on Downloads > Software > NetApp NFS Plug-in for VMware VAAI > Select ESXi > Go.


You can now download NetAppNasPlugin.v20.vib or NetAppNasPlugin.v20.zip. I chose to download NetAppNasPlugin.v20.vib and used the following command to install the VIB on the ESXi host (note that a .vib file is installed with the -v option; the -d option is used for .zip offline bundles):
# esxcli software vib install -v /vmfs/volumes/nfs_datastore/NetAppNasPlugin.v20.vib
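
To confirm that the plug-in is installed, you can list the VIBs on the host; a quick check, assuming the package name contains NetAppNasPlugin:
# esxcli software vib list | grep -i netapp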


NOTE: If you are using NetApp Virtual Storage Console (VSC) you can also push the installation of the VIB on the ESXi hosts using VSC.


Enable VAAI on NetApp Storage:

VAAI is enabled by default in Data ONTAP for block storage. For NFS you have to use the following commands to enable vStorage.


Clustered Data ONTAP
vserver nfs modify -vserver vserver_name -vstorage enabled


7 Mode
options nfs.vstorage.enable on
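
To verify that vStorage is enabled you can read the setting back; a quick check (substitute your own Vserver name, and note that the vstorage field name is an assumption on my part):

Clustered Data ONTAP
vserver nfs show -vserver vserver_name -fields vstorage

7 Mode
options nfs.vstorage.enable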


Check VAAI settings


The datastores that are VAAI capable should have a Hardware Acceleration status of Supported. If this is marked as Unknown, then either only some or none of the VAAI primitives are supported.




You may also check the hardware acceleration status from the ESXi shell using the following commands.


Get the details about the LUN using the following command


~ # esxcli storage core device list -d naa.600a09802d6474573924384a79717958
naa.600a09802d6474573924384a79717958
  Display Name: NETAPP iSCSI Disk (naa.600a09802d6474573924384a79717958)
  Has Settable Display Name: true
  Size: 256280
  Device Type: Direct-Access
  Multipath Plugin: NMP
  Devfs Path: /vmfs/devices/disks/naa.600a09802d6474573924384a79717958
  Vendor: NETAPP
  Model: LUN C-Mode
  Revision: 8200
  SCSI Level: 4
  Is Pseudo: false
  Status: degraded
  Is RDM Capable: true
  Is Local: false
  Is Removable: false
  Is SSD: false
  Is Offline: false
  Is Perennially Reserved: false
  Queue Full Sample Size: 0
  Queue Full Threshold: 0
  Thin Provisioning Status: yes
  Attached Filters: VAAI_FILTER
  VAAI Status: supported
  Other UIDs: vml.0200040000600a09802d6474573924384a797179584c554e20432d
  Is Local SAS Device: false
  Is Boot USB Device: false
  No of outstanding IOs with competing worlds: 32


To get VAAI details about a specific LUN, use the following command; from its output you can determine which VAAI primitives are supported. Note that not all VAAI primitives may be supported for a specific combination of ESXi and ONTAP.


~ # esxcli storage core device vaai status get -d naa.600a09802d6474573924384a79717958
naa.600a09802d6474573924384a79717958
  VAAI Plugin Name: VMW_VAAIP_NETAPP
  ATS Status: supported
  Clone Status: supported
  Zero Status: supported
  Delete Status: supported

I will discuss more about the statistics in the next blog.