Saturday, December 28, 2013

Do you use NetApp Virtual Storage Console for vSphere Infrastructure?

Welcome: To stay updated with all my Blog posts follow me on Twitter @arunpande !

In this blog post I will discuss how vSphere administrators can use NetApp Virtual Storage Console (VSC) to perform various NetApp storage management tasks from the vCenter interface. I will cover the installation of VSC, followed by its registration and basic setup.
NetApp Virtual Storage Console is a plugin for vCenter which provides a single user interface and end-to-end management for the following tasks in the infrastructure:
  • Discovery of new and existing NetApp Storage arrays, aggregates, volumes and LUNs.
  • Provisioning – Creating new VMFS/NFS volumes on NetApp Storage and mounting them to the ESXi hosts.
  • Capacity Management – Monitor the capacity of the VMFS/NFS datastores at all levels i.e. Aggregate, Volume, LUN & Datastore.
  • Backup & Recovery – Configure backup & recovery for entire datastore, virtual machine, VMDK or single files.
  • Optimization – Identify misaligned VMs or Datastores and rectify them.
  • Space Reclamation for virtual machines.
With VSC the gap between storage and vSphere administrators can be bridged, resulting in faster provisioning and other management tasks. VSC also has built-in optimization settings which can be applied to the ESXi hosts to reduce issues and support calls.
Install NetApp Virtual Storage Console (VSC)
Download the latest version of VSC from the NetApp Support Site. Note that you need valid credentials to download this software. For this installation I chose the x64 version of VSC and installed it on a Windows Server 2008 R2 VM.
Launch the installer to start the installation of VSC and click Next to continue.
Read about the credentials required when using Backup and Recovery in VSC, select I understand to acknowledge, and click Next to continue.
Select the capabilities of VSC and click Next to continue.
Change the destination folder for the VSC installation if required and click Next to continue.
Click Install to start the installation. Note the URL provided to register VSC; this has to be done after the installation is complete.

Register NetApp VSC with VMware vCenter
Once the installation is complete, a web browser is launched to https://localhost:8143/Register.html, where the following details are required to complete the registration of VSC with vCenter.
  • Plugin service information
    • Hostname or IP Address: IP Address or Hostname of the system where VSC was installed.

  • vCenter Server Information
    • Host name or IP Address: IP or FQDN of the vCenter Server
    • Port: 443
User name: Use administrator@vsphere.local if you want to use the default SSO user account. If you are using the vCenter Server Linux appliance, use the root user. If you have created an identity source for the domain user account and also set it as the default, use DOMAIN/Username.
    • User Password: Enter the password for the above vCenter administrator username.
Once the registration is complete, start a new vSphere Client session to vCenter. Navigate to Plug-ins > Manage Plug-ins to confirm that the Virtual Storage Console plugin is enabled.
Discover NetApp Storage in VSC
Use VSC to discover any existing NetApp storage array. Once the array is discovered, you will be able to perform various storage management tasks using VSC.
To launch VSC, connect to the vCenter Server using the vSphere Client, navigate to Home > Solutions and Applications, and click the NetApp icon.
Navigate to Monitoring and Host Configuration, click Overview, and click Add to add NetApp 7-Mode or clustered Data ONTAP storage.
Enter the following information to add the controller:
  • Target Hostname: IP or FQDN of the 7-Mode controller, or the cluster IP in the case of clustered Data ONTAP
  • Target Port: 443
  • User name and Password: Credentials required to connect to the NetApp 7-Mode controller, or the cluster in the case of clustered Data ONTAP.

Click OK to add the controller.
Once the controllers are added to VSC, the discovered information (IP Address, Version, Free capacity, VAAI & Supported protocols) is displayed.
It's recommended that you apply NetApp best practices to the ESXi hosts for Adapter, MPIO & NFS settings. To do this, right-click the ESXi host and click Set Recommended Values.
To understand the details about the settings, click on Show Details.
This completes the installation and basic setup of NetApp Virtual Storage Console. You can now use VSC to manage your NetApp storage from vCenter Server.

VMware Capacity Planning using Standard Deviation


Recently I was working on a capacity planning project, and it reminded me of the lessons from one of the capacity planning engagements during my previous role as a Server Virtualization Consultant. In this blog post I would like to share what I learned about doing capacity planning using standard deviation. I had to do capacity planning for eight ESXi 4.0 hosts, where the client wanted to know the capacity available to add new workloads. I didn't have the privilege of using CapacityIQ or vCenter Operations Manager, which automate the entire data gathering and analytics process. The performance data from the existing ESXi hosts was shared in an Excel sheet containing hourly CPU & memory usage for 45 days. Trust me, this is not one of the engagements I enjoyed, considering the amount of manual work required. While I was researching the best possible approach, my colleague Richard Raju suggested that we explore using standard deviation, since all the data was in Excel and StdDev can be used easily there. Alternatively, I could also take the 90th percentile of the maximum usage to determine the available capacity.

Though using the 90th percentile of the maximum utilization is common, I wanted to explore using standard deviation. After understanding how it works, I used the same concept in further engagements.

Capacity Planning using Standard Deviation

Review the raw data. This is the data that I received from the client:

Note that I received the hourly data for the following metrics:
  • Processor Percent Used
  • Processor Usage MHz
  • Memory Percent Used

As a best practice I have capped the utilization at 80%, leaving aside 20% for spikes and failover resources in an HA cluster. Hence I added the following fields:
Utilization capped at 80% = (80 * Processor Percent Used)/100
CPU Capacity Available = 80 - Processor Percent Used
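These two derived columns are simple arithmetic. As a rough sketch of the same calculation in Python rather than Excel (the hourly readings below are made up for illustration):

```python
# Hypothetical hourly "Processor Percent Used" samples for one server.
processor_percent_used = [35.2, 41.7, 28.9, 55.0]

# Utilization capped at 80% = (80 * Processor Percent Used) / 100
capped = [80 * u / 100 for u in processor_percent_used]

# CPU Capacity Available = 80 - Processor Percent Used
available = [80 - u for u in processor_percent_used]

for u, c, a in zip(processor_percent_used, capped, available):
    print(f"used={u:5.1f}%  capped={c:5.2f}%  available={a:5.2f}%")
```

The capped column rescales each reading against the 80% ceiling, while the available column is simply the headroom left under that ceiling.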

NOTE: It’s not mandatory to cap the utilization at 80% however I have used this as best practice in this case to ensure that I am not overcommitting the resources.

Once you have collated all this information, you can create a pivot table with the above values. All you have to do is select all the columns/rows and click Insert > PivotTable from the menu. You should now get a similar output.

You now have to drag the values from "Choose fields to add to report" to "Drag fields between areas below". In this example I will demonstrate how the Average, Minimum, Maximum & Standard Deviation for CPU were calculated. A similar approach can be used to calculate the memory usage and then project the available capacity.

I have dragged the Shift_Day, Week, and Cluster fields into Report Filter so that I can pick and choose the data for different servers during weekday business hours and also for weekends. It's important to separate these data points because the servers may generally be utilized only during business hours, and including after-hours data may skew the overall utilization.

In the Row Labels, I will drag the Server field so that I can get the utilization for all the servers.

In the Values section, drag "CPU Utilization capped at 80%". Note that by default "Sum of CPU Utilization capped at 80%" is reported in the Values section. You have to change this to Max, Min, Average and StdDev. To do this, click the dropdown on "Sum of CPU Utilization capped at 80%", click Value Field Settings, and then click Average. Follow the same steps to select Min, Max & StdDev. Here is a screenshot of the values that you need to add.

You should now have the following output for the CPU utilization. With this we have found the standard deviation of the CPU utilization. I will use this value while calculating the capacity.

Here you can see that the standard deviation of the CPU utilization for Server-1 is 4.86. Similarly, we can calculate the Memory Utilization capped at 80%.
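If you prefer scripting to pivot tables, the same per-server aggregates (Min, Max, Average and StdDev) can be reproduced with Python's statistics module; note that `statistics.stdev` is the sample standard deviation, matching Excel's StdDev summary. The server names and readings below are hypothetical:

```python
import statistics

# Hypothetical hourly "CPU Utilization capped at 80%" readings per server.
samples = {
    "Server-1": [22.4, 30.1, 27.8, 35.6, 25.0],
    "Server-2": [48.2, 52.9, 45.1, 50.3, 47.7],
}

for server, values in samples.items():
    print(
        f"{server}: min={min(values):.2f} max={max(values):.2f} "
        f"avg={statistics.mean(values):.2f} "
        f"stdev={statistics.stdev(values):.2f}"  # sample StdDev, as in Excel
    )
```

This is essentially what the pivot table computes per Row Label once you switch the value field from Sum to Min, Max, Average and StdDev.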

Once you have the standard deviation, you can calculate the CPU usage based on it. To do this I have used the following:

Let me explain the important fields
  • CPU – Number of Sockets
  • Cores – Number of Cores per Socket
  • Speed – Speed of each core
  • Total CPU (GHz) = (Number of Sockets) * (Number of Cores per Socket) * (Speed of each core)
  • The Min, Max & Average Processor % Used values come from the pivot table created in the section above.
  • The "# of StdDev for Processor % Used" is also derived from the pivot table.

We now have to determine how many standard deviations above the average the maximum processor usage lies:
Number of Standard Deviation for Max = (Max Processor Percent Used - Average Processor Percent Used)/StdDev of Processor Percent Used

Since this value is greater than three, I have calculated the CPU usage based on 3x StdDev using the formula below:
CPU Usage based on StdDev  = Average Processor Percent Used + (StdDev of Processor Percent Used * 3)
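The last two formulas can be sketched together in Python. The average, maximum and StdDev below are hypothetical pivot-table outputs, chosen so the result lines up with the 63.98% figure this post works with:

```python
# Hypothetical pivot-table outputs for one server.
avg_used = 49.40    # Average Processor Percent Used
max_used = 72.00    # Max Processor Percent Used
stdev_used = 4.86   # StdDev of Processor Percent Used

# Number of Standard Deviations for Max
n_stdev_for_max = (max_used - avg_used) / stdev_used
print(f"StdDevs for Max: {n_stdev_for_max:.2f}")  # roughly 4.65, i.e. > 3

# Since the max lies more than 3 StdDevs above the average,
# estimate usage at 3x StdDev rather than at the raw maximum.
cpu_usage_stdev = avg_used + stdev_used * 3
print(f"CPU Usage based on StdDev: {cpu_usage_stdev:.2f}%")  # 63.98%
```

Capping the estimate at three standard deviations keeps a single extreme spike from dominating the capacity figure while still covering nearly all observed samples.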

Once the CPU usage is available (in the above example the CPU Usage based on StdDev is 63.98%), I have calculated the available capacity in % using the following formulas:
Minimum Available Capacity (%) = 80 – Max Processor % Used
Maximum Available Capacity (%) = 80 – CPU Usage based on 3x StdDev

The above available capacity values are in percentage and have to be converted to MHz or GHz to find how many more vCPUs can be added. To convert the Maximum & Minimum Available Capacity, I have used the following formulas:

Minimum Available Capacity = (Total CPU GHz * Minimum Available Capacity)/100
Maximum Available Capacity = (Total CPU GHz * Maximum Available Capacity)/100
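Continuing the same worked example, the conversion from percentage to GHz looks like this. The socket, core and speed values for the host are hypothetical; the 63.98% figure is the CPU Usage based on 3x StdDev from the example above, and the 72% maximum is an illustrative pivot-table value:

```python
sockets = 2       # CPU: number of sockets (hypothetical host)
cores = 4         # cores per socket
speed_ghz = 2.5   # speed of each core in GHz

# Total CPU (GHz) = sockets * cores per socket * speed per core
total_cpu_ghz = sockets * cores * speed_ghz  # 20.0 GHz

max_used = 72.00          # Max Processor Percent Used (hypothetical)
cpu_usage_stdev = 63.98   # CPU Usage based on 3x StdDev

min_avail_pct = 80 - max_used          # Minimum Available Capacity (%)
max_avail_pct = 80 - cpu_usage_stdev   # Maximum Available Capacity (%)

min_avail_ghz = total_cpu_ghz * min_avail_pct / 100
max_avail_ghz = total_cpu_ghz * max_avail_pct / 100

print(f"Total CPU: {total_cpu_ghz} GHz")
print(f"Available: {min_avail_ghz:.2f} GHz to {max_avail_ghz:.2f} GHz")
```

On this hypothetical 20 GHz host, the method yields a conservative-to-optimistic range of available capacity, which is exactly what you need when deciding how many vCPUs of new workload to place.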

The client requirement is now met because we know the available capacity on the servers and how much workload can be added. Interestingly, our findings also matched the output of CapacityIQ, which we were able to run only two months later.

Please drop your comments if you have anything to add about this approach.

VMware VASA & NetApp - Configuration & Best Practices


In my previous blog post, NetApp VASA provider for VMware – Setup Walkthrough, I shared instructions to set up the NetApp VASA Provider for VMware using the vSphere C# & Web Client. In this blog post I will share information which will help you in configuring the NetApp VASA Provider with VMware.
Once you have successfully set up the NetApp VASA Provider for VMware, the storage information is displayed in the Storage Providers section of the vSphere Web Client. In this example the VASA Provider has detected a NetApp FAS 3170 with ONTAP version 8.1.1. This confirms that the setup is successful.
Let’s understand the importance of using VASA providers before we start with the configuration steps. With vStorage APIs for Storage Awareness, the various storage capabilities of the storage array can now be detected by the vCenter Server. These can be further used by Profile-Driven Storage (introduced in vSphere 5.0), which enables you to provision VMs based on defined SLAs and other technical & business requirements. In short, by using VASA you ensure that VMs are provisioned, deployed from template, or Storage vMotioned only to specific datastores, resulting in reduced management costs for the vSphere environment.
This is achieved by creating various storage profiles and assigning VMs to those storage profiles. The Storage Profiles are created using either System Defined or User Defined capabilities. With NetApp VASA the following system defined storage capabilities are reported in vCenter.
Note that the VMFS/NFS datastores are automatically assigned to these System Defined storage capabilities while creating Storage Profiles. For example, only LUNs/volumes that are being replicated using SnapMirror are reported under VMFS:Replication & NFS:Replication.
Alternatively you can also create User Defined Capabilities and assign them to the datastores which can be further used while creating Storage Profiles.
Create VM Storage Profiles using vSphere Web Client
If you are creating Storage Profiles using User Defined capabilities, you have to create tags and tag categories. In this example I have created a tag category "User_Stor" and assigned a tag "VMFS_Thin". To create a new tag, follow the New Tag wizard available in the vSphere Web Client Home.
Once the tag is created, you have to assign it to one or more datastores.
We can now create storage profiles using either System or User Defined capabilities. To create new Storage Profiles, navigate to vSphere Web Client Home > VM Storage Policies.
You have to enable Storage Profiles either at the cluster level (which will propagate the changes to the ESXi hosts) or on a single ESXi host.
Launch the Create New VM Storage Policy wizard.
Select one of the System Defined rule sets; in this case I chose VMFS:Performance.
Alternatively, you may also click "Add tag-based rule" to assign User Defined capabilities.
Select the datastores that you want to tag to this profile.
Ready to complete

Assign VM Storage Profiles to VMs
To assign the storage profile to the VM, select the VM and navigate to Manage > VM Storage Policies > Manage VM Storage Profiles.
You will find that the VM is not compliant because this is an existing VM which resides on a datastore that is not assigned to the VM Storage Policy.
To fix this, Storage vMotion the VM to a datastore that is compliant with the VM Storage Policy assigned to the VM.

Consider the following when using VM Storage Profiles
  • Assigning VM Storage Profiles to existing VMs does not relocate the VMs to the compatible datastores; you have to Storage vMotion the existing VMs to the compatible datastores.
  • Assign VM Storage Profiles to VM templates, so that new VMs deployed from the template can be provisioned on a compatible datastore.
  • Create multiple rule sets in VM Storage Profiles for VMs with multiple requirements; for example, you may want a datastore that is backed by high-performance SSD drives and is also replicated using SAN replication.
  • When you create Datastore Clusters, choose compatible datastores with the same capabilities so that the datastore cluster is marked compatible.