Setting NTP on ESXi Hosts with PowerCLI

Setting NTP on ESXi hosts is a quick procedure using PowerCLI. Don’t forget to edit the script for your environment. Once you’re connected to vCenter, you can run this script:
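A minimal sketch of what such a script might look like, assuming an existing Connect-VIServer session; the NTP server addresses are placeholders for your environment:

```powershell
# Assumes an existing Connect-VIServer session; the NTP addresses are placeholders.
$ntpServers = "0.pool.ntp.org", "1.pool.ntp.org"

foreach ($vmHost in Get-VMHost) {
    # Replace any existing NTP servers with the desired list
    Get-VMHostNtpServer -VMHost $vmHost |
        ForEach-Object { Remove-VMHostNtpServer -VMHost $vmHost -NtpServer $_ -Confirm:$false }
    Add-VMHostNtpServer -VMHost $vmHost -NtpServer $ntpServers | Out-Null

    # Start the ntpd service and set it to start and stop with the host
    $ntpService = Get-VMHostService -VMHost $vmHost | Where-Object { $_.Key -eq "ntpd" }
    Start-VMHostService -HostService $ntpService -Confirm:$false | Out-Null
    Set-VMHostService -HostService $ntpService -Policy On | Out-Null
}
```

Drop the removal loop if you want to append to, rather than replace, the existing server list.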



Comments are always welcomed. Let me know if this has been helpful.


Beyond SQL on VMware Best Practices

Going beyond SQL on VMware best practices has involved settings and configurations that are never mentioned in the SQL Server on VMware Best Practices Guide. Working in a healthcare environment means working with a large EMR (Electronic Medical Record) provider like EPIC. While tuning one of their many large SQL databases, they had some recommendations. Some of these are throwbacks to my old days of tweaking performance out of a desktop with settings like the TCP parameter KeepAliveTime. How many of you go beyond the recommendations in the SQL Best Practices Guide?

Knowledge shared becomes wisdom attained (Photo by Jaredd Craig on Unsplash)

I’ve listed here some of the settings that have been suggested for our larger, more heavily accessed, and just plain busy SQL servers:

ESXi Settings

Adjust the Round Robin IOPS limit from the default of 1000 to 1 on each database LUN. Refer to VMware KB 2069356 for more information on setting this parameter. (We already utilize Round Robin, but each LUN was set to the default.)

Why would you want to make this change?
“The default of 1000 input/output operations per second (IOPS) sends 1000 I/O down each path before switching. If the load is such that a portion of the 1000 IOPS can saturate the bandwidth of the path, the remaining I/O must wait even if the storage array could service the requests. The IOPS or bytes limit can be adjusted downward allowing the path to be switched at a more frequent rate. The adjustment allows the bandwidth of additional paths to be used while the other path is currently saturated.”

How to make this change:

In ESXi 5.x/6.x:
for i in $(esxcfg-scsidevs -c | awk '{print $1}' | grep naa.xxxx); do esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$i; done

Here, naa.xxxx matches the first few characters of your naa IDs.
 
To verify if the changes are applied, run this command:

esxcli storage nmp device list
 
You see output similar to:
 
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=iops,iops=1,bytes=10485760,useANO=0;lastPathIndex=1: NumIOsPending=0,numBytesPending=0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba33:C1:T4:L0, vmhba33:C0:T4:L0
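If you prefer PowerCLI to the ESXi shell, the same change can be sketched with Set-ScsiLun; the naa.xxxx prefix below is a placeholder, as above:

```powershell
# Assumes a Connect-VIServer session; "naa.xxxx*" is a placeholder device prefix.
foreach ($vmHost in Get-VMHost) {
    Get-ScsiLun -VMHost $vmHost -LunType disk |
        Where-Object { $_.CanonicalName -like "naa.xxxx*" } |
        Set-ScsiLun -MultipathPolicy RoundRobin -CommandsToSwitchPath 1
}
```

CommandsToSwitchPath maps to the same iops=1 round robin setting that the esxcli loop applies.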

Registry Settings

Configure Windows TCP Parameters in the Registry

The default setting for the Windows TCP parameter KeepAliveTime is two hours. This setting controls how often TCP sends a keep-alive packet to verify that an idle connection is still intact. Reducing it from two hours to five minutes helps Windows detect and clean up stale network connections faster.


How to make this change:

Use regedit to create the DWORD KeepAliveTime (if it does not already exist) under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\. Then set the value to 300000 (time in milliseconds).

The default setting for the Windows TCP parameter TCPTimedWaitDelay is four minutes. This setting determines the time that must elapse before TCP/IP can release a closed connection and reuse its resources. By reducing the value of this entry, TCP/IP can release closed connections faster and provide more resources for new connections.

How to make this change:

Use regedit to create the REG_DWORD TCPTimedWaitDelay (if it does not already exist) under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\. Then set the value to 30 (time in seconds).
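Both registry edits can also be scripted with PowerShell instead of regedit; a sketch, assuming an elevated prompt (a reboot is typically required for the TCP stack to pick up the new values):

```powershell
# Both values live under the Tcpip Parameters key; run from an elevated prompt.
$params = "HKLM:\System\CurrentControlSet\Services\Tcpip\Parameters"

# KeepAliveTime: 300000 ms = 5 minutes (default is 7200000 ms = 2 hours)
New-ItemProperty -Path $params -Name KeepAliveTime -PropertyType DWord -Value 300000 -Force | Out-Null

# TcpTimedWaitDelay: 30 seconds (default is 240 seconds = 4 minutes)
New-ItemProperty -Path $params -Name TcpTimedWaitDelay -PropertyType DWord -Value 30 -Force | Out-Null
```

The -Force switch overwrites the value if it already exists, so the same script works for both the create and modify cases.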

What kind of ESXi settings, Windows registry settings, config file changes, etc. do you implement in your environment that go beyond the SQL Server on VMware Best Practices Guide? As always, I look forward to your comments and sharing of knowledge.

HPE DL560 Gen9 Will Soon Be on VMware HCG

HPE DL560 Gen9 servers will soon be on the VMware HCG. Like many of you, we plan ESXi upgrades to new versions months into the future. As we approach our planned upgrades, we hope that our HPE hardware will be on VMware’s HCG (Hardware Compatibility Guide) as approved for the latest ESXi 6.7 version. I can’t remember the last time that our hardware wasn’t on the HCG as approved.

My understanding is that most people using HPE server hardware are using the DL360 models. Typically those models get qualified faster. For philosophical reasons, we like to use the DL560 models. We currently have quite a few DL560 Gen9 models that we want to use. So far HPE has not bothered to have the DL560 Gen9 model qualified as compatible with ESXi 6.7, but they did qualify the Gen10 model. Our HPE guy has hinted that the DL560 just isn’t purchased nearly as much as the DL360s, and that contributes to the slowness of getting the hardware on the HCG.

Well, I have good news for those in the same boat as us. HPE has notified us that the qualification is being performed and will be completed around April 2nd. This may or may not come true, but I thought I’d share the information.

I am very interested to hear what you are doing in your datacenter with HPE and VMware. Are HPE DL360s more your flavor, or do you like the HPE DL560s like we do? Comments are always welcome! I look forward to hearing from all of you, especially if you have HPE DL560 Gen9 servers in your environment.

Datacenter with full racks of servers; HPE DL560 Gen9 servers are among them.

Nutanix vs VMware – Who is bullying whom?

Nutanix vs VMware – Who is bullying whom? Can’t we just all get along? Apparently not! VMware and Nutanix are at it again. Nutanix claims that VMware is bullying them by responding to Nutanix’s ‘You Decide’ marketing. When you mess with the bull you get the horns. Head on over to The Register to check out their always unique reporting. I have not used Nutanix in my environment so I can’t give a knowledgeable or unbiased response.


Come back and let me know what you think. Comments are always welcome!

VMware Recertification Is Up To You

The pain is over for all who hated the every-two-years recertification policy for VMware certs. You get to decide when you want to upgrade. What a novel concept! We’ll see how this plays out and whether it’s a good idea or not. In the meantime, check out the comments in the VMware subreddit. As always, Reddit delivers, with comments like ‘This looks like a hostage video’.

Here is the ‘hostage’ video:

via VMware Certification: Recertification Is Changing and What It Means to You – VMware Education Services

VCS to VCSA Fling Experience

We’ve been discussing using the VCS to VCSA fling converter appliance. We spent a few days installing the lab and re-installing due to not reading all the requirements for the VMware Migration Fling. I built the test vCenter server with Windows Server 2012, which has specifically been shown to be buggy and does not work. I shouldn’t have done that anyway, since our production vCenter servers are Windows 2008 R2.

After spending time re-reading all the caveats and known issues with the fling at William Lam’s virtuallyGhetto it was time to test and get comfortable with how the fling works. We scheduled some conference room time and went through the deployment of the migration appliance and just followed the instructions.

Troubleshooting

The only issue we experienced was that the converter appliance would not accept my domain credentials to access the existing vCenter server. The default administrator account on all of our Windows servers is renamed in our domain. It appeared that the converter appliance would only try the vCenter local account called Administrator. After several tries with domain\username, username, localusername, etc., we created a local account on the vCenter server called Administrator. Once that was done, the conversion continued smoothly. After looking around I didn’t see anyone else having this issue, but I thought I’d mention it in case someone else experiences it as well.

We were very happy with the result in the lab and will be scheduling the conversion of one of our production vCenter servers to the VCSA soon.

Do Work Politics and VMware Ignorance Rule Your Datacenter?

Do you ever find, as a VMware admin, that you have to defend your choices when it comes to virtual machine sizing? We’ve all been there when our customers (i.e. internal I.T. analysts) or even co-workers on your own team question why you didn’t give their VM as much CPU or memory as originally requested.
How do you deal with it? Often it is easy to just declare: I am the VMware admin and I obviously know more than you, so just accept what I am saying. Besides, you are just an ignorant newb when it comes to VMware. The other response is to elevate the conversation and educate the ignorati.
I like to think that I choose the latter but I sometimes fantasize about the former.
In that vein, I choose to highlight some basic troubleshooting methods that VMware recommends to determine whether that VM is indeed worthy of a bump in CPU or memory, or even to diagnose storage or network issues. A great knowledge base article to start with is Troubleshooting ESX/ESXi virtual machine performance issues (2001003).

Hopefully this is a good start in troubleshooting ESXi performance issues, and hopefully your political and ignorance issues are few and far between. I’d love to hear from you about your experiences!

 

Deleting Orphaned Virtual Desktops In VMware View

If you’ve ever managed a VMware View VDI environment for a period of time, sooner or later you will have to manually delete orphaned virtual desktops. VMware provides KB 1008658, which explains this procedure, but it is lacking in clarity, especially for first-time VMware View admins.

As we all know, our friend Google provides if you only ask. I have found two other blogs that do a very good job of taking KB 1008658 and parsing it down to a more concise version. My intention was to do this myself, but why reinvent the wheel when you can just pay homage to it?

Here are the two blog post links:

http://terenceluk.blogspot.com/2013/02/manually-deleting-orphaned-andor-stale.html

http://luckyblogshere.blogspot.com/2011/06/removing-vms-from-adam-database.html

 

The summarized steps for deleting an orphaned virtual desktop in VMware View are:

  1. Stop provisioning on the offending VDI pool (optional, but my experience is that it is essential, especially with very busy non-persistent pools)
  2. Remove orphaned virtual desktop from ADAM database
  3. Remove all relevant entries for the orphaned VDI in the SQL Composer database
  4. Delete corresponding computer object out of Active Directory
  5. Enable provisioning once again on the pool

Please see the other blog posts for exact details.
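For what it’s worth, step 4 (deleting the corresponding computer object out of Active Directory) can be scripted with the ActiveDirectory PowerShell module; a sketch, where the desktop name is a hypothetical placeholder:

```powershell
# Requires RSAT / the ActiveDirectory module; "VDI-DESKTOP-01" is a placeholder name.
Import-Module ActiveDirectory
Remove-ADComputer -Identity "VDI-DESKTOP-01" -Confirm:$false
```

The ADAM and Composer database steps are best done exactly as the linked posts describe, so I won’t attempt to script those here.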

Hopefully seeing more than one example really helps in understanding the necessary steps.

 

Note:  Edited 9/29/17 to remove a broken link