• Step by Step - Upgrade NSX 6.3.5 to NSX 6.4.1

    In the release notes you will find a description of all the new features: https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.4/rn/releasenotes_nsx_vsphere_641.html

    1. Download from MyVMware NSX 6.4.1 Upgrade Bundle: VMware-NSX-Manager-upgrade-bundle-6.4.1-8599035.tar.gz
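
    Before uploading the bundle to NSX Manager it can be worth verifying the download first (an optional check; the expected checksum is the one shown on the MyVMware download page):

    # compare the output against the checksum published on the download page
    md5sum VMware-NSX-Manager-upgrade-bundle-6.4.1-8599035.tar.gz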

  • vRNI expired login error

    vRNI was installed and configured, but we forgot to apply a proper license, and after the demo license expired we were locked out and unable to enter an active license into the environment.

    A workaround was found that allows you to activate a temporary admin account.

    1. SSH to the vRNI platform controller:

    ./check-service-health.sh

  • ESXi change scratch persistence to /tmp/scratch

    After I decided to run VSAN in my small home lab, I looked up some best practices, and this is what I've come up with. 

    1. The Host Syslog should not be on the VSAN Datastore 
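
    As a sketch of how the syslog part can be done from the ESXi shell (the datastore path below is an example, not from the original post):

    # point the syslog directory at a non-VSAN datastore and reload the service
    esxcli system syslog config set --logdir=/vmfs/volumes/local-datastore/logs
    esxcli system syslog reload
    # confirm the active log directory
    esxcli system syslog config get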

  • Connect GNS3 to ESXi

    The GNS3 server runs in a VM on the ESXi host. The VM's second interface, eth1, connects to the default vSwitch0. The interface tap0 is a loopback interface on the GNS3 server, which I use to connect into my GNS3 topology. Interface br0 bridges tap0 to eth1.
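
    A minimal sketch of that bridge setup on the GNS3 server VM (interface names follow the description above; assumes iproute2 is available):

    # create the tap interface used to connect into the GNS3 topology
    ip tuntap add dev tap0 mode tap
    # bridge tap0 to the VM's second NIC (eth1, attached to vSwitch0)
    ip link add name br0 type bridge
    ip link set dev eth1 master br0
    ip link set dev tap0 master br0
    ip link set dev tap0 up
    ip link set dev eth1 up
    ip link set dev br0 up

    Note that for bridged traffic to actually pass, the vSwitch0 port group typically needs Promiscuous Mode and Forged Transmits set to Accept.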

  • VMware PowerCLI start notes

    VMware PowerCLI 10.0.0 is the new release of PowerCLI and the first to become multi-platform: it adds support for Mac OS and Linux!

    This is thanks to Microsoft PowerShell Core 6.0, the open-source version of PowerShell that is available on a number of operating systems, from Windows to Linux to Mac OS.

  • vCenter - current connection state of the host

    I just had an issue with vCenter 6.0 and ESXi 6.0 where I was unable to deploy an OVF or power on a VM.

    The error I was faced with was a relatively generic message based on the host connection status.

    "The operation is not allowed in the current connection state of the host"

  • vSphere: ESXi behind NAT (vCenter connection)

    Issue:

    I have recently had the requirement to put a NAT router performing NAT overload between an ESXi server and its respective vCenter server. According to VMware KB1010652 this is an unsupported configuration! Moreover, our configuration was the complete opposite, and we were not able to replicate the solution provided by VMware.

    [Image: VMNAT04]

    In our setup the vCenter server was on the "WAN" side of the router and ESXi on the "LAN" side, which you would expect not to be a problem considering you add the ESXi (1:1 NATted) IP address inside vCenter.

    On first connection this initially worked, as I'd expected it to. However, problems began after approximately one minute: the host simply dropped offline. I could still ping it fine and communicate with it using the standalone vSphere Client. I could even reconnect it in vCenter, however it would only last another minute or so before it dropped. The issue is with the heartbeat between vCenter and ESXi.

    [Image: VMNAT02]

    In the system logs on the ESXi server we can find:

    2016-10-17T01:37:04.058Z warning vpxa[70F3AB70] [Originator@6876 sub=Heartbeat opID=SWI-56f32f43] Failed to bind heartbeat socket for host address 20.16.12.50: Cannot assign requested address.
    2016-10-17T01:37:04.058Z verbose vpxa[70F3AB70] [Originator@6876 sub=Heartbeat opID=SWI-56f32f43] Waiting for 32 seconds for management interface to come up...
    2016-10-17T01:37:36.060Z warning vpxa[70F3AB70] [Originator@6876 sub=Heartbeat opID=SWI-56f32f43] Failed to bind heartbeat socket for host address 20.16.12.50: Cannot assign requested address.
    2016-10-17T01:37:36.060Z verbose vpxa[70F3AB70] [Originator@6876 sub=Heartbeat opID=SWI-56f32f43] Waiting for 64 seconds for management interface to come up...

    ESXi is not able to handle the heartbeat correctly: vpxa tries to bind to 20.16.12.50, which is not configured on the host (only 10.1.1.50 is).

    Solution:

    Add a management loopback VMkernel interface with the NAT IP address 20.16.12.50.

    [Image: VMNAT03]

    The host should now be online within vCenter and should stay online!
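
    The same change can be made from the ESXi shell; the vSwitch, portgroup and vmk names below are examples only, and (as the feedback below points out) the new interface must be tagged for Management:

    # create a portgroup to hang the loopback VMkernel interface off
    esxcli network vswitch standard portgroup add -p "NAT-Loopback" -v vSwitch0
    # add the VMkernel interface and give it the NAT address with a /32 mask
    esxcli network ip interface add -i vmk1 -p "NAT-Loopback"
    esxcli network ip interface ipv4 set -i vmk1 -I 20.16.12.50 -N 255.255.255.255 -t static
    # enable the Management service on the new interface
    esxcli network ip interface tag add -i vmk1 -t Management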

    ------

    Big thanks to KTDANN from Korea for his FEEDBACK:

    Hi,

    I was googling for my problem that matches your article.
     
    You are spot on. Worked like a charm. Except you’ve missed one critical piece of information.
    When adding the new VMkernel adapter, I had to select “Management” from the checkboxes you are presented with.

    [Image: feedback01]

    I took a screen capture to show you which one I’m referring to. BTW, this is the Korean version of vCenter.

    Without selecting this, it won’t work.

    Hopefully you’d update the article to help others suffering from same problem.

    Cheers,

     

  • Trunk (VLAN Tagging) to VMs on ESXi

    Sometimes you need to pass a trunk (multiple tagged VLANs) through to VMs in an ESXi environment, for example nested ESXi with NSX.

    To set a standard vSwitch portgroup to trunk mode:

    1. Edit host networking via the Virtual Infrastructure Client.
    2. Navigate to Host > Configuration > Networking > vSwitch > Properties.
    3. Click Ports > Portgroup > Edit.
    4. Click the General tab.
    5. Set the VLAN ID to 4095. A VLAN ID of 4095 represents all trunked VLANs (a CLI equivalent is sketched after this list).
    6. Click OK.
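
    The equivalent from the ESXi shell, assuming a portgroup called "Trunk-PG" (the name is an example only):

    # set the standard vSwitch portgroup to VLAN 4095 (all trunked VLANs)
    esxcli network vswitch standard portgroup set -p "Trunk-PG" --vlan-id 4095
    # verify the portgroup VLAN assignment
    esxcli network vswitch standard portgroup list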

    To set a distributed vSwitch portgroup to trunk mode:

    1. Edit host networking via the Virtual Infrastructure Client.
    2. Navigate to Home > Inventory > Networking.
    3. Right-click on the dvPortGroup and select Edit Settings.
    4. Within that dvPortGroup, go to Policies > VLAN.
    5. Set the VLAN type to VLAN Trunking and specify a range of VLANs, or specify a list of VLANs to be passed to the virtual machines connected to this portgroup.
      Note: To improve security, vSphere Distributed Switches allow you to specify a range or selection of VLANs to trunk rather than allowing all VLANs via VLAN 4095.
  • VMware ping duplicates issue (DUP!)

    I am getting a lot of duplicate packets when I ping from host to host, or between guests on different hosts:

    64 bytes from 10.0.58.192: icmp_seq=169 ttl=64 time=0.930 ms
    64 bytes from 10.0.58.192: icmp_seq=169 ttl=64 time=1.073 ms (DUP!)
    64 bytes from 10.0.58.192: icmp_seq=170 ttl=64 time=1.277 ms
    64 bytes from 10.0.58.192: icmp_seq=170 ttl=64 time=1.337 ms (DUP!)
    64 bytes from 10.0.58.192: icmp_seq=171 ttl=64 time=1.170 ms
    64 bytes from 10.0.58.192: icmp_seq=171 ttl=64 time=5.017 ms (DUP!)
    64 bytes from 10.0.58.192: icmp_seq=172 ttl=64 time=3.843 ms
    64 bytes from 10.0.58.192: icmp_seq=172 ttl=64 time=3.920 ms (DUP!)

    That happens:
    - if you ping a broadcast address, because you send one ping and get many answers back.
    - if two or more systems have the same IP and both send an answer back.
    - if your NIC is in promiscuous mode and you have a not-so-smart switch (or hub) which also sends the ping back to the pinging system.
    - if you bridge two vNICs/pNICs.

    Workaround:

    This happens if you connect two teamed pNICs to two different switches that are not smart enough.

    Change the pNIC teaming on the vSS to: Route based on IP hash or Explicit failover order.

    If you choose IP hash, the uplinks must go to the same switch and you must not forget to activate a LAG (EtherChannel) on that switch; otherwise use Explicit failover order.

    Route based on source MAC hash or Route based on originating virtual port ID will not work in this situation.
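
    A sketch of checking and changing the policy from the ESXi shell (vSwitch and uplink names are examples; iphash assumes the LAG mentioned above exists on the physical switch):

    # show the current load-balancing / failover policy
    esxcli network vswitch standard policy failover get -v vSwitch0
    # switch to IP-hash load balancing (requires a LAG/EtherChannel on the switch)
    esxcli network vswitch standard policy failover set -v vSwitch0 --load-balancing=iphash
    # or use explicit failover order with one active and one standby uplink
    esxcli network vswitch standard policy failover set -v vSwitch0 --load-balancing=explicit --active-uplinks=vmnic0 --standby-uplinks=vmnic1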

     [Image: DUP01]

     

  • Install VIB on VMware ESXi manually

    If you experience issues with Host Preparation in your NSX environment, it could be related to SSL certificates and difficulties downloading:

    https://NSXMGR/bin/vdn/vibs-6.2.5/6.5-4463934/vxlan.zip

    https://NSXMGR/bin/vdn/vibs-6.3.1/5.5-5114250/vxlan.zip

    https://NSXMGR/bin/vdn/vibs-6.3.1/6.5-5124743/vxlan.zip

    Solution:

    1. Download vxlan.zip to your PC.

    2. Copy it to the ESXi host: pscp vxlan.zip root@ESXIIP:/tmp

    3. On the ESXi host: esxcli software vib install -d /path/to/vxlan.zip

    NOTE: if this error appears: "VIB VMware_bootbank_esx-vxlan_6.5.0-0.0.4463934 requires nsx-api <= 1, but the requirement cannot be satisfied within the ImageProfile."

    then force the installation: esxcli software vib install -d /path/to/vxlan.zip --force

    4. Check that the VIBs are installed:
    vmware -vl; esxcfg-vmknic -l | grep vxlan ; esxcli software vib list | grep esx-v

    5. Go back to NSX Manager and repair Host Preparation.

  • vSphere: vCenter behind NAT

    Issue:

    I have recently had the requirement to put a NAT router performing NAT overload between an ESXi server and its respective vCenter server. According to VMware KB1010652 this is an unsupported configuration!

    In my setup the vCenter server was on the "LAN" side of the router and ESXi on the "WAN" side, which you would expect not to be a problem considering you add the ESXi IP address inside vCenter.

    On first connection this initially worked, as I'd expected it to. However, problems began after approximately one minute: the host simply dropped offline. I could still ping it fine and communicate with it using the standalone vSphere Client. I could even reconnect it in vCenter, however it would only last another minute or so before it dropped. The issue is with the heartbeat between vCenter and ESXi.

     

    [Image: VMNAT01]

    Solution:

    Within ESXi modify this file: /etc/vmware/vpxa/vpxa.cfg

    Modify the <serverIp>10.0.0.1</serverIp> directive to contain the WAN (outside) NAT address of the NAT router instead of the vCenter server IP.

    Also add the following line: <preserveServerIp>true</preserveServerIp> otherwise the IP you just entered will be overwritten.

    Restart the management agents on the host with services.sh restart
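
    A minimal sketch of those steps from the ESXi shell (the grep is only a sanity check I would add, not part of the original procedure):

    vi /etc/vmware/vpxa/vpxa.cfg
    # confirm both directives are present before restarting
    grep -E "serverIp|preserveServerIp" /etc/vmware/vpxa/vpxa.cfg
    # restart just the vpxa agent, or use services.sh restart for all agents
    /etc/init.d/vpxa restart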

    The host should now be online within vCenter and should stay online!

     

  • ESXi password complexity

    Configure the pam_passwdqc.so plug-in to determine the basic standards all passwords must meet.

    1. Log in to the ESXi Shell and acquire root privileges.

    2. Open the passwd file with a text editor: vi /etc/pam.d/passwd

    3. Edit the following line: 

    password requisite /lib/security/$ISA/pam_passwdqc.so retry=3 min=8,7,6,5,4

    This action will decrease system password requirements.
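
    For reference, the five comma-separated "min" values are per password category. This annotated copy of the same line is only meant to illustrate what the fields do:

    # retry=3 -> the user gets 3 attempts to supply an acceptable new password
    # min=8,7,6,5,4 -> minimum lengths, in order, for passwords made of:
    #   one character class (8), two classes (7), passphrases (6),
    #   three classes (5), four classes (4); "disabled" bans a category outright
    password requisite /lib/security/$ISA/pam_passwdqc.so retry=3 min=8,7,6,5,4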

  • NSX: Uninstalling stuck in progress

    Issue explanation:

    I went to prepare a cluster but the hosts never fully installed.
    I figured I'd uninstall and give it another try.
    Now all the hosts in the cluster report "In Progress" with the cluster status "Uninstalling".
    I've rebooted all hosts multiple times and checked that the VIBs have been removed.

    esxcli software vib list

    The following NSX VIBs should no longer appear in the output:
    esx-vxlan
    esx-vsip
    esx-dvfilter-switch-security

    The manager and controllers have been rebooted as well.

     

    Solution:

    The vibs had already been uninstalled but the task was still hung.
    I was able to work around this issue by disconnecting each host and adding back to a different cluster.
    I deleted the cluster that was stuck and was then able to prepare the hosts in a new cluster.
    The only thing I lost was the VM folder structure.
    The hosts installed without issue and have now been added into my transport zone.

    Note:

    1. Do not forget to have a DNS server with records for the components below (a quick lookup check is sketched after these notes):

    • vCenter
    • the ESXi hosts
    • NSX Manager
    • the Controllers

    2. A Domain Controller is not compulsory, but it is recommended.
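
    A quick way to sanity-check those records from a management workstation or the ESXi shell (the hostname and IP below are purely hypothetical examples):

    # forward lookup for each component...
    nslookup nsxmanager.lab.local
    # ...and a reverse lookup for the same address
    nslookup 192.168.1.60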

  • VMware error adding datastore previously deleted

    Over the last weekend I finally got some time to fix my broken RAID5 in my home lab. After the disk was created, I was unable to add any of the local disks to my ESXi host as VMFS datastores, as I got the error “HostDatastoreSystem.QueryVmfsDatastoreCreateOptions” for object ‘ha-datastoresystem’ on ESXi….”.

    I’d used this host and the same disks previously, so I knew hardware incompatibility wasn’t an issue.

    I guess the issue is with pre-existing or incompatible information on the hard disks. There are various situations which might lead to pre-existing info on the disk.

    You need to run the following command for each disk that you’re having issues with (this overwrites the partition table with a standard msdos one which VMware can work with):

    NOTE: This will ERASE ALL DATA on the disk in question so be careful to select the right disks!

    # partedUtil mklabel /dev/disks/<disk id> msdos

    To get a list of your disks:

    # ls /dev/disks/
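
    Before relabelling anything, it may also be worth inspecting the existing partition table so you are certain you have the right device (read-only; <disk id> is the same placeholder as above):

    # partedUtil getptbl /dev/disks/<disk id>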
