ESXi Host Lifecycle
This walkthrough provides a step-by-step overview of how to manage an ESXi host using the Direct Console User
Interface (DCUI). Use the arrow keys to navigate through the screens.
Connect to the host console. Select [Customize System/View Logs] to customize the system.
Log in using the root user ID and the password that were set during the installation. Select [OK] to access the System
Customization screen.
From the list of customization options, select [Configure Password] to modify the password. Select [Change] to configure
the password.
Update the password. Select [OK] to save and go back to the home screen.
On the System Customization screen, select [Configure Lockdown Mode]. Use the Spacebar to Enable/Disable lockdown
mode and select [OK] to save.
From the System Customization screen, select [Configure Management Network]. Go to [Network Adapters] and select
[Change] to update network adapters.
Assign multiple adapters to provide redundancy. Select [OK] to save and go back.
Choose VLAN from the list and select [Change] to set the VLAN ID.
Setting the VLAN ID is optional. Once complete, select [OK] to save and continue.
Select [IP Configuration] from the Configure Management Network list. Set up the IP, the subnet mask and the default
gateway. Select [OK] to save.
Select [IPv6 Configuration] from the Configure Management Network list to reach this screen. Enable/disable IPv6. Any
changes here will restart the host without a warning. Select [OK] to save.
Select [DNS Configuration] from the list. Specify the IP addresses of the primary and alternate DNS servers, and the
hostname of the vSphere host. Select [OK] to save and proceed.
Select [Custom DNS Suffixes] to configure additional DNS suffixes. Specify the desired DNS suffix and select [OK]. Go back
to the main screen using the [Esc] key.
Choose [Restart the Management Network] from the list and select [Restart] to confirm.
Select [OK] to close the Restart Management Network window and proceed.
Choose [Test Management Network] from the list. It pings the local gateway along with the IP addresses of the DNS servers.
Select [OK] to continue.
It then automatically resolves the hostname. Select [OK] to close this window and go back to the home screen
to view options to restore the network.
Choose [Network Restore Options]. It helps restore connectivity when a host gets disconnected from the network. Select
[Change] to customize settings and [Exit] to go back.
Select [Configure Keyboard]. Choose the desired layout and select [OK].
Select [Troubleshooting Options] from the list. In this screen, you can enable/disable the ESXi Shell and the SSH service on
the host.
Select [Modify ESXi Shell and SSH Timeouts]. Set timeouts to ensure that the services are not left on indefinitely, and to
automatically terminate unattended shell sessions. Select [OK] to continue.
Select [Restart Management Agents]. Select [OK] to confirm or [Cancel] to abort and go back to the home screen.
On the System Customization screen, select [Host Logs] to view log details on the right window pane.
Select [View Support Information] from the list. This option presents details about the host serial number, license key, SSL
and SSH keys on the right window pane.
Select [Reset System Configuration] from the list to reset all the changes made. Select [Log Out] to go back.
This concludes the walkthrough of managing the ESXi host using the Direct Console User Interface (DCUI). Select the next
walkthrough of your choice using the navigation panel.
This walkthrough provides a step-by-step overview on how to install ESXi on a vSphere host. Use the arrow keys to
navigate through the screens.
Begin by downloading the VMware ESXi installation media and inserting/mounting the ISO image into the server's CD-
ROM/DVD drive. Configure the host's BIOS to boot from CD-ROM/DVD and boot the host. Following the boot, the ESXi
Installer will automatically load.
Begin the installation by pressing [Enter] at the ESXi Installer welcome screen.
Press [F11] to accept the End User License Agreement. The installer will proceed to scan for available hard disks.
Use the arrow keys to highlight the boot disk where you will install ESXi and press [Enter].
If an existing ESXi image is found on the disk, the installer asks whether you want to upgrade or perform a fresh install. Use
the arrow keys to choose the type of install, press the [spacebar] to select, and press [Enter] to continue.
Use the arrow keys to highlight the desired keyboard layout and press [Enter] to continue.
Press [F11] to confirm the install. The hard disk will be partitioned and ESXi will be installed on the host.
After the installation completes, eject/unmount the ESXi ISO from the server's CD-ROM/DVD drive and press [Enter] to
reboot.
Following the reboot, configure the host's management network. From the console, press [F2] to customize the system and
log in as "root" with the password you set during the install.
Use the arrow keys to highlight "Configure Management Network" and press [Enter].
Use the arrow keys to highlight "Network Adapters" and press [Enter].
Use the arrow keys to highlight the network adapters that will be used for the management network and press the
[spacebar] to select each adapter. After all the adapters have been selected press [Enter] to continue.
If you use VLAN tags, set the VLAN ID for the management network. Use the arrow keys to select "VLAN (optional)", set the
VLAN ID for the management network, and press [Enter]. If a VLAN ID is not required, skip this step.
Next, select "IP Configuration". Specify whether the host will use a dynamic or static IP address. If using a static IP address,
provide a unique IP address for the host along with the appropriate subnet mask and default gateway. Note that static IP
addresses are recommended for vSphere hosts. Press [Enter] to continue.
Next, select "DNS Configuration". Specify whether the host will use dynamic or static DNS settings. If using static DNS
settings, provide the IP addresses of the primary and alternate DNS servers along with the host's hostname. Press [Enter] to
continue.
Next, select "Custom DNS Suffixes". Enter the DNS suffix for the vSphere host and press [Enter] to continue.
Press [Esc] to exit the "Configure Management Network" menu. When prompted to apply the changes press [Y].
Next, verify the network settings. Select "Test Management Network" and press [Enter].
Press [Enter] to begin the test. The server will verify it has network connectivity by pinging its default gateway, the primary
and alternate DNS servers, and by resolving the hostname.
Verify that all the tests complete with a status of "OK". Press [Enter] to close the window and press [Esc] to exit the System
Customization menu.
This concludes the walkthrough on installing and configuring VMware ESXi on a vSphere host. Continue to the next
walkthrough in the series to see how to add the vSphere host to vCenter Server.
This walkthrough is designed to show how to upgrade a vSphere host using the "esxcli" command from within the ESXi
Shell. Use the arrow keys to navigate through the screens.
We begin by accessing the vSphere host's console. Here we see the host is currently running ESXi 5.0. We will upgrade this
host to version 5.5 using the “esxcli” command.
At the host's console, press [F2] to log in as a fully privileged administrative user. In this example, we are logging in as "root".
Select [Enable ESXi Shell] and press [Enter] to enable the shell. We need to enable the ESXi Shell before we can log on to it
and perform the upgrade.
Next, select [Enable SSH] and press [Enter] to enable the SSH service. In this example, we will use SSH to copy the ESXi 5.5
software depot onto the host prior to the upgrade.
Before we can upgrade the host we need to copy the ESXi 5.5 upgrade image onto a datastore that is accessible from the
host. Here we have saved a copy of the ESXi 5.5 software depot onto a Linux desktop. We then used the secure copy
command (scp) to copy it to the local datastore [local-ds-01] on our ESXi host.
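The copy step described above might look like the following; the depot filename and host address are illustrative, not taken from the original capture:

```
# Copy the ESXi 5.5 offline software depot from the Linux desktop to the
# host's local datastore (filename and hostname are examples)
scp VMware-ESXi-5.5.0-depot.zip root@esxi-host:/vmfs/volumes/local-ds-01/
```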
After copying the ESXi 5.5 software depot onto the host, we return to the host console and press Alt-F1 to access the ESXi
Shell.
Log in to the ESXi Shell as an administrative user; in this example, we have logged in as "root".
Next, we run the “esxcli software sources profile list” command and pass in the location of the 5.5 software depot. This
command provides a list of the available 5.5 image profiles. Here we see that there are two image profiles: a “standard”
profile and a “no-tools” profile. We will use the “Standard” profile.
Next, we perform the upgrade by running the "esxcli software profile update" command and pass in the location of the
offline depot along with the name of the image profile that we want to use. As this command generates a lot of output,
we redirect the output into the file /tmp/output.txt to make it easier to review following the upgrade.
Following the upgrade we run the command "more /tmp/output.txt" to review the results.
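Assuming the depot was copied to [local-ds-01] as shown earlier, the two esxcli invocations would be along these lines; the depot filename and profile name are illustrative:

```
# List the image profiles available in the offline depot
esxcli software sources profile list \
  -d /vmfs/volumes/local-ds-01/VMware-ESXi-5.5.0-depot.zip

# Upgrade using the "standard" profile, capturing the output for review
esxcli software profile update \
  -d /vmfs/volumes/local-ds-01/VMware-ESXi-5.5.0-depot.zip \
  -p ESXi-5.5.0-standard > /tmp/output.txt
```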
We see at the beginning of the output.txt file that the upgrade completed successfully and that a system reboot is needed
for the upgrade to take effect. The remaining information captured in the output.txt file is a summary of the VIBs that were
installed as part of the upgrade. Reboot the host to complete the upgrade.
Following the reboot we see that the host is now running ESXi 5.5. This concludes the walkthrough on upgrading a vSphere
host using the "esxcli" command from within the ESXi Shell. Use the navigation menu on the left to select the next
walkthrough.
2. Host Profiles
VMware vSphere Host Profiles offers configuration management and compliance checking for clusters
of VMware ESXi hosts.
Host Profiles is an advanced capability of VMware vSphere that provides for configuration and
compliance checking of multiple VMware ESXi hosts. Although a profile can be attached directly to a
single host in vCenter Server, typically, a profile is attached to a vSphere cluster, where all the hosts
have the same hardware, storage, and networking configurations. The latest release of vSphere
includes several enhancements to Host Profiles. This article discusses two different sources of
configuration settings for a host.
While Host Profiles focuses on configuring identical settings across multiple hosts, certain items must
be unique for each host. These unique items are known as customizations; in the past, they were known as
answer files.
Administrators initially configure a reference host to meet business requirements and then extract the
entire configuration into a new profile which can be subsequently edited or updated as requirements
change. These settings are applied to other hosts in the cluster through the process of remediation,
and hosts that are not able to meet all the profile requirements are flagged as non-compliant.
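As a sketch, this extract-attach-remediate flow can also be driven from PowerCLI; the profile, host, and cluster names below are examples, and the cmdlet usage assumes a reference host and cluster already exist in vCenter Server:

```powershell
# Extract a profile from a configured reference host (names are examples)
New-VMHostProfile -Name "Cluster01-Profile" -ReferenceHost (Get-VMHost "esxi-ref-01")

# Attach the profile to the cluster, then check host compliance
$hp = Get-VMHostProfile -Name "Cluster01-Profile"
Invoke-VMHostProfile -Entity (Get-Cluster "Cluster01") -Profile $hp -AssociateOnly
Test-VMHostProfileCompliance -VMHost (Get-Cluster "Cluster01" | Get-VMHost)
```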
Examples of settings on a host that require customization include per-host items such as VMkernel IP addresses and the hostname.
When these customizations are missing, the profile will not be compliant – for many reasons. For
example, shared datastores cannot be mounted if the appropriate VMkernel IP address is not
configured.
Once the host customizations have been provided and stored on vCenter Server, the associated profile
can be remediated to become compliant.
And finally, be aware that these host customizations apply to both stateful hosts using traditional on-
disk installation, as well as stateless hosts that are booted from the network with Auto Deploy.
Takeaways
• Host Profiles is a feature of vSphere designed to apply identical configuration to multiple
VMware ESXi hosts
• Settings that are unique for individual hosts are provided through customizations
• vSphere Administrators enter or update customizations through graphical clients or via CSV file
3. Auto Deploy
VMware vSphere Auto Deploy uses industry-standard PXE technologies to boot VMware ESXi hosts
directly from the network instead of local storage devices.
Because the Image Builder and Auto Deploy features are tightly coupled, the UI is only visible when
both of these services are running. To enable the GUI, navigate to Administration > System
Configuration > Services in the vSphere Web Client. Start both services, and set them to start
automatically, if desired. Then log out and back in to the Web Client to verify the Auto Deploy object is
available.
Alternatively, these services can be enabled via command line. Simply SSH into the VCSA and run the
following commands:
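The commands themselves were not captured here; on the VCSA they would be along these lines, assuming the vSphere 6.5 service names:

```
# Start the Auto Deploy and Image Builder services on the VCSA
service-control --start vmware-rbd-watchdog
service-control --start vmware-imagebuilder

# Confirm both services are running
service-control --status vmware-rbd-watchdog vmware-imagebuilder
```

Setting the services to start automatically can then be done from the vSphere Web Client, as described above.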
Regardless of whether Auto Deploy is in use in an environment or not, the Image Builder GUI is a
convenient alternative to the PowerCLI cmdlets previously required for creating custom VMware ESXi
images. Administrators can upload zip depots of images and drivers, as well as create online depots
that connect to VMware or OEM partner image repositories.
In addition to being available to Auto Deploy for deploy rule creation, the UI also allows administrators
to customize, compare, or export images to ISO or zip format for a variety of uses. The vSphere 6.5
product documentation describes the functionality in more detail.
Even though the PowerCLI Image Builder is still available, this new Image Builder GUI helps those
customers that prefer a more guided approach for these tasks.
The latest release of VMware vSphere contains improvements to Auto Deploy, including a new
graphical user interface, a new deployment workflow, and various manageability and operational
enhancements. One such enhancement is a dramatically simplified caching capability.
There are several reasons why you might consider adding reverse proxy caching to your Auto Deploy
infrastructure. First, this design will reduce the load on the vCenter Server Appliance and Auto Deploy
service, freeing up resources for other processes. Second, the boot time of individual stateless
VMware ESXi hosts is modestly improved, saving about 30 seconds in a typical setup, possibly more
in a heavily loaded environment. Finally, you can potentially boot far more stateless hosts concurrently
without overwhelming the VCSA.
Resiliency is a natural priority when changing critical infrastructure components. I’m glad to report
that the new reverse proxy design does not create a single point of failure, since you can deploy
multiple proxy servers that are tried in a round-robin sequence with no load balancers. Furthermore, if
all proxies happen to become unavailable, the stateless clients fail gracefully back to the default
behavior of directly accessing the Auto Deploy server. This is a welcome improvement over previous
releases. Just keep in mind that the caches are only for performance optimization, and not for
redundancy of the overall stateless infrastructure – the Auto Deploy server is still in charge and must
be online for successful host boot operations.
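The run command itself was not captured here; a sketch of it, assuming an Nginx-based container image like the one shown in the docker ps output further down and an environment variable for the upstream Auto Deploy server (the variable name is an assumption), might be:

```
# Launch an Nginx reverse proxy container that caches Auto Deploy files.
# The image name matches the example output below; the environment
# variable name is an assumption of this sketch.
docker run -d -p 5100:80 \
  -e AUTO_DEPLOY_SERVER=10.197.34.22 \
  egray/auto_deploy_nginx
```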
In the above example, the proxy will listen on port 5100 and fetch any requested ESXi image files from
your existing Auto Deploy server located at 10.197.34.22. Run this container on each VM that will act
as a proxy, and make note of their IP addresses for the next part.
Connectivity Test
Before you configure Auto Deploy to use these new caches, it’s a good idea to verify connectivity. One
way to do this is to watch the Nginx log file while manually requesting a file from the cache.
To watch the Nginx log, get the id of the container and use the docker logs –f command:
root@photon-a9f9d2d38769 [ ~ ]# docker ps
CONTAINER ID   IMAGE                     COMMAND                  CREATED         STATUS         PORTS                           NAMES
c73960b6cd13   egray/auto_deploy_nginx   "/bin/sh -c 'envsubst"   5 seconds ago   Up 4 seconds   443/tcp, 0.0.0.0:5100->80/tcp   determined_booth
root@photon-a9f9d2d38769 [ ~ ]# docker logs -f c739
Then, request the Auto Deploy tramp file from another system, like so:
$ curl http://10.197.34.172:5100/vmw/rbd/tramp
!gpxe
set max-retries 6
set retry-delay 20
post /vmw/rbd/host-register?bootmac=${mac}
Confirm that the proxy responds immediately with the above output. If it does not, go back and
double-check addresses, ports, and other potential connectivity problems. Also, observe the log file
that is being tailed for a corresponding hit.
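The registration step itself was not captured here; with the PowerCLI DeployAutomation cmdlets it would be along these lines (the proxy addresses are examples):

```powershell
# Register each caching proxy with Auto Deploy (addresses are examples)
Add-ProxyServer -Address "http://10.197.34.172:5100"
Add-ProxyServer -Address "http://10.197.34.173:5100"

# Review the current proxy rotation
Get-ProxyServer
```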
Check the configuration by running Get-ProxyServer, and if necessary, remove a proxy from rotation
with the Remove-ProxyServer cmdlet. At this point, any stateless hosts that boot will use the cache.
You can verify the configuration by accessing the Auto Deploy diagnostics web interface:
https://vcsa:6501/vmw/rbd/host/
Click on any listed host, then on the diagnostics page that appears, click Get iPXE Configuration.
Check the resulting configuration for the multiple-uris directive and lines beginning with uris -a that
point to your proxy caches.
Action!
Boot or reboot stateless hosts, and they will access the proxy caches. You can monitor requests
coming to the Auto Deploy server and to the caches to verify the changes have taken effect. Note that
the first time a host boots, the proxy will need to fetch all the files from Auto Deploy to cache them.
After that, everything but a small set of non-cacheable files will be served from the caches.
The caches are easy to monitor through the docker logs command, as described above. It’s also pretty
simple to watch key activity on the Auto Deploy (VCSA) system. Try the following command with and
without the caches enabled if you want to get a feel for the boot time reduction in your environment:
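The command itself was not captured here; one possibility, assuming the default Auto Deploy log location on the VCSA (the path may vary by release), is to tail the Auto Deploy log while hosts boot:

```
# Watch Auto Deploy request activity on the VCSA
# (log path is an assumption; it may vary by release)
tail -f /var/log/vmware/rbd/rbd-cdp.log
```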
Summary
The new reverse proxy cache feature in Auto Deploy 6.5 is very easy to set up, and will boost
performance without introducing additional failure points to your vSphere infrastructure. Docker
containers running Nginx offer a simple way to demonstrate the concept in your environment.
4.1 Using the Update Manager Interface to Upgrade from ESXi 6.5
to 6.7
Upgrade VMware ESXi Hosts with the New Update Manager Interface in vSphere 6.7
In VMware vSphere 6.7, the vSphere Update Manager (VUM) interface is now part of the HTML5 vSphere Client. In this
demo, we will walk through the workflow to perform a major version upgrade. Click the Update Manager icon to begin.
Attach Baseline
VUM is most effective when a baseline is attached to a cluster of ESXi hosts, although it is possible to attach to individual
hosts, if necessary. With the cluster selected, click "Attach".
Remediation Pre-Check
The pre-check process will check whether DRS is enabled so that running VMs can be migrated with zero downtime across
the cluster. The pre-check also displays the status of HA admission control and Enhanced vMotion Compatibility. Click
"Done".
Streamlined Remediation
In the new Update Manager interface, the remediation wizard from previous releases is gone. Instead, we have a chance to
review the actions that will be taken in a very efficient way. Click OK.
4.2 Using the Update Manager 6.7 Interface to Patch VMware ESXi
6.5 Hosts
Using Update Manager 6.7 to Keep a Cluster of VMware ESXi 6.5 Hosts Patched
VMware vSphere Update Manager is capable of performing major version upgrades, applying patches and updates to
supported versions of ESXi hosts, or installing drivers and other third-party components. In this example, we will walk through
the procedure to apply a patch to a cluster of hosts running VMware ESXi 6.5. Because the underlying application is not yet
certified on VMware ESXi 6.7, we cannot perform a major version upgrade at this time. Click the Update Manager icon
to begin.
Review Baselines
Update Manager is able to perform major version upgrades, apply patches, or install extensions on managed ESXi hosts.
Each of these tasks is enabled via baselines. In our patching scenario, we need to create a new baseline to act as a
container for the patches we just imported. Click New.
New Baseline
On the Baselines tab, the "New" menu item has two sub-entries; choose "New Baseline".
Select Patches
For this baseline, we will select the two patch bulletins that are part of the bundle we just uploaded. Since this environment
does not have Internet access, only the patches that we import to the repository appear in this list. In a less-restrictive
datacenter, this list would include all possible patch releases and could be filtered as needed by clicking the column
headings. Click Next.
Verify Baseline
One final check of the patch baseline... Everything looks good, so click Finish.
Pre-Check Finished
The pre-check dialog box will show the status of individual items, such as confirming DRS is enabled. Everything is ready
for remediation, so click Done.
Begin Remediation
Now that the pre-check is finished, we can proceed with cluster remediation. Click Remediate
Patching Complete
After Update Manager is finished applying patches to all nodes in the cluster, the status will be updated to show that they
are compliant with our chosen patch baseline. Update Manager 6.7 can upgrade hosts to the latest release of VMware ESXi,
or it can keep hosts running older versions patched until the time comes to upgrade.
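For reference, an equivalent attach-scan-remediate cycle can also be scripted with the Update Manager PowerCLI cmdlets; the baseline and cluster names below are examples:

```powershell
# Attach a patch baseline to the cluster, scan it, then remediate
# (baseline and cluster names are examples)
$baseline = Get-Baseline -Name "ESXi 6.5 Critical Patches"
$cluster  = Get-Cluster -Name "Cluster01"

Attach-Baseline -Baseline $baseline -Entity $cluster
Test-Compliance -Entity $cluster
Update-Entity -Baseline $baseline -Entity $cluster -Confirm:$false
```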
Downloading virtual appliance upgrades, host patches, extensions, and related metadata is a
predefined automatic process that you can modify. By default, at regular configurable intervals,
Update Manager contacts VMware or third-party sources to gather the latest information (metadata)
about available upgrades, patches, or extensions.
VMware provides the following information about patches for ESXi hosts and virtual appliance upgrades:
• Metadata about all ESXi 5.5 and ESXi 6.x patches, regardless of whether you have hosts of such
versions in your environment or not.
• Metadata about ESXi 5.5 and ESXi 6.x patches as well as about extensions from third-party
vendor URL addresses.
• Notifications, alerts, and patch recalls for ESXi 5.5 and ESXi 6.x hosts.
• Metadata about upgrades for virtual appliances.
Downloading information about all updates is a relatively low-cost operation in terms of disk space
and network bandwidth. The availability of regularly updated metadata lets you add scanning tasks for
hosts or appliances at any time.
Update Manager supports the recall of patches for hosts that are running ESXi 5.0 or later. A patch is
recalled if the released patch has problems or potential issues. After you scan the hosts in your
environment, Update Manager alerts you if the recalled patch has been installed on a certain host.
Recalled patches cannot be installed on hosts with Update Manager. Update Manager also deletes all
the recalled patches from the Update Manager patch repository. After a patch fixing the problem is
released, Update Manager downloads the new patch to its patch repository. If you have already
installed the problematic patch, Update Manager notifies you that a fix was released and prompts you
to apply the new patch.
If Update Manager cannot download upgrades, patches, or extensions — for example, if it is deployed
on an internal network segment that does not have Internet access — you must use UMDS to
download and store the data on the machine on which UMDS is installed. The Update Manager server
can use the upgrades, patches, and extensions that UMDS downloaded after you export them.
For more information about UMDS, see Installing, Setting Up, and Using Update Manager Download
Service.
You can configure Update Manager to use an Internet proxy to download upgrades, patches,
extensions, and related metadata.
You can change the time intervals at which Update Manager downloads updates or checks for
notifications. For detailed descriptions of the procedures, see Configure Checking for Updates and
Configure Notifications Checks.
Update Manager downloads software updates and metadata from Internet depots or UMDS-created
shared repositories. You can import offline bundles and host upgrade images from a local storage
device into the local Update Manager repository.
Bulletin A grouping of one or more VIBs. Bulletins are defined within metadata.
Depot A logical grouping of VIBs and associated metadata that is published online.
Host upgrade image An ESXi image that you can import in the Update Manager repository and use for upgrading ESXi 5.5 or ESXi 6.0 hosts to ESXi 6.5.
Patch A bulletin that groups one or more VIBs together to address a particular issue or enhancement.
Roll-up A collection of patches that is grouped for ease of download and deployment.
VA upgrade Updates for a virtual appliance, which the vendor considers an upgrade.