
VMware Questions and Answers

1. What is HA?
VMware HA delivers the availability needed by many applications running in virtual machines,
independent of the operating system and application running in it. VMware HA provides uniform,
cost-effective failover protection against hardware and operating system failures within your
virtualized IT environment.
• Monitors virtual machines to detect operating system and hardware failures.
• Restarts virtual machines on other physical servers in the resource pool without manual
intervention when server failure is detected.
• Protects applications from operating system failures by automatically restarting virtual machines
when an operating system failure is detected.

2. How does HA work?
VMware HA continuously monitors all servers in a resource pool and detects server failures. An agent
placed on each server maintains a “heartbeat” with the other servers in the resource pool and a loss of
“heartbeat” initiates the restart process of all affected virtual machines on other servers. VMware HA
ensures that sufficient resources are available in the resource pool at all times to be able to restart
virtual machines on different physical servers in the event of server failure. Restart of virtual
machines is made possible by the Virtual Machine File System (VMFS) clustered file system which
gives multiple ESX Server instances read-write access to the same virtual machine files, concurrently.
VMware HA is easily configured for a resource pool through VirtualCenter.
Key Features of VMware HA

• Automatic detection of server failures. Automate the monitoring of physical server availability. HA
detects server failures and initiates the virtual machine restart without any human intervention.

• Resource checks. Ensure that capacity is always available in order to restart all virtual machines
affected by server failure. HA continuously monitors capacity utilization and “reserves” spare
capacity to be able to restart virtual machines.

• Automatic restart of virtual machines. Protect any application with automatic restart in a different
physical server in the resource pool.

• Intelligent choice of servers (when used with VMware Distributed Resource Scheduler (DRS)).
Automate the optimal placement of virtual machines restarted after server failure.

The VMware HA Solution


With VMware HA, a set of ESX Server hosts is combined into a cluster with a shared pool of resources.
VMware HA monitors all hosts in the cluster. If one of the hosts fails, VMware HA immediately
responds by restarting each affected virtual machine on a different host.

Using VMware HA has a number of advantages:


• Minimal setup and startup. The New Cluster wizard is used for initial setup. Hosts and new virtual
machines can be added using the Virtual Infrastructure Client.
• Reduced hardware cost and setup. In a traditional clustering solution, duplicate hardware and
software must be available, and the components must be connected and configured properly. When
using VMware HA clusters, you must have sufficient resources to accommodate the number of hosts
for which you want to guarantee failover. However, the VirtualCenter Server takes care of all other
aspects of the resource management.
• VMware HA "democratizes" high availability by making it available and cost-justifiable for any
application, regardless of hardware and operating system platform. VMware HA is focused on
hardware failure, not on operating system or software failure. If you need greater levels and
guarantees of availability to handle those situations, you can consider using both VMware HA and
traditional high availability approaches together.

VMware HA Features
Using a cluster enabled for VMware HA provides the following features:
• Automatic failover is provided on ESX Server host hardware failure for all running virtual machines
within the bounds of failover capacity.
VMware HA provides automatic detection of server failures and initiates the virtual machine restart
without any human intervention.
• VMware HA can take advantage of DRS to provide for dynamic and intelligent resource allocation
and optimization of virtual machines after failover. After a host has failed and virtual machines have
been restarted on other hosts, DRS can provide further migration recommendations or migrate
virtual machines for more optimal host placement and balanced resource allocation.
• VMware HA supports easy-to-use configuration and monitoring using VirtualCenter. HA ensures that
capacity is always available (within the limits of specified failover capacity) in order to restart all
virtual machines affected by server failure (based on resource reservations configured for the virtual
machines.)
• HA continuously monitors capacity utilization and "reserves" spare capacity to be able to restart
virtual machines. Virtual Machines can fully utilize spare failover capacity when there hasn't been a
failure.
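
The HA (AAM) agent that provides these features can also be checked directly on the ESX service console. A minimal sketch, assuming an ESX 4.x classic host; the exact aam_config_util_*.log file names vary by build:

rpm -qa | grep -i aam (lists the installed HA/AAM agent packages)
ls /var/log/vmware/aam/ (the HA agent log directory)
tail -n 50 /var/log/vmware/aam/aam_config_util_*.log (recent HA agent configuration and connection messages)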

3. What is DRS?

Align Resources to Meet Business Needs


VMware DRS continuously monitors utilization across resource pools and intelligently aligns
resources with business needs, enabling us to:
• Dynamically allocate IT resources to the highest priority applications. Create rules and policies to
prioritize how resources are allocated to virtual machines.
• Give IT autonomy to business organizations. Provide dedicated IT infrastructure to business units
while still achieving higher hardware utilization through resource pooling.
• Empower business units to build and manage virtual machines within their resource pool while
giving central IT control over hardware resources.

Balance Your Computing Capacity


VMware DRS continuously balances computing capacity in resource pools to deliver the performance,
scalability and availability not possible with physical infrastructure. VMware DRS allows us to:
• Improve service levels for all applications. VMware DRS continuously balances capacity, ensuring
that each virtual machine has access to appropriate resources at any point in time.
• Easily deploy new capacity. VMware DRS will seamlessly take advantage of the additional capacity of
new servers added to a resource pool by redistributing virtual machines without system disruption.
• Automate planned server maintenance. VMware DRS can automatically migrate all virtual machines
off physical servers to enable scheduled server maintenance with zero downtime.
• Dramatically increase system administrator productivity. Enable system administrators to monitor
and effectively manage more IT infrastructure.

Reduce Energy Consumption in the Datacenter


VMware Distributed Power Management (DPM) continuously optimizes power consumption in the
datacenter. When virtual machines in a DRS cluster need fewer resources, such as during nights and
weekends, DPM consolidates workloads onto fewer servers and powers off the rest to reduce power
consumption. When virtual machine resource requirements increase (such as when users log into
applications in the morning), DPM brings powered-down hosts back online to ensure service levels
are met.
VMware Distributed Power Management allows IT organizations to:
• Cut ongoing power and cooling costs by up to 20% in the datacenter during low utilization time
periods.
• Automate management of energy efficiency in the datacenter
VMware DRS (with DPM) is included in the VMware vSphere Enterprise and Enterprise Plus edition.
DRS and DPM leverage VMware vMotion (live migration) to balance load and optimize power
consumption with no downtime.
Features
The following is a list of the key features of VMware DRS.
• Aggregation of physical server resources. Manage CPU and memory across a group of physical
servers as a uniform shared pool of resources.
• Flexible hierarchical organization. Organize resource pools hierarchically to match available IT
resources to the business organization. VMware DRS ensures that resource utilization is maximized
while business units retain control and autonomy of their infrastructure. Resource pools can be
flexibly added, removed, or reorganized as business needs or organization change.
• Priority Settings. Assign priorities in the form of shares or reservations to virtual machines within
resource pools and to sub resource pools to reflect business priorities. For example, the production
sub resource pool can have higher shares of the total resources in a cluster and business critical
applications within the production resource pool can have fixed guarantees (reservations) of CPU
bandwidth and memory.
• Management of sets of virtual machines running a distributed application. Optimize the service level
of distributed applications by controlling the aggregate allocation of resources for the entire set of
virtual machines running the distributed application.
• Affinity Rules. Create rules that govern placement of virtual machines on physical servers. For
example, a group of virtual machines can be set to always run on the same server for performance
reasons. Alternatively, certain virtual machines can be set to always run on different servers to
increase availability. New in vSphere 4.1 is the ability to restrict placement of virtual machines to a
group of physical servers in a cluster. This is useful for controlling the mobility of virtual machines
that run software licensed for a specific group of physical servers. In addition, this feature can be used
to keep sets of virtual machines on different racks or blade systems for availability reasons.
• Power Management. Reduce energy consumption in the datacenter by using the Distributed Power
Management (DPM) feature of DRS to consolidate workloads and power off servers when they are not
needed by the virtual machines in the cluster. When resource requirements of virtual machines
increase, DPM brings hosts back online so service levels can be met.
• Manual and Automatic Mode. VMware DRS collects resource usage information from servers and
virtual machines, and then generates recommendations to optimize virtual machine allocation. These
recommendations can be executed automatically or manually.
o Initial placement. When a virtual machine is first powered on, VMware DRS either automatically
places the virtual machine on the most appropriate physical server or makes a recommendation.
o Continuous optimization. VMware DRS continuously optimizes resource allocations based on
defined resource allocation rules and resource utilization. The resource allocation changes can be
automatically executed by performing live migration of virtual machines through vMotion.
Alternatively, in manual mode, VMware DRS provides execution recommendations for system
administrators.
• Maintenance mode for servers. Perform maintenance on physical servers without disruption to
virtual machines and end users. When a physical server is placed in maintenance mode, VMware DRS
identifies alternative servers where the virtual machines can run. Based on automation mode
settings, the virtual machines are either automatically moved to the alternative servers, or the
system administrator performs the move manually using the VMware DRS recommendations as a
guideline (a command-line sketch of maintenance mode follows this feature list).
• Large-scale management. Manage CPU and memory across up to 32 servers and 1280 virtual
machines per DRS cluster.
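
Maintenance mode (noted in the feature list above) can also be entered and exited from the host's service console. A minimal sketch, assuming an ESX 4.x host managed by vCenter; DRS still decides where the evacuated virtual machines go:

vmware-vim-cmd hostsvc/maintenance_mode_enter (puts the host into maintenance mode; with DRS in fully automated mode, running VMs are migrated off first)
vmware-vim-cmd hostsvc/maintenance_mode_exit (takes the host back out of maintenance mode)
vmware-vim-cmd hostsvc/hostsummary | grep -i maintenance (confirms the current maintenance-mode state)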

4. What is vMotion?

Experience Game-changing Virtual Machine Mobility


VMware vMotion technology leverages the complete virtualization of servers, storage and
networking to move an entire running virtual machine instantaneously from one server to another.
VMware vMotion uses VMware’s cluster file system to control access to a virtual machine’s storage.
During a vMotion, the active memory and precise execution state of a virtual machine is rapidly
transmitted over a high speed network from one physical server to another and access to the virtual
machine's disk storage is instantly switched to the new physical host. Since the network is also
virtualized by the VMware host, the virtual machine retains its network identity and connections,
ensuring a seamless migration process.

VMware vMotion allows you to:


• Perform live migrations with zero downtime, undetectable to the user.
• Continuously and automatically optimize virtual machines within resource pools.
• Perform hardware maintenance without scheduling downtime and disrupting business operations.
• Proactively move virtual machines away from failing or underperforming servers.

Reliably Manage Live Migrations with Ease


Benefit from the reliability and manageability derived from a production-proven product used by
thousands of customers for years. Live migration of virtual machines across your infrastructure is
surprisingly simple with functionality that lets you:
• Perform multiple concurrent migrations to continuously optimize a virtual IT environment.
• Identify the optimal placement for a virtual machine in seconds with a migration wizard providing
real-time availability information.
• Migrate any virtual machine running any operating system across any type of hardware and storage
supported by vSphere, including Fibre Channel SAN, NAS and iSCSI SAN.
• Prioritize live migrations to ensure that mission-critical virtual machines maintain access to the
resources they need.
• Schedule migrations to happen at pre-defined times, and without an administrator’s presence.
• Maintain an audit trail with a detailed record of migrations.
How Does VMware VMotion Work?

Live migration of a virtual machine from one physical server to another with VMware VMotion is
enabled by three
underlying technologies.

First, the entire state of a virtual machine is encapsulated by a set of files stored on shared storage
such as Fibre Channel or iSCSI Storage Area Network (SAN) or Network Attached Storage (NAS).
VMware vStorage VMFS allows multiple installations of VMware ESX® to access the same virtual
machine files concurrently.

Second, the active memory and precise execution state of the virtual machine is rapidly transferred
over a high speed network, allowing the virtual machine to instantaneously switch from running on
the source ESX host to the destination ESX host. VMotion keeps the transfer period imperceptible to
users by keeping track of on-going memory transactions in a bitmap.
Once the entire memory and system state has been copied over to the target ESX host, VMotion
suspends the source virtual machine, copies the bitmap to the target ESX host, and resumes the
virtual machine on the target ESX host. This entire process takes less than two seconds on a Gigabit
Ethernet network.

Third, the networks being used by the virtual machine are also virtualized by the underlying ESX host,
ensuring that even after the migration, the virtual machine network identity and network
connections are preserved. VMotion manages the virtual MAC address as part of the process. Once the
destination machine is activated, VMotion pings the network router to ensure that it is aware of the
new physical location of the virtual MAC address.
Since the migration of a virtual machine with VMotion preserves the precise execution state, the
network identity, and the active network connections, the result is zero downtime and no disruption
to users.
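
The "high speed network" used for the memory copy is a VMkernel interface enabled for vMotion. A minimal sketch of creating one from the ESX service console, assuming vSwitch0 is the target vSwitch and that the port group name, IP address and netmask below are replaced with your own values:

esxcfg-vswitch -A VMotion vSwitch0 (adds a port group named VMotion to vSwitch0)
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 VMotion (creates a VMkernel NIC on that port group)
vmware-vim-cmd hostsvc/vmotion/vnic_set vmk0 (enables vMotion on the new VMkernel interface, assumed here to be vmk0)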

Key Features of vMotion.

Reliability.
Proven by thousands of customers in production environments since 2004, VMotion continues to set
the standard for the most dependable live migration capabilities.

Performance.
Perform live migrations with downtime unnoticeable to the end users. Optimal use of CPU and
network resources ensures that the live migrations occur quickly and efficiently.

Interoperability.
Migrate virtual machines running any operating system across any type of hardware and storage
supported by VMware ESX.
• Support for Fibre Channel SAN.
Implement live migration of virtual machines utilizing a wide range of up to 4Gb Fibre Channel SAN
storage systems.
• NAS and iSCSI SAN support. Implement live migration of virtual machines with lower-cost,
more easily managed shared storage.
• Customizable CPU compatibility settings. Ensure that virtual machines can be migrated across
different
versions of hardware. Enable virtual machines to benefit from the latest CPU innovations.
• New - Enhanced VMotion Compatibility. Live migrate virtual machines across different generations
of
hardware. Migrate virtual machines from older servers to new ones without disruption or downtime.

Manageability
• Migration wizard.
Quickly identify the best destination for a virtual machine using real-time information provided by
migration wizard.
• Multiple concurrent migrations.
Perform multiple concurrent migrations to continuously optimize virtual machine placement across
the entire
IT environment.
• Priority levels.
Assign a priority to each live migration operation to ensure that the most important virtual machines
always have access to the resources they need.
• Scheduled migration tasks.
Automate migrations to happen at pre-defined times, and without an administrator’s presence.
• Migration audit trail.
Maintain a detailed record of migration operations, including date/time and the administrators
responsible for initiating them.

5. What is VMware Storage VMotion?

VMware Storage VMotion is a component of VMware vSphere™ that provides an intuitive interface for
live migration of virtual machine disk files within and across storage arrays with no downtime or
disruption in service. Storage VMotion relocates virtual machine disk files from one shared storage
location to another shared storage location with zero downtime, continuous service availability and
complete transaction integrity. Storage VMotion enables organizations to perform proactive storage
migrations, simplify array migrations, improve virtual machine
storage performance and free up valuable storage capacity. Storage VMotion is fully integrated with
VMware vCenter Server to provide easy migration and monitoring.

How is VMware Storage VMotion Used in the Enterprise?

Customers use VMware Storage VMotion to:

• Simplify array migrations and storage upgrades.


The traditional process of moving data to new storage is cumbersome, time-consuming and
disruptive. With Storage VMotion, IT organizations can accelerate migrations while minimizing or
eliminating associated service disruptions, making it easier, faster and more cost-effective to embrace
new storage platforms and file formats, take advantage of flexible leasing models, retire older, hard-
to-manage storage arrays and to conduct storage upgrades and migrations based on usage and
priority policies. Storage VMotion works with any operating system and storage hardware platform
supported by VMware ESX™, enabling customers to use a heterogeneous mix
of datastores and file formats.
• Dynamically optimize storage I/O performance.
Optimizing storage I/O performance often requires reconfiguration and reallocation of storage, which
can be a
highly disruptive process for both administrators and users and often requires scheduling downtime.
With Storage
VMotion, IT administrators can move virtual machine disk files to alternative LUNs that are properly
configured to
deliver optimal performance without the need for scheduled downtime, eliminating the time and cost
associated with traditional methods.

• Efficiently manage storage capacity.


Increasing or decreasing storage allocation requires multiple manual steps, including coordination
between groups, scheduling downtime and adding additional storage. This is then followed by a
lengthy migration of virtual machine disk files to the new datastore, resulting in significant service
downtime. Storage VMotion improves this process by enabling administrators to take advantage of
newly allocated storage in a non-disruptive manner. Storage VMotion can also be used as a storage
tiering tool by moving data to different types of storage platforms based on data value, performance
requirements and storage costs.

How Does VMware Storage VMotion Work?

VMware Storage VMotion allows virtual machine storage disks to be relocated to different datastore
locations with no downtime, while being completely transparent to the virtual machine or the end
user.

Before moving a virtual machine's disk file, Storage VMotion moves the "home directory" of the virtual
machine to the new location. The home directory contains metadata about the virtual machine
(configuration, swap and log files). After relocating the home directory, Storage VMotion copies the
contents of the entire virtual machine storage disk file to the destination storage host, leveraging
"changed block tracking" to maintain data integrity during the migration process. Next, the software
queries the changed block tracking module to determine which regions of the disk were written to
during the first iteration, and then performs a second copy pass over only those regions that were
changed during the first pass (there can be several more iterations).

Once the process is complete, the virtual machine is quickly suspended and resumed so that it can
begin using the virtual machine home directory and disk file on the destination datastore location.
Before VMware ESX allows the virtual machine to start running again, the final changed regions of the
source disk are copied over to the destination and the source home and disks are removed.

This approach guarantees complete transactional integrity and is fast enough to be unnoticeable to
the end user.
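
Besides the vSphere Client, a Storage VMotion can be started from the remote vSphere CLI with the svmotion command. A minimal sketch; the vCenter address, datacenter, VM path and target datastore names are placeholders for your own environment:

svmotion --interactive (prompts for all parameters step by step)
svmotion --url=https://vcenter.example.com/sdk --datacenter=DC01 --vm='[datastore1] testvm/testvm.vmx:datastore2' (relocates the disks of testvm from datastore1 to datastore2)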

Key Features of VMware Storage VMotion

Complete transaction integrity.


No interruption or downtime for users and applications during virtual machine storage disk
migrations.
Interoperability.
Storage VMotion can migrate storage disk files for virtual machines running any operating system
across any type of hardware and storage supported by VMware ESX.

1) How do we manage the licenses (i.e., timely updating of licenses)? Briefly explain.

-----------------------------------------------------------------------------------------------------------------------
2) If we find an HA issue, what steps should we follow to resolve it?

To troubleshoot HA errors:

Note: Most of these troubleshooting steps are done on the ESX console.
1.Run this command to verify that host name is in lowercase and is fully qualified:

hostname

2.Run this command to verify that hostname is shortname only and is in lowercase:

hostname -s

3.Run this command to verify that the correct service console IP is displayed:

hostname -i

4.Verify that the host name in /etc/hosts is lowercase and both FQDN and shortname are present.
5.Verify that the search domain is present in the /etc/resolv.conf file and is in lowercase.
6.Verify that the host name in /etc/sysconfig/network is FQDN and is in lowercase.
7.Verify that the host name in the /etc/vmware/esx.conf file is FQDN and is in lowercase.
8.Verify that the system name returned by the uname -a command is in lowercase.
9.Verify that the host name is in your DNS server and is in lowercase. To do this, run these commands:
a. nslookup <shortname>

Where <shortname> is the short name of the host.

This command should return the service console IP.


b. nslookup <FQDN>

Where <FQDN> is the fully qualified domain name of the host.

This command should return the service console IP.

c. nslookup <IP address>

Where <IP address> is the IP address of the host.

This command should return the FQDN of the host.

10.Make sure the route for the service console is correct. To do this, from each host, ping the other
hosts in the environment.
11.Verify that all primary service consoles have the same name.
12.Verify that all primary service consoles are in the same IP subnet.
13.If the vMotion VMkernel port group is on the same vSwitch as the primary Service Console port group,
add das.allowVmotionNetworks=1 to the Advanced HA Settings of the cluster.
14.If the host has multiple service consoles, add das.allowNetwork0 to the Advanced HA Settings of
the cluster to ensure that only the primary service
console is allowed. For more information, see Incompatible HA Networks appearing when attempting
to configure HA (High Availability) (1006541).
15.Verify that you have the appropriate licenses available for HA. To do this, in LM Tools, perform a
status enquiry and verify that you have VC_DAS licenses
available.
If you are unable to configure HA after verifying these troubleshooting steps:
1.Run this command on the ESX host to stop vpxa:

service vmware-vpxa stop

The host appears as not responding in the vCenter Server after a while.

2.Run these commands to uninstall aam:


1.rpm -qa | grep aam
2.rpm -e (package names output from command above)
3.rpm -e (other package names output from command above)
4.find / -name aam

Note: Ensure that you delete the directories listed by this command.

3.Disconnect the ESX host from the vCenter Server.


4.Re-connect the host to the vCenter Server. This forces the VPXA package and the HA packages to
re-deploy.
5.Re-configure all the hosts for HA.
6.Upgrade to ESX 3.5 U4 or later and vCenter Server 2.5U4 or later.
7.After upgrading, add das.bypassNetCompatCheck=true to the Advanced HA Settings of the cluster, if
it continues to have issues.
If your issue continues to exist after performing these steps, contact your network or storage
administrator.
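
Most of the name-resolution checks in steps 1 through 9 can be run in one pass from the service console. A minimal sketch; esx01, esx01.example.com and 192.168.1.21 are placeholders for your own host:

hostname; hostname -s; hostname -i (host name, short name and service console IP)
nslookup esx01 (forward lookup of the short name; should return the service console IP)
nslookup esx01.example.com (forward lookup of the FQDN; should return the service console IP)
nslookup 192.168.1.21 (reverse lookup; should return the FQDN)
grep -i esx01 /etc/hosts /etc/sysconfig/network /etc/vmware/esx.conf (confirms a consistent, lowercase name in the local configuration files)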

-----------------------------------------------------------------------------------------------------------------------
3) How is NIC redundancy configured in an ESX server, and what is the minimum number of NICs required for ESX?

-----------------------------------------------------------------------------------------------------------------------
4) What are the minimum requirements to configure vMotion?

-----------------------------------------------------------------------------------------------------------------------
5) How are licenses calculated/purchased for a VMware environment?

-----------------------------------------------------------------------------------------------------------------------
6) What are the partitions of an ESX server?

Service Console Partitions and Sizes for Each ESX Server Host

Mount Point Partition Size Description

/dev/sda (Primary)

/boot ext3 250 MB Change for additional space for upgrades

N/A swap 1600 MB Change for maximum service console swap size

/ ext3 5120 MB Change for additional space in root

/dev/sda (Extended)

/var ext3 4096 MB Create partition to avoid overfilling root with log files

/tmp ext3 1024 MB Create partition to avoid overfilling root with temporary files

/opt ext3 2048 MB Create partition to avoid overfilling root with VMware HA log files

/home ext3 1024 MB Create partition to avoid overfilling root with agent / user files

N/A vmkcore 100 MB Pre-configured


N/A vmfs3 Free Space (Optional) Auto-configured and used for a local VMFS-3 volume (needed for virtual
machines running Microsoft's clustering software).
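
An existing host's layout can be compared against this table from the service console. A minimal sketch, assuming the boot disk is /dev/sda as above:

fdisk -l /dev/sda (lists the primary and extended partitions, including the ~100 MB vmkcore partition of type fc)
df -h (shows the mounted sizes of /, /boot, /var, /tmp, /opt and /home)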

------------------------------------------------------------------------------------------------------------------------------
7) Do we need licenses for the HA and DRS features?

Yes. Both HA and DRS are licensed features; for example, DRS (with DPM) is included only in the vSphere Enterprise and Enterprise Plus editions.

------------------------------------------------------------------------------------------------------------------------------
8) What is the main reason for purple screen errors?

Purple Screen of Death

(Figure: a Purple Screen of Death as seen in VMware ESX Server 3.0.) In the event of a hardware error,
the vmkernel can 'catch' a Machine Check Exception. This results in an error message displayed on a
purple console screen. This is colloquially known as a PSOD, or Purple Screen of Death.

Upon displaying a PSOD, the vmkernel writes debug information to the core dump partition. This
information, together with the error codes displayed on the PSOD, can be used by VMware support to
determine the cause of the problem.

------------------------------------------------------------------------------------------------------------------------------
9) How do we configure virtual switches, and what are a port group and a VLAN?

-----------------------------------------------------------------------------------------------------------------
10) Does HA use vMotion?

No. HA restarts the affected virtual machines on other hosts rather than live-migrating them; vMotion is used by DRS (and HA can leverage DRS for intelligent placement after a failover).

-----------------------------------------------------------------------------------------------------------------
11) Does DRS use vMotion?

Yes. DRS uses vMotion (live migration) to balance load across the hosts in a cluster.

-----------------------------------------------------------------------------------------------------------------

12) What are the processes and port numbers for VirtualCenter and HA running on ESX?

Ports and descriptions:

80 – Required for direct HTTP connections. Port 80 redirects requests to HTTPS port 443.

443 - Listens for connections from the vSphere Client, vSphere Web Access Client, and other SDK
clients. Open port 443 in the firewall to enable the vCenter
Server system to receive data from the vSphere Client.

389 - Used for Lightweight Directory Access Protocol (LDAP) services, i.e. the directory services (ADAM)
for the vCenter Server group.

636 – SSL port of the local instance for vCenter Linked Mode. It’s the port of the local vCenter Server
ADAM Instance.

902 - Used to send data to managed ESX or ESXi hosts. This port is also used for remote console access
to virtual machines from the vSphere Client. It must not be blocked by firewalls between the server and
the hosts or between hosts.

902/903 - Used by the vSphere Client to display virtual machine consoles.

8080 – vCenter Management Webservices HTTP.

8443 - Secure connections for vCenter Management Webservices HTTPS.

60099 - Used to stream inventory object changes to SDK clients. Firewall rules for this port on the
vCenter Server can be set to block all traffic except to and from localhost if the clients are installed on
the same host as the vCenter Server service.
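
On the ESX host itself, the service console firewall controls which of these ports are open. A minimal sketch; sshClient is just an example of a named service:

esxcfg-firewall -q (queries the current firewall rules and enabled services)
esxcfg-firewall -q sshClient (queries the state of a single named service)
esxcfg-firewall -e sshClient (enables a named service; -d disables it)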

-------- Various services are installed when you deploy vCenter; in total, 5 services are installed ----------

1.VMware VirtualCenter Server: the heart of vCenter.


2.VMware mount service for VirtualCenter: used during cloning operations or while deploying from a
template.
3.VMware VirtualCenter management webservices: the web management services run on it.
4.VMwareVCMSDS: ADAM directory services for Linked Mode.
5.VMware vCenter Orchestrator configuration: used for vCenter Orchestrator.

---------------------------------------------------------------------------------------------------------------

13) In ESX 2.5.2, how do we take backups of VM files?

----------------------------------------------------------------------------------------------------------------------
14) Explain the purpose of redo log files.

-------------------------------------------------------------------------------------------------------------------------
15) A VM is not able to power off; how do we troubleshoot the issue?

Powering off the virtual machine

To determine if you must use the command line, attempt to power off the virtual machine:
1.Connect VMware Infrastructure (VI) Client to the Virtual Center Server. Right-click on the virtual
machine and click Power off.
2.Connect VI Client directly to the ESX host. Right-click on the virtual machine and click Power off.

If this does not work, you must use the command line method.
Determining the virtual machine's state
1.Determine the host on which the virtual machine is running. This information is available in the
virtual machine's Summary tab when viewed in the VI Client
page.
2.Log in as root to the ESX host using an SSH client.
3.Run the following command to verify that the virtual machine is running on this host:

# vmware-cmd -l

The output of this command returns the full path to each virtual machine running on the ESX host.
Verify that the virtual machine is listed, and record the
full path for use in this process. For example:

# /vmfs/volumes/<datastore>/<VM name>/<VM name>.vmx

4.Run the following command to determine the state in which the ESX host believes the virtual
machine to be operating:

# vmware-cmd <path to .vmx> getstate

If the output from this command is getstate() = on, the VirtualCenter Server may not be
communicating with the host properly. This issue must be addressed in
order to complete the shutdown process.

If the output from this command is getstate() = off, the ESX host may be unaware it is still running the
virtual machine. This article provides additional
assistance in addressing this issue.
Powering off the virtual machine while collecting diagnostic information using the vm-support script

Use the following procedure when you want to investigate the cause of the issue. This command
attempts to power off the virtual machine while collecting
diagnostic information. Perform these steps in order, as they are listed in order of potential impact to
the system if performed incorrectly.

Perform these steps first:


1.Determine the WorldID with the command:

# vm-support -x

2.Kill the virtual machine by using the following command in the home directory of the virtual
machine:
# vm-support -X <WorldID>

This can take upwards of 30 minutes to terminate the virtual machine.

Note: This command uses several different methods to stop the virtual machine. When attempting
each method, the command waits for a pre-determined amount of time. The timeout value can be set
to 0 by adding the -d0 switch to the vm-support command.
If the preceding steps fail, perform the following steps for an ESX 3.x host:
1.List all running virtual machines to find the VMID of the affected virtual machine with the command:

# cat /proc/vmware/vm/*/names

2.Determine the master world ID with the command:

# less -S /proc/vmware/vm/####/cpu/status

3.Scroll to the right with the arrow keys until you see the group field. It appears similar to:

Group
vm.####

4.Run the following command to shut the virtual machine down with the group ID:

# /usr/lib/vmware/bin/vmkload_app -k 9 ####
If the preceding steps fail, perform the following steps for an ESX 4.x host:
1.List all running virtual machines to find the vmxCartelID of the affected virtual machine with the
command:

# /usr/lib/vmware/bin/vmdumper -l

2.Scroll through the list until you see your virtual machine's name. The output appears similar to:

vmid=5151 pid=-1 cfgFile="/vmfs/volumes/4a16a48a-d807aa7e-e674-001e4ffc52e9/mdineeen_test/vm_test.vmx"
uuid="56 4d a6 db 0a e2 e5 3e-a9 2b 31 4b 69 29 15 19" displayName="vm_test" vmxCartelID=####

3.Run the following command to shut the virtual machine down with the vmxCartelID:

# /usr/lib/vmware/bin/vmkload_app -k 9 ####
Powering off the virtual machine using the vmware-cmd command
This procedure uses the ESX command line tool, and attempts to gracefully power off the virtual
machine. It works if the virtual machine's process is running
properly and is accessible. If unsuccessful, the virtual machine's process may not be running properly
and may require further troubleshooting.
1.From the Service Console of the ESX host, run the following command:

vmware-cmd <path to .vmx> stop

Note: <path to .vmx> is the complete path to the configuration file, as determined in the previous
section. To verify that it is stopped, run the command:

# vmware-cmd <path to .vmx> getstate

2.From the Service Console of the ESX host, run the command:

# vmware-cmd <path to .vmx> stop hard

Note: <path to .vmx> is the complete path to the configuration file, as determined in the previous
section. To verify that it is stopped, run the command:

# vmware-cmd <path to .vmx> getstate

3.If the virtual machine is still inaccessible, proceed to the next section.
Using the ESX command line to kill the virtual machine

If the virtual machine does not power off using the steps in this article, it has likely lost control of its
process. You need to manually kill the process
at the command line.

Caution: This procedure is potentially hazardous to the ESX host. If you do not identify the
appropriate process id (PID), and kill the wrong process, it may
have unanticipated results. If you are not comfortable with the following procedure, contact VMware
Technical Support and open a Service Request. Please
refer to this article when you create the SR.
1.To determine if the virtual machine process is running on the ESX host, run the command:

# ps auxwww |grep -i .vmx

The output of this command appears similar to the following if the .vmx process is running:

root 3093 0.0 0.3 2016 860 ? S< Jul30 0:17 /usr/lib/vmware/bin/vmkload_app
/usr/lib/vmware/bin/vmware-vmx -ssched.group=host/user -# name=VMware ESX
Server;version=3.5.0;licensename=VMware ESX Server;licenseversion=2.0 build-158874; -@
pipe=/tmp/vmhsdaemon-0/vmx569228e44baf49d1; /vmfs/volumes/49392e30- 162037d0-17c6-
001f29e9abec//.vmx
The process ID (PID) is the number in the second column of this output. In this example, the PID is
3093. Take note of this number for use in the following steps.

Caution: Ensure that you identify the line specific only to the virtual machine you are attempting to
repair. If you run this process against a virtual machine other than the one in question, you can cause
downtime for that other virtual machine.

If the .vmx process is listed, it is possible that the virtual machine has lost control of the process and
that it must be stopped manually.

2.To kill the process, run the command:

# kill <PID>

3.Wait 30 seconds and check for the process again.


4.If it is not terminated, run the command:

# kill -9 <PID>

5.Wait 30 seconds and check for the process again.


6.If it is not terminated, the ESX host may need to be rebooted to clear the process. This is a last resort
option, and should only be attempted if the
preceding steps in this article are unsuccessful.

------------------------------------------------------------------------------------------------------------------------------
16) Why do we use two different ports for licenses, and what are those port numbers?

27000 --- License transactions from ESX Server 3i to the license server (lmgrd.exe).|Outgoing TCP|

27010 --- License transactions from ESX Server 3i to the license server (vmwarelm.exe).|Outgoing
TCP|

------------------------------------------------------------------------------------------------------------------------------
17) The VC server is not coming up; how do we troubleshoot?

------------------------------------------------------------------------------------------------------------------------------
18) What is the difference between ESX 3.5 and 4.0?

------------------------------------------------------------------------------------------------------------------------------
19) Briefly describe Update Manager. Is it possible to update powered-off VMs with Update
Manager?
------------------------------------------------------------------------------------------------------------------------------
20) Explain VMware snapshots. What is the command to take a snapshot?

------------------------------------------------------------------------------------------------------------------------------
21) Suppose we have 3 port groups configured on a single vSwitch (connected to a single physical NIC of
the ESX host) with 3 different VLANs. How will the VMs in one VLAN communicate with a VM in a
different VLAN?

------------------------------------------------------------------------------------------------------------------------------
22) What is the command to list all the running VMs and the registered VMs?

Run the vm-support -x command to show which virtual machines are currently running on the ESX
host.

Run the vmware-cmd -l command to display the names of the virtual machines registered on this host.

------------------------------------------------------------------------------------------------------------------------------
23) What is the command to list the HBAs?

esxcfg-scsidevs -a (-a|--hbas Print HBA devices with identifying information)

esxcfg-scsidevs -A (-A|--hba-device-list Print a mapping between HBAs and the devices it provides
paths to.)

------------------------------------------------------------------------------------------------------------------------------
24) What P2V conversion processes/tools are available, and how can we perform a P2V of a Linux server
with the help of CLI commands (in case no specific tools are available)?

Converting a powered-on Windows operating system (P2V)


Source → Destination | TCP Ports | UDP Ports | Notes

Converter server → Source computer | 445, 139, 9089, 9090 | 137, 138 | If the source computer uses NetBIOS, port 445 is not required. If NetBIOS is not being used, ports 137, 138, and 139 are not required. If in doubt, make sure that none of the ports are blocked.
Note: Unless you have installed Converter server on the source computer, the account used for authentication to the source computer must have a password, the source computer must have network file sharing enabled, and it cannot be using Simple File Sharing.
Converter server → VirtualCenter | 443 | N/A | Only required if the conversion target is VirtualCenter.
Converter client → Converter server | 443 | N/A | Only required if a custom installation was performed and the Converter server and client portions are on different computers.
Source computer → ESX | 443, 902 | N/A | If the conversion target is VirtualCenter, then only port 902 is required.

Converting a powered-on Linux operating system (P2V)

Source → Destination | TCP Ports | Notes

Converter server → Source computer | 22 | The Converter server must be able to establish an SSH connection with the source computer.
Converter client → Converter server | 443 | Only required if a custom installation was performed and the Converter server and client portions are on different computers.
Converter server → VirtualCenter | 443 | Only required if the conversion target is VirtualCenter.
Converter server → ESX | 443, 902, 903 | If the conversion target is VirtualCenter, only ports 902 and 903 are required.
Converter server → Helper virtual machine | 443 |
Helper virtual machine → Source computer | 22 | The helper virtual machine must be able to establish an SSH connection with the source computer. By default the helper virtual machine gets its IP address assigned by DHCP. If there is no DHCP server available on the network chosen for the target virtual machine, you must manually assign it an IP address.

Converting an existing virtual machine (V2V)


Source → Destination | TCP Ports | UDP Ports | Notes

Converter server → Fileshare path | 445, 139 | 137, 138 | Only required for standalone virtual machine sources or destinations. If the computer hosting the source or destination path uses NetBIOS, port 445 is not required. If NetBIOS is not being used, ports 137, 138, and 139 are not required. If in doubt, make sure that none of the ports are blocked.
Converter client → Converter server | 443 | N/A | Only required if a custom installation was performed and the Converter server and client portions are on different computers.
Converter server → VirtualCenter | 443 | N/A | Only required if the target is VirtualCenter.
Converter server → ESX | 443, 902 | N/A | If the conversion target is VirtualCenter, only port 902 is required.
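
If Converter cannot be used at all, one common ad-hoc approach for a Linux source is a raw disk copy over SSH. This is only a rough sketch (the device names, the target address 192.168.1.50 and the block size are assumptions), not a supported VMware procedure:

1. Create an empty virtual machine with a virtual disk at least as large as the physical disk, and boot it from a Linux live CD with networking enabled.
2. From the physical server, stream the disk contents into the booted virtual machine:

dd if=/dev/sda bs=1M | gzip -c | ssh root@192.168.1.50 'gunzip -c | dd of=/dev/sda bs=1M'

3. Inside the new virtual machine, adjust /etc/fstab, the boot loader and the network configuration for the new (virtual) hardware, then install VMware Tools.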

------------------------------------------------------------------------------------------------------------------------------
25) What is the command to check the status of a VM?

vmware-cmd <path to .vmx> getstate

Retrieve the list of VMs in inventory with the following command:


vmware-vim-cmd vmsvc/getallvms

[root@ESX-SRV-94 /]# vmware-vim-cmd vmsvc/getallvms


Vmid Name File Guest OS Version Annotation
160 VMVXP-1 [SAN-STORE-2] VMVXP-1/VMVXP-1.vmx winXPProGuest vmx-07
240 Ubuntu [ESX-Storage-94-2] Ubuntu/Ubuntu.vmx ubuntuGuest vmx-07
Then query each VM with its Vmid:
vmware-vim-cmd vmsvc/power.getstate <Vmid>

For example:
vmware-vim-cmd vmsvc/power.getstate 160

[root@ESX-SRV-94 /]# vmware-vim-cmd vmsvc/power.getstate 160


Retrieved runtime info
Powered on

------------------------------------------------------------------------------------------------------------------------------
26) What is the command to rescan the HBAs?

esxcfg-rescan <vmhbaN> (for example, esxcfg-rescan vmhba1)

------------------------------------------------------------------------------------------------------------------------------
27) How do we find the world ID of a particular VM, and what is the VMware proprietary command to
kill it?

vm-support -x
esxcli vms vm list

1. List all running virtual machines on the system to see the World ID of the virtual machine you want to
stop:

esxcli vms vm list

2. Stop the virtual machine by running the following command:

esxcli vms vm kill --type <soft|hard|force> --world-id <WorldID>

The command supports three --type options. Try the types sequentially (soft before hard, hard before
force). The following types are supported through the --type option:
• soft – Gives the VMX process a chance to shut down cleanly (like kill or kill -SIGTERM)
• hard – Stops the VMX process immediately (like kill -9 or kill -SIGKILL)
• force – Stops the VMX process when the other options do not work.
If all three options do not work, reboot your ESX/ESXi host to resolve the issue.

------------------------------------------------------------------------------------------------------------------------------
28) What is the command to add a route in ESX to communicate with a different network segment?

Configure the route using the command:

#route add -net 142.121.56.0 netmask 255.255.254.0 gw 224.58.175.1

Add the following line to /etc/rc.local so that the route is set on boot:

#/sbin/route add -net 142.121.56.0 netmask 255.255.254.0 gw 224.58.175.1

To ensure the route holds on reboot, create an executable file.

To create an executable file:


1.Login to the ESX host using a SSH client.
2.Change the directory to /etc/init.d .
3.Run this command to create a file called routes:

#vi routes

4.Add this code to the file:

#!/bin/bash
#
case "$1" in
'start')
    echo "Adding additional routes... "
    /sbin/route add -net 172.31.3.0 netmask 255.255.255.0 gw 172.31.8.1
    echo
    ;;
*)
    echo "Usage: $0 { start }"
    ;;
esac

5.Save the file and exit the vi editor.

6.Run this command to make the file executable:

#chmod 777 routes

7.Change the directory to /etc/rc3.d.


8.Run this command to create a symbolic link to that file:

#ln -s /etc/init.d/routes

9.Reboot the ESX host for the changes to take effect.

------------------------------------------------------------------------------------------------------------------------------

29) What is the default size of the swap partition and of service console memory?

Swap: 1600 MB. Service console memory: 400 MB (maximum 800 MB).

------------------------------------------------------------------------------------------------------------------------------

30) How to increase SC memory after the esx build?

•ESX Host – 8GB RAM -> Default allocated Service Console RAM = 300MB
•ESX Host – 16GB RAM -> Default allocated Service Console RAM = 400MB
•ESX Host – 32GB RAM -> Default allocated Service Console RAM = 500MB
•ESX Host – 64GB RAM -> Default allocated Service Console RAM = 602MB
•ESX Host – 96GB RAM -> Default allocated Service Console RAM = 661MB
•ESX Host – 128GB RAM -> Default allocated Service Console RAM = 703MB
•ESX Host – 256GB RAM -> Default allocated Service Console RAM = 800MB

cp /etc/vmware/esx.conf /etc/vmware/esx.conf.old
cp /boot/grub/grub.conf /boot/grub/grub.conf.old
/bin/sed -i -e 's/272/800/' /etc/vmware/esx.conf
/bin/sed -i -e 's/512/800/' /etc/vmware/esx.conf
/bin/sed -i -e 's/272M/800M/' /boot/grub/grub.conf
/bin/sed -i -e 's/512M/800M/' /boot/grub/grub.conf
/bin/sed -i -e 's/277504/818176/' /boot/grub/grub.conf
/bin/sed -i -e 's/523264/818176/' /boot/grub/grub.conf

-----------------------------------------------------------------------------------------

31) What are the port numbers for vMotion and VMware Converter?

Product | Port | Protocol | Source → Destination | Purpose

ESX 4.x | 8000 | TCP | ESX/ESXi Host (VM Target) → ESX/ESXi Host (VM Source) | vMotion communication on the VMkernel interface
ESX 4.x | 8000 | TCP | ESX/ESXi Host (VM Source) → ESX/ESXi Host (VM Target) | vMotion communication on the VMkernel interface
ESXi 4.x | 8000 | TCP | ESX/ESXi Host (VM Target) → ESX/ESXi Host (VM Source) | vMotion communication on the VMkernel interface
ESXi 4.x | 8000 | TCP | ESX/ESXi Host (VM Source) → ESX/ESXi Host (VM Target) | vMotion communication on the VMkernel interface

Converter 4.x | 22 | TCP | Helper Virtual Machine → Source computer to be converted | Required for conversion of Linux-based source computers (data flows from source to VM)
Converter 4.x | 22 | TCP | vCenter Converter Server → Source computer to be converted | Required for conversion of Linux-based source computers
Converter 4.x | 137 | UDP | vCenter Converter Server → Source computer to be converted | For hot migration. Not required if the source computer does not use NetBIOS
Converter 4.x | 138 | UDP | vCenter Converter Server → Source computer to be converted | For hot migration. Not required if the source computer does not use NetBIOS
Converter 4.x | 139 | TCP | vCenter Converter Server → Source computer to be converted | For hot migration. Not required if the source computer does not use NetBIOS
Converter 4.x | 443 | TCP | vCenter Converter Client → vCenter Converter Server | Only required if the Converter Client and Converter Server were installed on different systems
Converter 4.x | 443 | TCP | Source computer to be converted → ESX/ESXi Host | Required for destination VM access when the target is ESX/ESXi/vCenter
Converter 4.x | 443 | TCP | Source computer to be converted → vCenter Server | Required if vCenter Server is the conversion target
Converter 4.x | 443 | TCP | vCenter Converter Server → vCenter Server | Required if vCenter Server is the conversion target
Converter 4.x | 443 | TCP | vCenter Converter Server → ESX/ESXi Host | Required for system conversion
Converter 4.x | 443 | TCP | vCenter Converter Server → Helper Virtual Machine | Required for conversion of Linux-based source computers
Converter 4.x | 445 | TCP | vCenter Converter Server → Source computer to be converted | Required for system conversion. Not required if the source computer uses NetBIOS
Converter 4.x | 902 | TCP | Source computer to be converted → ESX/ESXi Host | Required for data transport during cloning of the system to be converted to the target ESX/ESXi Host
Converter 4.x | 9089, 9090 | TCP | vCenter Converter Server → Source computer to be converted | Required for system conversion. Remote agent deployment

------------------------------------------------------------------------------------------------------------------------------
32) How to create Vmkcore partition after the esx build?

Using parted (or fdisk) we can create a vmkcore partition if free space is available on the disk;
otherwise, first free up about 100 MB of space by resizing the root or another partition, then create the
new vmkcore partition with partition type fc and reboot the host.

------------------------------------------------------------------------------------------------------------------------------
33) What agents are installed after adding an ESX host to the vCenter Server?

The VMware vCenter Agent (vpxa). If the host joins an HA-enabled cluster, the HA agent (AAM) is also deployed.

----------------------------------------------------------------------------------------------------------
34) What are the port numbers for the VMware management webservices?

8080, 8443 VMware vCenter 4 Management Web Services - HTTP and HTTPS

----------------------------------------------------------------------------------------------------------
35) What is the maximum number of VMs that can run per host?
320

----------------------------------------------------------------------------------------------------------
36) What files are created when a VM is built?

.vmx, .vmxf, .vmsd, .vmdk (when the VM is powered on, three more files are created: .log, .vswp, .nvram)

----------------------------------------------------------------------------------------------------------
37) What is the location of the vCenter Server log files?

C:\ProgramData\VMware\VMware VirtualCenter\Logs

---------------------------------------------------------------------------------------------------------
38) What are the important log files on an ESX server?

VMware ESX Server Logs

1) Vmkernel

a. Location: /var/log/

b. Filename: vmkernel

c. This log records information related to the vmkernel and virtual machines

2) Vmkernel Warnings

a. Location: /var/log/

b. Filename: vmkwarning

c. This log records information regarding virtual machine warnings

3) Vmkernel Summary

a. Location: /var/log/

b. Filename: vmksummary

c. This log records information used to determine uptime and availability statistics for ESX Server.
This log is not easily readable by humans; import it into a spreadsheet or database for use.

d. For a summary of the statistics in an easily viewed file, see vmksummary.txt


4) ESX Server Boot Log

a. Location: /var/log

b. Filename: boot.log

c. Log file of all actions that occurred during the ESX server boot.

5) ESX Server Host Agent Log

a. Location: /var/log/vmware/

b. Filename: hostd.log

c. Contains information on the agent that manages and configures the ESX Server host and its virtual
machines (Search the file date/time stamps to find
the log file it is currently outputting to).

6) Service Console

a. Location: /var/log/

b. Filename: messages

c. Contains all general log messages used to troubleshoot virtual machines on ESX Server.

7) Web Access

a. Location: /var/log/vmware/webAccess

b. Filename: various files in this location

c. Various logs on Web access to the ESX Server.

8) Authentication Log

a. Location: /var/log/

b. Filename: secure

c. Contains the records of connections that require authentication, such as VMware daemons and
actions initiated by the xinetd daemon.

9) VirtualCenter HA Agent Log

a. Location: /var/log/vmware/aam/

b. Filename: aam_config_util_*.log
c. These files contain information about the installation, configuration, and connections to other HA
agents in the cluster.

10) VirtualCenter Agent

a. Location: /var/log/vmware/vpx

b. Filename: vpxa.log

c. Contains information on the agent that communicates with the VirtualCenter Server.

11) Virtual Machine Logs

a. Location: The same directory as the virtual machine’s configuration files are placed in.

b. FileName: vmware.log

c. Contains information when a virtual machine crashes or ends abnormally.
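
All of the host-side logs above can be collected in a single bundle with the vm-support script (the same tool used earlier with -x/-X). A minimal sketch; the archive name varies by version:

vm-support (creates a compressed .tgz diagnostic bundle, including the vmkernel, hostd, vpxa and aam logs, in the current directory)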

VirtualCenter Installation Logs

1) The following install logs are located in the %TEMP% directory of the user that installed
VirtualCenter

a. vmlic.log: Contains various test results for the provided license file during the installation.

b. redist.log: Contains MDAC/MCAD QFE rollup installation information.

c. vmmsde.log: Contains MSDE installation information.

d. vmls.log: The license server installation log.

e. vmosql.log: The VirtualCenter database creation log file.

f. vminst.log: The VirtualCenter installation log file.

g. VCDatabaseUpgrade.log: Results of upgrading the VC database.

h. vmmsi.log: The VI Client installation log. Vpxd-0.log is a small log from starting the client the
first time.

Virtual Center Logs

1) Location:
a. C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\Logs

2) Name: vpxd-#.log (# is one digit, 0-9)

a. vpxd-index contains the # of the currently active log file

3) Logs rotate each time vpxd is started, and also when it reaches 5 MB in size

VI Client Logs

1) Location: User %TEMP%\vpx

2) Name: viclient-#.log (# is one digit, 0-9)

3) Logs rotate each time the VI Client is started, and should be used for client-specific diagnostics

Miscellaneous Logs

1) Core Dump

a. Location: %USERPROFILE%\Application Data\VMware

2) License Server Debug Log

a. Location: %SystemRoot%\Temp

b. Filename: lmgrd.log i. This file is overwritten each time the service starts

c. This file contains various information about the license file and server.

3) Web Access (Tomcat) Logs

a. Location: C:\Program Files\VMware\Infrastructure\VirtualCenter Server\tomcat\logs

b. Filename: various files

c. All the Tomcat logs are here

ESX console logs
sysboot-vmkernel-boot.log, sysboot-dmesg-boot.log, sysboot-vmkernel-late.log, sysboot-dmesg-late.log,
sysboot.log

cd /vmfs/volumes/ESX-Storage-94-1/esxconsole-4c44398f-4238-b888-226e-001e0bcd236a/logs/

Core-dump location

cd /vmfs/volumes/ESX-Storage-94-1/esxconsole-4c44398f-4238-b888-226e-001e0bcd236a/core-dumps

----------------------------------------------------------------------------------------------------------------------
39) What is ESXTOP command and how to use this command (with all the fields/options)?

Esxtop version 4.1.0


Secure mode Off

Esxtop: top for ESX

These single-character commands are available:

^L - redraw screen
space - update display
h or ? - help; show this text
q - quit

Interactive commands are:

fF Add or remove fields


oO Change the order of displayed fields
s Set the delay in seconds between updates
# Set the number of instances to display
W Write configuration file ~/.esxtop41rc
k Kill a world
e Expand/Rollup Cpu Statistics
V View only VM instances
L Change the length of the NAME field
l Limit display to a single group

Sort by:
U:%USED R:%RDY N:GID
Switch display:
c:cpu i:interrupt m:memory n:network
d:disk adapter u:disk device v:disk VM p:power mgmt

Hit any key to continue:

9:26:17pm up 9 days 45 min, 149 worlds; CPU load average: 0.02, 0.06, 0.06
PCPU USED(%): 2.5 32 38 0.3 19 0.5 0.3 0.4 2.2 57 0.0 0.0 0.3 24 0.3 50 AVG: 14
PCPU UTIL(%): 3.4 34 41 0.5 26 1.0 0.7 0.7 2.7 65 0.2 0.2 0.6 29 0.6 60 AVG: 16
CCPU(%): 0 us, 2 sy, 97 id, 0 wa ; cs/sec: 108

ID GID NAME NWLD %USED %RUN %SYS %WAIT %RDY %IDLE %OVRLP %CSTP %MLMTD %SWPWT
1 1 idle 16 1351.56 1497.67 0.00 0.00 122.22 0.00 0.94 0.00 0.00 0.00
59 59 Ubuntu 7 229.64 264.45 0.00 441.37 0.02 138.50 0.79 0.00 0.00 0.00
11 11 console 1 1.65 2.66 0.03 98.24 0.07 98.23 0.01 0.00 0.00 0.00
60 60 VMVXP-1 5 1.08 1.56 0.00 500.00 0.10 199.79 0.01 0.00 0.00 0.00
7 7 helper 77 0.04 0.05 0.00 7700.00 0.01 0.00 0.00 0.00 0.00 0.00
8 8 drivers 10 0.01 0.01 0.00 1000.00 0.00 0.00 0.00 0.00 0.00 0.00
56 56 vmkiscsid.4303 2 0.01 0.01 0.00 200.00 0.00 0.00 0.00 0.00 0.00 0.00
49 49 storageRM.4292 1 0.00 0.00 0.00 100.00 0.00 0.00 0.00 0.00 0.00 0.00
19 19 vmkapimod 9 0.00 0.00 0.00 900.00 0.00 0.00 0.00 0.00 0.00 0.00
2 2 system 7 0.00 0.00 0.00 700.00 0.00 0.00 0.00 0.00 0.00 0.00
9 9 vmotion 4 0.00 0.00 0.00 400.00 0.00 0.00 0.00 0.00 0.00 0.00
47 47 FT 1 0.00 0.00 0.00 100.00 0.00 0.00 0.00 0.00 0.00 0.00
48 48 vobd.4291 6 0.00 0.00 0.00 600.00 0.00 0.00 0.00 0.00 0.00 0.00
52 52 net-cdp.4300 1 0.00 0.00 0.00 100.00 0.00 0.00 0.00 0.00 0.00 0.00
53 53 net-lbt.4301 1 0.00 0.00 0.00 100.00 0.00 0.00 0.00 0.00 0.00 0.00
57 57 vmware-vmkauthd 1 0.00 0.00 0.00 100.00 0.00 0.00 0.00 0.00 0.00 0.00

The following optional switches, relevant to esxtop in batch mode, can be used:

a Shows all statistics, not just those specified in the default configuration file, if one exists.
b Runs esxtop in batch mode.
c Loads a user-defined configuration file instead of the default (~/.esxtop41rc for the esxtop version shown above).
d Specifies the delay between statistics updates; the default is 5 seconds and the minimum is 2.
n Specifies the number of statistics updates to capture before exiting.

For example, the following command would run esxtop in batch mode, updating all statistics to the file
perfstats.csv every 10 seconds for 360 iterations (a
total of 60 minutes) before exiting:

esxtop -a -b -d 10 -n 360 > perfstats.csv
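
As a further sketch built from the same switches (assuming a field selection was previously saved with the W command, so that ~/.esxtop41rc exists), a shorter, smaller capture could look like:

esxtop -b -c ~/.esxtop41rc -d 2 -n 30 > quickstats.csv

This records 30 samples at 2-second intervals (about one minute) using only the saved fields, which keeps the CSV small enough to open directly in a spreadsheet.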

-------------------------------------------------------------------------------------------------
40) What is the location of the ESX dump file and how do you read it?

Core-dump location

cd /vmfs/volumes/ESX-Storage-94-1/esxconsole-4c44398f-4238-b888-226e-001e0bcd236a/core-dumps
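
To read a dump, one common approach (a hedged sketch; the zdump file name below is hypothetical and the exact name on a given host will differ) is to extract the VMkernel log from the compressed dump with esxcfg-dumppart and then page through it:

esxcfg-dumppart -L vmkernel-zdump-093010.10.30.1
less vmkernel-log.1

Full analysis of the binary core is normally done by VMware support; for day-to-day troubleshooting the extracted vmkernel log is usually sufficient.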
-----------------------------------------------------------------------------------------------------
41) What is the location of the license file (*.lic) on the VirtualCenter Server and the ESX server?

C:\ProgramData\VMware\VMware VirtualCenter\licenses\site\VMware VirtualCenter Server\4.0\4.1.0.2

----------------------------------------------------------------------------------------------------
42) What is the command to check the VMFS version and the ESX version?

VMFS version:
vmkfstools -P storageN

ESX version:
vmware -v
vimsh -n -e 'hostsvc/hostsummary' | grep fullName
cat /proc/vmware/version
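
A worked example (hedged; the datastore label Storage1 is hypothetical, so substitute a real label from /vmfs/volumes):

vmkfstools -P /vmfs/volumes/Storage1
# the first line of output is similar to "VMFS-3.46 file system spanning 1 partitions",
# where 3.46 is the on-disk VMFS version

vmware -v
# prints a line similar to "VMware ESX 4.1.0 build-NNNNNN"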

-----------------------------------------------------------------------------------------------------------
43) How do you extend the OS drive of a guest OS (Windows VM)?

vmkfstools -X 50G /vmfs/volumes/Storage2/testvm/testvm.vmdk

Note that the value passed to -X is the new total size of the virtual disk (50 GB in this example), not the amount being added.
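
Growing the .vmdk only enlarges the virtual disk; the partition inside Windows still has to be extended. A hedged sketch of the in-guest step with diskpart (the volume number is hypothetical, and extending the boot volume of older Windows releases such as Server 2003 may require a third-party tool or attaching the disk to another VM):

diskpart
DISKPART> list volume
DISKPART> select volume 1
DISKPART> extend
DISKPART> exit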

----------------------------------------------------------------------------------------------------------
44) What is the command to clone a VM?

vmware-vdiskmanager with the -r option (on hosted products), or vmkfstools -i on the ESX service console:

# vmkfstools -i /vmfs/volumes/Datastore04/rhel5_test_template/rhel5_test_template.vmdk /vmfs/volumes/Datastore04/rhel5_test_clone/rhel5_test_clone.vmdk
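
If the clone should be thin provisioned rather than a full copy, vmkfstools -i also accepts a disk format switch; a hedged sketch reusing the same hypothetical paths:

# vmkfstools -i /vmfs/volumes/Datastore04/rhel5_test_template/rhel5_test_template.vmdk -d thin /vmfs/volumes/Datastore04/rhel5_test_clone/rhel5_test_clone.vmdk

Either way only the disk is cloned; the copy still needs its own .vmx file (or to be attached to an existing virtual machine) before it can be powered on.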

-----------------------------------------------------------------------------------------------------------
45) What is the command to check all the virtual switch configuration details?

To configure networking from the ESX service console command line:


1.Ensure the network adapter you want to use is currently connected with the command:

[root@server root]# esxcfg-nics -l

The output appears similar to:

Name PCI Driver Link Speed Duplex Description


vmnic0 06:00.00 tg3 Up 1000Mbps Full Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet
vmnic1 07:00.00 tg3 Up 1000Mbps Full Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet

In the Link column, Up indicates that the network adapter is available and functioning.
2.List the current virtual switches with the command:

[root@server root]# esxcfg-vswitch -l

The output appears similar to:

Switch Name Num Ports Used Ports Configured Ports Uplinks


vSwitch0 32 3 32 vmnic0

PortGroup Name Internal ID VLAN ID Used Ports Uplinks


VM Network portgroup2 0 0 vmnic0

In the example output, there is a virtual machine network named VM Network but no Service Console port group. For illustration, the following steps show how to create a new virtual switch and place the Service Console port group on it.
3.Create a new virtual switch with the command:

[root@server root]# esxcfg-vswitch -a vSwitch1

4.Create the Service Console portgroup on this new virtual switch:

[root@server root]# esxcfg-vswitch -A "Service Console" vSwitch1

Because there is a space in the name (Service Console), you must enclose it in quotation marks.

Note: To create Service Consoles one at a time, you may need to delete all previous settings. For more information, see Recreating Service Console Networking from the command line (1000266).

5.Up-link vmnic1 to the new virtual switch with the command:

[root@server root]# esxcfg-vswitch -L vmnic1 vSwitch1

6.If you need to assign a VLAN, use the command:

[root@server root]# esxcfg-vswitch -v <VLAN ID> -p "Service Console" vSwitch1

where <VLAN ID> is the VLAN number. A zero here specifies no VLAN.

7.Verify the new virtual switch configuration with the command:

[root@server root]# esxcfg-vswitch -l

The output appears similar to:

Switch Name Num Ports Used Ports Configured Ports Uplinks


vSwitch0 32 3 32 vmnic0

PortGroup Name Internal ID VLAN ID Used Ports Uplinks


Service Console portgroup5 0 1 vmnic0

Switch Name Num Ports Used Ports Configured Ports Uplinks


vSwitch1 64 1 64 vmnic1

PortGroup Name Internal ID VLAN ID Used Ports Uplinks


Service Console portgroup14 0 1 vmnic1
8.Create the vswif (Service Console) interface. For example, run the command:

[root@server root]# esxcfg-vswif -a vswif0 -i 192.168.1.10 -n 255.255.255.0 -p "Service Console"


['Vnic' warning] Generated New Mac address, 00:50:xx:xx:xx:xx for vswif0

Nothing to flush.

9.Verify the configuration with the command:

[root@esx]# esxcfg-vswif -l
Name     Port Group         IP Address      Netmask          Broadcast       Enabled  DHCP
vswif0   Service Console    192.168.1.10    255.255.255.0    192.168.1.255   true     false
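
Once the vswif interface exists, the Service Console usually also needs a default gateway before it is reachable from other subnets. A hedged sketch using standard service console (Linux) commands; the gateway address 192.168.1.1 is hypothetical:

[root@server root]# route add default gw 192.168.1.1
[root@server root]# ping 192.168.1.1

To make the gateway persistent across reboots, set GATEWAY=192.168.1.1 in /etc/sysconfig/network on the service console.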

----------------------------------------------------------------------------------------------------------------------------------------------
46) What is the command to upgrade the file system from VMFS2 to VMFS3?

First, log in to the ESX 3 host as root. Then unload the currently loaded VMFS drivers, for both VMFS2 and VMFS3, by running the commands below.

vmkload_mod -u vmfs2

vmkload_mod -u vmfs3

Next, load the ESX 3 auxiliary file system driver (fsaux). The command below loads this driver with the switch that enables its upgrade mode.

vmkload_mod fsaux fsauxFunction=upgrade

The next step is to perform the upgrade on the VMFS2 volume. Before doing so, make sure that no other hosts are accessing the volume; this is critical, because concurrent access during the upgrade can corrupt the volume.

vmkfstools -T /vmfs/volumes/

Once the upgrade has completed, confirm that the volume is now VMFS3 by running another vmkfstools command:

vmkfstools -P /vmfs/volumes/
You should also confirm that your files are intact by listing the file system from the service console (for example, with "ls -l"). If you have more volumes to upgrade, repeat the steps above for each one. Once all volumes are upgraded, unload the auxiliary driver and reload the normal VMFS drivers, either by rebooting the host or by running the commands below.

vmkload_mod -u fsaux

vmkload_mod vmfs2

vmkload_mod vmfs3

VMFS3 file system sub-versions shipped with each release:

•ESX 3.0.0 is provided with VMFS 3.21 (initial release)
•ESX 3.5.0 is provided with VMFS 3.31
•vSphere (ESX 4.0) is provided with VMFS 3.33
•vSphere (ESX 4.1) is provided with VMFS 3.46
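
As a quick check that a volume reports the expected sub-version, the vmkfstools -P output can be filtered; a hedged sketch with a hypothetical volume label:

vmkfstools -P /vmfs/volumes/Storage2 | grep -i VMFS

On a freshly formatted ESX 4.1 datastore this typically shows VMFS-3.46, while an upgraded VMFS2 volume shows whichever VMFS3 sub-version the upgrade produced.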

-------------------------------------------------------------------------------------------------------------------------------
47) What is RDM and which file systems does it support?

Raw Device Mapping (RDM) is a mapping file, stored on a VMFS datastore, that acts as a proxy for a raw LUN and gives a virtual machine direct access to that LUN. The mapping file itself requires VMFS (RDMs are not supported on NFS datastores), while the guest operating system formats the raw LUN with its own file system (for example NTFS or ext3). RDMs can be created in virtual compatibility mode (which supports snapshots) or physical compatibility mode (which passes SCSI commands through to the device, as needed for clustering and SAN management software).
-------------------------------------------------------------------------------------------------------------------------------
48) What is SRM and how does it work?

VMware vCenter Site Recovery Manager delivers advanced capabilities for disaster recovery
management, non-disruptive testing and automated failover. VMware
vCenter Site Recovery Manager can manage failover from production datacenters to disaster recovery
sites, as well as failover between two sites with active
workloads. Multiple sites can even recover into a single shared recovery site. Site Recovery Manager
can also help with planned datacenter failovers such as
datacenter migrations.

Disaster Recovery Management

. Create and manage recovery plans directly from VMware vCenter Server.
. Discover and display virtual machines protected by storage replication using integrations certified
by storage vendors.
. Extend recovery plans with custom scripts.
. Monitor availability of remote site and alert users of possible site failures.
. Store, view and export results of test and failover execution from VMware vCenter Server.
. Control access to recovery plans with granular role-based access controls.
. Leverage iSCSI, FibreChannel, or NFS-based storage replication solutions.
. Recover multiple sites into a single shared recovery site.
. Take advantage of the latest features and technologies included in VMware vSphere.

Non-Disruptive Testing

. Use storage snapshot capabilities to perform recovery tests without losing replicated data.
. Connect virtual machines to an existing isolated network for testing purposes.
. Automate execution of tests of recovery plans.
. Customize execution of recovery plans for testing scenarios.
. Automate cleanup of testing environments after completing tests.

Automated Failover

. Initiate recovery plan execution from VMware vCenter Server with a single button.
. Automate promotion of replicated datastores for recovery using adapters created by leading storage
vendors for their replication platforms.
. Execute user-defined scripts and pauses during recovery.
. Reconfigure virtual machines’ IP addresses to match network configuration at failover site.
. Manage and monitor execution of recovery plans within VMware vCenter Server.

What’s New in vCenter Site Recovery Manager 4?

. Protect more of your environment with added support for NFS storage replication.
. Set up many-to-one failover using shared recovery sites.
. Leverage new features in vSphere.

-------------------------------------------------------------------------------------------------------------------------------
49) What series of (virtual) hardware is used for a VM’s virtual motherboard/mainboard?

VMware virtual machines present a virtual motherboard based on the Intel 440BX chipset.
-------------------------------------------------------------------------------------------------------------------------------
50) What are the datastore path selection policies, and what options are available for network load balancing?

You can display information about paths by running vicfg-mpath with one of the following options:
. List all devices with their corresponding paths, state of the path, adapter type, and other
information.
vicfg-mpath --list-paths
. Display a short listing of all paths.
vicfg-mpath --list-compact
. List all paths with adapter and device mappings.
vicfg-mpath --list-map

Managing Path Policies with esxcli

For each storage device managed by NMP (not PowerPath), an ESX/ESXi host uses a path selection policy (PSP). VMware supports the following path selection policies by default; if a third-party PSP is installed on the host, its policy also appears in the list.

Table 5-1. Supported Path Policies

VMW_PSP_FIXED
The host always uses the preferred path to the disk when that path is available. If the host cannot access the disk through the preferred path, it tries the alternative paths. If you use the VMW_PSP_FIXED policy, use esxcli nmp fixed to set or get the preferred path.

VMW_PSP_FIXED_AP
Extends the VMW_PSP_FIXED functionality to active-passive and ALUA mode arrays.

VMW_PSP_MRU
The host uses a path to the disk until the path becomes unavailable. When the path becomes unavailable, the host selects one of the alternative paths. The host does not revert back to the original path when that path becomes available again. There is no preferred path setting with the MRU policy. MRU is the default policy for active-passive storage devices and is required for those devices.

VMW_PSP_RR
The host uses an automatic path selection algorithm that rotates through all available paths, implementing load balancing across all available physical paths. Load balancing is the process of spreading server I/O requests across all available host paths, with the goal of optimizing performance in terms of throughput (I/O per second, megabytes per second, or response times).
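
As an illustration of viewing and changing the policy per device on ESX/ESXi 4.x, the esxcli nmp namespace can be used; a hedged sketch (the naa identifier is hypothetical and must be replaced with a real device ID taken from the list output):

esxcli nmp device list
esxcli nmp device setpolicy --device naa.60060160a0b01e001234567890abcdef --psp VMW_PSP_RR

The first command shows each device with its current path selection policy; the second switches the named device to round robin.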

Table 5-2. Path Policy Effects

Most Recently Used
Active/Active array: Administrator action is required to fail back after a path failure.
Active/Passive array: Administrator action is required to fail back after a path failure.

Fixed
Active/Active array: The VMkernel resumes using the preferred path when connectivity is restored.
Active/Passive array: The VMkernel attempts to resume using the preferred path. This can cause path thrashing or failure when another SP now owns the LUN.

Round Robin
Active/Active array: No fail back.
Active/Passive array: The next path in the round robin schedule is selected.

For the network load balancing part of the question, a vSwitch or port group offers these NIC teaming load balancing options: route based on the originating virtual port ID (the default), route based on source MAC hash, route based on IP hash (requires a static EtherChannel/802.3ad configuration on the physical switch), and use explicit failover order. On a vNetwork Distributed Switch in vSphere 4.1, route based on physical NIC load (load-based teaming) is also available.