Homework 9
A network load balancing cluster filters and distributes TCP/IP traffic across a range of
nodes, regulating connection load according to administrator-defined port rules.
A failover cluster provides high availability for services, applications, and other
resources through an architecture that maintains a consistent image of the cluster on all
nodes and that allows nodes to transfer resource ownership on demand.
The following are the programming interfaces for the Windows Clustering technologies:
The Network Load Balancing Provider allows developers to create remote administration
and configuration tools as well as customized user interfaces for Network Load
Balancing clusters.
Because single quorum device clusters are the most widely used cluster type, this Technical Reference
focuses on this type of cluster.
Majority Node Set Cluster
Windows Server 2003 supports another type of cluster, the majority node set cluster. In a
majority node set cluster, each node maintains its own copy of the cluster configuration data. The
quorum resource keeps configuration data consistent across the nodes. For this reason, majority
node set clusters can be used for geographically dispersed clusters. Another advantage of
majority node set clusters is that a quorum disk can be taken offline for maintenance and the
cluster as a whole will continue to operate.
The major difference between majority node set clusters and single quorum device clusters is
that single quorum device clusters can operate with just one node, but majority node set clusters
need to have a majority of the cluster nodes available for the server cluster to operate. The
following figure illustrates a majority node set cluster. For the cluster in the figure to continue to
operate, two of the three cluster nodes (a majority) must be available.
Majority Node Set Cluster
This Technical Reference focuses on the single quorum device cluster, which is explained earlier
in this section.
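The majority rule described above is easy to state in code. A minimal sketch in Python (the function name is my own, not part of any clustering API):

```python
def has_quorum(total_nodes: int, available_nodes: int) -> bool:
    """A majority node set cluster keeps operating only while more than
    half of its configured nodes are available; a single quorum device
    cluster, by contrast, can run with just one node."""
    return available_nodes > total_nodes // 2

# For the three-node cluster in the figure:
print(has_quorum(3, 2))  # True: a majority is available, the cluster operates
print(has_quorum(3, 1))  # False: the cluster stops
```

This is why a three-node majority node set cluster tolerates the loss of only one node, while a five-node cluster tolerates two.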
Figure 1. You must install the Failover Clustering feature onto each cluster node.
Once the Failover Clustering feature has been installed onto each cluster node, you can build
your cluster. To do so, choose the Failover Cluster Manager command from the Server Manager's
Tools menu.
The first thing that you will have to do is to validate your hardware configuration to make sure
that it meets the requirements for building a failover cluster. To do so, click on the Validate
Configuration option, found in the Actions pane. This will cause Windows to launch the Validate
a Configuration Wizard.
Click Next to bypass the wizard's Welcome screen. You must then add the names of the servers
that you want to cluster to the Selected Servers list, as shown in Figure 2.
Figure 2. Specify the names of the servers that you would like to validate.
Click Next, and then tell Windows that you would like to run all tests. Click Next two more
times and the testing process will begin. The length of time that the tests take to complete varies
depending on the number of servers in your cluster and on your hardware's performance. In my
experience the validation tests usually complete within five minutes. It is normal for the
validation report to contain warnings, as shown in Figure 3, but it should not contain any errors.
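The pass/fail rule just described (warnings are normal, errors are not) can be sketched as a simple check over report entries; the entry format here is hypothetical, not the wizard's actual output format:

```python
def validation_passed(report_entries):
    """Return True when a cluster validation report contains no errors.
    Each entry is a (severity, message) tuple; 'warning' entries are
    acceptable, 'error' entries mean the hardware is not cluster-ready."""
    return all(severity != "error" for severity, _ in report_entries)

report = [
    ("warning", "Only one network path exists between the nodes"),
    ("success", "Validated disk arbitration"),
]
print(validation_passed(report))  # True: warnings alone do not fail validation
```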
The Nodes container lists the cluster nodes and their status, as shown in Figure 6. This is also
where you would go to add additional nodes to the cluster. As you can see in the figure, the
Actions pane contains an Add Node link.
Figure 6. The Nodes container lists all of the cluster nodes and the status of each node.
Keep in mind that even though the cluster is not using shared storage, it still needs to have some
storage available to it. Otherwise, you won't be able to fail over storage dependent resources
(such as virtual machines). You can add storage by selecting the Disks container and then
clicking on the Add Disks link.
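The storage-dependency rule above can be illustrated with a small check; the names are illustrative only, not a real clustering API:

```python
def can_fail_over(resource_dependencies, cluster_storage):
    """A resource (such as a virtual machine) can fail over only if every
    storage unit it depends on is available to the cluster."""
    return all(dep in cluster_storage for dep in resource_dependencies)

cluster_disks = {"Cluster Disk 1"}
print(can_fail_over({"Cluster Disk 1"}, cluster_disks))  # True
print(can_fail_over({"Cluster Disk 2"}, cluster_disks))  # False: add the disk first
```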
3. What is SDDC?
SDDC is short for software-defined data center (also written software-defined datacenter, or
called a virtual data center). Software-defined data center (SDDC) is the phrase used to refer to
a data center where all infrastructure is virtualized and delivered as a service. Control of the data
center is fully automated by software, meaning hardware configuration is maintained through
intelligent software systems. This is in contrast to traditional data centers, where the infrastructure
is typically defined by hardware and devices.
Software-defined data centers are considered by many to be the next step in the evolution of
virtualization and cloud computing, as they provide a solution to support both legacy enterprise
applications and new cloud computing services.
Core Components of the Software-Defined Data Center
According to Torsten Volk, EMA, there are three core components of the software-defined data
center: network virtualization, server virtualization and storage virtualization. A business logic
layer is also required to translate application requirements, SLAs, policies and cost
considerations. (Source: EMA Blogs; The Software-Defined Data Center: Core Components)
4. What is OVF?
Open Virtualization Format (OVF) is an open standard for packaging and
distributing software and applications for virtual machines (VMs). An OVF package contains
multiple files in a single directory. The directory always contains an Extensible Markup
Language (XML) file called the OVF descriptor, with the name, hardware requirements, and
references to other files in the package. In addition, the OVF package typically contains
a network description, a list of virtual hardware, virtual drives, certificate files, information about
the operating system (OS) and, in some cases, a human-readable description of every information
item.
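As a sketch of that layout, here is a deliberately simplified descriptor being read with Python's standard XML parser. A real OVF descriptor uses the DMTF envelope schema with XML namespaces; the element names and file names below are stand-ins:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for an OVF descriptor: the real format is an XML
# "Envelope" whose References section points at the other files
# (disk images, manifests, certificates) in the package directory.
descriptor = """
<Envelope>
  <References>
    <File id="disk1" href="appliance-disk1.vmdk"/>
    <File id="cert" href="appliance.cert"/>
  </References>
  <VirtualSystem id="appliance">
    <Name>Sample Appliance</Name>
  </VirtualSystem>
</Envelope>
"""

root = ET.fromstring(descriptor)
referenced = [f.get("href") for f in root.iter("File")]
print(referenced)  # ['appliance-disk1.vmdk', 'appliance.cert']
```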
The OVF standard is one of several standards and initiatives supported by the Distributed
Management Task Force (DMTF). VMware, a company that provides virtualization software
for x86 computers, offers a widely used OVF package. Numerous other vendors,
including IBM, Microsoft, JumpBox, VirtualBox, XenServer, AbiCloud, OpenNode Cloud,
SUSE Studio, Morfeo Claudia, and OpenStack, use OVF in their virtualization products.
If you've worked with recent versions of VMware virtual infrastructure, Converter, or
Workstation, you may be familiar with the fact that these products have the native ability to work
with virtual machines in the Open Virtualization Format, or OVF for short. OVF is a
specification governed by the DMTF (Distributed Management Task Force), which to me sounds
a lot like the RFCs that provide standards for protocols and communication across compute
platforms: basically SOPs for how content is delivered on the internet as we know it today.
So if there's one standard, why is it that when I choose to create an OVF (Export OVF Template
in the vSphere Client), I'm prompted to create either an OVF or an OVA? If the OVF is an OVF,
then what's an OVA?
Personally, I've seen both formats, typically when deploying packaged appliances. The answer
is simple: both the OVF and the OVA formats roll up into the specification defined by the
DMTF. The difference between the two is in the presentation and encapsulation. The OVF is a
construct of a few files, all of which are essential to its definition and deployment. The OVA, on
the other hand, is a single file with all of the necessary information encapsulated inside of it.
Think of the OVA as an archive file. The single-file format provides ease in portability. From a
size or bandwidth perspective, there is no advantage to one format over the other, as they
each tend to be the same size when all is said and done.
The DMTF explains the two formats on pages 12 through 13 in the PDF linked above:
An OVF package may be stored as a single file using the TAR format. The extension of that file
shall be .ova (open virtual appliance or application).
An OVF package can be made available as a set of files, for example on a standard Web server.
Do keep in mind that whichever file type you choose to work with, if you plan on hosting them
on a web server, MIME types will need to be set up for .ovf, .ova, or both in order for a client
to download them for deployment onto your hypervisor.
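The "an OVA is the OVF package stored as a TAR" point from the quote above can be demonstrated with Python's standard tarfile module; the file names and contents here are made up:

```python
import io
import tarfile

# Build a toy OVF package in memory, then pack it into a single .ova,
# which per the DMTF quote above is simply the package stored as a TAR.
package = {
    "appliance.ovf": b"<Envelope/>",       # descriptor (stand-in content)
    "appliance-disk1.vmdk": b"\x00" * 16,  # disk image (stand-in content)
}

ova = io.BytesIO()
with tarfile.open(fileobj=ova, mode="w") as tar:
    for name, data in package.items():
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Reading the .ova back shows the same set of files, just encapsulated.
ova.seek(0)
with tarfile.open(fileobj=ova, mode="r") as tar:
    names = tar.getnames()
print(names)  # ['appliance.ovf', 'appliance-disk1.vmdk']
```

This also illustrates why the two formats end up roughly the same size: the OVA adds only TAR framing around the same files.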
5. What is a VMware resource pool?
A VMware resource pool is the aggregated physical compute hardware -- CPU and
memory, as well as other components -- allocated to virtual machines (VMs) in a VMware
virtual infrastructure. A VMware administrator can choose how much of each physical
resource to allocate to each new VM and allocate portions of these logical resource groups to
various users, add and remove compute resources, or reorganize pools as required.
The VMware resource pool manages and optimizes these physical resources for virtual systems
within a VMware Distributed Resource Scheduler (DRS) cluster. With memory overcommit,
more resources can be allocated to VMs than are physically available. Changes that occur in one
resource pool will not affect other, unrelated resource pools VMware administrators create.
Administrators use VMware vCenter, third-party tools, or command-line interfaces (CLI) like
esxtop to monitor resource pools, gathering detailed CPU and memory statistics. End users
should not make changes to the resource pools.
Citrix and Microsoft also create resource pools in their respective virtualization environments.
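A toy model of pool allocation with memory overcommit, as described above; the class, its names, and the 1.5x overcommit ceiling are my own illustration, not VMware's implementation:

```python
class ResourcePool:
    """Toy resource pool: CPU is hard-capped, while memory may be
    overcommitted (more allocated to VMs than is physically present)."""

    def __init__(self, cpu_mhz, mem_gb, mem_overcommit=1.5):
        self.cpu_capacity = cpu_mhz
        self.mem_capacity = mem_gb
        self.mem_limit = mem_gb * mem_overcommit  # overcommit ceiling
        self.cpu_used = 0
        self.mem_used = 0

    def allocate(self, cpu_mhz, mem_gb):
        """Reserve resources for a new VM; refuse if either cap is hit."""
        if self.cpu_used + cpu_mhz > self.cpu_capacity:
            return False
        if self.mem_used + mem_gb > self.mem_limit:
            return False
        self.cpu_used += cpu_mhz
        self.mem_used += mem_gb
        return True

pool = ResourcePool(cpu_mhz=10000, mem_gb=64)
print(pool.allocate(2000, 48))  # True
print(pool.allocate(2000, 40))  # True: 88 GB granted against 64 GB physical
print(pool.allocate(2000, 20))  # False: would exceed the 96 GB ceiling
```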
6. Difference between vSphere 5.5 and 6.0?

Feature                      | vSphere 5.5                                | vSphere 6.0
Released                     | September 2013                             | March 2015
Logical CPUs per Host        | 320                                        | 480
RAM per Host                 | 4 TB                                       | 12 TB
VMs per Host                 | 512                                        | 1000
vCPU per VM                  | 64                                         | 128
vMEM per VM                  | 1 TB                                       | 4 TB
VMDK Size                    | 62 TB                                      | 62 TB
Cluster Size                 | 32                                         | 64
FT Max vCPU                  | 1 vCPU                                     | 4 vCPU
FT Supported Disk Types      | Eager-Zeroed                               | Eager-Zeroed, Lazy-Zeroed, Thin Provision
FT Supported Features        | --                                         | Virtualization, Snapshot, Paravirtual Devices, Storage Redundancy
VM Hardware Version          | 10                                         | 11
VMFS Version                 | 5.60                                       | 5.61
vCenter Services             | Management, Authentication                 | Management (authentication via Platform Services Controller)
vMotion                      | Restricted to Datacenter object            | Cross-vCenter vMotion supported
vMotion Network Support      | L2 Network, max. 10 ms RTT                 | L3 Network, max. 100 ms RTT
Content Library              | No                                         | Yes
Certificate Authority (VMCA) | No                                         | Yes
Virtual Volumes              | No                                         | Yes
Virtual SAN                  | VSAN 5.5                                   | VSAN 6.0
All-Flash VSAN               | No                                         | Yes
VSAN Scale                   | 32 Nodes                                   | 64 Nodes
VSAN Fault Domains           | No                                         | Yes
vCenter Type                 | Windows, Linux (vCSA)                      | Windows, Linux (vCSA)
Linked Mode                  | Windows only (Microsoft ADAM Replication)  | Enhanced Linked Mode (PSC replication)
vCSA Scale                   | 100 Hosts / 3000 VMs                       | 1000 Hosts / 10,000 VMs
7. Purpose of vCenter?
VMware vCenter Server is a data center management server application developed by
VMware Inc. to monitor virtualized environments.
vCenter Server provides centralized management and operation, resource provisioning and
performance evaluation of virtual machines residing on a distributed virtual data center.
VMware vCenter Server is designed primarily for vSphere, VMware's platform for building
virtualized cloud infrastructures.