Homework 9


1. What is Windows Clustering?

A cluster is a group of independent computer systems, referred to as nodes, working together as a unified computing resource. A cluster provides a single name for clients to use and a single administrative interface, and it guarantees that data is consistent across nodes.
Windows Clustering encompasses two different clustering technologies. These technologies
implement the following two types of clusters.

A network load balancing cluster filters and distributes TCP/IP traffic across a range of
nodes, regulating connection load according to administrator-defined port rules.

A failover cluster provides high availability for services, applications, and other
resources through an architecture that maintains a consistent image of the cluster on all
nodes and that allows nodes to transfer resource ownership on demand.

The following are the programming interfaces for the Windows Clustering technologies:

The Network Load Balancing Provider allows developers to create remote administration
and configuration tools as well as customized user interfaces for Network Load
Balancing clusters.

The Failover Cluster APIs allow developers to create cluster-aware applications, implement high availability for new types of resources, and create remote administration and configuration tools (see the sketch below).
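The Failover Cluster APIs are native C interfaces declared in clusapi.h. As a minimal, hedged sketch of what they expose (the buffer size and error handling are simplified, and it assumes a Windows Server node that already belongs to a cluster), the following Python snippet calls them through ctypes to open a handle to the local cluster and read its name:

```python
# Hedged sketch: calling the Failover Cluster API (ClusAPI.dll) via ctypes.
# Assumes a Windows Server node that is already a cluster member.
import ctypes
from ctypes import wintypes

clusapi = ctypes.WinDLL("ClusAPI.dll", use_last_error=True)

# OpenCluster(NULL) opens a connection to the cluster the local node is in.
clusapi.OpenCluster.restype = wintypes.HANDLE
clusapi.OpenCluster.argtypes = [wintypes.LPCWSTR]

hcluster = clusapi.OpenCluster(None)
if not hcluster:
    raise ctypes.WinError(ctypes.get_last_error())

# GetClusterInformation fills a caller-supplied buffer with the cluster name;
# the optional version-info argument is passed as NULL here.
name_len = wintypes.DWORD(256)
name_buf = ctypes.create_unicode_buffer(name_len.value)
rc = clusapi.GetClusterInformation(hcluster, name_buf,
                                   ctypes.byref(name_len), None)
if rc == 0:  # ERROR_SUCCESS
    print("Connected to cluster:", name_buf.value)

clusapi.CloseCluster(hcluster)
```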

Types of Server Clusters


There are three types of server clusters, based on how the cluster systems, called nodes, are
connected to the devices that store the cluster configuration and state data. This data must be
stored in a way that allows each active node to obtain the data even if one or more nodes are
down. The data is stored on a resource called the quorum resource. The data on the quorum
resource includes a set of cluster configuration information plus records (sometimes called
checkpoints) of the most recent changes made to that configuration. A node coming online after
an outage can use the quorum resource as the definitive source for recent changes in the
configuration.
The sections that follow describe the three different types of server clusters:

Single quorum device cluster, also called a standard quorum cluster

Majority node set cluster

Local quorum cluster, also called a single node cluster

Single Quorum Device Cluster


The most widely used cluster type is the single quorum device cluster, also called the standard
quorum cluster. In this type of cluster there are multiple nodes with one or more cluster disk
arrays, also called the cluster storage, and a connection device, that is, a bus. Each disk in the
array is owned and managed by only one server at a time. The disk array also contains the
quorum resource. The following figure illustrates a single quorum device cluster with one cluster
disk array.
[Figure: Single quorum device cluster]

Because single quorum device clusters are the most widely used cluster type, this Technical Reference
focuses on this type of cluster.
Majority Node Set Cluster
Windows Server 2003 supports another type of cluster, the majority node set cluster. In a
majority node set cluster, each node maintains its own copy of the cluster configuration data. The
quorum resource keeps configuration data consistent across the nodes. For this reason, majority
node set clusters can be used for geographically dispersed clusters. Another advantage of
majority node set clusters is that a quorum disk can be taken offline for maintenance and the
cluster as a whole will continue to operate.
The major difference between majority node set clusters and single quorum device clusters is
that single quorum device clusters can operate with just one node, but majority node set clusters
need to have a majority of the cluster nodes available for the server cluster to operate. The
following figure illustrates a majority node set cluster. For the cluster in the figure to continue to
operate, two of the three cluster nodes (a majority) must be available.
[Figure: Majority node set cluster]

This Technical Reference focuses on the single quorum device cluster.


Local Quorum Cluster
A local quorum cluster, also called a single node cluster, has a single node and is often used for
testing. The following figure illustrates a local quorum cluster.
[Figure: Local quorum cluster]

This Technical Reference focuses on the single quorum device cluster, which is explained earlier
in this section.

2. How To Create a Windows Server Failover Cluster Without Shared Storage?


One of the really great things about Windows Server 2012 and Windows Server 2012 R2 is that they remove the requirement for shared storage. Microsoft recommends the use of a cluster shared volume whenever possible, but it is possible to build a cluster without it.
The first step in the process is to install the Failover Clustering feature. To do so, select the Add
Roles and Features command from Server Manager's Tools menu. Next, work your way through
the Add Roles and Features Wizard until you reach the wizard's Features page. Select the
Failover Clustering feature, as shown in Figure 1. If you are prompted to deploy additional
features, click on the Add Features button. Now, click Next, followed by Install to complete the
installation process. You will need to install the Failover Clustering feature on each cluster node.

Figure 1. You must install the Failover Clustering feature onto each cluster node.
Once the Failover Clustering feature has been installed onto each cluster node, you can build
your cluster. To do so, choose the Failover Cluster Manager command from the Server Manager's
Tools menu.
The first thing that you will have to do is to validate your hardware configuration to make sure
that it meets the requirements for building a failover cluster. To do so, click on the Validate
Configuration option, found in the Actions pane. This will cause Windows to launch the Validate
a Configuration Wizard.
Click Next to bypass the wizard's Welcome screen. You must then add the names of the servers
that you want to cluster to the Selected Servers list, as shown in Figure 2.

Figure 2. Specify the names of the servers that you would like to validate.
Click Next, and then tell Windows that you would like to run all tests. Click Next two more
times and the testing process will begin. The length of time that the tests take to complete varies
depending on the number of servers in your cluster and on your hardware's performance. In my
experience the validation tests usually complete within five minutes. It is normal for the
validation report to contain warnings, as shown in Figure 3, but it should not contain any errors.

Figure 3. The validation report should not contain any errors.


When you click Finish, Windows will launch the Create Cluster Wizard. Click Next to bypass
the wizard's Welcome screen and then you will be prompted to enter a name for the cluster. The
cluster name works similarly to a computer name in that it identifies the cluster on the network.
You must enter a unique name and it must adhere to NetBIOS naming conventions. While you
are on this screen, you must also enter an IP address for the cluster, as shown in Figure 4.

Figure 4. Enter a name and an IP address for the cluster.


Click Next and you will see a confirmation screen outlining the cluster configuration. There is a
checkbox on this screen that allows you to add all eligible storage to the cluster. You will need to
deselect this checkbox if you wish to avoid having storage claimed by the cluster.
Click Next and the cluster will be created. This process usually takes less than a minute to
complete. Upon completion, you should see a summary screen indicating that the cluster has
been created successfully. You can see what this screen looks like in Figure 5. Click Finish to
complete the process.

Figure 5. The cluster was created successfully.


Now that the failover cluster has been created, you will see it listed in the Failover Cluster
Manager. The Roles container lists your clustered roles. No roles are clustered by default, so this
container will be empty.

The Nodes container lists the cluster nodes and their status, as shown in Figure 6. This is also
where you would go to add additional nodes to the cluster. As you can see in the figure, the
Actions pane contains an Add Node link.

Figure 6. The Nodes container lists all of the cluster nodes and the status of each node.
Keep in mind that even though the cluster is not using shared storage, it still needs to have some
storage available to it. Otherwise, you won't be able to fail over storage dependent resources
(such as virtual machines). You can add storage by selecting the Disks container and then
clicking on the Add Disks link.
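All of these wizard steps can also be scripted. As a hedged sketch (the node names, cluster name, and IP address below are placeholders, and it must run with administrative rights), the following Python script drives the documented Failover Clustering PowerShell cmdlets, which perform the same feature installation, validation, and cluster creation:

```python
# Hedged sketch: automating the wizard steps via the Failover Clustering
# PowerShell cmdlets. NODES, CLUSTER_NAME and CLUSTER_IP are placeholders.
import subprocess

NODES = ["node1", "node2"]       # hypothetical server names
CLUSTER_NAME = "Cluster1"        # must follow NetBIOS naming conventions
CLUSTER_IP = "192.168.1.100"     # static address for the cluster

def ps(command: str) -> None:
    """Run one PowerShell command and fail loudly on errors."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command],
                   check=True)

# Install the feature on every node (mirrors the Add Roles and Features wizard).
for node in NODES:
    ps(f"Install-WindowsFeature Failover-Clustering "
       f"-IncludeManagementTools -ComputerName {node}")

# Validate the configuration (mirrors the Validate a Configuration wizard).
ps(f"Test-Cluster -Node {','.join(NODES)}")

# Create the cluster; -NoStorage keeps eligible disks from being claimed,
# like deselecting the storage checkbox in the Create Cluster wizard.
ps(f"New-Cluster -Name {CLUSTER_NAME} -Node {','.join(NODES)} "
   f"-StaticAddress {CLUSTER_IP} -NoStorage")
```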
3. What is SDDC?

SDDC is short for software-defined data center. It may also be called a software-defined datacenter or virtual data center. Software-defined data center (SDDC) is the phrase used to refer to a data center where all infrastructure is virtualized and delivered as a service. Control of the data center is fully automated by software, meaning hardware configuration is maintained through intelligent software systems. This is in contrast to traditional data centers, where the infrastructure is typically defined by hardware and devices.
Software-defined data centers are considered by many to be the next step in the evolution of virtualization and cloud computing, as they provide a solution to support both legacy enterprise applications and new cloud computing services.
Core Components of the Software-Defined Data Center
According to Torsten Volk of EMA, there are three core components of the software-defined data center: network virtualization, server virtualization and storage virtualization. A business logic layer is also required to translate application requirements, SLAs, policies and cost considerations. (Source: EMA Blogs; The Software-Defined Data Center: Core Components)

4. What is OVF?
Open Virtualization Format (OVF) is an open standard for packaging and
distributing software and applications for virtual machines (VMs). An OVF package contains
multiple files in a single directory. The directory always contains an Extensible Markup
Language (XML) file called the OVF descriptor, which holds the package's name, hardware
requirements, and references to the other files in the package. In addition, the OVF package
typically contains a network description, a list of virtual hardware, virtual drives, certificate
files, information about the operating system (OS) and, in some cases, a human-readable
description of every information item.
The OVF standard is one of several standards and initiatives supported by the Distributed
Management Task Force (DMTF). VMware, a company that provides virtualization software
for x86 computers, offers a widely used OVF package. Numerous other vendors,
including IBM, Microsoft, JumpBox, VirtualBox, XenServer, AbiCloud, OpenNode Cloud,
SUSE Studio, Morfeo Claudia, and OpenStack, use OVF in their virtualization products.
If you've worked with recent versions of VMware virtual infrastructure, Converter, or
Workstation, you may be familiar with the fact that these products have the native ability to work
with virtual machines in the Open Virtualization Format, or OVF for short. OVF is a
specification governed by the DMTF (Distributed Management Task Force), which to me sounds
a lot like the RFCs that provide standards for protocols and communication across compute
platforms, basically SOPs for how content is delivered on the internet as we know it today.
So if there's one standard, why is it that when I choose to create an OVF (Export OVF Template
in the vSphere Client), I'm prompted to create either an OVF or an OVA? If the OVF is an OVF,
then what's an OVA?

Personally, I've seen both formats, typically when deploying packaged appliances. The answer
is simple: both the OVF and the OVA formats roll up into the specification defined by the
DMTF. The difference between the two is in the presentation and encapsulation. The OVF is a
construct of a few files, all of which are essential to its definition and deployment. The OVA, on
the other hand, is a single file with all of the necessary information encapsulated inside of it.
Think of the OVA as an archive file. The single-file format provides ease of portability. From a
size or bandwidth perspective, there is no advantage to one format over the other, as they
each tend to be the same size when all is said and done.

The DMTF explains the two formats on pages 12 through 13 in the PDF linked above:
An OVF package may be stored as a single file using the TAR format. The extension of that file
shall be .ova (open virtual appliance or application).
An OVF package can be made available as a set of files, for example on a standard Web server.
Do keep in mind that whichever file type you choose to work with, if you plan on hosting the files
on a web server, MIME types will need to be set up for .ovf, .ova, or both, in order for a client
to download them for deployment onto your hypervisor.
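Since an .ova is, per the DMTF text quoted above, simply a TAR archive of the OVF package, it can be inspected with ordinary tooling. Here is a small, hedged Python sketch (the file name appliance.ova is a placeholder, and the descriptor layout is an assumption based on the OVF envelope schema) that lists the package contents and reads the referenced files out of the OVF descriptor:

```python
# Hedged sketch: peek inside an OVA, treating it as a TAR of the OVF package.
import tarfile
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

with tarfile.open("appliance.ova") as ova:          # placeholder file name
    print("Package contents:", ova.getnames())

    # The OVF descriptor is the XML file with the .ovf extension.
    descriptor_name = next(n for n in ova.getnames() if n.endswith(".ovf"))
    descriptor = ova.extractfile(descriptor_name).read()

root = ET.fromstring(descriptor)
# Print the files the descriptor references (disks, manifest, certificates).
for ref in root.findall(f".//{{{OVF_NS}}}References/{{{OVF_NS}}}File"):
    print("References:", ref.get(f"{{{OVF_NS}}}href"))
```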

5. What is a Resource Pool?

A VMware resource pool is the aggregated physical compute hardware -- CPU and
memory, as well as other components -- allocated to virtual machines (VMs) in a VMware
virtual infrastructure. A VMware administrator can choose how much of each physical
resource to allocate to each new VM and allocate portions of these logical resource groups to
various users, add and remove compute resources, or reorganize pools as required.
The VMware resource pool manages and optimizes these physical resources for virtual systems
within a VMware Distributed Resource Scheduler (DRS) cluster. With memory overcommit,
more resources can be allocated to VMs than are physically available. Changes that occur in one
resource pool will not affect other, unrelated resource pools VMware administrators create.
Administrators use VMware vCenter, third-party tools, or command-line interfaces (CLI) like
esxtop to monitor resource pools, gathering detailed CPU and memory statistics. End users
should not make changes to the resource pools.
Citrix and Microsoft also create resource pools in their respective virtualization environments.
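To make the idea concrete, here is a hedged sketch using pyVmomi, VMware's open-source Python SDK. The vCenter address, credentials, cluster name, pool name, and allocation figures are all placeholder assumptions, not values from the text:

```python
# Hedged sketch: carve a child resource pool out of a DRS cluster (pyVmomi).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",            # placeholder
                  user="administrator@vsphere.local",    # placeholder
                  pwd="secret",                          # placeholder
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the DRS cluster by name via a container view.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster1")  # placeholder
view.DestroyView()

def alloc(reservation):
    # Reservation units: MHz for CPU, MB for memory; -1 means no upper limit.
    return vim.ResourceAllocationInfo(
        reservation=reservation, limit=-1, expandableReservation=True,
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.normal))

spec = vim.ResourceConfigSpec(cpuAllocation=alloc(2000),     # 2 GHz
                              memoryAllocation=alloc(4096))  # 4 GB
pool = cluster.resourcePool.CreateResourcePool(name="web-tier", spec=spec)

Disconnect(si)
```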
6. Difference between vSphere 5.5 and 6.0?
Feature                        vSphere 5.5                                vSphere 6.0
Released                       September 2013                             March 2015
Physical CPUs per host         320                                        480
Physical RAM per host          4 TB                                       12 TB
VMs per host                   512                                        1000
vCPU per VM                    64                                         128
vMEM per VM                    1 TB                                       4 TB
VMDK size                      62 TB                                      62 TB
Cluster size                   32 nodes                                   64 nodes
VM hardware version            10                                         11
VMFS version                   5.60                                       5.61
FT max vCPU                    1 vCPU                                     4 vCPU
FT supported disk types        Eager-Zeroed                               Lazy-Zeroed, Eager-Zeroed, Thin Provision
FT supported features          HA, DPM, SRM, VDS                          HA, DPM, SRM, VDS, Hot Configure FT, H/W Virtualization, Snapshot, Paravirtual Devices, Storage Redundancy
Management                     vSphere Web Client, vSphere Client (C#)    vSphere Web Client, vSphere Client (C#)
Authentication management      Single Sign-On 5.5                         Platform Services Controller
vMotion                        Restricted to Datacenter object            Across vCenters, across vSwitches
vMotion network support        L2 network, max. 10 ms RTT                 L3 network, max. 100 ms RTT
Content Library                No                                         Yes
Certificate Authority (VMCA)   No                                         Yes
Virtual Volumes                No                                         Yes
Virtual SAN                    VSAN 5.5                                   VSAN 6.0
All-Flash VSAN                 No                                         Yes
VSAN scale                     32 nodes                                   64 nodes
VSAN fault domains             No                                         Yes
vCenter type                   Windows, Linux (vCSA)                      Windows, Linux (vCSA)
vCenter Linked Mode            Windows only, Microsoft ADAM replication   Windows & vCSA, native replication
vCSA scale (vPostgres)         100 hosts / 3000 VMs                       1000 hosts

7. Purpose of vCenter?
VMware vCenter Server is a data center management server application developed by
VMware Inc. to monitor virtualized environments.
vCenter Server provides centralized management and operation, resource provisioning and
performance evaluation of the virtual machines residing in a distributed virtual data center.
vCenter Server is designed primarily for vSphere, VMware's platform for building
virtualized cloud infrastructures. VMware vCenter Server was previously known as VMware
VirtualCenter. vCenter Server is installed on the primary server of a virtualized data center
and operates as the virtualization or virtual machine manager for that environment. It also
provides data center administrators with a central management console to manage all the
system's virtual machines.
vCenter Server provides statistical information about the resource use of each virtual machine
and provides the ability to scale and adjust compute, memory, storage and other
resource management functions from a central application. It manages the performance of
each virtual machine against specified benchmarks, and optimizes resources wherever
required to provide consistent efficiency throughout the networked virtual architecture.
Besides routine management, vCenter Server also ensures security by defining and monitoring
access control to and from the virtual machines, managing live migration of machines, and
providing interoperability and integration with other Web services and virtual environments.
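As an illustration of that central monitoring role, the following hedged pyVmomi sketch (hostname and credentials are placeholders) connects to a vCenter Server once and reads per-VM CPU and memory statistics for every virtual machine it manages:

```python
# Hedged sketch: pull per-VM quick statistics from one vCenter connection.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",            # placeholder
                  user="administrator@vsphere.local",    # placeholder
                  pwd="secret",                          # placeholder
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# A container view over the whole inventory yields every managed VM.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    stats = vm.summary.quickStats
    print(f"{vm.name}: {stats.overallCpuUsage} MHz CPU, "
          f"{stats.guestMemoryUsage} MB active memory")
view.DestroyView()
Disconnect(si)
```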
8. Templates in VMware?
A VMware template is a perfect, model copy of a virtual machine from which you can clone,
convert or deploy more virtual machines. A VMware template includes the virtual machine's
virtual disks and the settings from its .vmx configuration file, and is managed with permissions.
Templates save time and avoid errors when configuring settings and other choices to create
new Windows or Linux server VMs. They can also be used as long-term, in-place backups of
VMs, and to ensure that consistent VMs are created and deployed across a company. A VMware
template cannot be powered on or edited without first being converted back to a virtual machine (VM).
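As a hedged illustration of deploying from a template with pyVmomi (the template name, target names, and inventory layout below are assumptions), a clone operation looks roughly like this; note that the template itself stays a template and is never powered on:

```python
# Hedged sketch: deploy a new VM from an existing template (pyVmomi).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",            # placeholder
                  user="administrator@vsphere.local",    # placeholder
                  pwd="secret",                          # placeholder
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Templates appear as VirtualMachine objects with config.template set.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
template = next(v for v in view.view
                if v.name == "win-template" and v.config.template)  # placeholder
view.DestroyView()

datacenter = content.rootFolder.childEntity[0]                 # first datacenter
pool = datacenter.hostFolder.childEntity[0].resourcePool       # cluster root pool

# Clone the template into a powered-on VM; the template is left untouched.
spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(pool=pool),
                        powerOn=True, template=False)
task = template.Clone(folder=datacenter.vmFolder, name="web-01", spec=spec)

Disconnect(si)
```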
