DC Interview May 2024

Server: A server is a computer program or device that provides a service to another computer program and its user, also known as the client. In a data center, the physical computer that a server program runs on is also frequently referred to as a server. That machine might be a dedicated server or it might be used for other purposes.
RAID (redundant array of independent disks): A way of storing the same data in different places on multiple hard disks or solid-state drives (SSDs) to protect data in the case of a drive failure.

RAID Levels: RAID configurations come in different versions called levels. The original paper that coined the term and described the RAID setup enumerated several RAID levels, and these numbered schemes let IT professionals easily differentiate RAID versions. RAID levels are commonly categorized into three groups:

 Standard RAID levels
 Nested RAID levels
 Non-standard RAID levels

Standard RAID Levels:

 RAID 0: RAID 0 simply entails merging multiple disks into a single volume. This boosts speed, since reads and writes are spread across multiple disks simultaneously; a single file can then use the speed and capacity of the entire drive.

The demerit of RAID 0 is that it lacks redundancy: if a single disk is lost, all data in the array is lost. Using RAID 0 in a server environment is therefore not advisable. Still, it can be used for purposes where speed is vital and data loss doesn't cause significant havoc, such as caching.
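
To make the striping idea concrete, here is a small illustrative Python sketch (not how a real RAID driver works) that distributes fixed-size chunks of a byte stream round-robin across two simulated disks; the chunk size and data are made up for the example:

    # Round-robin striping of a byte stream across N simulated "disks",
    # the way RAID 0 distributes blocks. Chunk size is illustrative only.
    CHUNK = 4  # stripe unit in bytes (real arrays use e.g. 64 KiB)

    def stripe(data: bytes, n_disks: int):
        disks = [bytearray() for _ in range(n_disks)]
        for i in range(0, len(data), CHUNK):
            disks[(i // CHUNK) % n_disks] += data[i:i + CHUNK]
        return disks

    print(stripe(b"ABCDEFGHIJKLMNOP", 2))
    # -> [bytearray(b'ABCDIJKL'), bytearray(b'EFGHMNOP')]
    # Losing either "disk" loses half of every file: RAID 0 has no redundancy.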

 RAID 1: RAID 1 uses mirroring. The most common RAID 1 setup is a pair of similar disks that hold identical copies of the data, replicated across all drives in the array.

The primary objective of RAID 1 is redundancy. If one drive is lost, the remaining drive keeps the array running, and a faulty drive can be replaced without any downtime. RAID 1 also provides better read performance, since data can be read from any of the drives in the array. The downsides are a slightly higher write latency, because data must be written to every drive in the array, and the fact that only the capacity of a single drive is usable.

 RAID 2: RAID 2 is rarely used in practice. It stripes data at the bit level and uses a Hamming code for error correction. The disks in RAID 2 are synchronized by the controller, which causes them to spin at corresponding angles so that they reach index points at the same time. Therefore, RAID 2 cannot efficiently handle multiple requests at the same time. However, depending on the rate of the Hamming code, many spindles operate in parallel to transfer data simultaneously, so very high data transfer rates are feasible.

Since all modern hard disk drives implement internal error correction, the complexity of an external Hamming code offers little advantage over simple parity. For this reason, RAID 2 has been infrequently implemented, and it is the only standard RAID level that is currently unused.
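
For illustration, the sketch below implements a Hamming(7,4) code in Python, the same single-error-correcting idea RAID 2 applied with one bit per spindle; this is a simplification, and real RAID 2 implementations used other code rates:

    # Hamming(7,4): 4 data bits protected by 3 parity bits; any single
    # flipped bit (one failed "disk" in the RAID 2 picture) is corrected.
    def encode(d1, d2, d3, d4):
        p1 = d1 ^ d2 ^ d4              # covers codeword positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4              # covers positions 2,3,6,7
        p4 = d2 ^ d3 ^ d4              # covers positions 4,5,6,7
        return [p1, p2, d1, p4, d2, d3, d4]

    def correct(c):
        s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
             + 2 * (c[1] ^ c[2] ^ c[5] ^ c[6])
             + 4 * (c[3] ^ c[4] ^ c[5] ^ c[6]))   # syndrome = error position
        if s:
            c[s - 1] ^= 1                          # flip the bad bit back
        return [c[2], c[4], c[5], c[6]]            # recovered data bits

    word = encode(1, 0, 1, 1)
    word[4] ^= 1                                   # simulate one corrupted bit
    print(correct(word))                           # -> [1, 0, 1, 1]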

 RAID 3: RAID 3 entails byte-level striping with a dedicated parity disk. One characteristic of RAID 3 is that it cannot effectively service multiple requests simultaneously. The reason is that any single block of data is spread across all members of the set and occupies the same physical location on each disk, so any input/output operation requires activity on every disk, as well as synchronized spindles.

For these reasons, RAID 3 is suitable for applications that require the highest transfer rates in long sequential reads and writes. It performs poorly for applications that make small reads and writes from random disk locations.

 RAID 4: RAID 4 entails block-level striping with a dedicated parity disk. The layout of RAID 4 provides good random-read performance, but random-write performance is low because all parity data must be written to a single disk. This can be mitigated if the filesystem is RAID-4-aware and compensates for it.

RAID 4 has an advantage in that it can be quickly extended online, without parity recomputation, as long as the newly added disks are filled with 0-bytes.

 RAID 5 and 6: RAID 5 and RAID 6 use similar techniques. RAID 5 requires at least three drives, while RAID 6 requires at least four. These levels incorporate the idea of RAID 0 and stripe data across multiple drives to boost performance.
However, they also add redundancy by distributing parity information across the disks. In essence, RAID 5 can lose one disk and maintain operations without interruption; RAID 6 can lose two disks and still maintain operations and data without any hitches. RAID 5 and 6 provide better read performance, but write performance depends on the RAID controller used.

RAID 5 and 6 benefit from a dedicated hardware controller because of the need to compute the parity data and write it across the disks. Hence, RAID 5 and 6 are suitable options for file servers, standard web servers, and other systems where most of the transactions are reads.
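
The parity idea behind RAID 5 can be shown in a few lines of Python: the parity block is the XOR of the data blocks in a stripe, so any single missing block can be rebuilt from the survivors. (RAID 6 adds a second, differently computed parity block, typically via Galois-field arithmetic, which is omitted here.)

    # XOR parity as used by RAID 5: parity = D1 ^ D2 ^ D3, so any one
    # lost block equals the XOR of the remaining blocks plus parity.
    from functools import reduce

    def xor_blocks(blocks):
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three disks
    parity = xor_blocks(stripe)            # parity block (rotated across disks in real RAID 5)

    rebuilt = xor_blocks([stripe[0], stripe[2], parity])  # disk 2 failed
    assert rebuilt == stripe[1]            # b'BBBB' recovered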

Nested RAID Levels: Nested RAID levels are obtained from the combination of standard RAID levels. Some examples of nested
RAID levels are:

 RAID 10 (RAID 1+0): Combining RAID 1 and RAID 0 produces RAID 10. RAID 10 is more expensive than RAID 1 but also offers better performance. Data in RAID 10 is mirrored, and the mirrors are striped.

 RAID 03 (RAID 0+3): RAID 0+3, sometimes also labeled RAID 53 or RAID 5+3, applies RAID 0's striping across virtual disk blocks that are themselves RAID 3 arrays. This produces higher performance than RAID 3, but at a higher cost.

Non-Standard RAID levels: Non-Standard RAID levels differ from standard RAID levels and are usually developed by companies
primarily for exclusive use. Some examples of non-standard RAID levels are:

 RAID 7: RAID 7 is a non-standard RAID level derived from RAID 3 and RAID 4. RAID 7 adds caching via a high-speed bus, a real-time embedded operating system acting as a controller, and other characteristics of a standalone computer.

 Adaptive RAID: Adaptive RAID allows the RAID controller to determine how the parity on disks will be stored. Adaptive
RAID chooses between RAID 3 and RAID 5.

RAID and Data Backup and Recovery: RAID is often used in conjunction with data backup and recovery strategies to help protect
against data loss.

RAID arrays can provide redundancy and fault tolerance, which means that if one or more hard drives fail, data can still be accessed
and recovered from the remaining disks in the array. However, RAID is not a replacement for a proper backup strategy. While RAID
can protect against hardware failures, it cannot protect against data loss due to software issues, human error, or other factors.

It’s important to have a separate backup strategy in place that includes regular backups of important data to an external location or
cloud-based storage. This ensures that if data is lost or corrupted for any reason, it can be easily restored from the backup.

Implementations
The distribution of data across multiple drives can be managed either by dedicated computer hardware or by software. A software
solution may be part of the operating system, part of the firmware and drivers supplied with a standard drive controller (so-called
"hardware-assisted software RAID"), or it may reside entirely within the hardware RAID controller.
Hardware-based

Hardware RAID controllers can be configured through card BIOS or Option ROM before an operating system is booted; after the operating system is booted, proprietary configuration utilities are available from the manufacturer of each controller. Unlike network interface controllers for Ethernet, which can usually be configured and serviced entirely through common operating system paradigms like ifconfig in Unix, without a need for any third-party tools, each manufacturer of each RAID controller usually provides its own proprietary software tooling for each operating system it chooses to support, resulting in vendor lock-in and contributing to reliability issues.[33]
For example, in FreeBSD, in order to access the configuration of Adaptec RAID controllers, users are required to enable Linux
compatibility layer, and use the Linux tooling from Adaptec,[34] potentially compromising the stability, reliability and security of their
setup, especially when taking the long-term view.[33]
Some other operating systems have implemented their own generic frameworks for interfacing with any RAID controller, and provide
tools for monitoring RAID volume status, as well as facilitation of drive identification through LED blinking, alarm management
and hot spare disk designations from within the operating system without having to reboot into card BIOS. For example, this was the
approach taken by OpenBSD in 2005 with its bio(4) pseudo-device and the bioctl utility, which provide volume status, and allow
LED/alarm/hotspare control, as well as the sensors (including the drive sensor) for health monitoring;[35] this approach has
subsequently been adopted and extended by NetBSD in 2007 as well.[36]
Software-based
Software RAID implementations are provided by many modern operating systems. Software RAID can be implemented as:

 A layer that abstracts multiple devices, thereby providing a single virtual device (such as Linux kernel's md and OpenBSD's
softraid)
 A more generic logical volume manager (provided with most server-class operating systems, e.g. Veritas Volume Manager or LVM)
 A component of the file system (such as ZFS, Spectrum Scale or Btrfs)
 A layer that sits above any file system and provides parity protection to user data (such as RAID-F)[37]
Some advanced file systems are designed to organize data across multiple storage devices directly, without needing the help of a
third-party logical volume manager:

 ZFS supports the equivalents of RAID 0, RAID 1, RAID 5 (RAID-Z1) single-parity, RAID 6 (RAID-Z2) double-parity, and a triple-
parity version (RAID-Z3) also referred to as RAID 7.[38] As it always stripes over top-level vdevs, it supports equivalents of the
1+0, 5+0, and 6+0 nested RAID levels (as well as striped triple-parity sets) but not other nested combinations. ZFS is the native
file system on Solaris and illumos, and is also available on FreeBSD and Linux. Open-source ZFS implementations are actively
developed under the OpenZFS umbrella project.[39][40][41][42][43]
 Spectrum Scale, initially developed by IBM for media streaming and scalable analytics, supports declustered RAID protection
schemes up to n+3. A particularity is the dynamic rebuilding priority which runs with low impact in the background until a data
chunk hits n+0 redundancy, in which case this chunk is quickly rebuilt to at least n+1. On top, Spectrum Scale supports metro-
distance RAID 1.[44]
 Btrfs supports RAID 0, RAID 1 and RAID 10 (RAID 5 and 6 are under development).[45][46]
 XFS was originally designed to provide an integrated volume manager that supports concatenating, mirroring and striping of multiple physical storage devices.[47] However, the implementation of XFS in the Linux kernel lacks the integrated volume manager.[48]

Many operating systems provide RAID implementations, including the following:

 Hewlett-Packard's OpenVMS operating system supports RAID 1. The mirrored disks, called a "shadow set", can be in different
locations to assist in disaster recovery.[49]
 Apple's macOS and macOS Server support RAID 0, RAID 1, and RAID 1+0.[50][51]
 FreeBSD supports RAID 0, RAID 1, RAID 3, and RAID 5, and all nestings via GEOM modules and ccd.[52][53][54]
 Linux's md supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and all nestings.[55] Certain reshaping/resizing/expanding
operations are also supported.[56]
 Microsoft Windows supports RAID 0, RAID 1, and RAID 5 using various software implementations. Logical Disk Manager,
introduced with Windows 2000, allows for the creation of RAID 0, RAID 1, and RAID 5 volumes by using dynamic disks, but this
was limited only to professional and server editions of Windows until the release of Windows 8.[57][58] Windows XP can be
modified to unlock support for RAID 0, 1, and 5.[59] Windows 8 and Windows Server 2012 introduced a RAID-like feature known
as Storage Spaces, which also allows users to specify mirroring, parity, or no redundancy on a folder-by-folder basis. These
options are similar to RAID 1 and RAID 5, but are implemented at a higher abstraction level.[60]
 NetBSD supports RAID 0, 1, 4, and 5 via its software implementation, named RAIDframe.[61]
 OpenBSD supports RAID 0, 1 and 5 via its software implementation, named softraid.[62]
If a boot drive fails, the system has to be sophisticated enough to be able to boot from the remaining drive or drives. For instance,
consider a computer whose disk is configured as RAID 1 (mirrored drives); if the first drive in the array fails, then a first-stage boot
loader might not be sophisticated enough to attempt loading the second-stage boot loader from the second drive as a fallback. The
second-stage boot loader for FreeBSD is capable of loading a kernel from such an array.[63]
Firmware- and driver-based
[Figure: A SATA 3.0 controller that provides RAID functionality through proprietary firmware and drivers]


Software-implemented RAID is not always compatible with the system's boot process, and it is generally impractical for desktop
versions of Windows. However, hardware RAID controllers are expensive and proprietary. To fill this gap, inexpensive "RAID
controllers" were introduced that do not contain a dedicated RAID controller chip, but simply a standard drive controller chip, or the
chipset built-in RAID function, with proprietary firmware and drivers. During early bootup, the RAID is implemented by the firmware
and, once the operating system has been more completely loaded, the drivers take over control. Consequently, such controllers
may not work when driver support is not available for the host operating system.[64] An example is Intel Rapid Storage Technology,
implemented on many consumer-level motherboards.[65][66]
Because some minimal hardware support is involved, this implementation is also called "hardware-assisted software RAID",[67][68][69] "hybrid model" RAID,[69] or even "fake RAID".[70] If RAID 5 is supported, the hardware may provide a hardware XOR accelerator.
An advantage of this model over the pure software RAID is that—if using a redundancy mode—the boot drive is protected from
failure (due to the firmware) during the boot process even before the operating system's drivers take over.
Active Directory Domain Services: AD DS is the main service in AD, which is a crucial identity and access management (IAM) solution within the Windows Server operating system (OS) environment. AD DS stores and manages information about users, services, and devices connected to the network in a tiered structure. AD DS allows IT teams to streamline IAM services by serving as a centralized point of administration for all the activities on the network.

The servers that host AD DS are domain controllers (DCs). An organization can have multiple DCs, with each one storing a copy of the AD DS for the entire domain. Most organizations use AD DS to manage on-prem IAM in Windows environments. However, you can also replicate it in Azure if you're hosting your applications partly on-prem and partly in Azure.

DNS: The Domain Name System (DNS) is the phonebook of the Internet. Humans access information online through domain
names, like nytimes.com or espn.com. Web browsers interact through Internet Protocol (IP) addresses. DNS translates domain
names to IP addresses so browsers can load Internet resources.
Each device connected to the Internet has a unique IP address which other machines use to find the device. DNS servers eliminate
the need for humans to memorize IP addresses such as 192.168.1.1 (in IPv4), or more complex newer alphanumeric IP addresses
such as 2400:cb00:2048:1::c629:d7a2 (in IPv6).

How does DNS work?


The process of DNS resolution involves converting a hostname (such as www.example.com) into a computer-friendly IP address
(such as 192.168.1.1). An IP address is given to each device on the Internet, and that address is necessary to find the appropriate
Internet device - like a street address is used to find a particular home. When a user wants to load a webpage, a translation must
occur between what a user types into their web browser (example.com) and the machine-friendly address necessary to locate the
example.com webpage.
In order to understand the process behind DNS resolution, it's important to learn about the different hardware components a DNS query must pass through. For the web browser, the DNS lookup occurs "behind the scenes" and requires no interaction from
the user’s computer apart from the initial request.
There are 4 DNS servers involved in loading a webpage:
 DNS recursor - The recursor can be thought of as a librarian who is asked to go find a particular book somewhere in a
library. The DNS recursor is a server designed to receive queries from client machines through applications such as
web browsers. Typically the recursor is then responsible for making additional requests in order to satisfy the client’s
DNS query.
 Root nameserver - The root server is the first step in translating (resolving) human readable host names into IP
addresses. It can be thought of like an index in a library that points to different racks of books - typically it serves as a
reference to other more specific locations.
 TLD nameserver - The top level domain server (TLD) can be thought of as a specific rack of books in a library. This
nameserver is the next step in the search for a specific IP address, and it hosts the last portion of a hostname (In
example.com, the TLD server is “com”).
 Authoritative nameserver - This final nameserver can be thought of as a dictionary on a rack of books, in which a
specific name can be translated into its definition. The authoritative nameserver is the last stop in the nameserver query.
If the authoritative name server has access to the requested record, it will return the IP address for the requested
hostname back to the DNS Recursor (the librarian) that made the initial request.
What's the difference between an authoritative DNS server and a recursive DNS resolver?
Both concepts refer to servers (groups of servers) that are integral to the DNS infrastructure, but each performs a different role and
lives in different locations inside the pipeline of a DNS query. One way to think about the difference is the recursive resolver is at the
beginning of the DNS query and the authoritative nameserver is at the end.

Recursive DNS resolver: The recursive resolver is the computer that responds to a recursive request from a client and takes the time to track down the DNS record. It does this by making a series of requests until it reaches the authoritative DNS nameserver for the requested record (or times out or returns an error if no record is found). Luckily, recursive DNS resolvers do not always need to make multiple requests in order to track down the records needed to respond to a client; caching is a data persistence process that helps short-circuit the necessary requests by serving the requested resource record earlier in the DNS lookup.
Authoritative DNS server: Put simply, an authoritative DNS server is a server that actually holds, and is responsible for, DNS
resource records. This is the server at the bottom of the DNS lookup chain that will respond with the queried resource record,
ultimately allowing the web browser making the request to reach the IP address needed to access a website or other web
resources. An authoritative nameserver can satisfy queries from its own data without needing to query another source, as it is the
final source of truth for certain DNS records.
It’s worth mentioning that in instances where the query is for a subdomain such as foo.example.com or blog.cloudflare.com, an
additional nameserver will be added to the sequence after the authoritative nameserver, which is responsible for storing the
subdomain’s CNAME record.

There is a key difference between many DNS services and the one that Cloudflare provides. Different DNS recursive resolvers such
as Google DNS, OpenDNS, and providers like Comcast all maintain data center installations of DNS recursive resolvers. These
resolvers allow for quick and easy queries through optimized clusters of DNS-optimized computer systems, but they are
fundamentally different than the nameservers hosted by Cloudflare.
Cloudflare maintains infrastructure-level nameservers that are integral to the functioning of the Internet. One key example is the F-root server network, which Cloudflare is partially responsible for hosting. The F-root is one of the root-level DNS nameserver
infrastructure components responsible for the billions of Internet requests per day. Our Anycast network puts us in a unique position
to handle large volumes of DNS traffic without service interruption.
What are the steps in a DNS lookup?
For most situations, DNS is concerned with a domain name being translated into the appropriate IP address. To learn how this
process works, it helps to follow the path of a DNS lookup as it travels from a web browser, through the DNS lookup process, and
back again. Let's take a look at the steps.
Note: Often DNS lookup information will be cached either locally inside the querying computer or remotely in the DNS infrastructure.
There are typically 8 steps in a DNS lookup. When DNS information is cached, steps are skipped from the DNS lookup process
which makes it quicker. The example below outlines all 8 steps when nothing is cached.
The 8 steps in a DNS lookup:
1. A user types ‘example.com’ into a web browser and the query travels into the Internet and is received by a DNS
recursive resolver.
2. The resolver then queries a DNS root nameserver (.).
3. The root server then responds to the resolver with the address of a Top Level Domain (TLD) DNS server (such
as .com or .net), which stores the information for its domains. When searching for example.com, our request is pointed
toward the .com TLD.
4. The resolver then makes a request to the .com TLD.
5. The TLD server then responds with the IP address of the domain’s nameserver, example.com.
6. Lastly, the recursive resolver sends a query to the domain’s nameserver.
7. The IP address for example.com is then returned to the resolver from the nameserver.
8. The DNS resolver then responds to the web browser with the IP address of the domain requested initially.
Once the 8 steps of the DNS lookup have returned the IP address for example.com, the browser is able to make the request
for the web page:
9. The browser makes an HTTP request to the IP address.
10. The server at that IP returns the webpage to be rendered in the browser.
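
From a program's point of view, all of these steps (or a cache hit) hide behind a single stub-resolver call. Here is a minimal Python sketch using only the standard library, with example.com as the example host (the address returned may differ over time):

    import socket
    import urllib.request

    ip = socket.gethostbyname("example.com")   # steps 1-8: DNS resolution
    print(ip)

    page = urllib.request.urlopen("http://example.com/").read()  # steps 9-10
    print(len(page), "bytes of HTML")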
What is a DNS resolver?

The DNS resolver is the first stop in the DNS lookup, and it is responsible for dealing with the client that made the initial request.
The resolver starts the sequence of queries that ultimately leads to a URL being translated into the necessary IP address.

Note: A typical uncached DNS lookup will involve both recursive and iterative queries.

It's important to differentiate between a recursive DNS query and a recursive DNS resolver. The query refers to the request made to
a DNS resolver requiring the resolution of the query. A DNS recursive resolver is the computer that accepts a recursive query and
processes the response by making the necessary requests.

What are the types of DNS queries?

In a typical DNS lookup three types of queries occur. By using a combination of these queries, an optimized process for DNS
resolution can result in a reduction of distance traveled. In an ideal situation cached record data will be available, allowing a DNS
name server to return a non-recursive query.

3 types of DNS queries:


1. Recursive query - In a recursive query, a DNS client requires that a DNS server (typically a DNS recursive resolver)
will respond to the client with either the requested resource record or an error message if the resolver can't find the
record.
2. Iterative query - In this situation the DNS client will allow a DNS server to return the best answer it can. If the queried
DNS server does not have a match for the query name, it will return a referral to a DNS server authoritative for a lower
level of the domain namespace. The DNS client will then make a query to the referral address. This process continues
with additional DNS servers down the query chain until either an error or timeout occurs.
3. Non-recursive query - Typically this will occur when a DNS resolver client queries a DNS server for a record that it
has access to either because it's authoritative for the record or the record exists inside of its cache. Typically, a DNS
server will cache DNS records to prevent additional bandwidth consumption and load on upstream servers.
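
As an illustration of an iterative query, the hedged sketch below uses the third-party dnspython package (pip install dnspython) to ask a root server about example.com directly; instead of an answer, the root replies with a referral to the .com TLD servers, exactly as described above:

    # One iterative hop: query a root server and inspect the referral.
    import dns.message
    import dns.query

    query = dns.message.make_query("example.com.", "A")
    response = dns.query.udp(query, "198.41.0.4")   # a.root-servers.net
    for rrset in response.authority:                # referral to the .com servers
        print(rrset)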
What is DNS caching? Where does DNS caching occur?
The purpose of caching is to temporarily store data in a location that results in improvements in performance and reliability for data
requests. DNS caching involves storing data closer to the requesting client so that the DNS query can be resolved earlier and
additional queries further down the DNS lookup chain can be avoided, thereby improving load times and reducing bandwidth/CPU
consumption. DNS data can be cached in a variety of locations, each of which will store DNS records for a set amount of time
determined by a time-to-live (TTL).
Browser DNS caching: Modern web browsers are designed by default to cache DNS records for a set amount of time. The purpose
here is obvious; the closer the DNS caching occurs to the web browser, the fewer processing steps must be taken in order to check
the cache and make the correct requests to an IP address. When a request is made for a DNS record, the browser cache is the first
location checked for the requested record. In Chrome, you can see the status of your DNS cache by going to chrome://net-internals/#dns.
Operating system (OS) level DNS caching: The operating system level DNS resolver is the second and last local stop before a DNS
query leaves your machine. The process inside your operating system that is designed to handle this query is commonly called a
“stub resolver” or DNS client. When a stub resolver gets a request from an application, it first checks its own cache to see if it has
the record. If it does not, it then sends a DNS query (with a recursive flag set), outside the local network to a DNS recursive resolver
inside the Internet service provider (ISP).
When the recursive resolver inside the ISP receives a DNS query, like all previous steps, it will also check to see if the requested
host-to-IP-address translation is already stored inside its local persistence layer.
The recursive resolver also has additional functionality depending on the types of records it has in its cache:
1. If the resolver does not have the A records, but does have the NS records for the authoritative nameservers, it will
query those name servers directly, bypassing several steps in the DNS query. This shortcut prevents lookups from the
root and .com nameservers (in our search for example.com) and helps the resolution of the DNS query occur more
quickly.
2. If the resolver does not have the NS records, it will send a query to the TLD servers (.com in our case), skipping the
root server.
3. In the unlikely event that the resolver does not have records pointing to the TLD servers, it will then query the root
servers. This event typically occurs after a DNS cache has been purged.
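
The caching behavior described above reduces, in essence, to "serve the record until its TTL expires". Here is a toy Python sketch (real resolvers honor the TTL carried in each DNS record rather than a fixed value):

    import socket
    import time

    cache = {}  # hostname -> (ip, expiry timestamp)

    def cached_lookup(host, ttl=300):
        entry = cache.get(host)
        if entry and entry[1] > time.time():
            return entry[0]                      # cache hit: lookup chain skipped
        ip = socket.gethostbyname(host)          # cache miss: full resolution
        cache[host] = (ip, time.time() + ttl)
        return ip

    print(cached_lookup("example.com"))          # miss: resolves via DNS
    print(cached_lookup("example.com"))          # hit: answered from cache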
Dynamic Host Configuration Protocol (DHCP): DHCP stands for Dynamic Host Configuration Protocol, a critical service that the users of an enterprise network rely on to communicate. DHCP helps enterprises smoothly manage the allocation of IP addresses to end-user client devices such as desktops, laptops, cellphones, etc. It is an application-layer protocol used to provide clients with an IP address and related configuration information, such as the subnet mask, default gateway, and DNS servers.

Components of DHCP
The main components of DHCP include:
 DHCP Server: DHCP Server is basically a server that holds IP Addresses and other information related to configuration.
 DHCP Client: It is basically a device that receives configuration information from the server. It can be a mobile, laptop,
computer, or any other electronic device that requires a connection.
 DHCP Relay: DHCP relays basically work as a communication channel between DHCP Client and Server.
 IP Address Pool: It is the pool or container of IP Addresses possessed by the DHCP Server. It has a range of addresses
that can be allocated to devices.
 Subnets: Subnets are smaller portions of the IP network partitioned to keep networks under control.
 Lease: The length of time for which the configuration information received from the server is valid. When the lease expires, the client must renew it.
 DNS Servers: DHCP servers can also provide DNS (Domain Name System) server information to DHCP clients, allowing
them to resolve domain names to IP addresses.
 Default Gateway: DHCP servers can also provide information about the default gateway, which is the device that packets
are sent to when the destination is outside the local network.
 Options: DHCP servers can provide additional configuration options to clients, such as the subnet mask, domain name, and
time server information.
 Renewal: DHCP clients can request to renew their lease before it expires to ensure that they continue to have a valid IP
address and configuration information.
 Failover: DHCP servers can be configured for failover, where two servers work together to provide redundancy and ensure
that clients can always obtain an IP address and configuration information, even if one server goes down.
 Dynamic Updates: DHCP servers can also be configured to dynamically update DNS records with the IP address of DHCP
clients, allowing for easier management of network resources.
 Audit Logging: DHCP servers can keep audit logs of all DHCP transactions, providing administrators with visibility into
which devices are using which IP addresses and when leases are being assigned or renewed.

The DHCP message format contains the following fields:
1. Hardware length: An 8-bit field defining the length of the physical address in bytes; e.g., for Ethernet the value is 6.
2. Hop count: An 8-bit field defining the maximum number of hops the packet can travel.
3. Transaction ID: A 4-byte field carrying an integer. The transaction identification is set by the client and is used to match a reply with the request. The server returns the same value in its reply.
4. Number of seconds: A 16-bit field that indicates the number of seconds elapsed since the time the client started to boot.
5. Flag: A 16-bit field in which only the leftmost bit is used; the rest of the bits should be set to 0s. The leftmost bit, when set, specifies a forced broadcast reply from the server. If the reply were instead unicast to the client, the destination IP address of the IP packet would be the address assigned to the client.
6. Client IP address: A 4-byte field that contains the client IP address. If the client does not have this information, this field has a value of 0.
7. Your IP address: A 4-byte field that contains the client IP address. It is filled by the server at the request of the client.
8. Server IP address: A 4-byte field containing the server IP address. It is filled by the server in a reply message.
9. Gateway IP address: A 4-byte field containing the IP address of a router. It is filled by the server in a reply message.
10. Client hardware address: The physical address of the client. Although the server can retrieve this address from the frame sent by the client, it is more efficient if the address is supplied explicitly by the client in the request message.
11. Server name: A 64-byte field that is optionally filled by the server in a reply packet. It contains a null-terminated string consisting of the domain name of the server. If the server does not want to fill this field with data, it must fill it with all 0s.
12. Boot filename: A 128-byte field that can be optionally filled by the server in a reply packet. It contains a null-terminated string consisting of the full pathname of the boot file. The client can use this path to retrieve other booting information. If the server does not want to fill this field with data, it must fill it with all 0s.
13. Options: A 64-byte field with a dual purpose. It can carry either additional information or some specific vendor information. The field is used only in a reply message. The server uses a number, called a magic cookie, in the format of an IP address with the value 99.130.83.99. When the client finishes reading the message, it looks for this magic cookie. If it is present, the next 60 bytes are options.
Working of DHCP
The working of DHCP is as follows:
DHCP works at the application layer of the TCP/IP protocol suite. Its main task is to dynamically assign IP addresses to clients and to allocate TCP/IP configuration information to them. The DHCP port number for the server is 67 and for the client is 68. It is a client-server protocol that uses UDP services. An IP address is assigned from a pool of addresses. In DHCP, the client and the server exchange mainly 4 DHCP messages in order to make a connection, also called the DORA process, but there are 8 DHCP messages in the process overall.
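
To tie the field list and the port numbers together, here is a hedged Python sketch that hand-builds a minimal DHCPDISCOVER and broadcasts it from client port 68 to server port 67. The transaction ID and MAC address are taken from the example below; binding port 68 usually requires administrator rights, and on a live network a real server may actually answer, so treat this strictly as an illustration:

    import socket
    import struct

    xid = 0x12345678                      # transaction ID chosen by the client
    mac = bytes.fromhex("08002B2EAF2A")   # client hardware address (example)

    # op=1 (request), htype=1 (Ethernet), hlen=6, hops=0, secs=0,
    # flags=0x8000 (leftmost bit set: force a broadcast reply)
    packet  = struct.pack("!BBBBIHH", 1, 1, 6, 0, xid, 0, 0x8000)
    packet += b"\x00" * 16                # ciaddr/yiaddr/siaddr/giaddr = 0.0.0.0
    packet += mac + b"\x00" * 10          # chaddr, padded to 16 bytes
    packet += b"\x00" * 192               # sname (64) + boot filename (128)
    packet += bytes([99, 130, 83, 99])    # magic cookie from the options field
    packet += bytes([53, 1, 1, 255])      # option 53 = message type DISCOVER; end

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.bind(("0.0.0.0", 68))                       # DHCP client port
    sock.sendto(packet, ("255.255.255.255", 67))     # DHCP server port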

The 8 DHCP Messages:


1. DHCP discover message: This is the first message generated in the communication process between the server and the client. It is generated by the client host in order to discover whether any DHCP servers are present in the network. The message is broadcast to all devices in the network to find the DHCP server. It is 342 or 576 bytes long.

In the example, the source MAC address (client PC) is 08002B2EAF2A, the destination MAC address (server) is FFFFFFFFFFFF, the source IP address is 0.0.0.0 (because the PC has no IP address yet), and the destination IP address is 255.255.255.255 (the IP address used for broadcasting). Since the discover message is broadcast to find the DHCP server or servers in the network, the broadcast IP address and MAC address are used.
2. DHCP offer message: The server responds to the host with this message, specifying an unleased IP address and other TCP/IP configuration information. This message is broadcast by the server. The size of the message is 342 bytes. If more than one DHCP server is present in the network, the client host accepts the first DHCP OFFER message it receives. A server ID is also included in the packet to identify the server.

Now, for the offer message, the source IP address is 172.16.32.12 (the server's IP address in the example), the destination IP address is 255.255.255.255 (the broadcast IP address), the source MAC address is 00AA00123456, and the destination MAC address is FFFFFFFFFFFF. The offer message is broadcast by the DHCP server, so the destination IP and MAC addresses are the broadcast addresses, while the source IP and MAC addresses are the server's.
Also, the server has provided the offered IP address 192.16.32.51 and a lease time of 72 hours (after this time the entry of the host will be erased from the server automatically). The client identifier is the PC MAC address (08002B2EAF2A) for all the messages.
3. DHCP request message: When a client receives an offer message, it responds by broadcasting a DHCP request message. The client produces a gratuitous ARP to find out whether any other host in the network is using the same IP address. If there is no reply from another host, then no host with the same TCP/IP configuration exists in the network, and the message is broadcast to the server showing acceptance of the IP address. A client ID is also added to this message.

Now, the request message is broadcast by the client PC, so the source IP address is 0.0.0.0 (as the client has no IP yet), the destination IP address is 255.255.255.255 (the broadcast IP address), the source MAC address is 08002B2EAF2A (the PC MAC address), and the destination MAC address is FFFFFFFFFFFF.
Note - This message is broadcast after the ARP request broadcast by the PC, to find out whether any other host is using the offered IP. If there is no reply, the client host broadcasts the DHCP request message to the server, showing acceptance of the IP address and the other TCP/IP configuration.
4. DHCP acknowledgment message: In response to the request message received, the server will make an entry with a
specified client ID and bind the IP address offered with lease time. Now, the client will have the IP address provided by the
server.

Now the server will make an entry of the client host with the offered IP address and lease time. This IP address will not be
provided by the server to any other host. The destination MAC address is FFFFFFFFFFFF and the destination IP address is
255.255.255.255 and the source IP address is 172.16.32.12 and the source MAC address is 00AA00123456 (server MAC
address).
5. DHCP negative acknowledgment message: Whenever a DHCP server receives a request for an IP address that is invalid according to the scopes that are configured, it sends a DHCP NAK message to the client, e.g., when the server has no unused IP addresses left or the pool is empty.
6. DHCP decline: If the DHCP client determines the offered configuration parameters are different or invalid, it sends a DHCP
decline message to the server. When there is a reply to the gratuitous ARP by any host to the client, the client sends a DHCP
decline message to the server showing the offered IP address is already in use.
7. DHCP release: A DHCP client sends a DHCP release packet to the server to release the IP address and cancel any remaining
lease time.
8. DHCP inform: If a client has obtained an IP address manually, it uses a DHCP inform message to obtain other local configuration parameters, such as the domain name. In reply to the DHCP inform message, the DHCP server generates a DHCP ack message with a local configuration suitable for the client, without allocating a new IP address. This DHCP ack message is unicast to the client.
Note – All the messages can be unicast also by the DHCP relay agent if the server is present in a different network.
Advantages of DHCP
The advantages of using DHCP include:
 Centralized management of IP addresses.
 Centralized and automated TCP/IP configuration.
 Ease of adding new clients to a network.
 Reuse of IP addresses reduces the total number of IP addresses that are required.
 The efficient handling of IP address changes for clients that must be updated frequently, such as those for portable devices
that move to different locations on a wireless network.
 Simple reconfiguration of the IP address space on the DHCP server without needing to reconfigure each client.
 The DHCP protocol gives the network administrator a method to configure the network from a centralized area.
 With the help of DHCP, easy handling of new users and the reuse of IP addresses can be achieved.
Disadvantages of DHCP
The disadvantages of using DHCP are:
 IP conflicts can occur.
 Clients accept any DHCP server. Accordingly, when a rogue server is in the vicinity, the client may connect to it, and that server may send invalid data to the client.
 Clients are not able to access the network in the absence of a DHCP server.
 The machine name does not change when a new IP address is assigned.
Group Policy: Group Policy is used to regulate user and computer configurations within Windows Active Directory (AD) domains. It
is a policy-based approach that can be applied to the whole organization or selectively applied to certain departments or groups in
organizations. Group Policies are enforced by Group Policy Objects (GPOs).
What are GPOs?
GPOs comprise the user and computer configuration settings that will be applied to domains or organizational units (OUs). GPOs
need to be linked to an AD unit (domain or OU) to be applicable. Both the user and computer configuration policies have Software
Settings, Windows Settings, and Administrative Templates. The Windows Settings contain important security policies like password
and account lockout policies, software restriction, and registry settings. Administrative Templates are used to regulate access to the
Control Panel, system settings, and network resources.
What is Group Policy used for in AD?
As mentioned earlier, Group Policies centralize management of organizational resources. Some Group Policy examples include
execution of login scripts upon startup of a computer, user password settings, disabling users from changing the system time, and
many other user and computer configurations. Group Policy benefits include:
Wide scope of application: These policies can be applied based on organizational hierarchy by linking them to AD sites, domains,
and OUs.
Ease of management: Group Policy settings can be easily managed via GPOs. Multiple GPOs can be linked to one domain. A
single GPO can be linked to multiple domains. When linked to parent units, say a domain, the policies are applied to all child units
within the domain.
Priority-based application: GPOs have link order precedence, which helps resolve clashing policy settings. For example, a GPO
with link order "1" will take precedence over another GPO with link order "2." Thus, the GPO with link order "1" will be applied last,
overriding all the other GPOs. The link order can be changed by sysadmins in the Group Policy Management Console (GPMC).
Hierarchical application: Besides link order precedence, Group Policy adheres to a strict hierarchy. Policies are always processed in this order: Local > Site > Domain > OU. Further, computer configuration policies override user configuration policies regardless of link or precedence order. These features ensure that the most relevant settings for the smallest unit (OU) are pushed. A minimal model of this precedence appears below.
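
Here is a toy Python model of that Local > Site > Domain > OU order (names and settings are invented; real Group Policy processing also involves link order, enforced links, and inheritance blocking):

    # LSDOU in miniature: apply policies in order; later (closer to the OU)
    # settings overwrite earlier ones.
    def effective_policy(gpos_in_order):
        settings = {}
        for gpo in gpos_in_order:       # Local -> Site -> Domain -> OU
            settings.update(gpo)        # later GPOs win on conflicts
        return settings

    local  = {"wallpaper": "blue", "usb_storage": "allowed"}
    domain = {"usb_storage": "blocked", "min_password_len": 8}
    ou     = {"min_password_len": 12}

    print(effective_policy([local, domain, ou]))
    # {'wallpaper': 'blue', 'usb_storage': 'blocked', 'min_password_len': 12}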
Types of Group Policy:
Group Policies can be categorized into three segments based on where or how they can be applied. The three types include:

Local Group Policy: Manages policies for individual (non-domain) computers. More than one local GPO can be created for different local users. The policy is stored on the computer on which it is configured, and the settings can be managed using the local Group Policy editor on that computer (run gpedit.msc to open the editor).

Group Policy in AD: An aggregate set of policies that can be applied to all domain-joined computers. Both user and computer configurations for all domain users can be managed centrally. These Group Policies are managed from the GPMC on the domain controller. Note that for domain-joined machines, AD Group Policies override local Group Policy settings.

Starter Group Policy: Starter Group Policies are templates to be used within AD. Sysadmins can create one starter policy and then create multiple similar Group Policies based on it. Starter Group Policies are available within the GPMC in the Server Manager tools.

Active Directory Domain Services: Active Directory Domain Services (AD DS) is a server role in Active Directory that allows admins to manage and store information about resources from a network, as well as application data, in a distributed database. AD DS helps admins manage network elements -- both computing devices and users -- and reorder them into a custom hierarchical structure. AD DS also integrates security by authenticating logons and controlling access to directory resources.
How is Active Directory Domain Services used?
Active Directory is a directory service that runs on Microsoft Windows Server. It is used for identity and access
management. AD DS stores and organizes information about the people, devices and services connected to a network.
AD DS serves as a locator service for those objects and as a way for organizations to have a central point of
administration for all activity on the corporate network.
AD DS is used in on-premises Windows environments, and Microsoft Azure AD DS is used in cloud-based Windows
environments. They can be used together in hybrid cloud environments.
How does AD DS work?

AD DS is the core component of Active Directory that enables users to authenticate and access resources on the
network. Active Directory organizes objects into a hierarchy, which lets various Domain Services connect with them and
users access or manage them. The hierarchical structure includes the following:

 Domains. A group of objects, such as users or groups of devices, that share the same AD database makes up
a domain.

 Organizational units. Organizational units (OUs) are used to organize the objects within a domain.

 Active Directory trees. Multiple domains grouped together in a logical hierarchy make up an AD tree. The bonds between
domains in a tree are known as "trusts."

 Active Directory forests. This AD functional level is made up of multiple trees grouped together. Trees in an AD forest share trusts, just like domains in a tree share trusts. Trusts enable constituent parts of a tree or forest to share things like directory schemas and configuration specifications.

[Figure: Trusts form the relationships between domains in a forest, which is composed of domain trees.]
What services does AD DS provide?
Active Directory covers a range of services. AD Domain Services is the main service among the five services described below.
Domain Services
Domain Services stores centralized directory information and lets users and domains communicate. When a user
attempts to connect to a device or resource on a network, this service provides login authentication, verifying the user's
login credentials and access permissions.
Lightweight Directory Services (LDS)
AD LDS is similar to Domain Services, but it uses the Lightweight Directory Access Protocol (LDAP), which has fewer restrictions. AD LDS enables cross-platform capabilities that, for instance, let Linux-based computers function on the network.
Active Directory Federation Services (AD FS)
AD FS provides single sign-on authentication, enabling users to sign in once to access multiple applications in the same session.
Rights Management
This service controls data access policies and provides access rights management. For example, Rights Management determines which folders users can access.


Certificate Services
Certificate Services allows the domain controller to create and manage digital certificates, signatures and public key cryptography.
What are the benefits of Active Directory Domain Services?

The four key benefits of AD DS include the following:

1. Hierarchical structure. This is the main benefit of AD DS, providing the organizational structure for the information contained
in Active Directory.

2. Flexibility. AD DS gives users flexibility in determining how data is organized on the network. It simplifies administrative tasks
by centralizing services like user and rights management, and provides some security. Users can access Active Directory from any computer on the network.

3. Single point of access. Domain Services creates a single point of access to network resources. This lets IT teams
collaborate more efficiently and limit the access points to sensitive resources.

4. Redundancy. AD DS has built-in replication and redundancy. If one domain controller fails, another automatically takes over its responsibilities.

What are Active Directory Domain Services terms to know?

Some common AD DS related terms and concepts include the following:

 Global catalog. The Global catalog holds all AD DS objects. Administrators can find directory information -- such as a
username -- across any domain.

 LDAP. This protocol provides the language that servers and clients within the directory use to communicate with each other; a query sketch appears after this list.
 Multi-master replication. A function that ensures all domain controllers on a network are updated with any changes made to
Active Directory.

 Objects. These are the pieces of information that Active Directory organizes. There are two types of objects: Container objects
are organizational units, such as forests and trees, that hold other objects inside of them. Leaf objects represent things like users, computers and other devices on the network.

 Query and index mechanism. This mechanism enables users to search the global catalog for directory information.

 Schema. The schema is a set of rules a user establishes to define classes of objects and attributes in the directory. These
rules also dictate the characteristics of object instances and naming formats.

 Sites. The physical groupings of IP subnets. They enable the easy replication of information among the domain controllers and
the deployment of group policies.
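
As promised above, here is a hedged sketch of an LDAP query against AD DS using the third-party ldap3 package (pip install ldap3); the server name, bind account, and base DN are placeholders for your own environment:

    from ldap3 import Server, Connection, SUBTREE

    server = Server("dc01.example.com")                     # a domain controller
    conn = Connection(server, user="EXAMPLE\\svc-ldap",
                      password="...", auto_bind=True)       # placeholder account

    # Find the user 'jdoe' and read a few attributes from the directory.
    conn.search(search_base="dc=example,dc=com",
                search_filter="(&(objectClass=user)(sAMAccountName=jdoe))",
                search_scope=SUBTREE,
                attributes=["cn", "mail", "memberOf"])
    for entry in conn.entries:
        print(entry)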

What role do domain controllers play in AD DS?

Domain controllers are physical servers that host AD DS and other Windows services like Kerberos Key Distribution Center, Netlogon, Intersite Messaging and Windows Time. Active Directory requires at least one domain controller to respond to authentication requests and verify users on the network.

Domain controllers also replicate the AD DS database inside an AD forest. Changes made in a directory on one domain controller -- such as a password change or account deletion -- replicate to other domain controllers on the network.

Groups and user authentication on Windows


Users are defined on Windows by creating user accounts using the Windows administration tool called the “User Manager”. An
account containing other accounts, also called members, is a group.

Groups give Windows administrators the ability to grant rights and permissions to the users within the group at the same time,
without having to maintain each user individually. Groups, like user accounts, are defined and maintained in the Security Account Manager (SAM) database.

There are two types of groups:

Local groups. A local group can include user accounts created in the local accounts database. If the local group is on a machine that
is part of a domain, the local group can also contain domain accounts and groups from the Windows domain. If the local group is
created on a workstation, it is specific to that workstation.
Global groups. A global group exists only on a domain controller and contains user accounts from the domain's SAM database. That
is, a global group can only contain user accounts from the domain on which it is created; it cannot contain any other groups as
members. A global group can be used in servers and workstations of its own domain, and in trusting domains.
Integrated Lights-Out (iLO)
Integrated Lights-Out (iLO) is a proprietary management technology built into HPE products that allows for remote control access to
ProLiant servers, even without being connected to the organization’s main network, the origin of the “Lights Out” designation.

How does out-of-band management work?


Out-of-band management works via a standard network connection, but it is a connection that exists exclusively for the purpose of
remote management. Out-of-band management works much like a side door or secret entrance, wherein access is limited to a finite
number of IT professionals, making the out-of-band network substantially more secure than the standard “in-band” network.
In addition to a much more secure point of access, out-of-band management also functions even if the server or network device is
powered down, in sleep mode, or otherwise offline and unavailable via conventional network access. This can be used to remotely
access devices on weekends, holidays, or outside of work hours, or it could be used to reboot or restore devices that have crashed.
Why would an organization need iLO?
In today’s data-rich and compute-heavy environment, enterprise organizations rely on 24x7 uptime to maintain functionality and
critical digital services. If the network goes down, or if a device is powered off, in sleep mode, hibernating, or otherwise inaccessible,
iLO provides an option to remotely reboot critical IT assets like routers, firewalls, servers, switches, storage, power, and telecom
hardware to bring the network back online.
If an organization’s IT administrator needs to access these devices, they would traditionally use an Ethernet network, but that single
point of access can be insufficient during times of digital crisis. These moments of crisis are further exacerbated if the IT assets are
offsite, locked in a server room, or separated from the IT professionals needed to rectify the situation.
For network administrators, 24x7 uptime connectivity is critical. That connectivity can be interrupted by any number of things, like
configuration errors, human errors, weather events, or a distributed denial-of-service attack. iLO is a necessity in an organization’s
toolkit for avoiding costly downtime that can bring business to a halt.
How is HPE iLO unique?
The key capabilities of HPE Integrated Lights-Out (iLO) are embedded in all ProLiant Gen8 and Gen9 servers, the solution’s
scalable licensing offerings, and mobile-app features that support IT staff—anywhere, anytime. HPE iLO simplifies server setup,
provides access to a wealth of server health information, enables management at scale, and improves server power and thermal
control, as well as providing basic remote administration.
To help you manage your servers more easily, this embedded management process runs on a separate microprocessor chip (which
is why it is called “out-of-band management”). This way, HPE iLO remains available, even when the server suffers a failure. You can
use iLO to determine precisely what went wrong and then fix it quickly and efficiently, even if you are unable to power up your
server. HPE iLO Management provides a rich set of capabilities that automate and simplify system provisioning, troubleshooting,
and firmware updates.
Exceeding its predecessor, HPE SmartStart, HPE Intelligent Provisioning is a server deployment and maintenance capability
embedded across HPE ProLiant Gen8 and Gen9 servers. The key benefit is that you no longer need to insert or deal with any
physical media.
With HPE Intelligent Provisioning, you can deploy servers faster to overcome the complexity of server maintenance and deployment
by:
• Providing step-by-step HPE ProLiant server deployment assistance in a simple and consistent manner.
• Simplifying system configuration with guided, profile-driven, or scripted approaches for seamless integration with standard IT
processes.
• Recognizing automatically if the system software is out-of-date and downloading the latest update.
Integrated Dell Remote Access Controller (iDRAC)
The Integrated Dell Remote Access Controller (iDRAC) is designed for secure local and remote server management and helps IT
administrators deploy, update and monitor PowerEdge servers anywhere, anytime.
iDRAC offers:

 iDRAC9 Telemetry Streaming
 Secured Component Verification
 iDRAC RESTful API with Redfish support
 Agent-free embedded server management

Explore the Key Benefits of iDRAC9

Telemetry Streaming:
Telemetry streaming, which requires the iDRAC9 Datacenter license, allows you to discover trends, fine tune operations, and create
predictive analytics to optimize your infrastructure. Using tools such as Splunk or ELK Stack, you can perform deep analysis of
server telemetry including storage, networking and memory parametric data for proactive decision making and decreased downtime.
Telemetry streaming can be used for system customization, optimization, risk management, and predictive analytics.
Scalable Automation:
By utilizing standards-based APIs such as Redfish and robust scripting tools like RedHat Ansible and Racadm, you can efficiently
manage thousands of servers and increase productivity.
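For illustration, here is a minimal Python sketch of the kind of Redfish call such automation is built on. The iDRAC address and credentials below are placeholders, and the /redfish/v1/Systems collection comes from the standard DMTF Redfish schema rather than anything Dell-specific:

import requests

# Placeholder BMC address and credentials - replace with real values.
IDRAC_URL = "https://192.0.2.10"
AUTH = ("root", "calvin")

# Query the standard Redfish systems collection. Self-signed certificates
# are common on BMCs, hence verify=False in this sketch.
resp = requests.get(f"{IDRAC_URL}/redfish/v1/Systems", auth=AUTH, verify=False)
resp.raise_for_status()

# Print the resource path of each managed system.
for member in resp.json()["Members"]:
    print(member["@odata.id"])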
Secure Management:
SELinux and configurable options like HTTPS, TLS 1.2, smart card authentication, LDAP, and Active Directory integration provide
security in your working environment. You can enhance security incident prevention with features like RSA SecurID 2-Factor
Authentication, Automatic Certificate Enrollment, and advanced password security. You can also protect your system from
unwanted configuration changes via System Lockdown Mode.
Streamlined Support:
With embedded SupportAssist tools, you can view a continuously updated health and status report that monitors 5,000+ system
parameters. You can easily view the SupportAssist Collection details of your system without uploading to the cloud.
Hyper-V Technology
Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a software version of a computer, called a virtual
machine. Each virtual machine acts like a complete computer, running an operating system and programs. When you need
computing resources, virtual machines give you more flexibility, help save time and money, and are a more efficient way to use
hardware than just running one operating system on physical hardware.
Hyper-V runs each virtual machine in its own isolated space, which means you can run more than one virtual machine on the same
hardware at the same time. You might want to do this to avoid problems such as a crash affecting the other workloads, or to give
different people, groups or services access to different systems.
Some ways Hyper-V can help you
Hyper-V can help you:
 Establish or expand a private cloud environment. Provide more flexible, on-demand IT services by moving to or
expanding your use of shared resources and adjust utilization as demand changes.
 Use your hardware more effectively. Consolidate servers and workloads onto fewer, more powerful physical computers
to use less power and physical space.
 Improve business continuity. Minimize the impact of both scheduled and unscheduled downtime of your workloads.
 Establish or expand a virtual desktop infrastructure (VDI). Using a centralized desktop strategy with VDI can help you increase business agility and data security, as well as simplify regulatory compliance and the management of desktop operating systems and applications. Deploy Hyper-V and Remote Desktop Virtualization Host (RD Virtualization Host) on the same server to make personal virtual desktops or virtual desktop pools available to your users.
 Make development and test more efficient. Reproduce different computing environments without having to buy or
maintain all the hardware you'd need if you only used physical systems.
Hyper-V and other virtualization products
Hyper-V in Windows and Windows Server replaces older hardware virtualization products, such as Microsoft Virtual PC, Microsoft
Virtual Server, and Windows Virtual PC. Hyper-V offers networking, performance, storage and security features not available in
these older products.
Hyper-V and most third-party virtualization applications that require the same processor features aren't compatible. That's because
the processor features, known as hardware virtualization extensions, are designed to not be shared. For details, see Virtualization
applications do not work together with Hyper-V, Device Guard, and Credential Guard.
What features does Hyper-V have?
Hyper-V offers many features. This is an overview, grouped by what the features provide or help you do.
Computing environment - A Hyper-V virtual machine includes the same basic parts as a physical computer, such as memory,
processor, storage, and networking. All these parts have features and options that you can configure in different ways to meet different
needs. Storage and networking can each be considered categories of their own, because of the many ways you can configure them.
Disaster recovery and backup - For disaster recovery, Hyper-V Replica creates copies of virtual machines, intended to be
stored in another physical location, so you can restore the virtual machine from the copy. For backup, Hyper-V offers two types. One
uses saved states and the other uses Volume Shadow Copy Service (VSS) so you can make application-consistent backups for
programs that support VSS.
Optimization - Each supported guest operating system has a customized set of services and drivers, called integration services,
that make it easier to use the operating system in a Hyper-V virtual machine.
Portability - Features such as live migration, storage migration, and import/export make it easier to move or distribute a virtual
machine.
Remote connectivity - Hyper-V includes Virtual Machine Connection, a remote connection tool for use with both Windows and
Linux. Unlike Remote Desktop, this tool gives you console access, so you can see what's happening in the guest even when the
operating system isn't booted yet.
Security - Secure boot and shielded virtual machines help protect against malware and other unauthorized access to a virtual
machine and its data.
For a summary of the features introduced in this version, see What's new in Hyper-V on Windows Server. Some features or parts
have a limit to how many can be configured. For details, see Plan for Hyper-V scalability in Windows Server 2016.
How to get Hyper-V
Hyper-V is available in Windows Server and Windows. On Windows Server, it is a server role available for x64 versions; for server instructions, see Install the Hyper-V role on Windows Server. On Windows, it's available as a feature in some 64-bit versions. It's also available as a downloadable, standalone server product, Microsoft Hyper-V Server.
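For reference, enabling Hyper-V is typically a one-line operation from an elevated PowerShell prompt. The commands below match Microsoft's documented feature names, but verify them against your edition before running:

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart   (on Windows Server)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All   (on Windows 10/11)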
Supported operating systems
Many operating systems will run on virtual machines. In general, an operating system that uses an x86 architecture will run on a
Hyper-V virtual machine. Not all operating systems that can be run are tested and supported by Microsoft, however. For lists of
what's supported, see:
 Supported Linux and FreeBSD virtual machines for Hyper-V on Windows
 Supported Windows guest operating systems for Hyper-V on Windows Server
How Hyper-V works
Hyper-V is a hypervisor-based virtualization technology. Hyper-V uses the Windows hypervisor, which requires a physical processor
with specific features. For hardware details, see System requirements for Hyper-V on Windows Server.
In most cases, the hypervisor manages the interactions between the hardware and the virtual machines. This hypervisor-controlled
access to the hardware gives virtual machines the isolated environment in which they run. In some configurations, a virtual machine
or the operating system running in the virtual machine has direct access to graphics, networking, or storage hardware.
What does Hyper-V consist of?
Hyper-V has required parts that work together so you can create and run virtual machines. Together, these parts are called the
virtualization platform. They're installed as a set when you install the Hyper-V role. The required parts include the Windows hypervisor, the Hyper-V Virtual Machine Management Service, the virtualization WMI provider, the virtual machine bus (VMbus), the virtualization service provider (VSP), and the virtual infrastructure driver (VID).
Hyper-V also has tools for management and connectivity. You can install these on the same computer that Hyper-V role
is installed on, and on computers without the Hyper-V role installed. These tools are:
 Hyper-V Manager
 Hyper-V module for Windows PowerShell
 Virtual Machine Connection (sometimes called VMConnect)
 Windows PowerShell Direct
Related technologies
These are some technologies from Microsoft that are often used with Hyper-V:
 Failover Clustering
 Remote Desktop Services
 System Center Virtual Machine Manager
 Various storage technologies: Cluster Shared Volumes, SMB 3.0, Storage Spaces Direct
Windows containers offer another approach to virtualization. See the Windows Containers library on MSDN.
Types of Backups

There are three main backup types used to back up all digital assets:
 Full backup: The most basic and comprehensive backup method, where all data is sent to another location.
 Incremental backup: Backs up all files that have changed since the last backup occurred.
 Differential backup: Backs up copies of all files that have changed since the last full backup.
Not all IT organizations can support all backup types since network capability may vary from organization to organization.
Choosing the right backup method requires a tactical approach — one that can help organizations get the best level of data
protection without demanding too much from the network. However, before determining which backup method best suits the
needs of your business, you need to understand the ins and outs of the three main backup types mentioned above.
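To make the distinctions concrete, here is a minimal Python sketch, with hypothetical file names and modification times, of which files each backup type would select:

from datetime import datetime

# Hypothetical files mapped to their last-modified times.
files = {
    "a.txt": datetime(2024, 5, 6),
    "b.txt": datetime(2024, 5, 8),
    "c.txt": datetime(2024, 5, 9),
}

last_full = datetime(2024, 5, 7)        # when the last full backup ran
last_any = datetime(2024, 5, 8, 12)     # when the last backup of any kind ran

full = list(files)                                              # everything
incremental = [f for f, t in files.items() if t > last_any]     # changed since last backup
differential = [f for f, t in files.items() if t > last_full]   # changed since last full

print(full)          # ['a.txt', 'b.txt', 'c.txt']
print(incremental)   # ['c.txt']
print(differential)  # ['b.txt', 'c.txt']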
Full Backup
A full backup involves the creation of a complete copy of an organization’s files, folders, SaaS data and hard drives. Essentially,
all the data is backed up into a single version and moved to a storage device. It’s the perfect protection against data loss when
you factor in recovery speed and simplicity. However, the time and expense required to copy all the data (all the time) may make
it an undesirable option for many IT professionals.
How does full backup work?
Let’s say you have to back up photos from Monday to Friday.
 Monday: You perform a full backup for 100 photos. You get an image file of 100 photos.
 Tuesday: You add another 100 photos and perform a full backup. You get an image file of 200 photos.
 Wednesday: You delete 100 photos and then perform a full backup. You get an image file of 100 photos.
 Thursday: You make no changes to your photos and perform a full backup. You get an image file of 100 photos.
 Friday: You add 200 photos and perform a full backup. You get an image file of 300 photos.
You get five backup files containing 800 photos in total. Should a data loss incident occur and you need to recover your photos, simply restore the most recent backup file, which contains the latest state: all 300 photos.
Full Backup: Pros and Cons
Here are the advantages and disadvantages of running a full backup method:
Pros
 Quick restore time
 Storage management is easy since all the data is stored on a single version
 Easy version control allows you to maintain and restore different versions without breaking a sweat
 File search is as easy as it gets
Cons
 Demands the most storage space comparatively
 Depending on their size, it takes a long time to back up files
 The need for additional storage space makes it the most expensive backup method
 The risk of data loss is high since all the data is stored in one place
When should you use full backup?
Small businesses that deal consistently with a small amount of data may find full backup a good fit since it won’t eat up their storage space or take too much time to back up.
Incremental Backup
Incremental backup involves backing up all the files, folders, SaaS data and hard drives that
have changed since the last backup activity. This could be the most recent full backup in the
chain or the last incremental backup. Only the recent changes (increments) are backed up,
consuming less storage space and resulting in a speedy backup. However, the recovery time is
longer since more backup files will need to be accessed.
How does incremental backup work?
Let’s say you have to back up photos from Monday to Thursday.
 Monday: You add 100 photos and perform a full backup. You get an image file of 100
photos.
 Tuesday: You add another 100 photos (now you have 200 photos) and perform an
incremental backup. You get an image file of 100 photos.
 Wednesday: You make no changes and perform an incremental backup. You get an empty
image file.
 Thursday: You delete 100 photos and edit the other 100 photos, then perform an incremental backup. You get an image file of only the edited 100 photos.
You get three image files containing 300 photos in total. To recover, restore the last full backup and then apply each subsequent incremental backup in order. Because simple file-level backups record additions and edits but not deletions, this leaves you with 200 photos: the 100 edited photos plus the 100 photos that were deleted on Thursday.
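A toy Python sketch of that restore chain, with dictionaries standing in for backup image files (all names hypothetical), shows why the deleted photos come back:

# Each backup is a dict of {photo_name: version}; replaying the chain
# applies later changes over earlier ones. Plain file copies record
# additions and edits, but not deletions.
full_backup = {f"mon_{i}": "v1" for i in range(100)}   # Monday's full backup
tue_incr = {f"tue_{i}": "v1" for i in range(100)}      # Tuesday's new photos
thu_incr = {f"mon_{i}": "v2" for i in range(100)}      # Monday photos, edited Thursday

restored = {}
for backup in (full_backup, tue_incr, thu_incr):
    restored.update(backup)

print(len(restored))  # 200 - edits are applied, and deleted photos reappear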
Incremental Backup: Pros and Cons
Here are the advantages and disadvantages of running an incremental backup method:
Pros
 Efficient use of storage space since files are not duplicated in their entirety
 Lightning-fast backups
 Can be run as often as desired, with each increment being an individual recovery point
Cons
 Time-consuming restoration since data must be pieced together from multiple backups
 Successful recovery is only possible if every backup file in the chain is intact
 File search is cumbersome – you need to search more than one backup set to restore a specific file
When should you use incremental backup?
Businesses that deal with large volumes of data and cannot dedicate time to the backup process
will find incremental backup methods effective since they take up less storage space and
encourage fast backups.
Differential Backup
Differential backup falls between full backup and incremental backup. It involves backing up files, folders and hard drives that were created or changed since the last full backup (whereas an incremental backup copies only the changes since the most recent backup of any kind). Because only the data changed since the last full backup is copied, it consumes less storage space and requires less time than a full backup.
How does differential backup work?
Let’s say you have to back up photos from Monday to Thursday.
 Monday: You have 200 photos and perform a full backup. You get an image file of 200
photos.
 Tuesday: You add another 200 photos (a total of 400 photos) and perform a differential
backup. You get an image file of the newly added 200 photos.
 Wednesday: You make no changes and perform a differential backup on the existing 400
photos. You get an image file of the newly added 200 photos on Tuesday.
 Thursday: You delete 100 photos and edit another 100 photos (300 photos remain), then perform a differential backup. Depending on which photos were deleted and edited, the resulting image file contains 100, 200 or 300 photos, as the scenarios below show.
A differential of 100 photos: Both the deletion and the editing happen to the 200 photos added on Tuesday. Only the 100 edited photos remain changed since the full backup, so the differential backs up those 100 photos.

A differential of 200 photos: You delete 100 of the added photos and edit 100 of the original photos. The differential backs up the 100 edited originals plus the 100 added photos left after deletion.

A differential of 300 photos: Both the deletion and the editing happen to the original photos. The differential backs up the 100 edited originals plus all 200 added photos.
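Differential restore logic is simpler than the incremental chain: you only need the last full backup plus the most recent differential. A toy Python sketch (names hypothetical, matching the 200-photo scenario above):

# Restore = last full backup overlaid with the most recent differential.
full_backup = {f"orig_{i}": "v1" for i in range(200)}        # Monday's 200 originals
last_diff = {f"added_{i}": "v1" for i in range(100)}         # added photos left after deletion
last_diff.update({f"orig_{i}": "v2" for i in range(100)})    # plus 100 edited originals

restored = {**full_backup, **last_diff}   # later entries win
print(len(restored))                      # 300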
Differential Backup: Pros and Cons
Here are the advantages and disadvantages of running a differential backup method:
Pros
 Takes less space than full backups
 Faster restoration than incremental backups
 Much faster backups than full backups
Cons

 Potential for failed recovery if any of the backup sets are incomplete
 Compared to incremental backups, the backup takes longer and requires more storage space
 Compared to full backups, restoration is slow and complex
When should you use differential backup?
Small and medium-sized organizations that want to process large volumes of valuable data but
cannot perform constant backups will find the differential backup method useful.
Windows Operating System
The Windows Operating System (OS) is one of the most popular and widely used operating
systems in the world. Developed by Microsoft Corporation, Windows Operating System
has become the go-to choice for both personal and business computing. In this article, we
will explore the various features of the Windows Operating System, different versions,
important commands in the Windows Operating System, and the key differences between
Linux and Windows Operating Systems.
What is Windows Operating System?
Windows Operating System (OS) is a graphical user interface (GUI) based operating system
developed by Microsoft Corporation. It is designed to provide users with a user-friendly interface
to interact with their computers. The first version of the Windows Operating System was
introduced in 1985, and since then, it has undergone many updates and upgrades. Windows
Operating System is compatible with a wide range of hardware and software applications, making
it a popular choice for both personal and business computing. It has a built-in security system to
protect the computer from malware and viruses and provides a comprehensive file management
system that makes it easy for users to organize and access their files. Windows Operating
System also allows users to run multiple applications simultaneously, making it easy to work on
multiple tasks at the same time.
Features of Windows Operating System
Here are some features of the Windows Operating System:
1. Control Panel: The Control Panel is a centralized location within Windows where users can
manage various system settings, including security and privacy, display, hardware and sound,
and programs. It provides users with access to a range of tools and settings, making it easy to
customize the Windows experience.
2. Internet Browser: An Internet Browser is a software application that allows users to access and
browse the Internet. Windows provides a built-in internet browser called Microsoft Edge, which
includes features such as tabbed browsing, search suggestions, and web notes.
3. File Explorer: File Explorer is a file management tool that allows users to browse, open, and
manage files and folders on their computers. It provides a user-friendly interface for users to view
and manage files and includes features such as search, copy, move, and delete.
4. Taskbar: The taskbar is a horizontal bar that appears at the bottom of the Windows desktop. It
provides quick access to frequently used applications and displays open windows and programs.
The taskbar also includes system icons such as volume, network, and battery status.
5. Microsoft Paint: Microsoft Paint is a graphics editing software that allows users to create and
edit images. It provides users with basic drawing tools such as a pencil, brush, and eraser, and
allows users to add shapes, text, and images to their designs.
6. Start Menu: The Start Menu appears when users click the Start button on the Windows
taskbar. It provides access to frequently used applications, settings, and files, and includes a
search bar that allows users to quickly find files and applications.
7. Task Manager: Task Manager is a system tool that allows users to view and manage running
applications and processes. It provides users with information about CPU and memory usage and
allows users to end unresponsive programs and processes.
8. Disk Cleanup: Disk Cleanup is a system tool that allows users to free up space on their hard
drives by removing unnecessary files and data. It scans the system for temporary files, cache,
and other unnecessary data, and provides users with the option to remove them.
9. Cortana: Cortana is virtual assistant software that allows users to interact with their computers using voice commands. It provides users with access to information and can perform tasks such as sending emails and setting reminders.
Various Versions of Windows Operating System
Here are some of the major versions of the Windows Operating System:
1. Windows 1.0: This was the first version of the Windows Operating System, released in 1985. It
was a graphical user interface (GUI) for MS-DOS and included features such as a calculator,
calendar, and notepad.
2. Windows 2.0: This version was released in 1987, and introduced features such as support for
VGA graphics, keyboard shortcuts, and improved memory management.
3. Windows 3.0: This version was released in 1990, and was the first widely successful version of
the Windows Operating System. It introduced features such as Program Manager, and File
Manager, and improved support for graphics and multimedia.
4. Windows 95: This version was released in 1995, and was a major milestone for Windows. It
introduced the Start menu, taskbar, and support for plug-and-play devices. It also included the
Internet Explorer web browser.
5. Windows 98: This version was released in 1998, and included improvements to the Start menu
and taskbar, as well as support for USB devices.
6. Windows 2000: This version was released in 2000, and was designed for business use. It
included features such as Active Directory, improved network support, and support for the NTFS
file system.
7. Windows XP: This version was released in 2001, and was a major overhaul of the Windows
interface. It introduced a new visual style, improved performance, and support for wireless
networks.
8. Windows Vista: This version was released in 2006, and included a new interface called Aero, as
well as improved security features.
9. Windows 7: This version was released in 2009, and included improvements to the Start menu,
taskbar, and Aero interface. It also introduced new features such as Jump Lists and Libraries.
10. Windows 8: This version was released in 2012, and was designed for touchscreens and
tablets. It introduced the Start screen, as well as new apps and features such as Charms and
Snap.
11. Windows 10: This version was released in 2015. It includes a redesigned Start menu,
support for virtual desktops, and new apps and features such as Cortana and the Edge browser.
12. Windows 11: It is the latest version of the Windows operating system, released by
Microsoft in October 2021. It builds upon the foundation of Windows 10, with a focus on
enhancing the user experience and improving performance and security.
Each version of the Windows Operating System has brought new features, improvements, and
changes.
List of Commands for Windows Operating System
Below is the list of some important commands for the Windows Operating System:
1. cd: This command is used to change the current directory. For example, you can use "cd
Documents" to change to the Documents directory.
2. cls: This command is used to clear the screen of any text or commands that were previously
entered.
3. dir: This command is used to display a list of files and directories in the current directory.
4. move: This command is used to move a file from one location to another.
5. ipconfig: This command displays the current network configuration of your computer, including
the IP address, subnet mask, and default gateway.
6. ping: This command is used to test the connection between your computer and another device
on the network. It sends packets of data to the device and measures the response time.
7. nslookup: This command is used to query the Domain Name System (DNS) to retrieve
information about a specific domain or hostname.
8. tracert: This command is used to trace the path that data takes from your computer to another
device on the network. It shows the routers and other devices that the data passes through.
9. sfc: This command scans and repairs system files that have been corrupted or modified.
10. attrib: This command is used to change the attributes of a file or directory, such as read-
only or hidden.
11. copy: This command is used to copy files and directories from one location to another.
12. find: This command is used to search for a specific string of text within a file.
13. del: This command is used to delete one or more files (directories are removed with rmdir).
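A short hypothetical session tying a few of these together (all paths and file names are illustrative):

C:\Users\demo> cd Documents
C:\Users\demo\Documents> dir
C:\Users\demo\Documents> copy report.txt D:\backup\
C:\Users\demo\Documents> ping 8.8.8.8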
Difference between Linux and Windows Operating System
Here is a table comparing some of the key differences between Linux and Windows operating systems:
Feature | Windows Operating System | Linux Operating System
Command Line | Allows use of the command line, but it is not as powerful as Linux's. | Offers more features for administration and daily tasks.
Reliability | Less reliable than Linux. | More reliable and secure than Windows.
Usability | Easier to use, but the installation process can take more time. | The installation process is complicated, but once installed it can perform complex tasks easily.
Security | Vulnerable to malware and viruses, but security has improved in recent years. | More secure than Windows, with less vulnerability to malware and viruses.
Updates | Regular updates can be frustrating for users and take longer to install. | Gives users full control over updates, which are quicker to install and do not require a reboot.
Licensing | Does not allow modification of software; only available on systems with a Windows license key. | Allows modification and reuse of source code on any number of systems.
Conclusion
In conclusion, the Windows operating system has evolved over the years to become one of the most widely used operating systems
in the world, with a range of features and functionalities that cater to the needs of different users. From its intuitive graphical user
interface to its command-line interface, Windows offers a variety of options for users to interact with the system.
FAQs
Here are some frequently asked questions about the Windows Operating System:
Q1: What is the purpose of Windows Task Manager in the Windows Operating System?
Answer: The purpose of Windows Task Manager in the Windows Operating System is to provide users with information about
running processes and applications on their system. It allows users to monitor system performance and can be used to end
unresponsive programs and processes.
Q2: What is the Windows Registry in Windows Operating System?
Answer: The Windows Registry is a database that stores settings and configurations for the Windows Operating System. It
includes information about user accounts, software applications, system settings, and hardware configurations.
Q3: What is Windows Defender in Windows Operating System?
Answer: Windows Defender is a built-in antivirus software that provides protection against malware and viruses. It is included
with Windows 10 and is available for download on other versions of the Windows Operating System.
Q4: What is the purpose of Disk Cleanup in the Windows Operating System?
Answer: The purpose of Disk Cleanup in the Windows Operating System is to free up space on your hard drive by removing
unnecessary files and data. It scans the system for temporary files, cache, and other unnecessary data, and provides users with the
option to remove them.
Q5: Can I use multiple user accounts on Windows Operating System?
Answer: Yes, you can use multiple user accounts on Windows Operating System. You can create separate user accounts for
each user, and each user can have their own settings and preferences.
Q6: What is the difference between Windows 10 Home and Windows 10 Pro?
Answer: Windows 10 Home is designed for home users and includes basic features such as Windows Defender, Cortana, and
the Start menu. Windows 10 Pro is designed for business users and includes additional features such as Remote Desktop,
BitLocker, and Hyper-V.
Linux Operating System
The Linux Operating System is a type of operating system that is similar to Unix, and it is built upon the Linux Kernel. The Linux
Kernel is like the brain of the operating system because it manages how the computer interacts with its hardware and resources.
It makes sure everything works smoothly and efficiently. But the Linux Kernel alone is not enough to make a complete operating
system. To create a full and functional system, the Linux Kernel is combined with a collection of software packages and utilities,
which are together called Linux distributions. These distributions make the Linux Operating System ready for users to run their
applications and perform tasks on their computers securely and effectively. Linux distributions come in different flavors, each
tailored to suit the specific needs and preferences of users.
What is Linux?
Linux is a powerful and flexible family of operating systems that are free to use and share. It was created by a person named
Linus Torvalds in 1991. What’s cool is that anyone can see how the system works because its source code is open for everyone
to explore and modify. This openness encourages people from all over the world to work together and make Linux better and
better. Since its beginning, Linux has grown into a stable and safe system used in many different things, like computers,
smartphones, and big supercomputers. It’s known for being efficient, meaning it can do a lot of tasks quickly, and it’s also cost-
effective, which means it doesn’t cost a lot to use. Lots of people love Linux, and they’re part of a big community where they
share ideas and help each other out. As technology keeps moving forward, Linux will keep evolving and staying important in the
world of computers.
Linux Distribution
A Linux distribution is an operating system made up of a collection of software based on the Linux kernel; in other words, a distribution contains the Linux kernel plus supporting libraries and software. You can get a Linux-based operating system by downloading one of the Linux distributions, which are available for different types of devices, such as embedded devices and personal computers. Around 600+ Linux distributions are available, and some of the popular ones are:
 MX Linux
 Manjaro
 Linux Mint
 elementary
 Ubuntu
 Debian
 Solus
 Fedora
 openSUSE
 Deepin
Architecture of Linux
Linux architecture has the following components:
(Figure: Linux Architecture)
1. Kernel: Kernel is the core of the Linux based operating system. It virtualizes the common
hardware resources of the computer to provide each process with its virtual resources. This
makes the process seem as if it is the sole process running on the machine. The kernel is
also responsible for preventing and mitigating conflicts between different processes. Different
types of kernels are:
 Monolithic kernels
 Hybrid kernels
 Exokernels
 Microkernels
2. System Library: Linux uses system libraries, also known as shared libraries, to implement
various functionalities of the operating system. These libraries contain pre-written code that
applications can use to perform specific tasks. By using these libraries, developers can save
time and effort, as they don’t need to write the same code repeatedly. System libraries act as
an interface between applications and the kernel, providing a standardized and efficient way
for applications to interact with the underlying system.
3. Shell: The shell is the user interface of the Linux Operating System. It allows users to interact
with the system by entering commands, which the shell interprets and executes. The shell
serves as a bridge between the user and the kernel, forwarding the user’s requests to the
kernel for processing. It provides a convenient way for users to perform various tasks, such
as running programs, managing files, and configuring the system.
4. Hardware Layer: The hardware layer encompasses all the physical components of the
computer, such as RAM (Random Access Memory), HDD (Hard Disk Drive), CPU (Central
Processing Unit), and input/output devices. This layer is responsible for interacting with the
Linux Operating System and providing the necessary resources for the system and
applications to function properly. The Linux kernel and system libraries enable
communication and control over these hardware components, ensuring that they work
harmoniously together.
5. System Utility: System utilities are essential tools and programs provided by the Linux
Operating System to manage and configure various aspects of the system. These utilities
perform tasks such as installing software, configuring network settings, monitoring system
performance, managing users and permissions, and much more. System utilities simplify
system administration tasks, making it easier for users to maintain their Linux systems
efficiently.
Advantages of Linux
 The main advantage of Linux is that it is an open-source operating system. The source code is easily available to everyone, and you are allowed to contribute to, modify and distribute the code without any permissions.
 In terms of security, Linux is considered more secure than most other operating systems. That does not mean Linux is 100 percent secure (malware for Linux does exist), but it is less vulnerable than most alternatives, so antivirus software is rarely required.
 The software updates in Linux are easy and frequent.
 Various Linux distributions are available so that you can use them according to your
requirements or according to your taste.
 Linux is freely available to use on the internet.
 It has large community support.
 It provides high stability. It rarely slows down or freezes, and there is no need to reboot it frequently.
 It maintains the privacy of the user.
 The performance of the Linux system is generally higher than that of other operating systems. It allows a large number of users to work at the same time and handles them efficiently.
 It is network friendly.
 The flexibility of Linux is high. There is no need to install a complete Linux suite; you are
allowed to install only the required components.
 Linux is compatible with a large number of file formats.
 It is fast and easy to install from the web. You can also install it on almost any hardware, even an old computer system.
 It performs all tasks properly even if it has limited space on the hard disk.
Disadvantages of Linux
 It is not very user-friendly, so it may be confusing for beginners.
 It has fewer peripheral hardware drivers compared to Windows.
Frequently Asked Questions in Linux Operating System
What is Linux Operating System?
Linux is an open-source operating system developed by Linus Torvalds in 1991. It provides a
customizable and secure alternative to proprietary systems. With its stable performance, Linux
is widely used across devices, from personal computers to servers and smartphones. The
collaborative efforts of its developer community continue to drive innovation, making Linux a
dominant force in the world of computing.
Is There Any Difference between Linux and Ubuntu?
The answer is YES. Linux is the family of open-source operating systems based on the Linux kernel, whereas Ubuntu is a free, open-source Linux distribution based on Debian. In other words, Linux is the core system and Ubuntu is a distribution of Linux. Linux was created by Linus Torvalds and released in 1991, while Ubuntu is developed by Canonical Ltd. and was first released in 2004.
How do I install software on Linux Operating System?
To install software on Linux, we can use the package manager specific to our Linux distribution. For example, on Ubuntu we can use the “apt” package manager, while on Fedora we can use “dnf”. We can simply open a terminal and use the package manager to search for and install software. For example, to install the text editor “nano” on Ubuntu, we can use the command
sudo apt install nano
Can we dual-boot Linux with another operating system?
Yes, we can dual-boot Linux with another operating system, such as Windows. During the
installation of Linux, we can allocate a separate partition for Linux, and a boot manager (like
GRUB) allows us to choose which operating system to boot when starting our computer.
How can I update my Linux distribution?
We can update our Linux distribution using the package manager of our specific distribution. For
instance, on Ubuntu, we can run the following commands to update the package list and
upgrade the installed packages:
sudo apt update
sudo apt upgrade
What are the essential Linux commands for beginners?
Some essential Linux commands for beginners include:
 ls: List files and directories
 cd: Change directory
 mkdir: Create a new directory
 rm: Remove files or directories
 cp: Copy files and directories
 mv: Move or rename files and directories
 cat: Display file content
 grep: Search for text in files
 sudo: Execute commands with administrative privileges
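A short hypothetical session combining a few of these commands (file and directory names are illustrative):

$ mkdir notes
$ cd notes
$ cp ../todo.txt .
$ grep "urgent" todo.txt
$ rm todo.txt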
How do I access the command-line interface in Linux Operating System?
To access the command-line interface in Linux, we can open a terminal window. In most Linux
distributions, we can press Ctrl + Alt + T to open the terminal. The terminal allows us to execute
commands directly, providing more advanced control over our system.
Windows Server

What Is Windows Server?

Designed by Microsoft, Windows Server is a group of operating systems to support enterprises and small and medium-sized businesses with data storage, communications, and applications.
Windows Server Definition
Windows Server is a line of Microsoft operating systems (OSes) designed for powerful server-class machines.
Windows Server was first launched in April 2003. It’s typically installed on heavy-use servers serving as a
backbone for most IT companies, applications, and services. The server handles the administrative group-
related activities on a network. It organizes, stores, sends, and receives files from devices connected to a
network.
Windows Server versions
When it comes to networking, Windows Server has become the standard. For the last 16 years, Microsoft has
released a major version of Windows Server every four years and a minor version every two years. The minor
versions can be recognized by the suffix R2. The Windows Server operating system is continually updated to add
new functionality to match the needs of today's users. Administrators need to understand how their server has
evolved and been upgraded. The list of all major and minor Microsoft Windows Server versions is as follows:
o Windows Server 2000: Microsoft dropped the NT branding to emphasize new Windows capabilities. This edition included networking features, such as XML support and the ability to create Active Server Pages. It also offered specialized versions for server environments through its Advanced Server and Datacenter editions.
o Windows Server 2003: This was the first version released under the Windows Server brand. Its objective was to reduce the need to reboot the system; it provided the ability to install updates without restarting. Another feature of Windows Server 2003 was its ability to define server roles, which enabled IT teams to customize the operating system for specific tasks, such as running a DNS server. Windows Server 2003 came in multiple editions, including Standard, Enterprise, and Datacenter.
o Windows Server 2008: This edition was the third release of the Windows Server operating system. It included improvements to Active Directory (AD) and changes to OS software support features and network services. One of the most significant enhancements was the Microsoft Hyper-V system, which enabled users to create virtual machines (VMs) and gave Windows users an advantage in the competitive market. This version also included new administration tools, known as Event Viewer and Server Manager, to give administrators more control over important server activities.
o Windows Server 2008 R2: Windows Server 2008 R2 was an updated 2008 edition released in 2009. The significant changes in this version came from the transition from a Windows Vista code base to one based on Windows 7. This change not only moved the system to a 64-bit environment but also included other technical updates to supporting services. The version brought additional updates to AD, better group policy implementation, and new services. It also provided better server access to users in remote locations with DirectAccess and BranchCache.
o Windows Server 2012: This version is the fifth edition of the Windows Server operating system. Unlike its predecessor,
this version has four editions (Foundation, Essentials, Standard, and Datacenter) with various improved features, such
as an IP address management role, an updated version of Hyper-V, an all-new Windows Task Manager, updated
versions of PowerShell and Server Core, and a new file system known as ReFS. Microsoft added new functionalities to
Windows Server 2012 and marketed the new version as Cloud OS to become more competitive in the cloud. The
improved functionality enabled users to employ the Hyper-V architecture easily with other new cloud technologies. The
changes made to support this included updates to the storage system, the addition of the Hyper-V Virtual Switch, and
the inclusion of Hyper-V Replica.
o Windows Server 2012 R2: Windows Server 2012 R2 was an updated version of Windows Server 2012. It was released in 2013 with many changes and improvements to Windows Server 2012 functionality so it could integrate with cloud services. These changes included rewrites to both network services and security protocols. Updates also included the introduction of PowerShell Desired State Configuration (DSC), designed to enhance network configuration management. Another update improved the functionality of storage systems, provided better and easier access for file sharing, and enhanced distributed file replication.
o Windows Server 2016: Windows Server 2016 is the seventh edition of the Windows Server operating system. It was
the successor to the Windows 8-based Windows Server 2012 and was developed concurrently with Windows 10. This
version introduced Nano Server, a scaled-down installation option with a limited interface designed to make it more secure. This release also introduced Network Controller, which administrators could use to help them manage
physical and virtual network devices from a single location. This release also enhanced the VM system to support the
use of containers, make their interaction with Docker easier, and support encryption for Hyper-V. Windows Server 2016
came with two editions: Standard and Datacenter.
o Windows Server 2019: Windows Server 2019 is the most used Windows Server version. It was released in October
2018 and included comprehensive features to meet emerging networking requirements, including the following:
1. Windows Admin Center: The Windows Admin Center was designed to centralize server management. It also includes
several tools IT teams can use daily for things such as configuration management, performance monitoring, and
managing services running on different servers.
2. Hyper-Converged Infrastructure (HCI): Microsoft moved toward virtualization after adding Hyper-V in Windows Server 2008. Windows Server 2019 includes enhanced HCI features built to give network administrators the ability to manage virtualized services.
3. Microsoft Defender Advanced Threat Protection: One of the major concerns for businesses today is cybersecurity, particularly advanced persistent threats. Attackers use whaling, spear phishing, and social media profiling to gain entry to the network. Microsoft released Microsoft Defender ATP as part of Windows Server 2019 to provide advanced threat protection against these emerging cyberattacks. It not only monitors accounts for suspicious activity but also tracks user activities, prevents unauthorized changes, and automatically investigates attacks. It also provides options for remediation.
Top performance metrics to monitor for Windows Server

The top performance metrics to monitor for Windows Server include the following:
o CPU utilization: Regular CPU monitoring can be crucial for analyzing the CPU load and overcoming performance
issues. CPU usage and monitoring statistics help identify outages and more, so you can more easily drill down to the
root cause of downtime or CPU spikes to better ensure high performance.
o Memory utilization: Memory usage monitoring helps identify underused and overloaded servers so loads can be redistributed more effectively.
o Processor queue length: The processor queue length is the number of threads waiting to be executed by a processor. Continuously monitoring this metric can help you find out whether a processor can optimally handle the number of threads.
o Disk usage with a capacity plan: Keeping an eye on disk usage is critical for tracking irregular or sudden spikes. Measuring these metrics helps you plan for and track disk utilization and resolve issues before they become critical and affect your server's overall performance.
o Top processes by CPU and memory: It is important to analyze CPU usage to get insight into how much load is being placed on the server's processors at any given time. Based on this data, you can solve performance problems by adding more CPUs, upgrading the hardware, or shutting down unnecessary services.
Windows Server Performance Monitoring Best Practices
Windows Server performance monitoring refers to different processes through which you can accurately
measure key metrics. With the basic built-in tools in Windows Server, you can analyze and troubleshoot
common issues involving CPU, memory, hard disks, and more. However, third-party tools are often needed to monitor your Windows Server at scale, measure critical metrics, and identify issues.
Let's look at some monitoring best practices to help ensure your server is efficient, accurate, and useful.
o Define a baseline: A best practice is to keep track of your server activities. Make sure you have set baselines and
measurements for performing a system-level analysis by examining the entire system, not just a single metric or
component at a time.
o Monitor consistently: Windows Server performance monitoring should be done consistently. Monitoring processes can
help you watch critical components and their metrics. You can also automate and schedule monitoring processes to
look for errors and server downtime.
o Use tools: Measuring specific performance statistics and monitoring relevant metrics can be crucial to pinpoint
problems. Organizations may utilize various tools such as patch management to automate the most strenuous
processes, helping their servers stay up-to-date, checking for failed patches, and quickly fixing issues.