Question & Answer Bank
III YEAR – 05TH SEMESTER
CCS335 CLOUD COMPUTING
TABLE OF CONTENTS
Syllabus
I CLOUD ARCHITECTURE MODELS AND INFRASTRUCTURE 3
II VIRTUALIZATION BASICS 38
III VIRTUALIZATION INFRASTRUCTURE AND DOCKER 67
IV CLOUD DEPLOYMENT ENVIRONMENT 88
V CLOUD SECURITY 107
COURSE OBJECTIVES:
To understand the principles of cloud architecture, models and infrastructure.
To understand the concepts of virtualization and virtual machines.
To gain knowledge about virtualization infrastructure.
To explore and experiment with various cloud deployment environments.
To learn about the security issues in the cloud environment.
UNIT I CLOUD ARCHITECTURE MODELS AND INFRASTRUCTURE 6
Cloud Architecture: System Models for Distributed and Cloud Computing – NIST Cloud Computing Reference Architecture – Cloud deployment models – Cloud service models; Cloud Infrastructure: Architectural Design of Compute and Storage Clouds – Design Challenges.
UNIT II VIRTUALIZATION BASICS 6
Virtual Machine Basics – Taxonomy of Virtual Machines – Hypervisor – Key Concepts – Virtualization structure – Implementation levels of virtualization – Virtualization Types: Full Virtualization – Para Virtualization – Hardware Virtualization – Virtualization of CPU, Memory and I/O devices.
UNIT III VIRTUALIZATION INFRASTRUCTURE AND DOCKER 7
Desktop Virtualization – Network Virtualization – Storage Virtualization – System-level of Operating Virtualization – Application Virtualization – Virtual clusters and Resource Management – Containers vs. Virtual Machines – Introduction to Docker – Docker Components – Docker Container – Docker Images and Repositories.
UNIT IV CLOUD DEPLOYMENT ENVIRONMENT 6
Google App Engine – Amazon AWS – Microsoft Azure; Cloud Software Environments – Eucalyptus – OpenStack.
UNIT V CLOUD SECURITY 5
Virtualization System-Specific Attacks: Guest hopping – VM migration attack – hyperjacking. Data Security and Storage; Identity and Access Management (IAM) – IAM Challenges – IAM Architecture and Practice.
30 PERIODS
PRACTICAL EXERCISES: 30 PERIODS
1. Install VirtualBox/VMware/equivalent open source cloud workstation with different flavours of Linux or Windows OS on top of Windows 8 and above.
2. Install a C compiler in the virtual machine created using a VirtualBox and execute simple programs.
3. Install Google App Engine. Create a hello world app and other simple web applications using Python/Java.
4. Use the GAE launcher to launch the web applications.
5. Simulate a cloud scenario using CloudSim and run a scheduling algorithm that is not present in CloudSim.
6. Find a procedure to transfer the files from one virtual machine to another virtual machine.
7. Install Hadoop single node cluster and run simple applications like wordcount.
8. Creating and executing your first container using Docker.
9. Run a container from Docker Hub.
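The wordcount of exercise 7 can be prototyped in plain Python before moving it onto a Hadoop cluster; a minimal sketch of the same counting logic (the function name and sample text below are illustrative, not part of the syllabus):

```python
from collections import Counter

def word_count(text: str) -> Counter:
    """Count occurrences of each whitespace-separated word, case-insensitively."""
    words = text.lower().split()
    return Counter(words)

# Example run on a small input string.
counts = word_count("the cloud stores the data in the cloud")
print(counts.most_common(2))  # the two most frequent words and their counts
```

The same map (split into words) and reduce (sum counts per word) structure is what the Hadoop MapReduce version of the exercise distributes across cluster nodes.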
COURSE OUTCOMES:
CO1: Understand the design challenges in the cloud.
CO2: Apply the concept of virtualization and its types.
CO3: Experiment with virtualization of hardware resources and Docker.
CO4: Develop and deploy services on the cloud and set up a cloud environment.
CO5: Explain security challenges in the cloud environment.
TOTAL: 60 PERIODS
TEXTBOOKS
1. Kai Hwang, Geoffrey C. Fox, Jack G. Dongarra, "Distributed and Cloud Computing, From Parallel Processing to the Internet of Things", Morgan Kaufmann Publishers, 2012.
2. James Turnbull, "The Docker Book", O'Reilly Publishers, 2014.
3. Krutz, R. L., Vines, R. D., "Cloud Security. A Comprehensive Guide to Secure Cloud Computing", Wiley Publishing, 2010.
REFERENCES
1. James E. Smith, Ravi Nair, "Virtual Machines: Versatile Platforms for Systems and Processes", Elsevier/Morgan Kaufmann, 2005.
2. Tim Mather, Subra Kumaraswamy, and Shahed Latif, "Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance", O'Reilly Media, Inc.
CCS335 CLOUD COMPUTING
Question Bank
UNIT 1
CLOUD ARCHITECTURE MODELS AND INFRASTRUCTURE
SYLLABUS: Cloud Architecture: System Models for Distributed and Cloud Computing – NIST Cloud Computing Reference Architecture – Cloud deployment models – Cloud service models; Cloud Infrastructure: Architectural Design of Compute and Storage Clouds – Design Challenges
PART A
2 Marks
1. What is Cloud Computing? BTL1
Cloud computing is defined as storing and accessing data and computing services over the Internet. It doesn't store any data on your personal computer. It is the on-demand availability of computer services like servers, data storage, networking, databases, etc. The main purpose of cloud computing is to give access to data centers to many users. Users can also access data from a remote server.
Examples of cloud computing services: AWS, Azure.
2. Write down the characteristics of cloud computing. BTL1
The National Institute of Standards and Technology (NIST) lists five essential characteristics of cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.
3. What are all the Cloud Computing Services? BTL1
The three major cloud computing offerings are:
Software as a Service (SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
4. Describe the types of cloud computing. BTL1
There are four main types of cloud computing:
Private clouds,
Public clouds,
Hybrid clouds,
Multiclouds.
5. Write down the advantages (or pros) of cloud computing. BTL1
1. Improved performance
2. Lower IT infrastructure costs
3. Fewer maintenance issues
4. Lower software costs
5. Instant software updates
6. Increased computing power
6. Write down the disadvantages of cloud computing. BTL1
1. Requires a constant Internet connection
2. Does not work well with low-speed connections
3. Can be slow
4. Stored data might not be secure
5. Stored data can be lost
7. What are the computing paradigm distinctions? BTL1
➢ Centralized computing
➢ Parallel computing
➢ Distributed computing
➢ Cloud computing
8. What are the differences between grid computing and cloud computing? BTL2
Grid computing: resources are owned and managed by multiple organizations and federated together, typically to solve large scientific or technical problems collaboratively.
Cloud computing: resources are owned and managed by a single provider and delivered to users over the Internet as on-demand, pay-per-use services.
9. Difference between Cloud Computing and Distributed Computing: BTL2
Cloud computing:
1. In simple terms, cloud computing can be said as a computing technique that delivers hosted services over the Internet to its users/customers.
2. It is classified into 4 different types such as Public Cloud, Private Cloud, Community Cloud and Hybrid Cloud.
Distributed computing:
1. In simple terms, distributed computing can be said as a computing technique which allows multiple computers to communicate and work together to solve a single problem.
2. It is classified into 3 different types such as Distributed Computing Systems, Distributed Information Systems and Distributed Pervasive Systems.
10. List the actors defined in the NIST cloud computing reference architecture. BTL1
The NIST cloud computing reference architecture defines five major actors: cloud consumer, cloud provider, cloud carrier, cloud auditor and cloud broker. Each actor is an entity (a person or an organization) that participates in a transaction or process and/or performs tasks in cloud computing.
11. Discuss the general activity of actors in NIST architecture. BTL2
12. What is a Cloud Deployment Model? BTL1
A cloud deployment model functions as a virtual computing environment with a deployment architecture that varies depending on the amount of data you want to store and who has access to the infrastructure.
13. What is the right choice for a cloud deployment model? BTL1
• Cost: Cost is an important factor for the cloud deployment model as it tells how much you are willing to pay for these things.
• Scalability: Scalability tells about the current activity status and how much we can scale it.
• Ease of use: It tells how well your resources are trained and how easily you can manage these models.
• Compliance: Compliance tells about the laws and regulations which impact the implementation of the model.
• Privacy: Privacy tells about what data you gather for the model.
14. What are the different models of cloud computing? BTL1
Cloud computing helps in rendering several services according to roles, companies, etc. Cloud computing models are explained below.
• Infrastructure as a Service (IaaS)
• Platform as a Service (PaaS)
• Software as a Service (SaaS)
15. Define Infrastructure as a Service (IaaS). BTL1
Infrastructure as a Service (IaaS) is a type of cloud computing model that delivers fundamental computing resources such as servers, storage and networking to users on demand over the Internet, on a pay-as-you-go basis.
16. Define Platform as a Service (PaaS). BTL1
Platform as a Service (PaaS) is a type of cloud computing that helps developers to build applications and services over the Internet by providing them with a platform. PaaS helps in maintaining control over their business applications.
17. Define Software as a Service (SaaS). BTL1
Software as a Service (SaaS) is a type of cloud computing model that is the work of delivering services and applications over the Internet. The SaaS applications are called Web-Based Software or Hosted Software.
18. List the disadvantages of the public cloud model. BTL1
The disadvantages of the public cloud model are:
• Data Security and Privacy Concerns: Because it is open to the public, it does not provide complete protection against cyber-attacks and may expose weaknesses.
• Issues with Reliability: Because the same server network is accessible to a wide range of users, it is susceptible to failure and outages.
• Limitation on Service/License: While there are numerous resources that you may share with renters, there is a limit on how much you can use.
19. List the disadvantages of the hybrid cloud model. BTL1
The disadvantages of the hybrid cloud model are:
• Maintenance: A hybrid cloud computing strategy may necessitate additional maintenance, resulting in a greater operational expense for your company.
• Difficult Integration: When constructing a hybrid cloud, data and application integration might be difficult. It's also true that combining two or more infrastructures will incur a significant upfront cost.
20. List the disadvantages of the private cloud model. BTL1
The disadvantages of the private cloud model are:
• Restricted Scalability: Private clouds have restricted scalability because they are scaled within the confines of internally hosted resources. The choice of underlying hardware has an impact on scalability.
• Higher Cost: Due to the benefits you would receive, your investment will be higher than the public cloud (pay for software, hardware, staffing, etc.).
21. What are the cloud infrastructure components? BTL1
Different components of cloud infrastructure support the computing requirements of a cloud computing model. Cloud infrastructure has a number of key components, not limited to servers, software, networks and storage devices. Still, cloud infrastructure is generally categorized into three parts:
1. Computing
2. Networking
3. Storage
PART B
13 Marks
1. Explain in detail about the architecture of cloud computing. BTL4
(Definition: 2 marks, Diagram: 4 marks, Explanation: 7 marks)
Architecture of Cloud Computing
2. Discuss about system models for distributed and cloud computing. BTL2
(Definition: 2 marks, Diagram: 3 marks, Tabular column: 3 marks, Explanation: 5 marks)
Distributed and cloud computing systems are built over a large number of autonomous computer nodes. These node machines are interconnected by SANs, LANs, or WANs in a hierarchical manner. With today's networking technology, a few LAN switches can easily connect hundreds of machines as a working cluster. A WAN can connect many local clusters to form a very large cluster of clusters. In this sense, one can build a massive system with millions of computers connected to edge networks.
Massive systems are considered highly scalable, and can reach web-scale connectivity, either physically or logically. In Table 1.2, massive systems are classified into four groups: clusters, P2P networks, computing grids, and Internet clouds over huge data centers. In terms of node number, these four system classes may involve hundreds, thousands, or even millions of computers as participating nodes. These machines work collectively, cooperatively, or collaboratively at various levels. The table entries characterize these four system classes in various technical and application aspects.
1. Clusters of Cooperative Computers
A computing cluster consists of interconnected stand-alone computers which work cooperatively as a single integrated computing resource. In the past, clustered computer systems have demonstrated impressive results in handling heavy workloads with large data sets.
Cluster Architecture
Figure 1.15 shows the architecture of a typical server cluster built around a low-latency, high-bandwidth interconnection network. This network can be as simple as a SAN (e.g., Myrinet) or a LAN (e.g., Ethernet). To build a larger cluster with more nodes, the interconnection network can be built with multiple levels of Gigabit Ethernet, Myrinet, or InfiniBand switches. Through hierarchical construction using a SAN, LAN, or WAN, one can build scalable clusters with an increasing number of nodes. The cluster is connected to the Internet via a virtual private network (VPN) gateway. The gateway IP address locates the cluster. The system image of a computer is decided by the way the OS manages the shared cluster resources. Most clusters have loosely coupled node computers. All resources of a server node are managed by their own OS. Thus, most clusters have multiple system images as a result of having many autonomous nodes under different OS control.
Single-System Image
Greg Pfister [38] has indicated that an ideal cluster should merge multiple system images into a single-system image (SSI). Cluster designers desire a cluster operating system or some middleware to support SSI at various levels, including the sharing of CPUs, memory, and I/O across all cluster nodes. An SSI is an illusion created by software or hardware that presents a collection of resources as one integrated, powerful resource. SSI makes the cluster appear like a single machine to the user. A cluster with multiple system images is nothing but a collection of independent computers.
Hardware, Software, and Middleware Support
In Chapter 2, we will discuss cluster design principles for both small and large clusters. Clusters exploring massive parallelism are commonly known as MPPs. Almost all HPC clusters in the Top 500 list are also MPPs. The building blocks are computer nodes (PCs, workstations, servers, or SMP), special communication software such as PVM or MPI, and a network interface card in each computer node. Most clusters run under the Linux OS. The computer nodes are interconnected by a high-bandwidth network (such as Gigabit Ethernet, Myrinet, InfiniBand, etc.).
Special cluster middleware supports are needed to create SSI or high availability (HA). Both sequential and parallel applications can run on the cluster, and special parallel environments are needed to facilitate use of the cluster resources. For example, distributed memory has multiple images. Users may want all distributed memory to be shared by all servers by forming distributed shared memory (DSM). Many SSI features are expensive or difficult to achieve at various cluster operational levels. Instead of achieving SSI, many clusters are loosely coupled machines. Using virtualization, one can build many virtual clusters dynamically, upon user demand. We will discuss virtual clusters in Chapter 3 and the use of virtual clusters for cloud computing in Chapters 4, 5, 6, and 9.
Major Cluster Design Issues
Unfortunately, a cluster-wide OS for complete resource sharing is not available yet. Middleware or OS extensions were developed at the user space to achieve SSI at selected functional levels. Without this middleware, cluster nodes cannot work together effectively to achieve cooperative computing. The software environments and applications must rely on the middleware to achieve high performance. The cluster benefits come from scalable performance, efficient message passing, high system availability, seamless fault tolerance, and cluster-wide job management, as summarized in Table 1.3. We will address these issues in Chapter 2.
2. Grid Computing Infrastructures
In the past 30 years, users have experienced a natural growth path from Internet to web and grid computing services. Internet services such as the Telnet command enable a local computer to connect to a remote computer. A web service such as HTTP enables remote access of remote web pages. Grid computing is envisioned to allow close interaction among applications running on distant computers simultaneously. Forbes Magazine has projected the global growth of the IT-based economy from $1 trillion in 2001 to $20 trillion by 2015. The evolution from Internet to web and grid services is certainly playing a major role in this growth.
Computational Grids
Like an electric utility power grid, a computing grid offers an infrastructure that couples computers, software/middleware, special instruments, and people and sensors together. The grid is often constructed across LAN, WAN, or Internet backbone networks at a regional, national, or global scale. Enterprises or organizations present grids as integrated computing resources. They can also be viewed as virtual platforms to support virtual organizations. The computers used in a grid are primarily workstations, servers, clusters, and supercomputers. Personal computers, laptops, and PDAs can be used as access devices to a grid system.
Figure 1.16 shows an example computational grid built over multiple resource sites owned by different organizations. The resource sites offer complementary computing resources, including workstations, large servers, a mesh of processors, and Linux clusters to satisfy a chain of computational needs. The grid is built across various IP broadband networks including LANs and WANs already used by enterprises or organizations over the Internet. The grid is presented to users as an integrated resource pool as shown in the upper half of the figure.
Many national and international grids will be reported in Chapter 7: the NSF TeraGrid in the US, EGEE in Europe, and ChinaGrid in China for various distributed scientific grid applications.
Grid Families
3. Peer-to-Peer Network Families
An example of a well-established distributed system is the client-server architecture. In this scenario, client machines (PCs and workstations) are connected to a central server for compute, e-mail, file access, and database applications. The P2P architecture offers a distributed model of networked systems. First, a P2P network is client-oriented instead of server-oriented. In this section, P2P systems are introduced at the physical level and overlay networks at the logical level.
P2P Systems
In a P2P system, every node acts as both a client and a server, providing part of the system resources. Peer machines are simply client computers connected to the Internet. All client machines act autonomously to join or leave the system freely. This implies that no master-slave relationship exists among the peers. No central coordination or central database is needed. In other words, no peer machine has a global view of the entire P2P system. The system is self-organizing with distributed control.
Figure 1.17 shows the architecture of a P2P network at two abstraction levels. Initially, the peers are totally unrelated. Each peer machine joins or leaves the P2P network voluntarily. Only the participating peers form the physical network at any time. Unlike the cluster or grid, a P2P network does not use a dedicated interconnection network. The physical network is simply an ad hoc network formed at various Internet domains randomly using the TCP/IP and NAI protocols. Thus, the physical network varies in size and topology dynamically due to the free membership in the P2P network.
Overlay Networks
Data items or files are distributed in the participating peers. Based on communication or file-sharing needs, the peer IDs form an overlay network at the logical level. This overlay is a virtual network formed by mapping each physical machine with its ID, logically, through a virtual mapping as shown in Figure 1.17. When a new peer joins the system, its peer ID is added as a node in the overlay network. When an existing peer leaves the system, its peer ID is removed from the overlay network automatically. Therefore, it is the P2P overlay network that characterizes the logical connectivity among the peers.
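The join/leave behaviour of an overlay can be sketched as a mapping from logical peer IDs to physical machine addresses; a toy Python illustration (the class, method names, and addresses are our own, invented for this sketch):

```python
class OverlayNetwork:
    """Toy model of a P2P overlay: a logical node set keyed by peer IDs."""

    def __init__(self):
        self.peers = {}  # peer ID -> physical machine address (the virtual mapping)

    def join(self, peer_id, address):
        # A new peer's ID is added as a node in the overlay network.
        self.peers[peer_id] = address

    def leave(self, peer_id):
        # A departing peer's ID is removed from the overlay automatically.
        self.peers.pop(peer_id, None)

overlay = OverlayNetwork()
overlay.join("peer-1", "10.0.0.5")
overlay.join("peer-2", "10.0.0.9")
overlay.leave("peer-1")
print(sorted(overlay.peers))  # only peer-2 remains in the logical network
```

The physical machines come and go freely; it is this logical ID-to-address mapping that defines connectivity among the peers.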
There are two types of overlay networks: unstructured and structured. An unstructured overlay network is characterized by a random graph. There is no fixed route to send messages or files among the nodes. Often, flooding is applied to send a query to all nodes in an unstructured overlay, thus resulting in heavy network traffic and nondeterministic search results. Structured overlay networks follow certain connectivity topology and rules for inserting and removing nodes (peer IDs) from the overlay graph. Routing mechanisms are developed to take advantage of the structured overlays.
P2P Application Families
Based on application, P2P networks are classified into four groups, as shown in Table 1.5. The first family is for distributed file sharing of digital contents (music, videos, etc.) on the P2P network. This includes many popular P2P networks such as Gnutella, Napster, and BitTorrent, among others. Collaboration P2P networks include MSN or Skype chatting, instant messaging, and collaborative design, among others. The third family is for distributed P2P computing in specific applications. For example, SETI@home provides 25 Tflops of distributed computing power, collectively, over 3 million Internet host machines. Other P2P platforms, such as JXTA, .NET, and FightingAID@home, support naming, discovery, communication, security, and resource aggregation in some P2P applications. We will discuss these topics in more detail in Chapters 8 and 9.
P2P Computing Challenges
P2P computing faces three types of heterogeneity problems in hardware, software, and network requirements. There are too many hardware models and architectures to select from; incompatibility exists between software and the OS; and different network connections and protocols make it too complex to apply in real applications. We need system scalability as the workload increases. System scaling is directly related to performance and bandwidth. P2P networks do have these properties. Data location is also important to affect collective performance. Data locality, network proximity, and interoperability are three design objectives in distributed P2P applications. P2P performance is affected by routing efficiency and self-organization by participating peers.
Fault tolerance, failure management, and load balancing are other important issues in using overlay networks. Lack of trust among peers poses another problem. Peers are strangers to one another. Security, privacy, and copyright violations are major worries by those in the industry in terms of applying P2P technology in business applications [35]. In a P2P network, all clients provide resources including computing power, storage space, and I/O bandwidth. The distributed nature of P2P networks also increases robustness, because limited peer failures do not form a single point of failure.
By replicating data in multiple peers, one can tolerate the loss of data in failed nodes. On the other hand, disadvantages of P2P networks do exist. Because the system is not centralized, managing it is difficult. In addition, the system lacks security. Anyone can log on to the system and cause damage or abuse. Further, all client computers connected to a P2P network cannot be considered reliable or virus-free. In summary, P2P networks are reliable for a small number of peer nodes. They are only useful for applications that require a low level of security and have no concern for data sensitivity. We will discuss P2P networks in Chapter 8, and extending P2P technology to social networking in Chapter 9.
4. Cloud Computing over the Internet
Gordon Bell, Jim Gray, and Alex Szalay [5] have advocated: "Computational science is changing to be data-intensive. Supercomputers must be balanced systems, not just CPU farms but also petascale I/O and networking arrays." In the future, working with large data sets will typically mean sending the computations (programs) to the data, rather than copying the data to the workstations. This reflects the trend in IT of moving computing and data from desktops to large data centers, where there is on-demand provision of software, hardware, and data as a service. This data explosion has promoted the idea of cloud computing.
Cloud computing has been defined differently by many users and designers. For example, IBM, a major player in cloud computing, has defined it as follows: "A cloud is a pool of virtualized computer resources. A cloud can host a variety of different workloads, including batch-style backend jobs and interactive and user-facing applications." Based on this definition, a cloud allows workloads to be deployed and scaled out quickly through rapid provisioning of virtual or physical machines. The cloud supports redundant, self-recovering, highly scalable programming models that allow workloads to recover from many unavoidable hardware/software failures. Finally, the cloud system should be able to monitor resource use in real time to enable rebalancing of allocations when needed.
Internet Clouds
Cloud computing applies a virtualized platform with elastic resources on demand by provisioning hardware, software, and data sets dynamically (see Figure 1.18). The idea is to move desktop computing to a service-oriented platform using server clusters and huge databases at data centers. Cloud computing leverages its low cost and simplicity to benefit both users and providers. Machine virtualization has enabled such cost-effectiveness. Cloud computing intends to satisfy many user applications simultaneously.
REGULATION 2021 ACADEMIC YEAR 2023-2024
3. Explain in detail about Layered Cloud Architecture Design. BTL4
(Definition: 2 marks, Explanation: 8 marks, Diagram: 3 marks)
• The architecture of a cloud is developed at three layers: infrastructure, platform and application, as demonstrated in Figure 1.15. These three development layers are implemented with virtualization and standardization of hardware and software resources provisioned in the cloud. The services to public, private and hybrid clouds are conveyed to users through networking support over the Internet and intranets involved.
• The platform should be able to assure users that they have scalability, dependability, and security protection. In a way, the virtualized cloud platform serves as a "system middleware" between the infrastructure and application layers of the cloud. The application layer is formed with a collection of all needed software modules for SaaS applications. Service applications in this layer include daily office management work such as information retrieval, document processing and calendar and authentication services.
• The best example of this is the Salesforce.com CRM service, in which the provider supplies not only the hardware at the bottom layer and the software at the top layer but also the platform and software tools for user application development and monitoring.
• In Market-Oriented Cloud Architecture, as consumers rely on cloud providers to meet more of their computing needs, they will require a specific level of QoS to be maintained by their providers, in order to meet their objectives and sustain their operations. Market-oriented resource management is necessary to regulate the supply and demand of cloud resources to achieve market equilibrium between supply and demand.
• This cloud is basically built with the following entities:
o Users or brokers acting on a user's behalf submit service requests from anywhere in the world to the data center and cloud to be processed. The request examiner ensures that there is no overloading of resources whereby many service requests cannot be fulfilled successfully due to limited resources.
o The Pricing mechanism decides how service requests are charged. For instance, requests can be charged based on submission time (peak/off-peak), pricing rates (fixed/changing), or availability of resources (supply/demand).
o The VM Monitor mechanism keeps track of the availability of VMs and their resource entitlements.
o The Accounting mechanism maintains the actual usage of resources by requests so that the final cost can be computed and charged to users. In addition, the maintained historical usage information can be utilized by the Service Request Examiner and Admission Control mechanism to improve resource allocation decisions.
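The Pricing and Accounting mechanisms described above can be illustrated with a toy charging function. The rates, the peak-hour window, and the usage figures below are invented for the sketch, not taken from the text:

```python
# Hypothetical peak/off-peak rates, in currency units per resource-hour.
PEAK_RATE, OFF_PEAK_RATE = 2.0, 1.0

def charge(resource_hours: float, submission_hour: int) -> float:
    """Price a service request by its submission time (peak = 9:00 to 17:59)."""
    rate = PEAK_RATE if 9 <= submission_hour < 18 else OFF_PEAK_RATE
    return resource_hours * rate

# Accounting: record actual usage per request so the final cost can be
# computed and charged to the user.
usage_log = [(4.0, 10), (2.0, 22)]  # (resource-hours used, submission hour)
total_cost = sum(charge(hours, hour) for hours, hour in usage_log)
print(total_cost)  # 4*2.0 + 2*1.0 = 10.0
```

A real market-oriented scheduler would also vary the rate with supply and demand; here the submission-time rule alone shows how the pricing and accounting entities cooperate.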
4. Explain in detail about architectural design challenges of
(i) service availability and data lock-in problem
(ii) Data Privacy and Security Concerns. BTL4
(Concept Explanation (i) 7 marks, Concept Explanation (ii) 6 marks)
Challenge 1: Service Availability and Data Lock-in Problem
• The management of a cloud service by a single company is often the source of single points of failure.
• To achieve HA, one can consider using multiple cloud providers. Even if a company has multiple data centers located in different geographic regions, it may have common software infrastructure and accounting systems. Therefore, using multiple cloud providers may provide more protection from failures.
• Another availability obstacle is distributed denial of service (DDoS) attacks. Criminals threaten to cut off the incomes of SaaS providers by making their services unavailable. Some utility computing services offer SaaS providers the opportunity to defend against DDoS attacks by using quick scale-ups.
• Software stacks have improved interoperability among different cloud platforms, but the APIs themselves are still proprietary. Thus, customers cannot easily extract their data and programs from one site to run on another.
• The obvious solution is to standardize the APIs so that a SaaS developer can deploy services and data across multiple cloud providers. This will prevent the loss of all data due to the failure of a single company. In addition to mitigating data lock-in concerns, standardization of APIs also enables hybrid operation across private and public clouds.
Challenge 2: Data Privacy and Security Concerns
• Current cloud offerings are essentially public (rather than private) networks, exposing the system to more attacks.
• Many obstacles can be overcome immediately with well understood technologies such as encrypted storage, virtual LANs, and network middleboxes (e.g., firewalls, packet filters). For example, the end user could encrypt data before placing it in a cloud. Many nations have laws requiring SaaS providers to keep customer data and copyrighted material within national boundaries.
• Traditional network attacks include spyware, malware, rootkits, Trojan horses, and worms. In a cloud environment, newer attacks may result from hypervisor malware, guest hopping and hijacking, or VM rootkits.
5. Explain in detail about architectural design challenges of
(i) Unpredictable Performance and Bottlenecks
(ii) Distributed Storage and Widespread Software Bugs. BTL4
(Concept Explanation (i) 7 marks, Concept Explanation (ii) 6 marks)
Challenge 3: Unpredictable Performance and Bottlenecks
• For example, to run 75 EC2 instances with the STREAM benchmark requires a mean bandwidth of 1,355 MB/second. However, for each of the 75 EC2 instances to write 1 GB files to the local disk requires a mean disk write bandwidth of only 55 MB/second. This demonstrates the problem of I/O interference between VMs.
• One solution is to improve I/O architectures and operating systems to efficiently virtualize interrupts and I/O channels.
• Internet applications continue to become more data-intensive. If we assume applications to be pulled apart across the boundaries of clouds, this may complicate data placement and transport. Cloud users and providers have to think about the implications of placement and traffic at every level of the system if they want to minimize costs.
Challenge 4: Distributed Storage and Widespread Software Bugs
• The database is always growing in cloud applications.
• The opportunity is to create a storage system that will not only meet this growth but also combine it with the cloud advantage of scaling arbitrarily up and down on demand.
• This demands the design of efficient distributed SANs. Data centers must meet programmers' expectations in terms of scalability, data durability and HA.
• Data consistency checking in SAN-connected data centers is a major challenge in cloud computing. Large-scale distributed bugs cannot be reproduced, so the debugging must occur at a scale in the production data centers. No data center will provide such a convenience.
• One solution may be a reliance on using VMs in cloud computing. The level of virtualization may make it possible to capture valuable information in ways that are impossible without using VMs.
6. Explainindetailaboutarchitecturaldesignchallengesof
(i) CloudScalability,Interoperability
(ii) SoftwareLicensingandReputation?BTL4
(ConceptExplanation(i)8marks,ConceptExplanation(ii)5marks)
Challenge 5: Cloud Scalability, Interoperability and Standardization
● The pay-as-you-go model applies to storage and network bandwidth; both are counted in terms of the number of bytes used.
● Computation is different depending on the virtualization level.
● AWS charges by the hour for the number of VM instances used, even when the machine is idle.
● The Open Virtualization Format (OVF) describes an open, secure, portable, efficient and extensible format for the packaging and distribution of VMs.
● This VM format does not rely on the use of a specific host platform, virtualization platform or guest operating system.
● The approach is to address virtual-platform-agnostic packaging with certification and integrity of packaged software. The package supports virtual appliances that span more than one VM.
● OVF also defines a transport mechanism for VM templates, and the format can apply to different virtualization platforms with different levels of virtualization.
● In terms of cloud standardization, virtual appliances should be able to run on any virtual platform. Users also need to enable VMs to run on heterogeneous hardware platform hypervisors.
● This requires hypervisor-agnostic VMs. Users also need to realize cross-platform live migration between x86 Intel and AMD technologies and support legacy hardware for load balancing.
● All these issues are wide open for further research.
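The per-hour billing rule mentioned above can be made concrete with a small calculator. This is a minimal sketch: the function name and the rates used below are illustrative assumptions, not any provider's actual prices or algorithm.

```python
import math

def instance_cost(hours_running, hourly_rate, num_instances=1):
    """Estimate a pay-as-you-go compute bill.

    Mirrors the classic per-hour billing described above: partial
    hours are rounded up, and idle-but-running instances still pay.
    """
    billable_hours = math.ceil(hours_running)
    return billable_hours * hourly_rate * num_instances

# 30.5 hours on 3 instances at a hypothetical $0.25/hour:
print(instance_cost(30.5, 0.25, 3))  # 31 billable hours * 0.25 * 3 = 23.25
```

Note that the idle machine pays the same as a busy one — which is exactly why the text contrasts byte-counted storage/bandwidth billing with instance-hour compute billing.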
PART C
15 Marks
1. Explain in detail about Models of Cloud Computing? BTL4
(Definition: 2 marks, Diagram: 3 marks, Concept Explanation: 6 marks, Advantages: 2 marks, Disadvantages: 2 marks)
Cloud computing helps in rendering several services according to roles, companies, etc. The cloud computing models are explained below.
• Infrastructure as a service (IaaS)
• Platform as a service (PaaS)
• Software as a service (SaaS)
1. Infrastructure as a service (IaaS)
Infrastructure as a Service (IaaS) delivers virtualized computing resources such as servers, storage and networking over the Internet; the consumer manages the operating system and applications while the provider manages the underlying hardware.
2. Platform as a service (PaaS)
Platform as a Service (PaaS) is a type of cloud computing that helps developers to build applications and services over the Internet by providing them with a platform.
PaaS helps developers maintain control over their business applications.
Advantages of PaaS
• PaaS is simple and very convenient for the user as it can be accessed via a web browser.
• PaaS has the capabilities to efficiently manage the application lifecycle.
Disadvantages of PaaS
• PaaS users have limited control over the infrastructure, as they have less control over the environment and are not able to make some customizations.
• PaaS has a high dependence on the provider.
3. Software as a service (SaaS)
Software as a Service (SaaS) is a cloud computing model that delivers services and applications over the Internet. SaaS applications are called Web-Based Software or Hosted Software.
SaaS accounts for around 60 percent of cloud solutions and, due to this, it is mostly preferred by companies.
Advantages of SaaS
• SaaS app data can be accessed from anywhere on the Internet.
• SaaS provides easy access to features and services.
Disadvantages of SaaS
• SaaS solutions have limited customization, which means they have some restrictions within the platform.
• SaaS gives the user little control over their data.
• SaaS solutions are generally cloud-based, so they require a stable internet connection to work properly.
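The split of responsibility among IaaS, PaaS and SaaS can be sketched as a table in code. This uses the conventional textbook layering, not any vendor's official matrix; the layer names and split points below are assumptions for illustration.

```python
# Hedged sketch: who manages which layer under each service model.
STACK = ["application", "data", "runtime", "os", "virtualization",
         "servers", "storage", "networking"]

# Index into STACK at which provider responsibility begins.
PROVIDER_MANAGES_FROM = {
    "IaaS": 4,   # provider: virtualization and below
    "PaaS": 2,   # provider: runtime and below
    "SaaS": 0,   # provider: everything
}

def managed_by(model, layer):
    """Return 'provider' or 'consumer' for a layer under a model."""
    split = PROVIDER_MANAGES_FROM[model]
    return "provider" if STACK.index(layer) >= split else "consumer"

print(managed_by("IaaS", "os"))          # consumer patches the OS
print(managed_by("PaaS", "runtime"))     # provider supplies the runtime
print(managed_by("SaaS", "application")) # provider runs the app itself
```

Reading the table top-down makes the trade-off in the sections above concrete: SaaS minimizes consumer effort but also consumer control, while IaaS does the opposite.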
Cloud infrastructure
Cloud computing is one of the most in-demand technologies of the current scenario and has proved to be a revolutionary technology trend for businesses of all sizes. It manages a broad and complex infrastructure setup to provide cloud services and resources to customers. Cloud infrastructure, which comes under the backend part of cloud architecture, represents the hardware and software components such as servers, storage, networking, management software, deployment software and virtualization software. In the backend, cloud infrastructure enables the complete cloud computing system.
Why Cloud Computing Infrastructure:
Cloud computing refers to providing on-demand services to the customer anywhere and anytime, and the cloud infrastructure is what activates the complete cloud computing system. Cloud infrastructure is capable of providing the same services as physical infrastructure. It is available for private cloud, public cloud, and hybrid cloud systems with low cost, greater flexibility and scalability.
Cloud infrastructure components:
The different components of cloud infrastructure support the computing requirements of a cloud computing model. Cloud infrastructure has a number of key components, not limited to servers, software, network and storage devices. Still, cloud infrastructure is generally categorized into three parts:
1. Computing
2. Networking
3. Storage
The most important point is that cloud infrastructure should satisfy some basic infrastructural constraints like transparency, scalability, security and intelligent monitoring.
The figure below represents the components of cloud infrastructure.
Components of Cloud Infrastructure
1. Hypervisor:
A hypervisor is firmware or a low-level program which is the key to enabling virtualization. It is used to divide and allocate cloud resources between several customers. Because it monitors and manages cloud services/resources, the hypervisor is also called a VMM (Virtual Machine Monitor, or Virtual Machine Manager).
2. Management Software:
Management software helps in maintaining and configuring the infrastructure. Cloud management software monitors and optimizes resources, data, applications and services.
3. Deployment Software:
Deployment software helps in deploying and integrating the application on the cloud; typically it helps in building a virtual computing environment.
4. Network:
The network is one of the key components of cloud infrastructure, responsible for connecting cloud services over the internet. A network is a must for the transmission of data and resources externally and internally.
5. Server:
The server, which represents the computing portion of the cloud infrastructure, is responsible for managing and delivering cloud services to various services and partners, maintaining security, etc.
6. Storage:
Storage represents the storage facility provided to different organizations for storing and managing data. Because it keeps many copies of the data, it provides a facility for recovering from another resource if one resource fails.
Along with this, virtualization is also considered one of the important components of cloud infrastructure, because it abstracts the available data storage and computing power away from the actual hardware, and users interact with their cloud infrastructure through a GUI (Graphical User Interface).
2. Explain about the NIST reference architecture? BTL4
(Definition: 2 marks, Diagram: 4 marks, Explanation: 9 marks)
NIST stands for National Institute of Standards and Technology.
NIST working groups on cloud computing:
o Cloud computing target business use cases work group
o Cloud computing reference architecture and taxonomy work group
o Cloud computing standards roadmap work group
o Cloud computing security work group
Goals of the NIST cloud computing reference architecture:
o Illustrate and understand the various levels of services
o Provide a technical reference
o Categorize and compare services of cloud computing
o Analysis of security, interoperability and portability
● In general, NIST generates reports for future reference which include surveys and analysis of existing cloud computing reference models, vendors and federal agencies.
The conceptual reference architecture shown in figure 1.4 involves five actors. Each actor is an entity that participates in cloud computing.
Cloud consumer: A person or an organization that maintains a business relationship with, and uses services from, cloud providers.
Cloud provider: A person, organization or entity responsible for making a service available to interested parties.
Cloud auditor: A party that conducts independent assessment of cloud services, information system operation, performance and security of the cloud implementation.
Cloud broker: An entity that manages the use, performance and delivery of cloud services and negotiates relationships between cloud providers and consumers.
Cloud carrier: An intermediary that provides connectivity and transport of cloud services from cloud providers to cloud consumers.
Figure 1.5 illustrates the common interactions between the cloud consumer and provider, where the broker provides services to the consumer and the auditor collects the audit information.
The interaction between the actors may lead to different use case scenarios.
Figure 1.6 shows one kind of scenario in which the cloud consumer may request service from a cloud broker instead of contacting the service provider directly. In this case, a cloud broker can create a new service by combining multiple services.
Figure 1.7 illustrates the usage of different kinds of Service Level Agreements (SLAs) between consumer, provider and carrier.
The cloud consumer is the principal stakeholder for the cloud computing service and requires service level agreements to specify the performance requirements to be fulfilled by a cloud provider.
● The service level agreement covers Quality of Service and security aspects. Consumers have limited rights to access the software applications.
There are three kinds of cloud consumers: SaaS consumers, PaaS consumers and IaaS consumers.
● SaaS consumers are members who directly access the software application. Examples include document management, content management, social networks, financial billing and so on.
● PaaS consumers deploy, test, develop and manage applications hosted in the cloud environment. Database application deployment, development and testing is an example of this kind of consumer.
● IaaS consumers can access the virtual computer, storage and network infrastructure. For example, using an Amazon EC2 instance to deploy a web application.
On the other hand, Cloud Providers have complete rights to access software applications. In
Software as a Service model, cloud provider is allowed to configure, maintain and update the
operations of software application.
● Normally, the service layer defines the interfaces for cloud consumers to access the computing services.
• The resource abstraction and control layer contains the system components that cloud providers use to provide and manage access to the physical computing resources through software abstraction.
• Resource abstraction covers virtual machine management and virtual storage management. The control layer focuses on resource allocation, access control and usage monitoring.
• Physical resourcelayerincludesphysicalcomputing resourcessuch asCPU, Memory,Router,
Switch, Firewalls and Hard Disk Drive.
Service orchestration describes the automated arrangement, coordination and management of
complex computing system
• In cloud service management, business support entails the set of business related services dealing
with consumer and supporting services which includes content management, contract management,
inventory management, accounting service, reporting service and rating service.
• Provisioning of equipment, wiring and transmission is mandatory to set up a new service that provides a specific application to the cloud consumer. Those details are described in provisioning and configuration management.
Portability is the ability to work in more than one computing environment without major effort. Similarly, interoperability means the ability of a system to work with other systems.
• The security factor is applicable to enterprises and Government. It may include privacy.
Privacy applies to a cloud consumer's right to safeguard his information from other consumers or parties.
3. Explain in detail about Cloud Deployment Models? BTL4
(Diagram: 3 marks, Explanation: 6 marks, Advantages: 2 marks, Disadvantages: 2 marks, Tabular column: 2 marks)
In cloud computing, we have access to a shared pool of computer resources (servers, storage,
programs, and so on) in the cloud. You simply need to request additional resources when you
require them. Getting resources up and running quickly is a breeze thanks to the clouds. It is
possible to release resources that are no longer necessary. This method allows you to just pay for
what you use.Your cloud provider is in charge of all upkeep.
Cloud Deployment Model
A cloud deployment model functions as a virtual computing environment with a deployment architecture that varies depending on the amount of data you want to store and who has access to the infrastructure.
Types of Cloud Computing Deployment Models
The cloud deployment model identifies the specific type of cloud environment based on
ownership, scale, and access, as well as the cloud’s nature and purpose. The location of the
servers you’re utilizing and who controls them are defined by a cloud deployment model. It
specifies how your cloud infrastructure will look, what you can change, and whether you will be
given services or will have to create everything yourself. Relationships between theinfrastructure
and your users are also defined by cloud deployment types. Different types ofcloudcomputing
deployment models are described below.
• Public Cloud
• Private Cloud
• Hybrid Cloud
• Community Cloud
• Multi-Cloud
Public Cloud
The public cloud makes it possible for anybody to access systems and services.The public cloud
may be less secure as it is open to everyone. The public cloud is one in which cloudinfrastructure
services are provided over the internet to the general people or major industry groups. The
infrastructure in this cloud model is owned by the entity that delivers the cloud services, not by
the consumer. It is a type of cloud hosting that allows customers and users to easily access
systems and services. This form of cloud computing is an excellent example of cloud hosting, in
which service providers supply services to a variety of customers. In this arrangement, storage
backup and retrieval services are given for free, as a subscription, or on a per-user basis. For
example, Google App Engine etc.
Public Cloud
Advantages of the Public Cloud Model
• Minimal investment: Because it is a pay-per-use service, there is no substantial upfront fee, making it excellent for enterprises that require immediate access to resources.
• No setup cost: The entire infrastructure is fully subsidized by the cloud service providers, thus there is no need to set up any hardware.
• Infrastructure management is not required: Using the public cloud does not necessitate infrastructure management.
• No maintenance: The maintenance work is done by the service provider (not users).
• Dynamic scalability: To fulfill your company's needs, on-demand resources are accessible.
Disadvantages of the Public Cloud Model
• Less secure: The public cloud is less secure as resources are public, so there is no guarantee of high-level security.
• Low customization: It is accessed by many members of the public, so it can't be customized according to personal requirements.
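The "dynamic scalability" advantage above is usually realized by an autoscaling policy. The following is a minimal sketch of such a policy; the thresholds, step size, and function name are illustrative assumptions, not any cloud provider's actual algorithm.

```python
def desired_instances(current, cpu_percent, min_n=1, max_n=10,
                      scale_out_above=75, scale_in_below=25):
    """Toy autoscaling decision: grow under load, shrink when idle."""
    if cpu_percent > scale_out_above:
        current += 1        # add capacity under heavy load
    elif cpu_percent < scale_in_below:
        current -= 1        # release idle capacity to save cost
    return max(min_n, min(max_n, current))  # clamp to the allowed range

print(desired_instances(3, cpu_percent=90))  # 4: scale out
print(desired_instances(3, cpu_percent=10))  # 2: scale in
print(desired_instances(1, cpu_percent=10))  # 1: never below the minimum
```

The clamp to a minimum and maximum is what keeps elasticity within the pay-per-use budget the section describes.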
PrivateCloud
Theprivatecloud deploymentmodelistheexactoppositeofthepublicclouddeploymentmodel. It’s a
one-on-one environment for a single user (customer). There is no need to share your hardware
with anyone else. The distinction between private and public cloudsis in how you handle all of
the hardware. It is also called the “internal cloud” & it refers to the ability to access systems and
services within a given border or organization. The cloud platform is implementedin a cloud-
based secure environment that is protected by powerful firewalls and under the supervision of an
organization’s IT department. The private cloud gives greater flexibility of control over cloud
resources.
Private Cloud
Advantages of the Private Cloud Model
• Better control: You are the sole owner of the property. You gain complete command over service integration, IT operations, policies, and user behavior.
• Data security and privacy: It's suitable for storing corporate information to which only authorized staff have access. By segmenting resources within the same infrastructure, improved access and security can be achieved.
• Supports legacy systems: This approach is designed to work with legacy systems that are unable to access the public cloud.
• Customization: Unlike a public cloud deployment, a private cloud allows a company to tailor its solution to meet its specific needs.
Disadvantages of the Private Cloud Model
• Less scalable: Private clouds are scaled within a certain range as there are fewer clients.
• Costly: Private clouds are more costly as they provide personalized facilities.
HybridCloud
By bridging the public and private worlds with a layer of proprietary software, hybrid cloud
computing gives the best of both worlds. With a hybrid solution, you may host the app in a safe
environment while taking advantage of the public cloud’s cost savings. Organizations can move
data and applications between different clouds using a combination of two or more cloud
deployment methods, depending on their needs.
Hybrid Cloud
Advantages of the Hybrid Cloud Model
• Flexibility and control: Businesses with more flexibility can design personalized solutions that meet their particular needs.
• Cost: Because public clouds provide scalability, you'll only be responsible for paying for the extra capacity if you require it.
• Security: Because data is properly separated, the chances of data theft by attackers are considerably reduced.
Disadvantages of the Hybrid Cloud Model
• Difficult to manage: Hybrid clouds are difficult to manage because they combine both public and private clouds, so the setup is complex.
• Slow data transmission: Data transmission in the hybrid cloud takes place through the public cloud, so latency occurs.
CommunityCloud
It allows systems and services to be accessible by a group of organizations. It is a distributed
system that is created by integrating the services of different clouds to address the specific needs
of a community, industry, or business. The infrastructure of the community could be shared
between the organization which has shared concerns or tasks. It is generally managed by a third
party or by the combination of one or more organizations in the community.
Community Cloud
Advantages of the Community Cloud Model
• Cost effective: It is cost-effective because the cloud is shared by multiple organizations or communities.
• Security: Community cloud provides better security.
• Shared resources: It allows you to share resources, infrastructure, etc. with multiple organizations.
• Collaboration and data sharing: It is suitable for both collaboration and data sharing.
Disadvantages of the Community Cloud Model
• Limited scalability: Community cloud is relatively less scalable as many organizations share the same resources according to their collaborative interests.
• Rigid customization: As the data and resources are shared among different organizations according to their mutual interests, if one organization wants changes according to its needs it cannot make them, because doing so would have an impact on the other organizations.
Multi-Cloud
We're talking about employing multiple cloud providers at the same time under this paradigm, as the name implies. It's similar to the hybrid cloud deployment approach, which combines public and private cloud resources. Instead of merging private and public clouds, multi-cloud uses many public clouds. Although public cloud providers provide numerous tools to
improve the reliability of their services, mishaps still occur. It’s quite rare that two distinct clouds
would have an incident at the same moment. As a result, multi-cloud deployment improves the
high availability of your services even more.
Multi-Cloud
Advantages of the Multi-Cloud Model
• You can mix and match the best features of each cloud provider's services to suit the demands of your apps, workloads, and business by choosing different cloud providers.
• Reduced latency: To reduce latency and improve user experience, you can choose cloud regions and zones that are close to your clients.
• High availability of service: It's quite rare that two distinct clouds would have an incident at the same moment. So, the multi-cloud deployment improves the high availability of your services.
Disadvantages of the Multi-Cloud Model
• Complex: The combination of many clouds makes the system complex and bottlenecks may occur.
• Security issues: Due to the complex structure, there may be loopholes that a hacker can take advantage of, which makes the data insecure.
Overall Analysis of Cloud Deployment Models
The overall analysis of these models with respect to different factors is described below.

Factors                     | Public Cloud | Private Cloud | Community Cloud | Hybrid Cloud
Scalability and Flexibility | High         | High          | Fixed           | High
UNIT 2
VIRTUALIZATION BASICS
PART A
2 Marks
1. Define virtual machine? BTL1
A VM is a virtualized instance of a computer that can perform almost all of the same functions as a computer, including running applications and operating systems. Virtual machines run on a physical machine and access computing resources from software called a hypervisor.
2. Define a cloud virtual machine? BTL1
A cloud virtual machine is the digital version of a physical computer that can run in a cloud. Like a physical machine, it can run an operating system, store data, connect to networks, and do all the other computing functions.
3. List the Advantages of cloud virtual machines? BTL1
REGULATION 2021 ACADEMIC YEAR 2023-2024
There are many advantages to using cloud virtual machines instead of physical machines, including:
Low cost: It is cheaper to spin off a virtual machine in the cloud than to procure a physical machine.
Easy scalability: We can easily scale in or scale out the infrastructure of a cloud virtual machine based on load.
Ease of setup and maintenance: Spinning off virtual machines is very easy as compared to buying actual hardware. This helps us get set up quickly.
Shared responsibility: Disaster recovery becomes the responsibility of the cloud provider. We don't need a different disaster recovery site in case our primary site goes down.
4. List the Benefits of Virtualization? BTL1
More flexible and efficient allocation of resources.
Enhanced development productivity.
It lowers the cost of IT infrastructure.
Remote access and rapid scalability.
High availability and disaster recovery.
Pay per use of the IT infrastructure on demand.
Enables running multiple operating systems.
5. Is there any limit to the number of virtual machines one can install? BTL1
In general there is no limit, because it depends on the hardware of your system. As the VMs use the hardware of your system, if that capacity is exceeded you will be prevented from installing further virtual machines.
6. Can one access the files of one VM from another? BTL1
In general no, but as an advanced feature, we can enable file sharing between different virtual machines.
7. What are the Types of Virtual Machines? BTL1
We can classify virtual machines into two types:
1. System Virtual Machine
2. Process Virtual Machine
8. What are the Types of Virtualization? BTL1
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data Virtualization
9. Define the Uses of Virtualization? BTL1
Data integration
Business integration
Service-oriented architecture data services
Searching organizational data
10. What is meant by hypervisor? BTL1
A hypervisor, also known as a virtual machine monitor or VMM, is software that creates and runs virtual machines (VMs). A hypervisor allows one host computer to support multiple guest VMs by virtually sharing its resources, such as memory and processing.
11. List down the different types of VMM? BTL1
VMware ESXi
Xen
KVM
12. What are the types of hypervisor? BTL1
Type 1 hypervisors run directly on the system hardware. They are often referred to as "native", "bare metal" or "embedded" hypervisors in vendor literature.
Type 2 hypervisors run on a host operating system.
13. What is a Virtualized Infrastructure Manager (VIM)? BTL1
The virtualized infrastructure manager (VIM) in a Network Functions Virtualization (NFV) implementation manages the hardware and software resources that the service provider uses to create service chains and deliver network services to customers.
14. Differentiate between system VM and process VM? BTL2
A process virtual machine, sometimes called an application virtual machine, runs as a normal application inside a host OS and supports a single process. It is created when that process is started and destroyed when it exits. Its purpose is to provide a platform-independent programming environment that abstracts away details of the underlying hardware or operating system, and allows a program to execute in the same way on any platform.
A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS); VirtualBox is one example.
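The CPython interpreter is itself a familiar process VM: it compiles a program to platform-independent bytecode and executes it the same way on any host. A small sketch using the standard dis module shows that abstraction directly (note the add opcode is named BINARY_ADD on Python versions before 3.11 and BINARY_OP from 3.11 on, so the check below accepts either):

```python
import dis

def add(a, b):
    return a + b

# Disassemble the function to the bytecode the Python process VM
# actually executes; the same instructions run on any platform.
instructions = [ins.opname for ins in dis.get_instructions(add)]
print("BINARY_OP" in instructions or "BINARY_ADD" in instructions)  # True
```

The JVM plays the same role for Java: one bytecode program, one process VM per platform, identical behavior everywhere.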
15. Mention the significance of Network Virtualization? BTL1
Network virtualization helps organizations achieve major advances in speed, agility, and security by automating and simplifying many of the processes that go into running a data center network and managing networking and security in the cloud. Here are some of the key benefits of network virtualization:
Reduce network provisioning time from weeks to minutes
Achieve greater operational efficiency by automating manual processes
Place and move workloads independently of physical topology
Improve network security within the data center
16. List the implementation levels of virtualization? BTL1
Instruction set architecture (ISA) level
Hardware abstraction layer (HAL) level
Operating system level
Library (user-level API) level
Application level
17. Explain hypervisor architecture? BTL1
A hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware or hardware that creates and runs virtual machines.
18. Define para-virtualization? BTL1
Para-virtualization is a virtualization technique that presents a software interface to virtual machines that is similar, but not identical, to that of the underlying hardware.
19. What are the two types of hypervisor? BTL1
Type 1 (native or bare-metal) hypervisors, which run directly on the system hardware, and Type 2 (hosted) hypervisors, which run on a host operating system.
PART B
13 Marks
1. Explain in detail about Virtualization in Cloud Computing and its Types? BTL4
(Definition: 2 marks, Diagram: 3 marks, Explanation: 8 marks)
Virtualization is a technique for separating a service from the underlying physical delivery of that service. It is the process of creating a virtual version of something like computer hardware. It was initially developed during the mainframe era. It involves using specialized software to create a virtual or software-created version of a computing resource rather than the actual version of the same resource. With the help of virtualization, multiple operating systems and applications can run on the same machine and its same hardware at the same time, increasing the utilization and flexibility of hardware. In other words, virtualization is one of the main cost-effective, hardware-reducing, and energy-saving techniques used by cloud providers. Virtualization allows sharing of a single physical instance of a resource or an application among multiple customers and organizations at one time. It does this by assigning a logical name to physical storage and providing a pointer to that physical resource on demand. The term virtualization is often synonymous with hardware virtualization, which plays a fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for cloud computing. Moreover, virtualization technologies provide a virtual environment not only for executing applications but also for storage, memory, and networking.
Virtualization
• Host Machine: The machine on which the virtual machine is going to be built is known as the Host Machine.
• Guest Machine: The virtual machine is referred to as a Guest Machine.
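The idea of "assigning a logical name to physical storage and providing a pointer to that physical resource on demand" from the passage above can be sketched as a tiny mapping. The class and the backing-store names below are made up for illustration.

```python
class VirtualDisk:
    """Sketch of logical-name-to-physical-resource indirection."""

    def __init__(self):
        self._mapping = {}

    def attach(self, logical_name, physical_location):
        # The guest only ever sees the logical name.
        self._mapping[logical_name] = physical_location

    def resolve(self, logical_name):
        # Indirection lets the backing store move without the guest noticing.
        return self._mapping[logical_name]

disk = VirtualDisk()
disk.attach("/dev/vda", "san-array-7:lun-42")
print(disk.resolve("/dev/vda"))   # san-array-7:lun-42
# Migrate the backing store; the guest's view is unchanged:
disk.attach("/dev/vda", "san-array-9:lun-03")
print(disk.resolve("/dev/vda"))   # san-array-9:lun-03
```

This one level of indirection is what makes sharing, migration and overcommitment possible: the provider can remap the physical side at will while every guest keeps the same logical view.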
Work of Virtualization in Cloud Computing
Virtualization has a prominent impact on cloud computing. In the case of cloud computing, users benefit from cloud services, and with the help of virtualization, users have the extra benefit of sharing the infrastructure. Cloud vendors take care of the required physical resources, but these cloud providers charge a huge amount for these services, which impacts every user or organization. Virtualization helps users or organisations maintain those services which are required by a company through external (third-party) people, which helps in reducing costs to the company. This is the way through which virtualization works in cloud computing.
Benefits of Virtualization
• More flexible and efficient allocation of resources.
• Enhanced development productivity.
• It lowers the cost of IT infrastructure.
• Remote access and rapid scalability.
• High availability and disaster recovery.
• Pay per use of the IT infrastructure on demand.
• Enables running multiple operating systems.
Drawbacks of Virtualization
• High Initial Investment: Clouds have a very high initial investment, but it is also true
that it will help in reducing the cost of companies.
• Learning new infrastructure: As companies shift from servers to the cloud, they require highly skilled staff who can work with the cloud easily; for this, you have to hire new staff or provide training to current staff.
• Risk of data: Hosting data on third-party resources can put the data at risk; it has the chance of getting attacked by any hacker or cracker very easily.
Characteristics of Virtualization
• Increased security: The ability to control the execution of a guest program in a completely transparent manner opens new possibilities for delivering a secure, controlled execution environment. All the operations of the guest programs are generally performed against the virtual machine, which then translates and applies them to the host programs.
• Managed execution: In particular, sharing, aggregation, emulation, and isolation are the most relevant features.
• Sharing: Virtualization allows the creation of a separate computing environment within the same host.
• Aggregation: It is possible to share physical resources among several guests, but virtualization also allows aggregation, which is the opposite process.
Types of Virtualization
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data Virtualization
Types of Virtualization
Server Virtualization
6. Data Virtualization: This is the kind of virtualization in which data is collected from various sources and managed in a single place, without knowing the technical details of how the data is collected, stored and formatted. The data is then arranged logically so that its virtual view can be accessed by interested people, stakeholders, and users remotely through various cloud services. Many big companies provide such services, like Oracle, IBM, AtScale, CData, etc.
Uses of Virtualization
• Data integration
• Business integration
• Service-oriented architecture data services
• Searching organizational data
2. What are the differences between Cloud Computing and Virtualization? BTL2
(Comparison: 13 marks)

S.NO | Cloud Computing                                                   | Virtualization
7.   | The total cost of cloud computing is higher than virtualization.  | The total cost of virtualization is lower than cloud computing.
8.   | Cloud computing requires much dedicated hardware.                  | A single piece of dedicated hardware can do a great job in it.
3. Explain in detail about the hypervisor and its types? BTL4
(Definition: 2 marks, Concept Explanation: 8 marks, Diagram: 3 marks)
Hypervisor
● In this model, the guest is represented by the operating system, the host by the physical computer hardware, the virtual machine by its emulation and the virtual machine manager by the hypervisor.
Hardware-level virtualization is also called system virtualization, since it provides the ISA to virtual machines, which is the representation of the hardware interface of a system.
This is to differentiate it from process virtual machines, which expose the ABI to virtual machines.
Figure 2.3 shows different types of hypervisors.
o Type I hypervisors run directly on top of the hardware.
■ Type I hypervisors take the place of the operating system and interact directly with the ISA interface exposed by the underlying hardware, and they emulate this interface in order to allow the management of guest operating systems.
■ This type of hypervisor is also called a native virtual machine since it runs natively on hardware.
o Type II hypervisors require the support of an operating system to provide virtualization services.
■ This type of hypervisor is also called a hosted virtual machine.
4. What is the Taxonomy of virtual machines? BTL1
(Concept explanation: 10 marks, Diagram: 3 marks)
Virtualization is mainly used to emulate execution environments, storage and networks.
Execution virtualization techniques fall into two major categories according to the type of host they require:
Process-level techniques are implemented on top of an existing operating system, which has full control of the hardware.
System-level techniques are implemented directly on hardware and do not require, or require a minimum of, support from an existing operating system.
Within these two categories we can list various techniques that offer the guest a different type of virtual computation environment:
● Bare hardware
● Operating system resources
● Low-level programming language
● Application libraries
These environments are organized in the machine reference model described in Figure 2.1.
At the bottom layer, the model for the hardware is expressed in terms of the Instruction Set
Architecture(ISA),whichdefinestheinstructionsetfortheprocessor,registers,memoryandan interrupt
management.
● ◆ISAistheinterfacebetweenhardwareandsoftware.
● ◆ ISAisimportanttotheoperatingsystem(OS)developer(SystemISA)anddevelopers of
applications that directly manage the underlying hardware (User ISA).
● ◆ The application binary interface (ABI) separates the operating system layer from
the applications and libraries, which are managed by the
OS.•ABIcoversdetailssuchaslowleveldatatypes, alignment,call
conventions and defines a format for executable programs. ◆●System calls are defined at this
level.This interface allows portability of applications and libraries across operating systems that
implement the same ABI.
● ◆ The highest level of abstraction is represented by the application programming interface (API),
which interfaces applications to libraries and the underlying operating system.
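The layering above can be made concrete with a small sketch: the same operation (asking for the process ID) requested through the high-level API and, one layer down, through the C runtime that sits near the ABI boundary. This is a rough illustration assuming a POSIX host; `pid_via_c_runtime` is an illustrative name of our own, not a standard call.

```python
import ctypes
import os

# API level: the portable interface exported by libraries
# (Python's os module, which wraps the C library).
def pid_via_api() -> int:
    return os.getpid()

# One layer down, toward the ABI: call the C runtime directly via ctypes.
# CDLL(None) loads the current process's C library (POSIX assumed); getpid()
# here is the libc entry point that issues the actual system call.
def pid_via_c_runtime() -> int:
    libc = ctypes.CDLL(None)
    return int(libc.getpid())

if __name__ == "__main__":
    print(pid_via_api(), pid_via_c_runtime())
```

Both paths reach the same process ID, which is what ABI-level portability promises: any runtime on the same OS ultimately uses the same system-call interface.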
● For this purpose, the instruction set exposed by the hardware has been divided into different security classes that define who can operate with them. The first distinction can be made between privileged and nonprivileged instructions.
● Privileged instructions are those that are executed under specific restrictions and are mostly used for sensitive operations, which expose (behavior-sensitive) or modify (control-sensitive) the privileged state.
● For instance, a possible implementation features a hierarchy of privileges, illustrated in Figure 2.2 in the form of ring-based security: Ring 0, Ring 1, Ring 2, and Ring 3. Ring 0 is the most privileged level and Ring 3 the least privileged level.
● Recent systems support only two levels, with Ring 0 for supervisor mode and Ring 3 for user mode.
● All the current systems support at least two different execution modes: supervisor mode and user mode. The distinction between user and supervisor mode controls access to privileged, hardware-level resources.
5. Write the key concepts of virtualization. BTL1
(Concept explanation: 13 marks)
● Increased security
o The ability to control the execution of a guest in a completely transparent manner opens new possibilities for delivering a secure, controlled execution environment.
o The virtual machine represents an emulated environment in which the guest is executed.
● Managed execution
o Virtualization of the execution environment not only allows increased security, but a wider range of features also can be implemented. In particular, sharing, aggregation, emulation, and isolation are the most relevant features.
● Sharing
o Virtualization allows the creation of a separate computing environment within the same host. In this way it is possible to fully exploit the capabilities of a powerful guest, which would otherwise be underutilized.
● Aggregation
o Not only is it possible to share physical resources among several guests, but virtualization also allows aggregation, which is the opposite process.
● Emulation
o Guest programs are executed within an environment that is controlled by the virtualization layer, which ultimately is a program. This allows for controlling and tuning the environment that is exposed to guests.
● Isolation
o Virtualization allows providing guests, whether they are operating systems, applications, or other entities, with a completely separate environment in which they are executed. The guest program performs its activity by interacting with an abstraction layer, which provides access to the underlying resources.
o Benefits of isolation:
■ First, it allows multiple guests to run on the same host without interfering with each other.
■ Second, it provides a separation between the host and the guest.
● Performance tuning
o It becomes easier to control the performance of the guest by finely tuning the properties of the resources exposed through the virtual environment.
● Portability
o The concept of portability applies in different ways according to the specific type of virtualization considered.
6. Explain in detail about virtualization structures. BTL4
(Concept explanation: 10 marks, Diagram: 3 marks)
● The virtualization layer is responsible for converting portions of the real hardware into virtual machines.
● Therefore, different operating systems such as Linux and Windows can run on the same physical machine simultaneously.
● Depending on the position of the virtualization layer, there are several classes of VM architectures, namely the hypervisor architecture, paravirtualization, and host-based virtualization. The hypervisor is also known as the VMM (Virtual Machine Monitor). They both perform the same virtualization operations.
Hypervisor and Xen architecture
● The hypervisor software sits directly between the physical hardware and its OS. This virtualization layer is referred to as either the VMM or the hypervisor.
● The hypervisor provides hypercalls for the guest OSes and applications. Depending on the functionality, a hypervisor can assume a micro-kernel architecture like the Microsoft Hyper-V, or a monolithic hypervisor architecture like the VMware ESX for server virtualization.
● A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical memory management and processor scheduling). The device drivers and other changeable components are outside the hypervisor.
● A monolithic hypervisor implements all the aforementioned functions, including those of the device drivers. Therefore, the size of the hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor.
● Essentially, a hypervisor must be able to convert physical devices into virtual resources dedicated for the deployed VM to use.
Xen architecture
● Xen is an open source hypervisor program developed by Cambridge University. Xen is a micro-kernel hypervisor, which separates the policy from the mechanism.
● As a result, the size of the Xen hypervisor is kept rather small.
● Xen provides a virtual environment located between the hardware and the OS.
● The core components of a Xen system are the hypervisor, kernel, and applications. The organization of the three components is important.
● Like other virtualization systems, many guest OSes can run on top of the hypervisor.
● Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen boots, without any file system drivers being available. Domain 0 is designed to access hardware directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to allocate and map hardware resources for the guest domains (the Domain U domains).
● For example, Xen is based on Linux and its security level is C2. Its management VM is named Domain 0, which has the privilege to manage other VMs implemented on the same host.
● If Domain 0 is compromised, the hacker can control the entire system. So, in the VM system, security policies are needed to improve the security of Domain 0.
● Domain 0, behaving as a VMM, allows users to create, copy, save, read, modify, share, migrate, and roll back VMs as easily as manipulating a file, which flexibly provides tremendous benefits for users.
Binary translation with full virtualization
● Full virtualization does not need to modify the host OS. It relies on binary translation to trap and to virtualize the execution of certain sensitive, nonvirtualizable instructions. The guest OSes and their applications consist of noncritical and critical instructions.
● With full virtualization, noncritical instructions run on the hardware directly, while critical instructions are discovered and replaced with traps into the VMM to be emulated by software.
● Both the hypervisor and VMM approaches are considered full virtualization.
● The VMM scans the instruction stream and identifies the privileged, control-sensitive, and behavior-sensitive instructions. When these instructions are identified, they are trapped into the VMM, which emulates the behavior of these instructions. The method used in this emulation is called binary translation.
● Full virtualization combines binary translation and direct execution.
● The guest OSes are installed and run on top of the virtualization layer.
● Host-based virtualization has some advantages: first, the user can install this VM architecture without modifying the host OS.
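The scan-and-trap behavior described above can be sketched as a toy interpreter. The instruction names and the CRITICAL set are invented for illustration; a real VMM rewrites native binary code rather than strings.

```python
# Toy sketch of the scan-and-trap idea behind binary translation:
# noncritical instructions "run directly", while critical (privileged or
# sensitive) ones are trapped into a VMM emulation routine.

CRITICAL = {"LOAD_CR3", "HLT", "OUT"}   # assumed critical instruction set

def vmm_emulate(instr, state):
    # The VMM emulates the effect of the critical instruction in software.
    state["traps"].append(instr)
    return f"emulated({instr})"

def run_guest(stream):
    state = {"traps": [], "log": []}
    for instr in stream:                    # scan the instruction stream
        if instr in CRITICAL:               # identified -> trap into the VMM
            state["log"].append(vmm_emulate(instr, state))
        else:                               # noncritical -> direct execution
            state["log"].append(f"direct({instr})")
    return state

state = run_guest(["ADD", "LOAD_CR3", "MOV", "HLT"])
```

Only the two critical instructions end up in the trap list; the rest take the fast direct-execution path, which is exactly the combination of binary translation and direct execution described above.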
Paravirtualization with compiler support
● When the x86 processor is virtualized, a virtualization layer is inserted between the hardware and the OS.
● According to the x86 ring definitions, the virtualization layer should also be installed at Ring 0. Different instructions at Ring 0 may cause some problems.
● Although paravirtualization reduces the overhead, it has incurred other problems.
● Finally, the performance advantage of paravirtualization varies greatly due to workload variations.
● Compared with full virtualization, paravirtualization is relatively easy and more practical. The main problem in full virtualization is its low performance in binary translation.
KVM (Kernel-Based VM)
● KVM is a Linux paravirtualization system, a part of the Linux version 2.6.20 kernel. In KVM, memory management and scheduling activities are carried out by the existing Linux kernel.
● The KVM does the rest, which makes it simpler than the hypervisor that controls the entire machine.
● KVM is a hardware-assisted and paravirtualization tool, which improves performance and supports unmodified guest OSes such as Windows, Linux, Solaris, and other UNIX variants.
● Unlike the full virtualization architecture, which intercepts and emulates privileged and sensitive instructions at runtime, paravirtualization handles these instructions at compile time.
● The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0. This implies that the guest OS may not be able to execute some privileged and sensitive instructions. The privileged instructions are implemented by hypercalls to the hypervisor.
7. What are the types of virtualization? BTL1
(Definition: 3 marks, Concept explanation: 10 marks)
1. Full virtualization
● Full virtualization refers to the ability to run a program, most likely an operating system, directly on top of a virtual machine and without any modification, as though it were run on the raw hardware. To make this possible, virtual machine managers are required to provide a complete emulation of the entire underlying hardware.
● The principal advantage of full virtualization is complete isolation, which leads to enhanced security, ease of emulation of different architectures, and coexistence of different systems on the same platform.
● Whereas it is a desired goal for many virtualization solutions, full virtualization poses important concerns related to performance and technical implementation.
● A key challenge is the interception of privileged instructions such as I/O instructions: since they change the state of the resources exposed by the host, they have to be contained within the virtual machine manager.
● A simple solution to achieve full virtualization is to provide a virtual environment for all the instructions, thus posing some limits on performance.
● A successful and efficient implementation of full virtualization is obtained with a combination of hardware and software, not allowing potentially harmful instructions to be executed directly on the host.
2. Paravirtualization
● Paravirtualization is a not-transparent virtualization solution.
● The aim of paravirtualization is to provide the capability to demand the execution of performance-critical operations directly on the host, thus preventing performance losses that would otherwise be experienced in managed execution.
● This allows a simpler implementation of virtual machine managers, which simply transfer the execution of these operations directly to the host.
● This technique has been successfully used by Xen for providing virtualization solutions for Linux-based operating systems specifically ported to run on Xen hypervisors.
● Operating systems that cannot be ported can still take advantage of paravirtualization by using ad hoc device drivers that remap the execution of critical instructions to the paravirtualization APIs exposed by the hypervisor. Xen provides this solution for running Windows-based operating systems on x86 architectures.
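The hypercall idea can be sketched as follows: a ported guest kernel calls into an API the hypervisor exposes instead of executing a privileged instruction itself. The class and operation names here are illustrative only, not the real Xen hypercall interface.

```python
# Sketch of the paravirtualization contract: privileged operations become
# explicit calls into the hypervisor (hypercalls) rather than trapped
# instructions. All names are hypothetical.

class Hypervisor:
    def __init__(self):
        self.hypercall_log = []

    def hypercall(self, op, **args):
        # The hypervisor validates and performs the privileged operation.
        self.hypercall_log.append((op, args))
        return "ok"

class ParavirtGuest:
    def __init__(self, hv):
        self.hv = hv

    def update_page_table(self, vaddr, paddr):
        # A native kernel would execute a privileged MMU instruction here;
        # the ported kernel asks the hypervisor to do it instead.
        return self.hv.hypercall("mmu_update", vaddr=vaddr, paddr=paddr)

hv = Hypervisor()
guest = ParavirtGuest(hv)
guest.update_page_table(0x1000, 0x8000)
```

Because the guest cooperates explicitly, the hypervisor never has to scan or rewrite the guest's instruction stream, which is where paravirtualization gets its performance advantage.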
3. Hardware-assisted virtualization
● This technique was originally introduced in the IBM System/370. At present, examples of hardware-assisted virtualization are the extensions to the x86 architecture introduced with Intel VT (formerly known as Vanderpool) and AMD-V (formerly known as Pacifica). These extensions, which differ between the two vendors, are meant to reduce the performance penalties experienced by emulating x86 hardware with hypervisors.
● Before these extensions, software-based solutions had to trap sensitive instructions and provide an emulated version. Products such as VMware Virtual Platform, introduced in 1999 by VMware, were based on this technique. Hardware-assisted virtualization is exploited by products such as Hyper-V, Sun xVM, Parallels, and others.
4. Partial virtualization
● Partial virtualization provides a partial emulation of the underlying hardware, thus not allowing the complete execution of the guest operating system in complete isolation.
● Historically, partial virtualization has been an important milestone for achieving full virtualization, and it was implemented on the experimental IBM M44/44X.
PART C
15 marks
1. What are the implementation levels of virtualization? BTL1
(Definition: 2 marks, Diagram: 5 marks, Concept explanation: 8 marks)
1. Levels of Virtualization Implementation
A traditional computer runs with a host operating system specially tailored for its hardware architecture, as shown in Figure 3.1(a). After virtualization, different user applications managed by their own operating systems (guest OS) can run on the same hardware, independent of the host OS. This is often done by adding additional software, called a virtualization layer, as shown in Figure 3.1(b). This virtualization layer is known as the hypervisor or virtual machine monitor (VMM) [54]. The VMs are shown in the upper boxes, where applications run with their own guest OS over the virtualized CPU, memory, and I/O resources.
The main function of the software layer for virtualization is to virtualize the physical hardware of a host machine into virtual resources to be used by the VMs, exclusively. This can be implemented at various operational levels, as we will discuss shortly. The virtualization software creates the abstraction of VMs by interposing a virtualization layer at various levels of a computer system. Common virtualization layers include the instruction set architecture (ISA) level, hardware level, operating system level, library support level, and application level (see Figure 3.2).
Instruction Set Architecture Level
At the ISA level, virtualization is performed by emulating a given ISA by the ISA of the host machine. For example, MIPS binary code can run on an x86-based host machine with the help of ISA emulation. With this approach, it is possible to run a large amount of legacy binary code written for various processors on any given new hardware host machine. Instruction set emulation leads to virtual ISAs created on any hardware machine.
The basic emulation method is through code interpretation. An interpreter program interprets the source instructions to target instructions one by one. One source instruction may require tens or hundreds of native target instructions to perform its function. Obviously, this process is relatively slow. For better performance, dynamic binary translation is desired. This approach translates basic blocks of dynamic source instructions to target instructions. The basic blocks can also be extended to program traces or super blocks to increase translation efficiency. Instruction set emulation requires binary translation and optimization. A virtual instruction set architecture (V-ISA) thus requires adding a processor-specific software translation layer to the compiler.
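Code interpretation can be sketched in miniature with an invented two-opcode guest ISA: each source instruction is decoded and carried out by several host-level steps, one instruction at a time. (Dynamic binary translation would instead translate whole basic blocks up front.)

```python
# Minimal code-interpretation sketch for a hypothetical guest ISA.
# Each guest instruction is a tuple: opcode followed by operands.

def interpret(program, regs):
    for op, *args in program:
        if op == "INC":            # one guest instruction ...
            (r,) = args
            value = regs[r]        # ... expands into several host-level steps
            regs[r] = value + 1
        elif op == "MOV":
            dst, src = args
            regs[dst] = regs[src]
        else:
            raise ValueError(f"unknown opcode {op}")
    return regs

regs = interpret([("MOV", "r1", "r0"), ("INC", "r1")], {"r0": 5, "r1": 0})
```

The decode-and-dispatch work repeated for every single instruction is exactly the overhead that makes pure interpretation slow relative to translated basic blocks.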
Hardware Abstraction Level
Hardware-level virtualization is performed right on top of the bare hardware. On the one hand, this approach generates a virtual hardware environment for a VM. On the other hand, the process manages the underlying hardware through virtualization. The idea is to virtualize a computer's resources, such as its processors, memory, and I/O devices. The intention is to upgrade the hardware utilization rate by multiple users concurrently. The idea was implemented in the IBM VM/370 in the 1960s. More recently, the Xen hypervisor has been applied to virtualize x86-based machines to run Linux or other guest OS applications. We will discuss hardware virtualization approaches in more detail in Section 3.3.
Operating System Level
This refers to an abstraction layer between the traditional OS and user applications. OS-level virtualization creates isolated containers on a single physical server and the OS instances to utilize the hardware and software in data centers. The containers behave like real servers. OS-level virtualization is commonly used in creating virtual hosting environments to allocate hardware resources among a large number of mutually distrusting users. It is also used, to a lesser extent, in consolidating server hardware by moving services on separate hosts into containers or VMs on one server. OS-level virtualization is depicted in Section 3.1.3.
Library Support Level
Most applications use APIs exported by user-level libraries rather than using lengthy system calls by the OS. Since most systems provide well-documented APIs, such an interface becomes another candidate for virtualization. Virtualization with library interfaces is possible by controlling the communication link between applications and the rest of a system through API hooks. The software tool WINE has implemented this approach to support Windows applications on top of UNIX hosts. Another example is vCUDA, which allows applications executing within VMs to leverage GPU hardware acceleration. This approach is detailed in Section 3.1.4.
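The API-hook mechanism can be illustrated in miniature by interposing on a library call boundary. This only shows the interception idea; WINE and vCUDA are of course far more elaborate.

```python
# Sketch of virtualization at the library-interface level: an API hook is
# inserted between the application and the library it expects to call.

import math

calls = []                   # what the interposed layer observed

real_sqrt = math.sqrt        # keep the original entry point

def hooked_sqrt(x):
    calls.append(x)          # the hook can log, translate, or redirect
    return real_sqrt(x)      # then forward to the real implementation

math.sqrt = hooked_sqrt      # install the hook on the API boundary
result = math.sqrt(9.0)      # the application is unaware of the hook
math.sqrt = real_sqrt        # uninstall the hook
```

From the application's point of view nothing changed; the layer in between was free to reimplement or redirect the call, which is exactly how a library-level virtualization layer presents one API while running on another system.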
User-Application Level
Virtualization at the application level virtualizes an application as a VM. On a traditional OS, an application often runs as a process. Therefore, application-level virtualization is also known as process-level virtualization. The most popular approach is to deploy high level language (HLL) VMs. In this scenario, the virtualization layer sits as an application program on top of the operating system, and the layer exports an abstraction of a VM that can run programs written and compiled to a particular abstract machine definition. Any program written in the HLL and compiled for this VM will be able to run on it. The Microsoft .NET CLR and Java Virtual Machine (JVM) are two good examples of this class of VM.
Other forms of application-level virtualization are known as application isolation, application sandboxing, or application streaming. The process involves wrapping the application in a layer that is isolated from the host OS and other applications. The result is an application that is much easier to distribute and remove from user workstations. An example is the LANDesk application virtualization platform, which deploys software applications as self-contained, executable files in an isolated environment without requiring installation, system modifications, or elevated security privileges.
2. Explain in detail about virtualization of CPU, memory and I/O devices. BTL4
(Definition: 2 marks, Diagram: 5 marks, Concept explanation: 8 marks)
To support virtualization, processors such as the x86 employ a special running mode and instructions, known as hardware-assisted virtualization. In this way, the VMM and guest OS run in different modes, and all sensitive instructions of the guest OS and its applications are trapped in the VMM. To save processor states, mode switching is completed by hardware. For the x86 architecture, Intel and AMD have proprietary technologies for hardware-assisted virtualization.
1. Hardware Support for Virtualization
Modern operating systems and processors permit multiple processes to run simultaneously. If there is no protection mechanism in a processor, all instructions from different processes will access the hardware directly and cause a system crash. Therefore, all processors have at least two modes, user mode and supervisor mode, to ensure controlled access of critical hardware. Instructions running in supervisor mode are called privileged instructions. Other instructions are unprivileged instructions. In a virtualized environment, it is more difficult to make OSes and applications run correctly because there are more layers in the machine stack. Example 3.4 discusses Intel's hardware support approach.
At the time of this writing, many hardware virtualization products were available. The VMware Workstation is a VM software suite for x86 and x86-64 computers. This software suite allows users to set up multiple x86 and x86-64 virtual computers and to use one or more of these VMs simultaneously with the host operating system. The VMware Workstation assumes host-based virtualization. Xen is a hypervisor for use in IA-32, x86-64, Itanium, and PowerPC 970 hosts. Actually, Xen modifies Linux as the lowest and most privileged layer, or a hypervisor. One or more guest OS can run on top of the hypervisor. KVM (Kernel-based Virtual Machine) is a Linux kernel virtualization infrastructure. KVM can support hardware-assisted virtualization and paravirtualization by using the Intel VT-x or AMD-V and VirtIO framework, respectively. The VirtIO framework includes a paravirtual Ethernet card, a disk I/O controller, a balloon device for adjusting guest memory usage, and a VGA graphics interface using VMware drivers.
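Whether a host CPU offers this hardware assistance can be checked on Linux, where /proc/cpuinfo advertises Intel VT-x as the vmx flag and AMD-V as the svm flag. A small sketch, returning None when the file is unavailable (e.g., on a non-Linux host):

```python
# Detect hardware-assisted virtualization support via Linux's /proc/cpuinfo.
# Returns "vmx" (Intel VT-x), "svm" (AMD-V), or None.

def hw_virt_flag():
    try:
        with open("/proc/cpuinfo") as f:
            text = f.read()
    except OSError:
        return None                       # /proc/cpuinfo not available
    flags = set()
    for line in text.splitlines():
        if line.startswith("flags"):      # one "flags" line per logical CPU
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "vmx"                      # Intel VT-x
    if "svm" in flags:
        return "svm"                      # AMD-V
    return None

print(hw_virt_flag())
```

Note that KVM refuses to load at all when neither flag is present, which is why this check is the usual first troubleshooting step.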
Example 3.4 Hardware Support for Virtualization in the Intel x86 Processor
Since software-based virtualization techniques are complicated and incur performance overhead, Intel provides a hardware-assist technique to make virtualization easy and improve performance. Figure 3.10 provides an overview of Intel's full virtualization techniques. For processor virtualization, Intel offers the VT-x or VT-i technique. VT-x adds a privileged mode (VMX Root Mode) and some instructions to processors. This enhancement traps all sensitive instructions in the VMM automatically. For memory virtualization, Intel offers the EPT, which translates the virtual address to the machine's physical addresses to improve performance. For I/O virtualization, Intel implements VT-d and VT-c to support this.
2. CPU Virtualization
A VM is a duplicate of an existing computer system in which a majority of the VM instructions are executed on the host processor in native mode. Thus, unprivileged instructions of VMs run directly on the host machine for higher efficiency. Other critical instructions should be handled carefully for correctness and stability. The critical instructions are divided into three categories: privileged instructions, control-sensitive instructions, and behavior-sensitive instructions. Privileged instructions execute in a privileged mode and will be trapped if executed outside this mode. Control-sensitive instructions attempt to change the configuration of resources used. Behavior-sensitive instructions have different behaviors depending on the configuration of resources, including the load and store operations over the virtual memory.
A CPU architecture is virtualizable if it supports the ability to run the VM's privileged and unprivileged instructions in the CPU's user mode while the VMM runs in supervisor mode. When the privileged instructions, including control- and behavior-sensitive instructions, of a VM are executed, they are trapped in the VMM. In this case, the VMM acts as a unified mediator for hardware access from different VMs to guarantee the correctness and stability of the whole system. However, not all CPU architectures are virtualizable. RISC CPU architectures can be naturally virtualized because all control- and behavior-sensitive instructions are privileged instructions. On the contrary, x86 CPU architectures are not primarily designed to support virtualization. This is because about 10 sensitive instructions, such as SGDT and SMSW, are not privileged instructions. When these instructions execute in virtualization, they cannot be trapped in the VMM.
On a native UNIX-like system, a system call triggers the 80h interrupt and passes control to the OS kernel. The interrupt handler in the kernel is then invoked to process the system call. On a paravirtualization system such as Xen, a system call in the guest OS first triggers the 80h interrupt normally. Almost at the same time, the 82h interrupt in the hypervisor is triggered. Incidentally, control is passed on to the hypervisor as well. When the hypervisor completes its task for the guest OS system call, it passes control back to the guest OS kernel. Certainly, the guest OS kernel may also invoke the hypercall while it's running. Although paravirtualization of a CPU lets unmodified applications run in the VM, it causes a small performance penalty.
Hardware-Assisted CPU Virtualization
This technique attempts to simplify virtualization because full or paravirtualization is complicated. Intel and AMD add an additional mode called privilege mode level (some people call it Ring -1) to x86 processors. Therefore, operating systems can still run at Ring 0 and the hypervisor can run at Ring -1. All the privileged and sensitive instructions are trapped in the hypervisor automatically. This technique removes the difficulty of implementing binary translation of full virtualization. It also lets the operating system run in VMs without modification.
REGULATION 2021 ACADEMIC YEAR 2023-2024
Example 3.5 Intel Hardware-Assisted CPU Virtualization
Although x86 processors are not virtualizable primarily, great effort is taken to virtualize them. They are used so widely that, unlike RISC processors, the bulk of x86-based legacy systems cannot be discarded easily. Virtualization of x86 processors is detailed in the following sections. Intel's VT-x technology is an example of hardware-assisted virtualization, as shown in Figure 3.11. Intel calls the privilege level of x86 processors the VMX Root Mode. In order to control the start and stop of a VM and allocate a memory page to maintain the CPU state for VMs, a set of additional instructions is added. At the time of this writing, Xen, VMware, and the Microsoft Virtual PC all implement their hypervisors by using the VT-x technology.
Generally, hardware-assisted virtualization should have high efficiency. However, since the
transition from the hypervisor to the guest OS incurs high overhead switches between processor
modes, it sometimes cannot outperform binary translation. Hence, virtualization systems such as
VMware now use a hybrid approach, in which a few tasks are offloaded to the hardware but the
rest is still done in software. In addition, para-virtualization and hardware-assisted virtualization
can be combined to improve the performance further.
3. Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern operating systems. In a traditional execution environment, the operating system maintains mappings of virtual memory to machine memory using page tables, which is a one-stage mapping from virtual memory to machine memory. All modern x86 CPUs include a memory management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory performance. However, in a virtual execution environment, virtual memory virtualization involves sharing the physical system memory in RAM and dynamically allocating it to the physical memory of the VMs.
That means a two-stage mapping process should be maintained by the guest OS and the VMM, respectively: virtual memory to physical memory and physical memory to machine memory. Furthermore, MMU virtualization should be supported, which is transparent to the guest OS. The guest OS continues to control the mapping of virtual addresses to the physical memory addresses of VMs. But the guest OS cannot directly access the actual machine memory. The VMM is responsible for mapping the guest physical memory to the actual machine memory. Figure 3.12 shows the two-level memory mapping procedure.
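The two-stage mapping can be modelled with two toy lookup tables, one owned by the guest OS and one by the VMM. Page numbers here are arbitrary illustrative values; real tables are multi-level and page-granular.

```python
# Toy model of two-stage memory mapping:
#   guest virtual (GVA) -> guest physical (GPA)   [maintained by the guest OS]
#   guest physical (GPA) -> machine/host (HPA)    [maintained by the VMM]

guest_page_table = {0x1: 0xA, 0x2: 0xB}   # GVA page -> GPA page
vmm_page_table   = {0xA: 0x7, 0xB: 0x3}   # GPA page -> HPA page

def translate(gva_page):
    gpa = guest_page_table[gva_page]      # stage 1: guest OS mapping
    hpa = vmm_page_table[gpa]             # stage 2: VMM mapping
    return hpa

hpa = translate(0x1)
```

The guest only ever sees the first table; the second stage is what keeps it from touching machine memory directly, which is the transparency property described above.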
Since each page table of the guest OSes has a separate page table in the VMM corresponding to it, the VMM page table is called the shadow page table. Nested page tables add another layer of indirection to virtual memory. The MMU already handles virtual-to-physical translations as defined by the OS. Then the physical memory addresses are translated to machine addresses using another set of page tables defined by the hypervisor. Since modern operating systems maintain a set of page tables for every process, the shadow page tables will get flooded. Consequently, the performance overhead and cost of memory will be very high.
Example 3.6 Extended Page Table by Intel for Memory Virtualization
Since the efficiency of the software shadow page table technique was too low, Intel developed a hardware-based EPT technique to improve it, as illustrated in Figure 3.13. In addition, Intel offers a Virtual Processor ID (VPID) to improve use of the TLB. Therefore, the performance of memory virtualization is greatly improved. In Figure 3.13, the page tables of the guest OS and EPT are all four-level.
When a virtual address needs to be translated, the CPU will first look for the L4 page table pointed to by Guest CR3. Since the address in Guest CR3 is a physical address in the guest OS, the CPU needs to convert the Guest CR3 GPA to the host physical address (HPA) using EPT. In this procedure, the CPU will check the EPT TLB to see if the translation is there. If there is no required translation in the EPT TLB, the CPU will look for it in the EPT. If the CPU cannot find the translation in the EPT, an EPT violation exception will be raised.
When the GPA of the L4 page table is obtained, the CPU will calculate the GPA of the L3 page table by using the GVA and the content of the L4 page table. If the entry corresponding to the GVA in the L4 page table is a page fault, the CPU will generate a page fault interrupt and will let the guest OS kernel handle the interrupt. When the GPA of the L3 page table is obtained, the CPU will look for the EPT to get the HPA of the L3 page table, as described earlier. To get the HPA corresponding to a GVA, the CPU needs to look for the EPT five times, and each time, the memory needs to be accessed four times. Therefore, there are 20 memory accesses in the worst case, which is still very slow. To overcome this shortcoming, Intel increased the size of the EPT TLB to decrease the number of memory accesses.
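The worst-case count stated above follows directly from the structure of the walk; as a worked check, with the level counts taken from the text:

```python
# Worst-case nested (EPT) page walk cost, per the description above:
# with four-level guest page tables, the CPU must translate five
# guest-physical addresses through EPT (the Guest CR3 value plus one per
# guest table level), and each EPT walk touches four levels of EPT tables.

GUEST_LEVELS = 4
EPT_LEVELS = 4

ept_walks = GUEST_LEVELS + 1                 # CR3 + the L4..L1 table addresses
memory_accesses = ept_walks * EPT_LEVELS     # 5 walks x 4 accesses each
```

This 5 x 4 = 20 figure is why the EPT TLB matters: every hit removes an entire four-access EPT walk from the total.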
4. I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical hardware. At the time of this writing, there are three ways to implement I/O virtualization: full device emulation, para-virtualization, and direct I/O. Full device emulation is the first approach for I/O virtualization. Generally, this approach emulates well-known, real-world devices.
All the functions of a device or bus infrastructure, such as device enumeration, identification, interrupts, and DMA, are replicated in software. This software is located in the VMM and acts as a virtual device. The I/O access requests of the guest OS are trapped in the VMM, which interacts with the I/O devices. The full device emulation approach is shown in Figure 3.14.
A single hardware device can be shared by multiple VMs that run concurrently. However, software emulation runs much slower than the hardware it emulates [10,15]. The para-virtualization method of I/O virtualization is typically used in Xen. It is also known as the split driver model, consisting of a frontend driver and a backend driver. The frontend driver is running in Domain U and the backend driver is running in Domain 0. They interact with each other via a block of shared memory. The frontend driver manages the I/O requests of the guest OSes and the backend driver is responsible for managing the real I/O devices and multiplexing the I/O data of different VMs. Although para-I/O-virtualization achieves better device performance than full device emulation, it comes with a higher CPU overhead.
Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-native performance without high CPU costs. However, current direct I/O virtualization implementations focus on networking for mainframes. There are a lot of challenges for commodity hardware devices. For example, when a physical device is reclaimed (required by workload migration) for later reassignment, it may have been set to an arbitrary state (e.g., DMA to some arbitrary memory locations) that can function incorrectly or even crash the whole system. Since software-based I/O virtualization requires a very high overhead of device emulation, hardware-assisted I/O virtualization is critical. Intel VT-d supports the remapping of I/O DMA transfers and device-generated interrupts. The architecture of VT-d provides the flexibility to support multiple usage models that may run unmodified, special-purpose, or "virtualization-aware" guest OSes.
Another way to help I/O virtualization is via self-virtualized I/O (SV-IO) [47]. The key idea of SV-IO is to harness the rich resources of a multicore processor. All tasks associated with virtualizing an I/O device are encapsulated in SV-IO. It provides virtual devices and an associated access API to VMs and a management API to the VMM. SV-IO defines one virtual interface (VIF) for every kind of virtualized I/O device, such as virtual network interfaces, virtual block devices (disk), virtual camera devices, and others. The guest OS interacts with the VIFs via VIF device drivers. Each VIF consists of two message queues. One is for outgoing messages to the devices and the other is for incoming messages from the devices. In addition, each VIF has a unique ID for identifying it in SV-IO.
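The VIF structure described above — a unique ID plus an outgoing and an incoming message queue — can be sketched as follows. Names and message format are illustrative only, not the actual SV-IO API.

```python
# Sketch of an SV-IO virtual interface (VIF): each VIF carries a unique ID
# and a pair of message queues, one per direction.

from collections import deque
from itertools import count

_vif_ids = count(1)                    # ID generator shared by all VIFs

class VIF:
    def __init__(self, kind):
        self.vif_id = next(_vif_ids)   # unique ID identifying the VIF in SV-IO
        self.kind = kind               # e.g. "network", "block", "camera"
        self.outgoing = deque()        # messages to the (virtual) device
        self.incoming = deque()        # messages from the device

    def send(self, msg):
        self.outgoing.append(msg)

    def receive(self):
        return self.incoming.popleft() if self.incoming else None

nic = VIF("network")
nic.send({"op": "tx", "bytes": 1500})
```

The guest-side driver only ever touches the two queues; everything behind them (the cores actually servicing the device) is SV-IO's concern, which is the encapsulation the text describes.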
3. Explain in detail about Hypervisor and Xen architecture. BTL4
(Definition: 2 marks, Diagram: 3 marks, Concept explanation: 10 marks)
● The hypervisor software sits directly between the physical hardware and its OS. This virtualization layer is referred to as either the VMM or the hypervisor.
● The hypervisor supports hardware-level virtualization on bare metal devices.
● The hypervisor provides hypercalls for the guest OSes and applications. Depending on the functionality, a hypervisor can assume a micro-kernel architecture like the Microsoft Hyper-V, or a monolithic hypervisor architecture like the VMware ESX for server virtualization.
● A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical memory management and processor scheduling). The device drivers and other changeable components are outside the hypervisor.
● A monolithic hypervisor implements all the aforementioned functions, including those of the device drivers. Therefore, the size of the hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor.
● Essentially, a hypervisor must be able to convert physical devices into virtual resources dedicated for the deployed VM to use.
Xen architecture
● Xen is an open source hypervisor program developed by Cambridge University.
● Xen is a micro-kernel hypervisor, which separates the policy from the mechanism.
● Xen provides a virtual environment located between the hardware and the OS.
● The core components of a Xen system are the hypervisor, kernel, and applications; the organization of these three components is important.
● Like other virtualization systems, many guest OSes can run on top of the hypervisor.
● Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen boots, without any file system drivers being available. Domain 0 is designed to access hardware directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to allocate and map hardware resources for the guest domains (the Domain U domains).
● For example, Xen is based on Linux and its security level is C2. Its management VM is named Domain 0, which has the privilege to manage other VMs implemented on the same host.
● If Domain 0 is compromised, the hacker can control the entire system. So, in the VM system, security policies are needed to improve the security of Domain 0.
● Domain 0, behaving as a VMM, allows users to create, copy, save, read, modify, share, migrate, and roll back VMs as easily as manipulating a file, which flexibly provides tremendous benefits for users.
UNIT III
VIRTUALIZATION INFRASTRUCTURE AND DOCKER
SYLLABUS: Desktop Virtualization – Network Virtualization – Storage Virtualization – System-level of Operating Virtualization – Application Virtualization – Virtual clusters and Resource Management – Containers vs. Virtual Machines – Introduction to Docker – Docker Components – Docker Container – Docker Images and Repositories.
PART A
2 Marks
2. How to implement internal network virtualization? BTL1
1. The guest can share the same network interface of the host and use Network Address Translation (NAT) to access the network.
2. The virtual machine manager can emulate, and install on the host, an additional network device, together with the driver.
3. The guest can have a private network only with the guest.
3. What is Hardware-level virtualization? BTL1
Hardware-level virtualization is performed on top of the bare hardware. It generates a virtual hardware environment for a VM, and the virtualization layer manages the underlying hardware for the guest operating systems.
4. Define hypervisor? BTL1
The hypervisor is generally a program, or a combination of software and hardware, that allows the abstraction of the underlying physical hardware. The hypervisor, or virtual machine manager (VMM), is a fundamental element of hardware virtualization.
5. Mention the advantages of SAN? BTL1
There are different techniques for storage virtualization, one of the most popular being network-based virtualization by means of storage area networks (SANs). SANs use a network-accessible device through a large-bandwidth connection to provide storage facilities.
6. What is Operating system-level virtualization? BTL1
Operating system-level virtualization creates different and separated execution environments for applications within a single operating system; there is no hypervisor, and the OS kernel allows for multiple isolated user space instances.
7. What is storage virtualization? BTL1
Storage virtualization allows a wide range of storage facilities to be represented under a single logical file system. Using this technique, users do not have to be worried about the specific location of their data, which can be identified using a logical path.
8. Define Desktop virtualization? BTL1
Desktop virtualization abstracts the desktop environment available on a personal computer in order to provide remote access to it. It provides the same outcome as hardware virtualization but serves a different purpose.
9. What is Network Migration? BTL1
Network migration is the problem of moving a VM's network connections to a new physical host: any open network connections should be maintained without relying on forwarding mechanisms on the original host or on support from mobility or redirection mechanisms.
10. Differentiate between physical and virtual cluster BTL2
A physical cluster is a collection of physical servers interconnected by a physical network. A virtual cluster is built with virtual machines installed across one or more physical clusters, logically interconnected by a virtual network. Virtual cluster sizes can grow or shrink dynamically; a physical node failure may disable some VMs, but a VM failure does not affect the host system.
11. List the issues in migration process? BTL1
Memory Migration
File System Migration
Network Migration
12. How to manage a virtual cluster? BTL1
There are four ways to manage a virtual cluster:
1. The cluster manager resides on a guest system.
2. The cluster manager resides on the host systems. The host-based manager supervises the guest systems and can restart a guest system on another physical machine.
3. Use an independent cluster manager on both the host and guest systems.
4. Use an integrated cluster manager on the guest and host systems. This means the manager must be designed to distinguish between virtualized resources and physical resources.
REGULATION 2021 ACADEMIC YEAR 2023-2024
13. Differentiate between Containers and virtual machines? BTL2
Containers and virtual machines are two types of virtualization technologies that share many similarities. Virtualization is a process that allows a single resource, such as RAM, CPU, disk, or networking, to be virtualized and represented as multiple resources. The main difference is that virtual machines virtualize the entire machine, including the hardware layer, while containers only virtualize software layers above the operating system level.
14. List the different types of Docker networks? BTL1
Bridge: The default network driver, suitable for different containers that need to communicate on the same Docker host.
Host: Used when there is no need for isolation between the container and the host.
Overlay: Allows swarm services to communicate with each other.
None: Disables all networking.
Macvlan: Assigns a Media Access Control (MAC) address to containers, which looks like a physical address.
15. What is the purpose of Docker Hub? BTL1
Docker Hub is a cloud-based repository service where users can push their Docker container images and access them from anywhere via the internet. It offers the option to push images as private or public and is primarily used by DevOps teams. It functions as a storage system for Docker images and allows users to pull the required images when needed.
PART B
13 Marks
1. Write a short note on Desktop virtualization? BTL1
(Definition: 2 marks, Concept explanation: 11 marks)
Desktop virtualization abstracts the desktop environment available on a personal computer in order to provide remote access to it, making the environment accessible from everywhere.
Although the term desktop virtualization strictly refers to the ability to remotely access a desktop environment, generally the desktop environment is stored in a remote server or a data center that provides a high-availability infrastructure and ensures the accessibility and persistence of the data.
In this scenario, an infrastructure supporting hardware virtualization is fundamental to provide access to multiple desktop environments hosted on the same server. A specific desktop environment is stored in a virtual machine image that is loaded and started on demand when a client connects to the desktop environment. This is a typical cloud computing scenario in which the user leverages the virtual infrastructure for performing daily tasks on his computer. The advantages of desktop virtualization are high availability, persistence, accessibility, and ease of management.
The basic services for remotely accessing a desktop environment are implemented in
software components such as Windows Remote Services, VNC, and X Server.
Infrastructures for desktop virtualization based on cloud computing solutions include
Sun Virtual Desktop Infrastructure (VDI), Parallels Virtual Desktop Infrastructure
(VDI), Citrix XenDesktop, and others.
2. Explain in detail about Network virtualization? BTL4
(Definition: 2 marks, Concept explanation: 11 marks)
● Network virtualization combines hardware appliances and specific software for the creation and management of a virtual network. Network virtualization can aggregate different physical networks into a single logical network (external network virtualization) or provide network-like functionality to an operating system partition (internal network virtualization). The result of external network virtualization is generally a virtual LAN (VLAN).
● A VLAN is an aggregation of hosts that communicate with each other as though they were located under the same broadcasting domain. Internal network virtualization is generally applied together with hardware and operating system-level virtualization, in which the guests obtain a virtual network interface to communicate with.
● There are several options for implementing internal network virtualization:
1. The guest can share the same network interface of the host and use Network Address Translation (NAT) to access the network.
2. The virtual machine manager can emulate, and install on the host, an additional network device, together with the driver.
3. The guest can have a private network only with the guest.
3. Write a short note on Storage virtualization? BTL1
(Definition: 2 marks, Concept explanation: 11 marks)
Storage virtualization allows us to harness a wide range of storage facilities and represent them under a single logical file system.
There are different techniques for storage virtualization, one of the most popular being network-based virtualization by means of storage area networks (SANs).
● SANs use a network-accessible device through a large-bandwidth connection to provide storage facilities.
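The core idea, decoupling the logical path a user sees from the physical location of the data, can be illustrated with a small toy mapping layer. This is a conceptual sketch only; the class and method names are hypothetical, not any real SAN API.

```python
# Toy illustration of storage virtualization: users address data by a
# logical path, while the virtualization layer maps it to whatever
# physical device/address currently holds the data.

class VirtualStorage:
    def __init__(self):
        self._map = {}    # logical path -> (device, physical address)
        self._data = {}   # (device, physical address) -> bytes

    def write(self, logical_path, data, device, address):
        self._map[logical_path] = (device, address)
        self._data[(device, address)] = data

    def read(self, logical_path):
        # the caller never needs to know where the bytes physically live
        return self._data[self._map[logical_path]]

    def migrate(self, logical_path, new_device, new_address):
        # move data to another device; the logical path stays the same
        old_loc = self._map[logical_path]
        self._data[(new_device, new_address)] = self._data.pop(old_loc)
        self._map[logical_path] = (new_device, new_address)

vs = VirtualStorage()
vs.write("/projects/report.txt", b"hello", device="san-a", address=0x10)
vs.migrate("/projects/report.txt", new_device="san-b", new_address=0x20)
print(vs.read("/projects/report.txt"))  # still reachable at the same logical path
```

After the migration the data lives on a different device, yet the user-visible logical path is unchanged, which is exactly what "users do not have to be worried about the specific location of their data" means.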
4. Explain in detail about Operating system-level virtualization? BTL4
(Definition: 2 marks, Concept explanation: 11 marks)
● Operating system-level virtualization offers the opportunity to create different and separated execution environments for applications that are managed concurrently.
● Differently from hardware virtualization, there is no virtual machine manager or hypervisor; the virtualization is done within a single operating system, where the OS kernel allows for multiple isolated user space instances.
● The kernel is also responsible for sharing the system resources among instances
and for limiting the impact of instances on each other.
● A user space instance in general contains a proper view of the file system, which is completely isolated, along with separate IP addresses, software configurations, and access to devices.
● Operating systems supporting this type of virtualization are general purpose,
timeshared operating systems with the capability to provide stronger namespace
and resource isolation.
● This virtualization technique can be considered an evolution of the chroot
mechanism in Unix systems.
● The chroot operation changes the file system root directory for a process and its children to a specific directory. As a result, the process and its children cannot have access to portions of the file system other than those accessible under the new root directory.
● Because Unix systems also expose devices as parts of the file system, by using this method it is possible to completely isolate a set of processes.
● Following the same principle, operating system level virtualization aims to
provide separated and multiple execution containers for running applications.
● This technique is an efficient solution for server consolidation scenarios in which
multiple application servers share the same technology: operating system,
application server framework, and other components.
● Examples of operating system-level virtualization are FreeBSD Jails, IBM Logical Partition (LPAR), Solaris Zones and Containers, Parallels Virtuozzo Containers, OpenVZ, iCore Virtual Accounts, Free Virtual Private Server (FreeVPS), and others.
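The chroot confinement idea above can be made concrete with a small path-resolution sketch. Real chroot is a kernel operation (`os.chroot`, which requires root privileges); this toy function only shows the path logic: every path the confined process names is resolved under the new root, so `..` components cannot climb out of it. The function name is illustrative.

```python
import posixpath

def confine(new_root, user_path):
    """Resolve user_path as if new_root were '/', chroot-style.

    normpath collapses '..' components against the (virtual) root, so a
    confined path can never name a file outside the new_root subtree.
    """
    norm = posixpath.normpath("/" + user_path)   # e.g. "/../../etc" -> "/etc"
    return posixpath.join(new_root, norm.lstrip("/"))

print(confine("/srv/jail", "etc/passwd"))        # /srv/jail/etc/passwd
print(confine("/srv/jail", "../../etc/passwd"))  # still /srv/jail/etc/passwd
```

Even the escape attempt with `../..` resolves inside the jail, which is the isolation property the section describes.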
5. Explain in detail about Application-level virtualization? BTL4
(Definition: 2 marks, Concept explanation: 11 marks)
● Application-level virtualization is a technique allowing applications to be run in runtime environments that do not natively support all the features required by such applications.
● In this scenario, applications are not installed in the expected runtime environment but are run as though they were.
● In general, these techniques are mostly concerned with partial file system, library, and operating system component emulation. Such emulation is performed by a thin layer, a program or an operating system component, that is in charge of executing the application.
● Emulation can also be used to execute program binaries compiled for different
hardware architectures.
● In this case, one of the following strategies can be implemented:
● Interpretation: In this technique every source instruction is interpreted by an
emulator for executing native ISA instructions, leading to poor performance.
Interpretation has a minimal startup cost but a huge overhead, since each
instruction is emulated.
● Binary translation: In this technique every source instruction is converted to native instructions with equivalent functions. After a block of instructions is translated, it is cached and reused.
● Application virtualization is a good solution in the case of missing libraries in the host operating system: a replacement library can be linked with the application, or library calls can be remapped to existing functions available in the host system.
Another advantage is that in this case the virtual machine manager is much lighter, since it provides a partial emulation of the runtime environment compared to hardware virtualization.
● Compared to programming-level virtualization, which works across all the applications developed for that virtual machine, application-level virtualization works for a specific environment.
● It supports all the applications that run on top of a specific environment.
One of the most popular solutions implementing application virtualization is Wine, a software application allowing Unix-like operating systems to execute programs written for the Microsoft Windows platform. Wine features a software application acting as a container for the guest application and a set of libraries, called Winelib, that developers can use to compile applications to be ported to Unix systems.
● Wine takes its inspiration from a similar product from Sun, Windows Application Binary Interface (WABI), which implements the Win16 API specifications on Solaris.
● A similar solution for the Mac OS X environment is CrossOver, which allows running Windows applications directly on the Mac OS X operating system.
● VMware ThinApp, another product in this area, allows capturing the setup of an installed application and packaging it into an executable image isolated from the hosting operating system.
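The interpretation and binary translation strategies described above can be contrasted with a toy two-instruction "source ISA". This is only a sketch of the trade-off (per-instruction emulation vs. translate-once-and-cache); the instruction set and function names are invented for illustration.

```python
# Toy source program: accumulator starts at 0.
PROGRAM = [("ADD", 2), ("ADD", 3), ("MUL", 4)]

def interpret(program):
    """Interpretation: decode and emulate every instruction on each run."""
    acc = 0
    for op, arg in program:
        if op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
    return acc

_translation_cache = {}

def binary_translate(program):
    """Binary translation: convert the whole block once into native code
    (here, a generated Python function), cache it, and reuse it."""
    key = tuple(program)
    if key not in _translation_cache:
        body = ["def _block(acc=0):"]
        for op, arg in program:
            body.append(f"    acc {'+=' if op == 'ADD' else '*='} {arg}")
        body.append("    return acc")
        ns = {}
        exec("\n".join(body), ns)          # one-time translation cost
        _translation_cache[key] = ns["_block"]
    return _translation_cache[key]         # cached translated version

print(interpret(PROGRAM))                  # 20
print(binary_translate(PROGRAM)())         # 20, via translated code
```

Both paths compute the same result, but the interpreter pays the decode cost on every instruction of every run, while the translator pays it once per block and then runs natively, which is why translated blocks are cached and reused.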
6. Explain in detail about Virtual Clusters and Resource Management? BTL4
(Definition: 2 marks, Diagram: 4 marks, Concept explanation: 7 marks)
● Virtual clusters present design challenges such as live migration of virtual machines, memory and file migrations, and dynamic deployment of virtual clusters.
Virtual clusters are built using virtual machines installed across one or more physical
clusters, logically interconnected by a virtual network across several physical
networks.
Virtual cluster sizes can grow or shrink dynamically, similar to overlay networks in
peer-to-peer networks.
Physical node failures may disable some virtual machines, but virtual machine failures will not affect the host system.
The system should be capable of quick deployment, which involves creating and distributing software stacks (including the OS, libraries, and applications) to physical nodes within clusters, as well as rapidly switching runtime environments between virtual clusters for different users.
2. High-Performance Virtual Storage
Virtual clustering provides a flexible solution for building clusters consisting of both
physical and virtual machines.
It is widely used in various computing systems such as cloud platforms, high-performance computing systems, and computational grids.
Virtual clustering enables the rapid deployment of resources upon user demand or in response to node failures. There are four different ways to manage virtual clusters: the cluster manager can reside on the guest systems or on the host systems, independent cluster managers can be used on both, or an integrated cluster manager designed to distinguish between virtualized and physical resources can be used.
4. Migration of Memory, Files, and Network Resources
Since clusters have a high initial cost of ownership, which includes space, power conditioning, and cooling equipment, leasing or sharing access to a common cluster is an attractive solution when demands vary over time.
When one system migrates to another physical node, consider the following issues:
Memory Migration
File System Migration
Network Migration
Memory Migration
One crucial aspect of VM migration is memory migration, which involves moving the memory instance of a VM from one physical host to another. The efficiency of this process depends on the characteristics of the application/workloads supported by the guest OS. In today's systems, memory migration can range from a few hundred megabytes to several gigabytes.
The Internet Suspend-Resume (ISR) technique takes advantage of temporal locality, where memory states are likely to have significant overlap between the suspended and resumed instances of a VM. The ISR technique represents each file in the file system as a tree of subfiles, with a copy existing in both the suspended and resumed VM instances.
By caching only the changed files, this approach minimizes transmission overhead. However, the ISR technique is not suitable for situations where live machine migration is necessary, as it results in high downtime compared to other techniques.
File System Migration
For a system to support VM migration, it must ensure that each VM has a consistent and location-independent view of the file system that is available on all hosts.
One possible approach is to assign each VM its own virtual disk and map the file system to it. However, due to the increasing capacity of disks, it is not feasible to transfer the entire contents of a disk over a network during migration.
Another alternative is to implement a global file system that is accessible across all machines, where a VM can be located without the need to copy files between machines.
Network Migration
When a VM is migrated to a new physical host, it is important that any open network
connections are maintained without relying on forwarding mechanisms or support
from mobility or redirection mechanisms on the original host.
5. Dynamic Deployment of Virtual Clusters
The Cellular Disco virtual cluster was created at Stanford on a shared-memory multiprocessor system, while the INRIA virtual cluster was built to evaluate the performance of parallel algorithms.
At Duke University, COD was developed to enable dynamic resource allocation with a virtual cluster management system, and at Purdue University, the VIOLIN cluster was constructed to demonstrate the benefits of dynamic adaptation using multiple VM clustering.
7. What is Docker? Explain its features in detail. BTL1
(Definition: 2 marks, Concept explanation: 11 marks)
Docker vs Virtual Machine
Docker containers package an application, its binaries, libraries, and configuration files but do not include a guest OS. They rely on the underlying OS kernel, which makes them lightweight, and they share resources with other containers on the same host OS while providing OS-level process isolation.
On the other hand, virtual machines run on hypervisors, which allow multiple VMs to run on a single machine, each with its own operating system. Each VM has its own copy of an operating system along with the application and necessary binaries, making it significantly larger and requiring more resources. VMs provide hardware-level process isolation, but they are slow to boot.
Key Terminologies
A Docker Image is a file containing multiple layers of instructions used to create and run a Docker container. It provides a portable and reproducible way to package and distribute applications.
A Docker Container is a lightweight and isolated runtime environment created from an image. It encapsulates an application and its dependencies, providing a consistent and predictable environment for running the application.
A Dockerfile is a text file that contains a set of instructions to build a Docker Image. It defines the base image, application code, dependencies, and configuration needed to create a custom Docker Image.
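A minimal Dockerfile sketch makes these instructions concrete. The file below assumes a hypothetical Python application (`app.py` with a `requirements.txt`); the file names are examples, not part of any required layout.

```dockerfile
# Hypothetical example: containerize a small Python app (app.py).
# Each instruction contributes a layer of the resulting image.

# Base image pulled from a registry (Docker Hub by default)
FROM python:3.12-slim

# Working directory inside the image
WORKDIR /app

# Copy the dependency list first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Command executed when a container is started from this image
CMD ["python", "app.py"]
```

Such a file would typically be built and run with the commands discussed elsewhere in this unit, e.g. `docker build -t myapp .` followed by `docker run myapp`.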
Docker Engine is the software that enables the creation and management of Docker containers. It consists of three main components:
o Docker Daemon: a server-side component that manages Docker images, containers, networks, and volumes.
o REST API: a set of web services that allows remote clients to interact with the Docker Daemon.
o Docker CLI: a command-line client that sends commands to the Docker Daemon.
Docker Hub is a cloud-based registry that provides a centralized platform for storing, sharing, and discovering Docker Images. It offers a vast collection of pre-built Docker Images that developers can use to build, test, and deploy their applications.
Features of Docker
Open-source platform
An easy, lightweight, and consistent way of delivering applications
Fast and efficient development life cycle
Segregation of duties
Service-oriented architecture
Security
Scalability
Reduction in size
Image management
Networking
Volume management
8. What are Docker Components? BTL1
(Definition: 2 marks, Diagram: 4 marks, Concept explanation: 7 marks)
Docker Daemon
The Docker client can be installed on the same system as the daemon or connected remotely. Communication between the client and daemon occurs through a REST API, either over a UNIX socket or a network.
The Docker daemon is responsible for managing various Docker services and communicates with other daemons to do so. Using Docker's API requests, the daemon manages Docker objects such as images, containers, networks, and volumes.
Docker Client
The Docker client allows users to interact with Docker and utilize its functionalities. It communicates with the Docker daemon using the Docker API, and it has the capability to communicate with multiple daemons.
When a user runs a Docker command on the terminal, the instructions are sent to the daemon. The Docker daemon receives these instructions in the form of commands and REST API requests from the Docker client. Commonly used commands by Docker clients include docker build, docker pull, and docker run.
Docker Host
The Docker host provides the environment in which containers execute; it comprises the Docker daemon together with the images, containers, networks, and storage it manages.
Docker Registry
Docker images are stored in the Docker registry, which can either be a public registry, like Docker Hub, or a private registry that can be set up.
To obtain required images from a configured registry, the 'docker run' or 'docker pull' commands can be used. Conversely, to push images into a configured registry, the 'docker push' command can be used.
Docker Objects
When working with Docker, various objects such as images, containers, volumes, and networks are created and utilized.
Docker Images
A Docker image is a set of instructions used to create a container, serving as a read-only template that can store and transport applications. Images play a critical role in the Docker ecosystem by enabling collaboration among developers in ways that were previously impossible.
Docker Storage
Docker storage is responsible for storing data within the writable layer of the container, and this function is carried out by a storage driver. The storage driver is responsible for managing and controlling the images and containers on the Docker host. There are several types of Docker storage:
o Data Volumes, which can be mounted directly into the container's filesystem, are essentially directories or files on the Docker host filesystem.
o Volume Container is used to maintain the state of the containers' data produced by the running container, where Docker volume file systems are mounted on Docker containers. These volumes are stored on the host, making it easy for users to exchange file systems among containers and back up data.
o Directory Mounts, where a host directory is mounted as a volume in the container, can also be specified.
o Finally, Docker volume plugins allow integration with external volumes, such as Amazon EBS, to maintain the state of the container.
Docker Networking
Docker networking provides complete isolation for containers, allowing users to link them to multiple networks with minimal OS instances required to run workloads. There are different types of Docker networks available, including:
o Bridge: the default network driver, suitable for different containers that need to communicate on the same Docker host.
o Host: used when there is no need for isolation between the container and the host.
o Overlay: allows swarm services to communicate with each other.
o None: disables all networking.
o Macvlan: assigns a Media Access Control (MAC) address to containers, which looks like a physical address.
9. Explain the Docker Containers? BTL4
(Definition: 2 marks, Concept explanation: 11 marks)
By default, a container is isolated from other containers and its host machine. It is possible to control the level of isolation of a container's network, storage, or other underlying subsystems from other containers or from the host machine.
Any changes made to a container's state that are not stored in persistent storage will be lost once the container is removed.
Advantages of Docker Containers
Docker provides a consistent environment for running applications from design and development to production and maintenance, which eliminates production issues and allows developers to focus on introducing quality features instead of debugging errors and resolving configuration/compatibility issues.
Docker also allows for instant creation and deployment of containers for every process, without needing to boot the OS, which saves time and increases agility. Creating, destroying, stopping, or starting a container can be done with ease, and YAML configuration files can be used to automate deployment and scale the infrastructure.
In multi-cloud environments with different configurations, policies, and processes, Docker containers can be easily moved across any environment, providing efficient management. However, it is important to remember that data inside the container is permanently destroyed once the container is destroyed.
Docker enables significant infrastructure cost reduction, with minimal costs for running applications when compared with VMs and other technologies. This can lead to increased ROI and operational cost savings with smaller engineering teams.
PART C
15 Marks
1. What are the other types of virtualization? BTL1
(Definition: 2 marks, Diagram: 5 marks, Concept explanation: 8 marks)
1. Programming language-level virtualization
Programming language-level virtualization consists of a virtual machine executing the byte code of a program, which is the result of the compilation process; this allows the program to be run across different platforms and operating systems.
● Generally these virtual machines constitute a simplification of the underlying hardware instruction set and provide some high-level instructions that map some of the features of the languages compiled for them.
● At runtime, the byte code can be either interpreted or compiled on the fly against the underlying hardware instruction set.
● Programming language-level virtualization has a long trail in computer science history and originally was used in 1966 for the implementation of Basic Combined Programming Language (BCPL), a language for writing compilers and one of the ancestors of the C programming language.
● Other important examples of the use of this technology have been the UCSD Pascal and Smalltalk.
● The Java virtual machine was originally designed for the execution of programs written in the Java language, but other languages can also be compiled to the same byte code and executed on it.
● The ability to support multiple programming languages has been one of the key elements of the Common Language Infrastructure (CLI), which is the specification behind the .NET Framework.
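The byte-code idea above can be observed directly in Python itself, whose standard `dis` module disassembles the byte code that the CPython virtual machine executes. The exact instruction names vary by Python version, so the output below is indicative only.

```python
import dis

# A trivial function: its source is compiled once to byte code for an
# abstract stack machine, which the interpreter runs on any platform.
def add(a, b):
    return a + b

# The raw, platform-independent byte code of the function.
print(add.__code__.co_code)

# Human-readable instruction names (version-dependent, e.g. LOAD_FAST,
# BINARY_OP/BINARY_ADD, RETURN_VALUE).
ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)
```

The byte code stays the same whether the program runs on Linux, Windows, or macOS; only the interpreter underneath is platform-specific, which is exactly the portability argument made above.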
2. Application server virtualization
Application server virtualization abstracts a collection of application servers that provide the same services as a single virtual application server, by using load-balancing strategies and providing a high-availability infrastructure for the services hosted in the application server.
This is a particular form of virtualization and serves the same purpose as storage virtualization: providing a better quality of service rather than emulating a different environment.
Virtualization Support and Disaster Recovery
● One very distinguishing feature of cloud computing infrastructure is the use of system virtualization and the modification to provisioning tools.
● Virtualization of servers on a shared cluster can consolidate web services.
● In cloud computing, virtualization also means the resources and fundamental infrastructure are virtualized. The user will not care about the computing resources that are used for providing the services.
● Cloud users do not need to know, and have no way to discover, the physical resources that are involved while processing a service request. In addition, application developers do not care about some infrastructure issues such as scalability and fault tolerance; they focus on service logic.
● In many cloud computing systems, virtualization software is used to virtualize the hardware. System virtualization software is a special kind of software which simulates the execution of hardware and runs even unmodified operating systems.
● Cloud computing systems use virtualization software as the running environment for legacy software such as old operating systems and unusual applications.
3. Hardware Virtualization
Virtualization software is also used as the platform for developing new cloud applications that enable developers to use any operating systems and programming environments they like. The development environment and deployment environment can now be the same, which eliminates some runtime problems. VMs provide flexible runtime services to free users from worrying about the system environment.
With traditional shared clusters, users share the hardware directly; such sharing is not flexible:
o Users cannot customize the system for their special purposes.
o Operating systems cannot be changed.
o The separation is not complete.
An environment that meets one user's requirements often cannot satisfy another user. Virtualization allows users to have full privileges while keeping VMs separate: users have full access to their own VMs, which are completely separate from other users' VMs. VM managers handle loads, resources, security, data, and provisioning functions. Figure 3.2 shows two VM platforms.
4. Virtualization Support in Public Clouds
AWS provides extreme flexibility (VMs) for users to execute their own applications.
GAE provides limited application-level virtualization for users to build applications only based on the services that are created by Google.
Microsoft provides programming-level virtualization (.NET virtualization) for users to build their applications.
5. Virtualization for IaaS
VM technology has increased in ubiquity. This has enabled users to create customized environments atop physical infrastructure for cloud computing. Use of VMs in clouds has the following distinct benefits:
o System administrators can consolidate workloads of underutilized servers in fewer servers.
o VMs have the ability to run legacy code without interfering with other APIs.
o VMs can be used to improve security through creation of sandboxes for running applications with questionable reliability.
2. Explain in detail about Containers with advantages and disadvantages? BTL1
(Definition: 2 marks, Concept explanation: 7 marks, Diagram: 2 marks, Advantages: 2 marks, Disadvantages: 2 marks)
Containers are software packages that are lightweight and self-contained, and they comprise all the necessary dependencies to run an application. The dependencies include external third-party code packages, system libraries, and other operating system-level applications. These dependencies are organized in stack levels that are higher than the operating system.
Advantages:
o One advantage of using containers is their fast iteration speed. Due to their lightweight nature and focus on high-level software, containers can be quickly modified and updated.
o Additionally, container runtime systems often provide a robust ecosystem, including a hosted public repository of pre-made containers that can be used by engineering teams.
Disadvantages:
o As containers share the same hardware system beneath the operating system layer, any vulnerability in one container can potentially affect the underlying hardware and break out of the container.
o Although many container runtimes offer public repositories of pre-built containers, there is a security risk associated with using these containers, as they may contain exploits or be susceptible to hijacking by malicious actors.
Examples:
o Docker is the most widely used container runtime that offers Docker Hub, a public repository of containerized applications that can be easily deployed to a local Docker runtime.
o RKT, pronounced "Rocket," is a container system focused on security, ensuring that insecure container functionality is not allowed by default.
o CRI-O is a lightweight alternative to using Docker as the runtime for Kubernetes, implementing the Kubernetes Container Runtime Interface (CRI) to support Open Container Initiative (OCI)-compatible runtimes.
Virtual Machines
Virtual machines are software packages that contain a complete emulation of low-level hardware devices, such as CPU, disk, and networking devices. They may also include a complementary software stack that can run on the emulated hardware. Together, these hardware and software packages create a functional snapshot of a computational system.
Advantages:
o Virtual machines provide full isolation security since they operate as standalone systems, which means that they are protected from any interference or exploits from other virtual machines on the same host.
o Though a virtual machine can still be hijacked by an exploit, the affected virtual machine will be isolated and cannot contaminate other adjacent virtual machines.
o One can manually install software to the virtual machine and snapshot the virtual machine to capture the present configuration state.
o The virtual machine snapshots can then be utilized to restore the virtual machine to that particular point in time or create additional virtual machines with that configuration.
Disadvantages:
o Virtual machines are known for their slow iteration speed because they involve a complete system stack.
o Any changes made to a virtual machine snapshot can take a considerable amount of time to rebuild and validate that they function correctly.
o Another issue with virtual machines is that they can occupy a significant amount of storage space, often several gigabytes in size.
Examples:
o VirtualBox is an open source emulation system that emulates x86 architecture, and is owned by Oracle. It is widely used and has a set of additional tools to help develop and distribute virtual machine images.
o VMware is a publicly traded company that provides a hypervisor along with its virtual machine platform, which allows deployment and management of multiple virtual machines. VMware offers a robust UI for managing virtual machines, and is a popular enterprise virtual machine solution with support.
o QEMU is a powerful virtual machine option that can emulate any generic hardware architecture. However, it lacks a graphical user interface for configuration or execution, and is a command line only utility. As a result, QEMU is one of the fastest virtual machine options available.
3. Explain Docker Repositories with its features? BTL1
(Definition: 2 marks, Concept explanation: 13 marks)
The Docker Hub is a cloud-based repository service where users can push their Docker Container Images and access them from anywhere via the internet. It offers the option to push images as private or public and is primarily used by DevOps teams.
The Docker Hub is freely available for all operating systems. It functions as a storage system for Docker images and allows users to pull the required images when needed. However, it is necessary to have a basic knowledge of Docker to push or pull images from the Docker Hub. If a developer team wants to share a project along with its dependencies for testing, they can push the code to Docker Hub. To do this, the developer must create images and push them to Docker Hub. The testing team can then pull the same image from Docker Hub without needing any files, software, or plugins, as the developer has already shared the image with all dependencies.
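The developer-to-tester workflow above boils down to a handful of Docker CLI commands. A minimal sketch, assuming a hypothetical repository name `myteam/webapp` and the standard `docker tag`/`push`/`pull` subcommands; the helper functions only assemble the command lines a team would run:

```python
# Sketch of the Docker Hub sharing workflow described above.
# The image and repository names are hypothetical examples.

def build_push_commands(local_image: str, repo: str, tag: str = "latest"):
    """Commands a developer runs to share an image via Docker Hub."""
    remote = f"{repo}:{tag}"
    return [
        ["docker", "tag", local_image, remote],   # name the image for the registry
        ["docker", "push", remote],               # upload it to Docker Hub
    ]

def build_pull_command(repo: str, tag: str = "latest"):
    """Command a tester runs to fetch the same image, dependencies included."""
    return ["docker", "pull", f"{repo}:{tag}"]

if __name__ == "__main__":
    for cmd in build_push_commands("webapp", "myteam/webapp", "v1"):
        print(" ".join(cmd))
    print(" ".join(build_pull_command("myteam/webapp", "v1")))
```

In practice the developer would run the first two commands after a `docker build`, and the tester only the last one; no project files travel outside the registry.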
Features of Docker Hub
Docker Hub simplifies the storage, management, and sharing of images with others. It provides security checks for images and generates comprehensive reports on any security issues.
Additionally, Docker Hub can automate processes like Continuous Deployment and Continuous Testing by triggering webhooks when a new image is uploaded.
Through Docker Hub, users can manage permissions for teams, users, and organizations.
Moreover, Docker Hub can be integrated with tools like GitHub and Jenkins, streamlining workflows.
Advantages of Docker Hub
This method is secure and offers the option of pushing private or public images.
Docker Hub is a critical component of industry workflows as its popularity grows, serving as a bridge between developer and testing teams.
Making code, software or any type of file available to the public can be done easily by publishing the images on Docker Hub as public.
UNIT IV
CLOUD DEPLOYMENT ENVIRONMENT
SYLLABUS: Google App Engine – Amazon AWS – Microsoft Azure; Cloud Software Environments – Eucalyptus – OpenStack.
PART A
2 Marks
1. Describe about GAE? BTL1
Google's App Engine (GAE) offers a PaaS platform supporting various cloud and web applications. This platform specializes in supporting scalable (elastic) web applications. GAE enables users to run their applications on the large number of data centers associated with Google's search engine operations.
GFS is used for storing large amounts of data.
MapReduce is for use in application program development. Chubby is used for distributed application lock services. BigTable offers a storage service for accessing structured data.
3. List the functional modules of GAE? BTL1
Datastore
Application runtime environment
Software development kit (SDK)
Administration console
GAE web service infrastructure
4. List some of the storage tools in Azure? BTL1
Blob Storage, Queue Storage, File Storage, Disk Storage, Data Lake Store, Backup, and Site Recovery.
5. List the applications of GAE? BTL1
Well-known GAE applications include the Google Search Engine, Google Docs, Google Earth, and Gmail. These applications can support large numbers of users simultaneously. Users can interact with Google applications via the web interface provided by each application. Third-party application providers can use GAE to build cloud applications for providing services.
7. Describe about OpenStack? BTL1
The OpenStack project is an open source cloud computing platform for all types of clouds, which aims to be simple to implement, massively scalable, and feature rich. Developers and cloud computing technologists from around the world create the OpenStack project. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of interrelated services.
8. List the key services of OpenStack? BTL1
The OpenStack system consists of several key services that are separately installed: Compute, Identity, Networking, Image, Block Storage, Object Storage, Telemetry, Orchestration, and Database services.
9. Describe about Eucalyptus? BTL1
Eucalyptus is an open source cloud computing solution that can be used for both private and public clouds. Its API is compatible with Amazon Web Services, and it enables hybrid cloud computing by letting users combine public and private clouds.
10. List different types of computing environment? BTL1
Mainframe
Client-Server
Cloud Computing
Mobile Computing
Grid Computing
11. Write short note on Amazon EC2? BTL1
Amazon Elastic Compute Cloud (Amazon EC2) is a cloud-based web service that offers a secure and scalable computing capacity. It allows organizations to customize virtual compute capacity in the cloud, with the flexibility to choose from a range of operating systems and resource configurations such as CPU, memory, and storage. With Amazon EC2, capacity can be increased or decreased within minutes, and it supports the use of hundreds or thousands of server instances simultaneously. This is all managed through web service APIs, enabling applications to scale themselves up or down as needed.
12. Mention the advantages of DynamoDB? BTL1
Amazon DynamoDB is a NoSQL database service that offers fast and flexible storage for applications requiring consistent, low-latency access at any scale. It's fully managed and supports both document and key-value data models.
13. What is Microsoft Azure? BTL1
Microsoft Azure is Microsoft's cloud computing platform, announced as Windows Azure in October 2008 and renamed Microsoft Azure in 2014. It offers a wide range of services and tools, including compute, networking, storage, databases, and developer tools.
14. List the three modes of network component in Eucalyptus? BTL1
Static mode, which allocates IP addresses to instances
System mode, which assigns a MAC address and connects the instance's network interface to the physical network via NC
Managed mode, which creates a local network of instances
15. Mention the disadvantages of AWS? BTL1
AWS can present a challenge due to its vast array of services and functionalities, which may be hard to comprehend and utilize, particularly for inexperienced users. The cost of AWS can be high, particularly for high-traffic applications or when operating multiple services.
PART B
13 Marks
1. What is Google App Engine and explain its architecture? BTL1
(Definition: 2 marks, Concept explanation: 8 marks, Diagram: 3 marks)
Google has the world's largest search engine facilities. The company has extensive experience in massive data processing that has led to new insights into data-center design and novel programming models that scale to incredible sizes.
The Google platform is based on its search engine expertise. Google has hundreds of data centers and has installed more than 460,000 servers worldwide.
For example, 200 Google data centers are used at one time for a number of cloud applications.
Data items are stored in text, images, and video and are replicated to tolerate faults or failures.
Google's App Engine (GAE) offers a PaaS platform supporting various cloud and web applications. Google has pioneered cloud development by leveraging the large number of data centers it operates.
For example, Google pioneered cloud services in Gmail, Google Docs, and Google Earth, among other applications. These applications can support a large number of users simultaneously with HA.
Notable technology achievements include the Google File System (GFS), MapReduce, BigTable, and Chubby. In 2008, Google announced the GAE web application platform which is becoming a common platform for many small cloud service providers. This platform specializes in supporting scalable (elastic) web applications. GAE enables users to run their applications on the large number of data centers associated with Google's search engine operations.
GAE Architecture
GFS is used for storing large amounts of data.
MapReduce is for use in application program development. Chubby is used for distributed application lock services. BigTable offers a storage service for accessing structured data.
Users can interact with Google applications via the web interface provided by each application.
Third-party application providers can use GAE to build cloud applications for providing services.
The applications all run in data centers under tight management by Google engineers. Inside each data center, there are thousands of servers forming different clusters.
Google is one of the larger cloud application providers, although its fundamental service program is private and outside people cannot use the Google infrastructure to build their own services.
The building blocks of Google's cloud computing application include the Google File System for storing large amounts of data, the MapReduce programming framework for application developers, Chubby for distributed application lock services, and BigTable as a storage service for accessing structural or semistructural data. With these building blocks, Google has built many cloud applications.
2. What are the functional Modules of GAE? BTL1
(Definition: 2 marks, Concept explanation: 11 marks)
The GAE platform comprises the following five major components: the datastore, the application runtime environment, the software development kit (SDK), the administration console, and the GAE web service infrastructure. The GAE is not an infrastructure platform, but rather an application development platform for users. The datastore offers object-oriented, distributed, structured data storage services based on BigTable techniques. The datastore secures data management operations.
3. Explain the GAE Applications? BTL4
(Definition: 2 marks, Concept explanation: 8 marks, Diagram: 3 marks)
Best-known GAE applications include the Google Search Engine, Google Docs, Google Earth and Gmail. These applications can support large numbers of users simultaneously. Users can interact with Google applications via the web interface provided by each application. Third party application providers can use GAE to build cloud applications for providing services. The applications are all run in the Google data centers. Inside each data center, there might be thousands of server nodes to form different clusters. Each cluster can run multipurpose servers.
GAE supports many web applications.
One is a storage service to store application specific data in the Google infrastructure. The data can be persistently stored in the backend storage server while still providing the facility for queries, sorting and even transactions similar to traditional database systems.
GAE also provides Google specific services, such as the Gmail account service. This can eliminate the tedious work of building customized user management components in web applications.
Programming Environment for Google App Engine:
Several web resources (e.g., http://code.google.com/appengine/) and specific books and articles discuss how to program GAE.
Figure 4.2 summarizes some key features of the GAE programming model for two supported languages: Java and Python. A client environment that includes an Eclipse plug-in for Java allows you to debug your GAE on your local machine.
Also, the GWT (Google Web Toolkit) is available for Java web application developers. Developers can use this, or any other language using a JVM based interpreter or compiler, such as JavaScript or Ruby. Python is often used with frameworks such as Django and CherryPy, but Google also supplies a built in webapp Python environment.
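The kind of "hello world" app the practical exercises ask for can be sketched with nothing but Python's standard WSGI interface, which is the application shape GAE's Python runtime serves; the handler below is illustrative, not GAE's own webapp framework:

```python
# Minimal WSGI "hello world" application, the shape of app that the
# GAE Python environment hosts. Pure standard library; no framework.

def application(environ, start_response):
    """Return a plain-text greeting for every request."""
    body = b"Hello, World!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

if __name__ == "__main__":
    # Exercise the app in-process with a dummy start_response.
    def _sr(status, headers):
        print(status)
    print(application({}, _sr)[0].decode())
```

Locally it can be served with `wsgiref.simple_server.make_server`; on GAE the runtime itself invokes the application for each request.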
There are several powerful constructs for storing and accessing data. The datastore is a NoSQL data management system for entities that can be, at most, 1 MB in size and are labeled by a set of schema-less properties. Queries can retrieve entities of a given kind filtered and sorted by the values of the properties. Java offers Java Data Object (JDO) and Java Persistence API (JPA) interfaces implemented by the open source DataNucleus Access platform, while Python has a SQL-like query language called GQL. The data store is strongly consistent and uses optimistic concurrency control.
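The datastore's query model, entities of a kind filtered and sorted by property values, can be mimicked in plain Python. This is only an illustration of the semantics; real code would issue GQL (e.g. `SELECT * FROM Greeting WHERE author = :1`) or use the JDO/JPA interfaces named above, and the kind and property names here are invented:

```python
# Toy model of datastore queries: entities are schema-less property dicts
# grouped under a "kind"; queries filter and sort on property values.

from operator import itemgetter

entities = {
    "Greeting": [                       # a hypothetical kind
        {"author": "alice", "date": 3},
        {"author": "bob",   "date": 1},
        {"author": "alice", "date": 2},
    ],
}

def query(kind, filters=None, order_by=None):
    """Rough analogue of: SELECT * FROM kind WHERE ... ORDER BY ..."""
    results = list(entities.get(kind, []))
    for prop, value in (filters or {}).items():
        results = [e for e in results if e.get(prop) == value]
    if order_by:
        results = sorted(results, key=itemgetter(order_by))
    return results

# All greetings by "alice", oldest first.
rows = query("Greeting", filters={"author": "alice"}, order_by="date")
```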
4. What are the important AWS Services? BTL1
(Definition: 2 marks, Concept explanation: 11 marks)
Amazon EC2:
Amazon Elastic Compute Cloud (Amazon EC2) is a cloud-based web service that offers a secure and scalable computing capacity. It allows organizations to customize virtual compute capacity in the cloud, with the flexibility to choose from a range of operating systems and resource configurations such as CPU, memory, and storage. Amazon EC2 falls under the category of Infrastructure as a Service (IaaS) and provides reliable, cost-effective compute and high-performance infrastructure to meet the demands of businesses.
AWS Lambda:
AWS Lambda is a serverless, event-driven compute service that enables code execution without server management. Compute time consumption is the only factor for payment, and there is no charge when code is not running. AWS Lambda offers the ability to run code for any application type with no need for administration.
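The unit of deployment in Lambda is a handler function invoked once per event. A minimal sketch of the standard Python handler signature; the event fields used here are made-up examples:

```python
# Minimal AWS Lambda-style handler: the Lambda runtime calls
# handler(event, context) per event; no server to manage.

import json

def handler(event, context=None):
    """Build a greeting response from the incoming event."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation with a sample event, as the Lambda runtime would do.
response = handler({"name": "cloud"})
```

Billing follows the "compute time only" model described above: the function exists but costs nothing until an event triggers an invocation.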
AWS Elastic Beanstalk:
AWS Elastic Beanstalk is a cloud-based Platform as a Service that simplifies the process of deploying applications by offering all the necessary application services. It provides a plug-and-play platform that supports a variety of programming languages and environments, including Node.js, Java, PHP, Python, and Ruby.
Amazon VPC:
Amazon VPC (Virtual Private Cloud) is a networking service that enables the creation of a private network within the AWS cloud with similar networking concepts and controls as an on-premises network. Users have the ability to configure the network settings, such as IP address ranges, subnets, routing tables, gateways, and security measures. Amazon VPC is an essential AWS service that integrates with many other AWS services.
Amazon Route 53:
Amazon Route 53 is a cloud-based web service that offers a scalable and highly available Domain Name System (DNS) solution. Its primary purpose is to provide businesses and developers with a cost-effective and reliable method of directing end-users to internet applications by converting human-readable domain names into IP addresses that computers can understand.
Amazon S3:
Amazon S3 (Simple Storage Service) is a web service interface for object storage that enables you to store and retrieve any amount of data from any location on the web. It is designed to provide limitless storage with a 99.999999999% durability guarantee. Amazon S3 can be used as the primary storage solution for cloud-native applications, as well as for backup and recovery and disaster recovery purposes. It delivers unmatched scalability, data availability, security, and performance.
Amazon Glacier:
Amazon Glacier is a highly secure and cost-effective storage service designed for long-term backup and data archiving. It offers reliable durability and ensures the safety of your data. However, since data retrieval may take several hours, Amazon Glacier is primarily intended for archiving purposes.
Amazon RDS:
Amazon Relational Database Service (Amazon RDS) simplifies the process of setting up, managing, and scaling a relational database in the cloud. Additionally, it offers resizable and cost-effective capacity and is available on multiple database instance types that are optimized for memory, performance, or I/O. With Amazon RDS, users can choose from six popular database engines: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server.
Amazon DynamoDB:
Amazon DynamoDB is a NoSQL database service that offers fast and flexible storage for applications requiring consistent, low-latency access at any scale. It's fully managed and supports both document and key-value data models. Its versatile data model and dependable performance make it well-suited for various applications such as mobile, web, gaming, Internet of Things (IoT), and more.
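DynamoDB's key-value model, items addressed by a partition key and an optional sort key, can be illustrated with a small in-memory table. This is a sketch of the data model only; a real client would call boto3 against the live service, and the table and attribute names below are invented:

```python
# In-memory sketch of DynamoDB's key-value data model:
# items are attribute dicts addressed by a (partition key, sort key) pair.

class ToyTable:
    def __init__(self, partition_key, sort_key=None):
        self.pk, self.sk = partition_key, sort_key
        self.items = {}

    def _key(self, item):
        return (item[self.pk], item.get(self.sk) if self.sk else None)

    def put_item(self, item):
        """Insert or overwrite the item stored under its key."""
        self.items[self._key(item)] = item

    def get_item(self, pk_value, sk_value=None):
        """Constant-time lookup by key, the access pattern DynamoDB optimizes."""
        return self.items.get((pk_value, sk_value))

scores = ToyTable(partition_key="user_id", sort_key="game_id")
scores.put_item({"user_id": "u1", "game_id": "g1", "score": 42})
found = scores.get_item("u1", "g1")
```

The point of the sketch is the access pattern: every read and write names its key, which is what lets the real service keep latency low at any scale.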
5. Explain in detail about Microsoft Azure and its services? BTL4
(Definition: 2 marks, Concept explanation: 8 marks, Advantages: 3 marks)
History
Windows Azure was announced by Microsoft in October 2008 and became available in February 2010. In 2014, Microsoft renamed it Microsoft Azure. It offered a platform for various services including .NET services, SQL Services, and Live Services. However, some people were uncertain about using cloud technology. Nevertheless, Microsoft Azure is constantly evolving, with new tools and functionalities being added. The platform has two releases: v1 and v2. The earlier version was JSON script-oriented, while the newer version features an interactive UI for easier learning and simplification. Microsoft Azure v2 is still in the preview stage.
Advantages of Azure
Azure offers a cost-effective solution as it eliminates the need for expensive hardware investments. With a pay-as-you-go subscription model, the user can manage their costs effectively.
Setting up an Azure account is a simple process through the Azure Portal, where you can choose the desired subscription and begin using the platform.
One of the major advantages of Azure is its low operational cost. Since it operates on dedicated servers specifically designed for cloud functionality, it provides greater reliability compared to on-site servers. By utilizing Azure, the user can eliminate the need for hiring a dedicated technical support team to monitor and troubleshoot servers. This results in significant cost savings for an organization.
Azure provides easy backup and recovery options for valuable data. In the event of a disaster, the user can quickly recover the data with a single click, minimizing any impact on end-user business. Cloud-based backup and recovery solutions offer convenience, avoid upfront investments, and provide expertise from third-party providers.
Implementing business models in Azure is straightforward, with intuitive features and user-friendly interfaces. Additionally, there are numerous tutorials available to expedite the learning and deployment process.
Azure offers robust security measures, ensuring the protection of your critical data and business applications. Even in the face of natural disasters, Azure serves as a reliable safeguard for the resources. The cloud infrastructure remains operational, providing continuous protection.
Azure services
Azure offers a wide range of services and tools for different needs. These include Compute, which includes Virtual Machines, Virtual Machine Scale Sets, Functions for serverless computing, Batch for containerized batch workloads, Service Fabric for microservices and container orchestration, and Cloud Services for building cloud-based apps and APIs.
The Networking tools in Azure offer several options like the Virtual Network, Load Balancer, Application Gateway, VPN Gateway, Azure DNS for domain hosting, Content Delivery Network, Traffic Manager, ExpressRoute for dedicated private network fiber connections, and Network Watcher for monitoring and diagnostics.
The Storage tools available in Azure include Blob, Queue, File, and Disk Storage, Data Lake Store, Backup, and Site Recovery, among others. Web + Mobile services make it easy to create and deploy web and mobile applications.
Azure also includes tools for Containers, Databases, Data + Analytics, AI + Cognitive Services, Internet of Things, Security + Identity, and Developer Tools, such as Visual Studio Team Services, Azure DevTest Labs, HockeyApp for mobile app deployment and monitoring, and Xamarin for cross-platform mobile development.
6. Write short notes on Cloud Software Environments? BTL1
(Definition: 2 marks, Concept explanation: 11 marks)
Mainframe: A centralized environment in which large, powerful systems process workloads for many users at once.
Client-Server: In this environment, client devices access resources and services from a central server, facilitating the sharing of data and processing capabilities.
Cloud Computing: Computing resources are delivered as on-demand services over the internet, as in environments such as Eucalyptus and OpenStack.
Mobile Computing: Applications and data are accessed from portable devices over wireless networks.
Grid Computing: Geographically distributed resources are pooled to work together on a common task.
7. Explain in detail about Eucalyptus and its components? BTL4
(Definition: 2 marks, Diagram: 3 marks, Concept explanation: 6 marks, Advantages: 2 marks)
Components:
Eucalyptus has various components that work together to provide efficient cloud computing services.
The Node Controller manages the lifecycle of instances and interacts with the operating system, hypervisor, and Cluster Controller. On the other hand, the Cluster Controller manages multiple Node Controllers and the Cloud Controller, which acts as the front-end for the entire architecture.
The Storage Controller, also known as Walrus, allows the creation of snapshots of volumes and persistent block storage over VM instances.
Eucalyptus operates in different modes, each with its own set of features. In Managed Mode, users are assigned security groups that are isolated by VLAN between the Cluster Controller and Node Controller. In Managed (No VLAN) mode, however, the root user on a virtual machine can snoop into other virtual machines running on the same network layer. The System Mode is the simplest mode with the least number of features, where a MAC address is assigned to a virtual machine instance and attached to the Node Controller's bridge Ethernet device. Finally, the Static Mode is similar to System Mode but provides more control over the assignment of IP addresses, as a MAC address/IP address pair is mapped to a static entry within the DHCP server.
Features of Eucalyptus
Eucalyptus offers various components to manage and operate cloud infrastructure. The Eucalyptus Machine Image is an example of an image, which is software packaged and uploaded to the cloud; when it is run, it becomes an instance.
The networking component can be divided into three modes: Static mode, which allocates IP addresses to instances; System mode, which assigns a MAC address and connects the instance's network interface to the physical network via NC; and Managed mode, which creates a local network of instances. Access control is used to limit user permissions. Elastic Block Storage provides block-level storage volumes that can be attached to instances. Auto-scaling and load balancing are used to create or remove instances or services based on demand.
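The auto-scaling behaviour just described, creating or removing instances based on demand, amounts to a simple control rule. A sketch only; the thresholds and bounds here are invented examples, not Eucalyptus defaults:

```python
# Toy auto-scaling rule of the kind described above: add an instance when
# average load is high, remove one when it is low (thresholds are examples).

def desired_instances(current, avg_load, high=0.75, low=0.25, min_n=1, max_n=10):
    """Return the instance count the scaler should converge to."""
    if avg_load > high:
        current += 1          # scale out under heavy demand
    elif avg_load < low:
        current -= 1          # scale in when idle
    return max(min_n, min(max_n, current))   # clamp to configured bounds

print(desired_instances(3, 0.9))  # heavy load -> 4
```

A load balancer then spreads requests across however many instances the rule keeps alive.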
Advantages of Eucalyptus
Eucalyptus is a versatile solution that can be used for both private and public cloud computing.
Users can easily run Amazon or Eucalyptus machine images on either type of cloud. Additionally, its API is fully compatible with all Amazon Web Services, making it easy to integrate with other tools like Chef and Puppet for DevOps.
Although it is not as widely known as other cloud computing solutions like OpenStack and CloudStack, Eucalyptus has the potential to become a viable alternative. It enables hybrid cloud computing, allowing users to combine public and private clouds for their needs. With Eucalyptus, users can easily transform their data centers into private clouds and extend their services to other organizations.
PART C
15 Marks
1. Explain in detail about Amazon AWS and its services? BTL4
(Definition: 2 marks, Diagram: 3 marks, Concept explanation: 10 marks)
Advantages of AWS
AWS provides the convenience of easily adjusting resource usage based on your changing needs, resulting in cost savings and ensuring that your application always has sufficient resources.
With multiple data centers and a commitment to 99.99% availability for many of its services, AWS offers a reliable and secure infrastructure.
Its flexible platform includes a variety of services and tools that can be combined to build and deploy various applications. Additionally, AWS's pay-as-you-go pricing model means users only pay for the resources they use, eliminating upfront costs and long-term commitments.
Disadvantages:
AWS can present a challenge due to its vast array of services and functionalities, which may be hard to comprehend and utilize, particularly for inexperienced users. The cost of AWS can be high, particularly for high-traffic applications or when operating multiple services. Furthermore, service expenses can escalate over time, necessitating frequent expense monitoring. AWS's management of various infrastructure elements may limit authority over certain parts of your environment and application.
Global infrastructure
The AWS infrastructure spans the globe and consists of geographical regions, each with multiple availability zones that are physically isolated from each other. When selecting a region, factors such as latency optimization, cost reduction, and government regulations are considered. In case of a failure in one zone, the infrastructure in other availability zones remains operational, ensuring business continuity. AWS's largest region, North Virginia, has six availability zones that are connected by high-speed fiber-optic networking.
To further optimize content delivery, AWS has over 100 edge locations worldwide that support the CloudFront content delivery network. This network caches frequently accessed content, such as images and videos, at these edge locations and distributes them globally for faster delivery and lower latency for end-users. Additionally, CloudFront offers protection against DDoS attacks.
AWS Service model
AWS provides three main types of cloud computing services:
Infrastructure as a Service (IaaS): This service gives developers access to basic building blocks such as data storage space, networking features, and virtual or dedicated computer hardware. It provides a high degree of flexibility and management control over IT resources. Examples of IaaS services on AWS include VPC, EC2, and EBS.
Platform as a Service (PaaS): This service removes the need to manage the underlying infrastructure and lets developers focus on deploying and managing applications. An example of a PaaS service on AWS is Elastic Beanstalk.
Software as a Service (SaaS): This service provides a complete product that is run and managed by the service provider and accessed by end users, typically through a web browser.
2. Explain in detail about OpenStack? BTL4
(Definition: 2 marks, Diagram: 4 marks, Concept explanation: 9 marks)
The OpenStack project is an open source cloud computing platform for all types of clouds, which aims to be simple to implement, massively scalable and feature rich. Developers and cloud computing technologists from around the world create the OpenStack project. OpenStack provides an Infrastructure as a Service (IaaS) solution through a set of interrelated services. Each service offers an application programming interface (API) that facilitates this integration. Depending on their needs, administrators can install some or all services.
OpenStack began in 2010 as a joint project of Rackspace Hosting and NASA. It is managed by the OpenStack Foundation, a non-profit corporate entity established in September 2012 to promote OpenStack software and its community. Now, more than 500 companies have joined the project.
The OpenStack system consists of several key services that are separately installed. These services work together depending on the user's cloud needs and include the Compute, Identity, Networking, Image, Block Storage, Object Storage, Telemetry, Orchestration, and Database services.
The administrator can install any of these projects separately and configure them standalone or as connected entities.
Figure 4.4 shows the relationships among the OpenStack services:
To design, deploy, and configure OpenStack, administrators must understand the logical architecture. OpenStack consists of several independent parts, named the OpenStack services. All services authenticate through a common Identity service. Individual services interact with each other through public APIs, except where privileged administrator commands are necessary. Internally, OpenStack services are composed of several processes.
All services have at least one API process, which listens for API requests, preprocesses them and passes them on to other parts of the service. With the exception of the Identity service, the actual work is done by distinct processes. For communication between the processes of one service, an AMQP message broker is used. The service's state is stored in a database. When deploying and configuring the OpenStack cloud, administrators can choose among several message broker and database solutions, such as RabbitMQ, MySQL, MariaDB, and SQLite.
Users can access OpenStack via the web-based user interface implemented by the Horizon Dashboard, via command-line clients, and by issuing API requests through tools like browser plug-ins or curl. For applications, several SDKs are available. Ultimately, all these access methods issue REST API calls to the various OpenStack services.
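Every one of those access paths first authenticates against the Identity service. A sketch of the JSON body for a Keystone v3 password-authentication request; the endpoint, user name, and domain are placeholders to be replaced for a real deployment:

```python
# Builds the request body for POST /v3/auth/tokens on the OpenStack
# Identity (Keystone) v3 API. All values below are placeholders.

import json

def password_auth_body(username, password, domain="Default"):
    """Keystone v3 password-method authentication payload."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            }
        }
    }

payload = json.dumps(password_auth_body("demo", "secret"))
# A client would POST `payload` to the Identity endpoint (for example
# http://controller:5000/v3/auth/tokens) and read the issued token from
# the X-Subject-Token response header for use in later service calls.
```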
The controller node runs the Identity service, Image service, Placement service, management portions of Compute, management portion of Networking, various Networking agents, and the Dashboard. It also includes supporting services such as an SQL database, message queue, and NTP.
Optionally, the controller node runs portions of the Block Storage, Object Storage, Orchestration, and Telemetry services. The controller node requires a minimum of two network interfaces.
The compute node runs the hypervisor portion of Compute that operates instances. By default, Compute uses the KVM hypervisor. The compute node also runs a Networking service agent that connects instances to virtual networks and provides firewalling services to instances via security groups.
Administrators can deploy more than one compute node. Each node requires a minimum of two network interfaces.
The optional Block Storage node contains the disks that the Block Storage and Shared File System services provision for instances. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. Administrators can deploy more than one block storage node. Each node requires a minimum of one network interface.
The optional Object Storage node contains the disks that the Object Storage service uses for storing accounts, containers, and objects. For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security. This service requires two nodes. Each node requires a minimum of one network interface. Administrators can deploy more than two object storage nodes.
The provider networks option deploys the OpenStack Networking service in the simplest way possible with primarily layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks to physical networks and relies on physical network infrastructure for layer-3 (routing) services. Additionally, a DHCP service provides IP address information to instances.
UNIT V
CLOUD SECURITY
SYLLABUS: Virtualization System-Specific Attacks: Guest hopping – VM migration attack – hyperjacking. Data Security and Storage; Identity and Access Management (IAM) - IAM Challenges - IAM Architecture and Practice.
PART A
2 Marks
1. What is a virtualization attack? BTL1
One of the top cloud computing threats involves one of its core enabling technologies: virtualization. In virtual environments, the attacker can take control of virtual machines installed by compromising the lower-layer hypervisor.
2. What are the different types of VM attacks? BTL1
Guest-hopping attacks, VM migration attacks, and hyperjacking.
3. What is guest hopping? BTL1
Guest-hopping attack: In this type of attack, an attacker will try to get access to one virtual machine by penetrating another virtual machine hosted on the same hardware. One possible mitigation of a guest-hopping attack is to use forensics and VM debugging tools to observe the security of the cloud.
4. What is a hyperjacking attack? BTL1
Hyperjacking is an attack in which an attacker takes control of the hypervisor beneath the virtual machines, for example by installing a rogue hypervisor, thereby gaining control over every VM running on the host.
6. What is cloud data security? BTL1
Cloud data security is the practice of protecting data and other digital information assets from security threats, human error, and insider threats. It leverages technology, policies, and processes to keep your data confidential and still accessible to those who need it in cloud-based environments.
7. What are the 5 components of data security in cloud computing? BTL1
- Visibility
- Exposure management
- Prevention controls
- Detection
- Response
8. What is cloud storage and its types? BTL1
There are three main cloud storage types: object storage, file storage, and block storage. Each offers its own advantages and has its own use cases.
9. What are the four principles of data security? BTL1
There are many basic principles for protecting data in information security. The primary principles are confidentiality, integrity, availability, and accountability (the CIA triad plus accountability); supporting principles include least privilege, separation of privilege, and least common mechanism.
10. What is the definition of IAM? BTL1
Identity and access management (IAM) ensures that the right people and job roles in your organization (identities) can access the tools they need to do their jobs. Identity management and access systems enable your organization to manage employee apps without logging into each app as an administrator.
11. What are the challenges of IAM? BTL1
- Lack of a centralized view
- Difficulties in user lifecycle management
- Keeping application integrations updated
- Compliance visibility into third-party SaaS tools
12. What is a principal in IAM? BTL1
A principal is a human user or workload that can make a request for an action or operation on an AWS resource. After authentication, the principal can be granted either permanent or temporary credentials to make requests to AWS, depending on the principal type.
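The allow/deny mechanics that govern what a principal may do can be sketched with a toy policy evaluator. This is a simplified illustration, not AWS's implementation: real IAM evaluation also handles NotAction, condition keys, resource-based policies, and more; the bucket and action names below are hypothetical.

```python
# Toy evaluator for an AWS-style identity policy document.
def is_allowed(policy, action, resource):
    """Return True if an Allow statement matches and no Deny statement does."""
    def matches(pattern, value):
        # Simplified wildcard handling: '*' or a trailing-'*' prefix match.
        return pattern == "*" or value == pattern or (
            pattern.endswith("*") and value.startswith(pattern[:-1]))

    decision = False
    for stmt in policy.get("Statement", []):
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        resources = stmt["Resource"] if isinstance(stmt["Resource"], list) else [stmt["Resource"]]
        hit = any(matches(a, action) for a in actions) and \
              any(matches(r, resource) for r in resources)
        if not hit:
            continue
        if stmt["Effect"] == "Deny":
            return False          # an explicit deny always wins
        decision = True           # remember the allow, keep scanning for denies
    return decision

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "arn:aws:s3:::demo-bucket/*"},
        {"Effect": "Deny", "Action": "s3:DeleteObject", "Resource": "*"},
    ],
}

print(is_allowed(policy, "s3:GetObject", "arn:aws:s3:::demo-bucket/report.csv"))     # True
print(is_allowed(policy, "s3:DeleteObject", "arn:aws:s3:::demo-bucket/report.csv"))  # False
```

Note the ordering rule the sketch preserves: even after an Allow matches, scanning continues so that an explicit Deny can override it.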
13. What are IAM tools? BTL1
Identity and access management (IAM), or simply identity management, is a category of software tools that allows businesses of all sizes to manage the identities and access rights of all their employees.
14. How many types of IAM roles are there? BTL1
IAM roles are of four types, primarily differentiated by who or what can assume the role: Service Role, Service-Linked Role, Role for Cross-Account Access, and Role for Identity Provider Access (federation).
15. What are IAM requirements? BTL1
IAM requirements are organized into four categories: Account Provisioning & De-provisioning, Authentication, Authorization & Role Management, and Session Management. For each category a general description of goals is provided, followed by a list of specific requirements that will help ensure the goals are met.
PART B
13 Marks
1. What is a VM migration attack? BTL1
(Definition: 2 marks, Concept explanation: 11 marks)
Virtual Machine Migration
The movement of VMs from one resource to another, such as from one physical host to another physical host, or from one datastore to another datastore, is known as VM migration. There are two types of VM migration: cold and live. Cold migration occurs when the VM is shut down. Live migration occurs while the VM is actually running. This capability is particularly useful if maintenance is required on part of the physical infrastructure and the application running on that infrastructure is mission-critical. Before the availability of live migration, managers were stuck with the choice of either causing a planned outage, which in some global corporations is not always feasible, or waiting and not taking the system down, which risks an unplanned outage in the future. Needless to say, neither of these choices is optimal. With live migration, a running system is copied to another system, and when the last bits of the running system's state are copied, the switch is made and the new system becomes the active server. This process can take several minutes to complete, but it is a great advantage over the two previous options.
Earlier versions of live migration were limited to moving VMs within the same data center. That restriction was removed, and it is now possible to perform live migrations between different data centers. This capability provides an entirely new set of options and availability, including the ability to move workloads from a data center that may be in the eye of a storm to another data center outside the target area. Again, these application moves can be accomplished without any application outages. There are several products on the market today that provide some form of live migration. These products and platforms may have guidelines and requirements to provide the capability. If an organization is considering live migration as an option, it is recommended to check with the virtualization software vendor to understand those requirements, particularly for the data center.
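The "copy while running, then switch over" behaviour described above is commonly implemented as iterative pre-copy. A toy simulation (all page counts and rates below are hypothetical) illustrates why the final downtime window is small: the stop-and-copy phase only has to move the pages dirtied during the last round, not the whole memory image.

```python
import random

def live_migrate(pages, dirty_rate, threshold, max_rounds=30):
    """Simulate pre-copy live migration.

    Each round copies the currently dirty pages while the guest keeps
    running and re-dirties a fraction of memory. When the dirty set is
    small enough (or rounds run out), a brief stop-and-copy transfers
    the remainder. Returns (rounds, total_pages_copied, stop_copy_pages).
    """
    rng = random.Random(42)          # fixed seed so the example is deterministic
    dirty = set(range(pages))        # round 0: all memory counts as dirty
    copied = 0
    rounds = 0
    while len(dirty) > threshold and rounds < max_rounds:
        copied += len(dirty)         # transfer this round's dirty pages
        rounds += 1
        # guest is still running: it dirties a random subset before next round
        dirty = {p for p in range(pages) if rng.random() < dirty_rate}
    stop_copy = len(dirty)           # downtime window: copy the last bits
    copied += stop_copy
    return rounds, copied, stop_copy

rounds, copied, stop_copy = live_migrate(pages=4096, dirty_rate=0.01, threshold=64)
print(rounds, copied, stop_copy)
```

If the dirty rate is higher than the transfer rate, the dirty set never shrinks below the threshold; real hypervisors then either throttle the guest or fall back to stop-and-copy, which is what the `max_rounds` cap stands in for here.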
VM Migration Attack
Migration works by sending the state of the guest virtual machine's memory and any virtualized devices to a destination host physical machine. Live migration has many security vulnerabilities. The security threats can be on the data plane, control plane, and migration plane.
Types of migration attacks
Virtualization introduces serious threats to service delivery such as Denial of Service (DoS) attacks, cross-VM cache side-channel attacks, hypervisor escape, and hyperjacking. One of the most sophisticated forms of attack is the cross-VM cache side-channel attack, which exploits shared cache memory between VMs.
2. Write a short note on guest hopping. BTL1
(Definition: 2 marks, Concept explanation: 11 marks)
Guest-hopping attack: one possible mitigation of a guest-hopping attack is to use forensics and VM debugging tools to observe any attempt to compromise a VM. Another possible mitigation is using a High Assurance Platform (HAP), which provides a high degree of isolation between virtual machines.
- SQL injection: to mitigate SQL injection attacks, remove all stored procedures that are rarely used. Also, assign the least possible privileges to users who have permission to access the database.
- Side-channel attack: as a countermeasure, it might be preferable to ensure that none of the legitimate users' VMs resides on the same hardware as other users' VMs. This completely eliminates the risk of side-channel attacks in a virtualized cloud environment.
- Malicious insider: strict privilege planning and security auditing can minimize this security threat.
- Data storage security: ensure data integrity and confidentiality, and ensure limited access to users' data by the CSP's employees.
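The least-privilege advice for SQL injection above is usually paired with parameterized queries. A minimal sqlite3 demonstration (the users table and its rows are hypothetical) contrasts the vulnerable and mitigated forms:

```python
import sqlite3

# In-memory demo database with a hypothetical schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t'), ('bob', 'hunter2')")

malicious = "alice' OR '1'='1"

# Vulnerable: string concatenation lets the quote break out of the string
# literal, so the injected OR clause matches every row.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + malicious + "'").fetchall()

# Mitigated: a parameterized query treats the entire input as one value.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(leaked))  # 2 -> both secrets leaked by the injection
print(len(safe))    # 0 -> no user is literally named "alice' OR '1'='1"
```

Parameterization complements, rather than replaces, the least-privilege and stored-procedure hygiene described above.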
What Is Hyperjacking?
Virtual Machine
A virtual machine is just that: a non-physical machine that uses virtualization software instead of hardware to function. Though virtual machines must exist on a piece of hardware, they operate using virtual components (such as a virtual CPU).
Hypervisors form the backbone of virtual machines. These are software programs that are responsible for creating, running, and managing VMs. A single hypervisor can host multiple virtual machines, or multiple guest operating systems, at one time, which also gives it the alternative name of virtual machine manager (VMM).
There are two kinds of hypervisors. The first is known as a "bare metal" or "native" hypervisor; the second is a "hosted" hypervisor. What you should note is that it is the hypervisors of virtual machines that are the targets of hyperjacking attacks (hence the term "hyperjacking").
Origins of Hyperjacking
In the mid-2000s, researchers found that hyperjacking was a possibility. At the time, hyperjacking attacks were entirely theoretical, but the threat of one being carried out was always there. As technology advances and cybercriminals become more inventive, the risk of hyperjacking attacks increases by the year.
In fact, in September 2022, warnings of real hyperjacking attacks began to arise. Both Mandiant and VMware published warnings stating that they found malicious actors using malware to conduct hyperjacking attacks in the wild via a harmful version of VMware software. In this venture, the threat actors inserted their own malicious code within victims' hypervisors while bypassing the target devices' security measures (similarly to a rootkit). Through this exploit, the hackers in question were able to run commands on the virtual machines' host devices without detection.
How Does a Hyperjacking Attack Work?
Hypervisors are the key target of hyperjacking attacks. In a typical attack, the original hypervisor is replaced via the installation of a rogue, malicious hypervisor that the threat actor controls. By installing a rogue hypervisor under the original, the attacker can gain control of the legitimate hypervisor and exploit the VM. By having control over the hypervisor of a virtual machine, the attacker can, in turn, gain control of the entire VM server. This means that they can manipulate anything in the virtual machines. In the aforementioned hyperjacking attack announced in September 2022, it was found that hackers were using hyperjacking to spy on victims.
Compared to other hugely popular cybercrime tactics like phishing and ransomware, hyperjacking isn't very common at the moment. But with the first confirmed use of this method, it's important that you know how to keep your devices, and your data, safe.
3. Explain cloud data security in detail. BTL4
(Definition: 2 marks, Concept explanation: 11 marks)
Cloud data security is the practice of protecting data and other digital information assets from security threats, human error, and insider threats. It leverages technology, policies, and processes to keep your data confidential and still accessible to those who need it in cloud-based environments. Cloud computing delivers many benefits, allowing you to access data from any device via an internet connection to reduce the chance of data loss during outages or incidents and improve scalability and agility. At the same time, many organizations remain hesitant to migrate sensitive data to the cloud as they struggle to understand their security options and meet regulatory demands.
Understanding how to secure cloud data remains one of the biggest obstacles to overcome as organizations transition from building and managing on-premises data centers. So, what is data security in the cloud? How is your data protected? And what cloud data security best practices should you follow to ensure cloud-based data assets are secure and protected?
Read on to learn more about cloud data security benefits and challenges, how it works, and how Google Cloud enables companies to detect, investigate, and stop threats across cloud, on-premises, and hybrid deployments.
Cloud data security protects data that is stored (at rest) or moving in and out of the cloud (in motion) from security threats, unauthorized access, theft, and corruption. It relies on physical security, technology tools, access management and controls, and organizational policies.
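One of the technology tools behind at-rest protection is integrity verification, so corruption or tampering is detected on read. The standard-library sketch below shows only that ingredient; real cloud storage services combine it with encryption (for example AES-256) under provider-managed keys, which is omitted here, and the key handling shown is illustrative only.

```python
import hashlib
import hmac
import secrets

# Tamper detection for a stored object using a keyed hash (HMAC).
key = secrets.token_bytes(32)      # per-object key; illustrative only

def seal(data):
    """Store data together with its keyed integrity tag."""
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return data, tag

def verify(data, tag):
    """Recompute the tag on read and compare in constant time."""
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

blob, tag = seal(b"customer-record-42")
print(verify(blob, tag))                   # True: data unmodified
print(verify(b"customer-record-43", tag))  # False: corruption detected
```

The constant-time comparison (`hmac.compare_digest`) matters because a naive `==` can leak how many tag bytes matched through timing.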
Why companies need cloud security
Today, we're living in the era of big data, with companies generating, collecting, and storing vast amounts of data by the second, ranging from highly confidential business or personal customer data to less sensitive data like behavioral and marketing analytics.
Beyond the growing volumes of data that companies need to be able to access, manage, and analyze, organizations are adopting cloud services to help them achieve more agility and faster times to market, and to support increasingly remote or hybrid workforces. The traditional network perimeter is fast disappearing, and security teams are realizing that they need to rethink current and past approaches when it comes to securing cloud data. With data and applications no longer living inside your data center and more people than ever working outside a physical office, companies must solve how to protect data and manage access to that data as it moves across and through multiple environments.
4. What are the challenges of cloud data security? BTL1
(Definition: 2 marks, Concept explanation: 11 marks)
As more data and applications move out of a central data center and away from traditional security mechanisms and infrastructure, the risk of exposure grows. While many of the foundational elements of on-premises data security remain, they must be adapted to the cloud.
5. What are the benefits of cloud data security? BTL1
(Definition: 2 marks, Concept explanation: 11 marks)
Greater visibility
Strong cloud data security measures allow you to maintain visibility into the inner workings of your cloud, namely what data assets you have and where they live, who is using your cloud services, and the kind of data they are accessing.
Easy backups and recovery
Cloud data security solutions and processes can automate and standardize backups, freeing your teams from manual processes and enabling faster recovery of data after an incident or outage.
6. Write a short note on IAM challenges. BTL1
(Definition: 2 marks, Concept explanation: 11 marks)
IAM Challenges
One critical challenge of IAM concerns managing access for diverse user populations (employees, contractors, partners, etc.) accessing internal and externally hosted services. IT is constantly challenged to rapidly provision appropriate access to users whose roles and responsibilities often change for business reasons.
Another issue is the turnover of users within the organization. Turnover varies by industry and function (seasonal staffing fluctuations in finance departments, for example) and can also arise from changes in the business, such as mergers and acquisitions, new product and service releases, business process outsourcing, and changing responsibilities. As a result, sustaining IAM processes can turn into a persistent challenge.
Access policies for information are seldom centrally and consistently applied. Organizations can contain disparate directories, creating complex webs of user identities, access rights, and procedures. This has led to inefficiencies in user and access management processes while exposing these organizations to significant security, regulatory compliance, and reputation risks.
To address these challenges and risks, many companies have sought technology solutions to enable centralized and automated user access management. Many of these initiatives are entered into with high expectations, which is not surprising given that the problem is often large and complex. Most often, initiatives to improve IAM span several years and incur considerable cost. Hence, organizations should approach their IAM strategy and architecture with both business and IT drivers that address the core inefficiency issues while preserving the controls' efficacy (related to access control). Only then will organizations have a higher likelihood of success and return on investment.
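The joiner/mover/leaver churn described above is exactly what automated provisioning addresses. A toy sketch (the role names and entitlements are hypothetical) shows the three lifecycle operations against a central directory; note that a role change replaces entitlements rather than accumulating them, which is where manual processes often fail.

```python
# Toy joiner/mover/leaver workflow for centralized, automated user access
# management. Roles and entitlement strings below are hypothetical.

ROLE_ENTITLEMENTS = {
    "finance-analyst": {"erp:read", "reports:read"},
    "finance-manager": {"erp:read", "erp:approve", "reports:read"},
    "contractor":      {"reports:read"},
}

directory = {}   # user -> set of entitlements (the central directory)

def provision(user, role):
    """Joiner: grant exactly the entitlements the role defines."""
    directory[user] = set(ROLE_ENTITLEMENTS[role])

def change_role(user, new_role):
    """Mover: replace old entitlements instead of accumulating them."""
    directory[user] = set(ROLE_ENTITLEMENTS[new_role])

def deprovision(user):
    """Leaver: revoke all access immediately on turnover."""
    directory.pop(user, None)

provision("asha", "finance-analyst")
change_role("asha", "finance-manager")     # promotion: analyst rights replaced
print("erp:approve" in directory["asha"])  # True
deprovision("asha")
print("asha" in directory)                 # False
```

Driving all grants from role definitions, rather than per-user edits, is what keeps access consistent as turnover accelerates.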
PART C
15 Marks
1. Explain in detail about IAM architecture. BTL4
(Definition: 2 marks, Diagram: 4 marks, Concept explanation: 9 marks)
Use IAM tools to apply appropriate permissions, and analyze access patterns and review permissions.
The Architecture of Identity Access Management
User Management: consists of the activities for the effective governance and management of the identity life cycle.
Centralization of Authentication and Authorization: alleviates the need to build custom authentication and authorization features into each application; it also promotes a loosely coupled architecture.
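The centralization idea can be illustrated with a minimal service that owns both credential checking and permission checks, so individual applications delegate both rather than re-implementing them. This is an illustrative sketch under simplifying assumptions, not a production design: a real deployment would use an established identity provider, multi-factor authentication, and token-based sessions, and the usernames and permissions here are made up.

```python
import hashlib
import hmac
import secrets

class CentralAuth:
    """One service that applications call for authentication and authorization."""

    def __init__(self):
        self._creds = {}   # user -> (salt, password hash)
        self._perms = {}   # user -> set of permission strings

    def register(self, user, password, perms):
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._creds[user] = (salt, digest)
        self._perms[user] = set(perms)

    def authenticate(self, user, password):
        """Verify credentials with a salted, slow hash and constant-time compare."""
        if user not in self._creds:
            return False
        salt, stored = self._creds[user]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, stored)

    def authorize(self, user, perm):
        """Check a permission without the application holding its own ACLs."""
        return perm in self._perms.get(user, set())

auth = CentralAuth()
auth.register("dev1", "correct horse", {"repo:push"})
print(auth.authenticate("dev1", "correct horse"))  # True
print(auth.authorize("dev1", "prod:deploy"))       # False
```

Because every application calls the same service, password policy and access policy are enforced in one place, which is the loose coupling the architecture aims for.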
2. What are the IAM practices in the cloud? BTL1
(Comparison table: 3 marks, Diagram: 3 marks, Concept explanation: 9 marks)
The maturity model takes into account the dynamic nature of IAM users, systems, and applications in the cloud and addresses the four key components of the IAM automation process:
- User Management, New Users
- User Management, User Modifications
- Authentication Management
- Authorization Management
Table 5-3 defines the maturity levels as they relate to the four key components.
By matching the model's descriptions of various maturity levels with the cloud services delivery model's (SaaS, PaaS, IaaS) current state of IAM, a clear picture emerges of IAM maturity across the four IAM components. If, for example, the service delivery model (SPI) is "immature" in one area but "capable" or "aware" in all others, the IAM maturity model can help focus attention on the area most in need of attention.
- Authorization management
- Compliance management
We will now discuss each of the aforementioned practices in detail.
Enterprise identity provider
Pros
Delegating certain authentication use cases to the cloud identity management service hides the complexity of integrating with various CSPs supporting different federation standards. Case in point: Salesforce.com and Google support delegated authentication using SAML. However, as of this writing, they support two different versions of SAML: Google Apps supports only SAML 2.0, and Salesforce.com supports only SAML 1.1. Cloud-based identity management services that support both SAML standards (multiprotocol federation gateways) can hide this integration complexity from organizations adopting cloud services. Another benefit is that there is little need for architectural changes to support this model. Once identity synchronization between the organization directory or trusted system of record and the identity service directory in the cloud is set up, users can sign on to cloud services using corporate identities, credentials (both static and dynamic), and authentication policies.
Cons
When you rely on a third party for an identity management service, you may have less visibility into the service, including implementation and architecture details. Hence, the availability and authentication performance of cloud applications hinges on the identity management service provider's SLA, performance management, and availability. It is important to understand the provider's service level, architecture, service redundancy, and performance guarantees. Another drawback to this approach is that it may not be able to generate custom reports to meet internal compliance requirements. In addition, identity attribute management can become complex when identity attributes are not properly defined and associated with identities (e.g., definitions of attributes, both mandatory and optional). New governance processes may be required to authorize various operations (add/modify/remove attributes) to govern user attributes that move outside the organization's trust boundary. Identity attributes will change through the life cycle of the identity itself and may get out of sync. Although both approaches enable the identification and authentication of users to cloud services, various features and integration nuances are specific to the service delivery model (SaaS, PaaS, and IaaS), as we will discuss in the next section.