Module 11

The CompTIA A+ Core 1 course covers cloud computing concepts including IaaS, SaaS, and PaaS, focusing on resource sharing, cloud model types, and virtual desktops. It prepares learners for the CompTIA A+ Core 1 (220-1101) certification by exploring features, benefits, and considerations of various cloud services. The course includes video content that delves into specific cloud service models and their applications in real-world scenarios.

CompTIA A+ Core 1: Cloud Computing Concepts

Cloud computing allows users to store and access data and programs over the internet instead of a
local hard drive. In this course, explore cloud computing concepts such as cloud model types,
resource sharing, measured services, and virtual desktops. Discover the features of Infrastructure as a
Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS); and learn how to
differentiate between public, private, hybrid, and community cloud infrastructures. Discover key
differences between internal and external shared resources and explore rapid elasticity, a feature
that allows for scalable provisioning. Finally, examine the considerations and benefits of using a
measured service or a metered service and investigate the types of virtual desktops and their
purposes. This course will prepare learners for the CompTIA A+ Core 1 (220-1101) certification.

Table of Contents

1. Video: Course Overview (it_csap121_11_enus_01)

2. Video: Infrastructure as a Service (IaaS) (it_csap121_11_enus_02)

3. Video: Software as a Service (SaaS) (it_csap121_11_enus_03)

4. Video: Platform as a Service (PaaS) (it_csap121_11_enus_04)

5. Video: Cloud Model Types (it_csap121_11_enus_05)

6. Video: Shared Resources and File Synchronization (it_csap121_11_enus_06)

7. Video: Rapid Elasticity (it_csap121_11_enus_07)

8. Video: High Availability (it_csap121_11_enus_08)

9. Video: Measured and Metered Services (it_csap121_11_enus_09)

10. Video: Virtual Desktop (it_csap121_11_enus_10)

11. Video: Course Summary (it_csap121_11_enus_11)

1. Video: Course Overview (it_csap121_11_enus_01)

 discover the key concepts covered in this course

[Video description begins] Topic title: Course Overview. [Video description ends]

Hi, I'm Aaron Sampson, and I've been a professional in the IT industry since 1995. [Video description
begins] Your host for this session is Aaron Sampson. He is an IT Trainer/Consultant. [Video description
ends] With a primary focus on technical training, I can be found most of the time producing and
delivering learning content centered around network infrastructure and services. I've also been
involved with extensive practical implementations in a variety of operational capacities, including
architecture and design, deployment and implementation, administration and management, and
various other technology-based roles.

Cloud computing allows users to store and access data and applications over the Internet, instead of
in a local environment. In this course, I'll explore cloud computing concepts such as cloud model
types, resource sharing, measured services, and virtual desktops. We'll discover features of the
primary models of cloud services, including Infrastructure as a Service or IaaS, Software as a Service
or SaaS, and Platform as a Service or PaaS, and learn how to differentiate between public, private,
hybrid, and community cloud services.

I'll discuss key differences between internal and external shared resources, and explore how cloud
services can accommodate rapid and scalable provisioning. Lastly, I'll discuss considerations and
benefits of using measured and metered services, and examine the types of virtual desktops and
their purposes. This course will help to prepare learners for the CompTIA A+ Core 1 or 220-1101
certification exam.

2. Video: Infrastructure as a Service (IaaS) (it_csap121_11_enus_02)

After completing this video, you will be able to describe features of the IaaS cloud computing service.

 describe features of the IaaS cloud computing service

[Video description begins] Topic title: Infrastructure as a Service (IaaS). Your host for this session is
Aaron Sampson. [Video description ends]

In this video, we'll take a look at what's known as Infrastructure-as-a-Service or IaaS. Now, quite
simply, this just means that you have your infrastructure provided by the cloud provider themselves.
Now, to put that into context a little bit, let's just imagine that you are starting a brand new business
and you need to set up all of the things necessary to create a networked environment. That would
include all of your client computers, all of the servers, and of course, all of the networking
components.

So that effectively refers to the infrastructure that you need. But in that type of scenario, if you are
going out and purchasing all of that yourself and implementing it within your own office space, then
of course, all of those resources reside within your own on-premises environment. With
Infrastructure-as-a-Service, all of those components still exist, but they are hosted components.

In other words, they exist at the cloud provider, not your own physical premises. But because those
resources are available to be used by other customers, this allows you to configure those same low-
level resources, such as servers, applications and networking components, within your environment,
but with little to no on-premises hardware.
You would probably still need client devices, but even those could, in fact, be the personal devices of
your users. So you could, in theory, set up a complete networked environment, very similar to what
you might have in an on-premises configuration but without the need to purchase any of that
hardware yourself. So quite literally, you simply pay a monthly fee to use the resources of someone
else over the Internet.

Now, specifically with respect to those low-level resources, what that translates into for most
environments is the use of virtual machines. So again, the hardware is physically present at the cloud
provider. But what I can do as a customer is to configure virtual machines on that hardware that from
my perspective, are exactly like the same servers that I might implement in my own environment, but
they simply reside at the cloud provider.

Likewise, I can use the physical network of the cloud provider to configure virtual networks, so that
my virtual machines appear to be on a network all by themselves and function specifically for me and
my environment. And similarly, you can also create and configure virtual appliances such as load
balancers and firewalls.

But again, the physical devices themselves reside at the cloud provider. You see an interface that you
are able to configure and use for your services, so again, from your perspective as a customer, it
might as well be a device that is physically present in your environment. It's just not physically there.

But it still provides the services that you need for your environment. Now, there are other
supplementary services that can be provided by the cloud provider over and above just the low-level
hardware. For starters, since it is a subscription-based service, you get detailed billing with respect to
which services you have used and how much you've used them, so you can break things down.
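The itemized, usage-based billing described above can be sketched in a few lines. This is a minimal illustration, not any real provider's pricing: the resource names and hourly rates here are invented for the example.

```python
# Illustrative sketch of itemized, usage-based IaaS billing.
# Resource names and hourly rates are invented for this example;
# real providers publish their own rate cards.

HOURLY_RATES = {
    "vm_small": 0.05,      # dollars per hour of VM runtime
    "storage_gb": 0.0001,  # dollars per GB-hour of storage held
    "lb_hours": 0.02,      # dollars per hour a load balancer runs
}

def itemized_bill(usage):
    """Return (line_items, total) for a dict of {resource: hours or GB-hours used}."""
    line_items = {res: qty * HOURLY_RATES[res] for res, qty in usage.items()}
    return line_items, sum(line_items.values())

# One month (720 hours) of a small VM, 50 GB of storage, and a load balancer.
items, total = itemized_bill({"vm_small": 720, "storage_gb": 50 * 720, "lb_hours": 720})
print(items)
print(f"total: ${total:.2f}")  # total: $54.00
```

The point of the breakdown is exactly what the transcript describes: the bill tells you which services you used and how much, per resource, rather than one opaque number.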

You get monitoring by the cloud provider. Because they, of course, have the actual hardware, they
are responsible for looking after the hardware, but you can still monitor what kind of resource usage
you have configured for your services. There's also log access so that you can see who is accessing
what at which times and for which purposes.

There is a very high level of security available because typically most cloud providers are very large,
very robust companies, and they essentially have to ensure that their own customers can have
secure environments. So a lot of resources are dedicated to the security of the provider, and that
simply trickles down, if you will, to the customers.

As mentioned, they can also provide additional services, such as load balancing and high availability,
because again, the providers have tremendous amounts of resources at their disposal so they can
create as much physical infrastructure as is needed, to ensure that you, as a customer, can configure
the load balancing and the high availability and all of the other services that you need.

So quite literally, from the perspective of any individual customer, the resources are effectively
unlimited. Now, clearly there is a limit at some point, but again, just as a single customer, you are
unlikely to encounter that problem. Now there's also features such as storage resiliency, so that
when you store your data in the cloud, it can be backed up automatically.

Now you can certainly still make your own backups if you want to, but in almost all cases, whenever
you store data in the cloud, the provider themselves will implement automatic backups in multiple
locations, so you almost never have to worry about completely losing your data. You can have data
replicated to other locations, so if, for example, two locations need the same copies of the same
data, you can replicate from point A to point B.
And of course, this facilitates much faster recovery in the event of failure, so that if you did lose, let's
say, a copy in location A, the replicated copy in location B or any of the backups can be used for
speedy recovery. Now, like anything, there are pros and cons, but on the plus side, there is no
procurement of hardware required.

Now again, to clarify, that's for things like the physical infrastructure. You certainly would still have to
have client systems to be able to connect to the cloud services and you might already have a physical
infrastructure in place, but to at least get started with cloud services at the infrastructure level, there
is nothing that you have to procure in terms of hardware.

You simply pay your subscription and then you pay as you go for whatever it is you configure. With
respect to the hardware itself and those very low-level resources, there is no management on your
side. Now, that's not to suggest that you don't have to look after the resources that you configure.

For example, if I create a virtual machine, certainly I have to look after it. But I do not have to look
after the underlying hardware. The physical servers hosting those virtual machines reside at the
provider. It is up to them to make sure that hardware is well maintained. Likewise, there are no
support requirements at that level.

Again, if someone is using the virtual machine and something goes wrong, then it's up to me as an
administrator to maybe address that. But at the hardware level, if, for example, something like a hard
drive fails, that's not up to me to support. That is the provider's responsibility.

And due to the tremendous amount of resources that are available at the provider, flexibility and
scalability are effectively unlimited from the perspective of a customer. I can add whatever I want
whenever I want, I can remove whatever I want whenever I want, and I can either increase or
decrease the capabilities of any given resource.

On the downside, there is a little bit of a lack of transparency. For example, I do see a virtual machine
that I can configure, of course, but I don't have a lot of control over what kind of hardware
configuration is being used. Now, in most cases, you can specify certain levels of performance.

In other words, I can say that I want the server that has X amount of memory and these types of
processing capabilities and various levels of storage, but I still don't really know the actual hardware
that is being implemented at the lowest levels. Sometimes the billing granularity might not be
detailed enough.

It might tell you that you used X amount of any given resource for X amount of time, but in some
cases, it just might not be detailed enough, depending on the services that you need. And perhaps
the biggest disadvantage in most cases is that it's a multi-tenant architecture.

In other words, I can sign on to a subscription with the cloud provider and I can configure my
resources as I see fit, but you can do the exact same thing, and unbeknownst to us, our two virtual
machines could be side by side, if you will, on the same physical server. So that does introduce some
security concerns for some environments.

But that said, again, I want to stress that the cloud providers dedicate significant resources to
ensuring that my subscription does not interfere with yours and vice versa. But it is still somewhat of
a concern for some environments. It's simply the fact that all of us are sharing those same physical
resources.
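The multi-tenancy concern can be made concrete with a toy placement sketch. This is a deliberately simple first-fit scheduler invented for illustration; real provider placement logic is far more sophisticated, but the effect is the same: VMs from different customers can land on the same physical host.

```python
# Toy illustration of multi-tenant placement: a provider schedules VMs
# from different customers onto whichever physical host has capacity,
# so two tenants can end up side by side on the same hardware.
# First-fit scheduling here is an invented simplification.

class Host:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.vms = name, capacity, []

    def place(self, tenant, vm):
        if len(self.vms) < self.capacity:
            self.vms.append((tenant, vm))
            return True
        return False

def schedule(hosts, tenant, vm):
    """Place a tenant's VM on the first host with free capacity."""
    for host in hosts:
        if host.place(tenant, vm):
            return host.name
    raise RuntimeError("no capacity available")

hosts = [Host("host-1", capacity=2), Host("host-2", capacity=2)]
print(schedule(hosts, "tenant-A", "vm-1"))  # host-1
print(schedule(hosts, "tenant-B", "vm-1"))  # host-1 -- different tenants, same host
```

Neither tenant sees the other through the virtualization layer, but both are depending on the isolation the provider enforces on that shared host, which is exactly why dedicated hosting is offered as a premium option.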

So again, that can be a concern, but that said, in many cases, you can also request dedicated
resources for whatever it is that you need. Now that will typically cost more, but you do then get the
assurance that no one else is using the same hardware that you are, so if that is a concern, you
certainly can investigate with the provider to see if they offer that type of dedicated service.

Ultimately, the ability to configure a fully functioning networking environment for just about any size
organization without having to actually purchase any physical infrastructure, is a tremendous plus in
most situations. So if you are looking to just get started with a new environment or you're looking to
expand your existing, then it's certainly worth looking into Infrastructure-as-a-Service as a cloud
services model.

3. Video: Software as a Service (SaaS) (it_csap121_11_enus_03)

After completing this video, you will be able to list features of a SaaS cloud computing service.

 list features of SaaS cloud computing service

[Video description begins] Topic title: Software as a Service (SaaS). Your host for this session is Aaron
Sampson. [Video description ends]

In this presentation we'll examine Software-as-a-Service or SaaS as a cloud services model, and you
can, in a way, think of Software-as-a-Service as the opposite end of the scale to Infrastructure-as-a-
Service. In other words, we are really just focusing on software. Now, to clarify that, if you are in an
on-premises environment that already has infrastructure, then of course you can implement your
own software onto that infrastructure.

But if you don't have any physical infrastructure in your environment, then you do have the option to
subscribe to an Infrastructure-as-a-Service model to implement the same type of infrastructure. But
in fact, depending on your needs, you might not need the infrastructure at all.

In other words, in some cases, there are some environments where all they need is the software. So
as such, the provider can host those applications for you. This provides software on demand because
you just sign in to your cloud services provider interface, and the software that you need is literally
available within there. So again, you don't need to install anything on any local servers or access
anything over any kind of local network.

If all you need is the software, then Software as a Service can provide it for you. So in essence, this
allows your clients to simply access the applications they need over the Internet. Now the end users
will still receive all of the same features and all of the updates, just as if it was running on their local
computers, but they also get the added advantage of service-level agreements that often have to be
in place with the cloud services provider, that determines the level of performance that is
acceptable.

So in other words, if I have a lot of users in a local environment, the more users I have, the more
demanding it might be on my local servers and performance might start to degrade.
But with a cloud services provider, the resources are effectively unlimited, so I can require that no
matter how many users are accessing this application, the service remains at a consistent level.
In addition, no local licenses are required on any individual user devices.

Now, that might need a little bit of clarification because in most environments, you do still have to
license the users as part of the cloud subscription. But to install any single local piece of software on
a client device, that typically requires a license on that device. This kind of configuration is not really
required when you implement Software as a Service.

You simply create your users within your subscription, and you can indicate that any given user is
able to use any given software, and then the fee is incorporated into your monthly total fee so you
don't have to install an individual license on each client computer.
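The per-user, per-application fee model just described can be sketched as a small cost calculator. The application names and monthly fees below are made up for the example; the point is that the bill is driven by user assignments in the subscription, not by licenses installed on devices.

```python
# Sketch of SaaS pay-as-you-go billing: the monthly charge is driven by
# how many users are assigned to each application, with no per-device
# licenses. Application names and per-user fees are invented.

PER_USER_MONTHLY_FEE = {
    "crm": 12.00,       # customer relationship management
    "hr": 8.00,         # human resources management
    "projects": 5.00,   # project management
}

def monthly_saas_cost(assignments):
    """assignments maps each app to the set of users allowed to use it."""
    return sum(PER_USER_MONTHLY_FEE[app] * len(users)
               for app, users in assignments.items())

cost = monthly_saas_cost({
    "crm": {"ana", "bo", "cy"},   # 3 users x $12
    "projects": {"ana", "dee"},   # 2 users x $5
})
print(f"${cost:.2f}")  # $46.00
```

Adding or removing a user from an application's set changes next month's fee automatically, which is the flexibility the subscription model is selling.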

Some common examples of applications that you might find in a Software as a Service
implementation include customer relationship management, financial management, human resources
management, collaboration management and project management applications. Now those are only
just some examples. There are literally hundreds or possibly thousands of applications that are
available from a cloud services provider, depending on who the provider is.

But this entire model offers many advantages, including tremendous flexibility and scalability,
because, again, quite literally, you can just pick and choose which applications you want out of a
library of possibly thousands. And really, they can support as many users as you need without having
to make any kind of change or adjustment to any kind of infrastructure because there is no
infrastructure.

You are just accessing the software. It's a pay-as-you-go billing model, so you simply pay based on the
number of users using any given application. There is less in-house management.

Now, I won't go so far as to say that there is none because you still will likely have users who need
help with a given application, but as far as the underlying infrastructure supporting that application is
concerned, none of that is your concern.

So you might still need to do some configuration to a degree and perhaps still provide some in-house
training, but things like updating the software are no longer your concern. That is up to the provider.
Any kind of security patches or fixes that need to be implemented, same deal. That is also up to the
provider. So those new features and updates are automatically applied as they're released, and you
never have to worry about that as the subscriber.

And perhaps the biggest advantage is the high accessibility. As long as your users have Internet
access, they can gain access to the application, and in most cases, they can use just about any type of
device to access that application as well. But there are still disadvantages of the Software as a Service
model.

For example, you are dependent on the third party for access to that application. So if, for example,
your users do not have Internet access for some reason, then they will not be able to gain access to
that application. There is still some concern with data security because the data itself that is being
accessed through the application might still be susceptible to other users.

There are certain configurations whereby the data can be stored locally on any individual device, but
there are ways to mitigate that as well. But it's also just a matter of the fact that data is being
accessed through any given application and any given user might not be aware of some of the
security concerns and inadvertently give access to someone else through that application by allowing
them to use their device or their account.

And there can be potential billing issues because it can be difficult to track which users are using
which services, and particularly in very large environments, if you have a lot of users who are very
dynamically changing the services that they use, it can be difficult to know who is currently licensed
or allowed to use which application and how often are they using it.

So there can be a few issues there, but in most cases, the advantages provided by the Software as a
Service model will outweigh the disadvantages. But both the advantages and disadvantages should
always be considered before fully implementing the Software as a Service model.

4. Video: Platform as a Service (PaaS) (it_csap121_11_enus_04)

Upon completion of this video, you will be able to list features of a PaaS cloud computing service.

 list features of PaaS cloud computing service

[Video description begins] Topic title: Platform as a Service (PaaS). Your host for this session is Aaron
Sampson. [Video description ends]

In this video, we'll take a look at the Platform-as-a-Service or PaaS model of cloud services, which for
all intents and purposes sits between Infrastructure-as-a-Service and Software-as-a-Service because
it combines both software and hardware services.

Now that said, if you are in an environment where you have already implemented some
Infrastructure-as-a-Service such as virtual machines, and then you implement some additional
software on top of that using Software-as-a-Service, that would really qualify as Platform-as-a-
Service. But more often, this refers to a development environment. This is what is typically meant by
the term platform.

It gives you a platform upon which to develop new software and new services, so it will still require
the implementation of infrastructure, including servers, networking, and operating systems on virtual
machines. But the idea is to ultimately develop applications and services that can be used.

So again, imagine that you are starting a brand new company and you want to develop software.
Well, if you're going to do everything in an on-premises environment, then you still need to acquire
all of the standard infrastructure, the servers, the networking equipment, and your client systems.
Then you also still need to acquire all of the development software itself.

For example, various types of development studios that can be used to build the new applications.
Then there's probably even additional infrastructure that is likely going to be required because
development environments need to simulate various circumstances, such as storing data locally
versus storing it on a network or even on the Internet.

So you need to be able to configure various types of situations that can simulate actual
circumstances, once fully implemented. So doing all of that simply represents quite a significant local
investment. So, like all cloud services, Platform-as-a-Service allows you to implement all of this
without requiring any local investment whatsoever.

Now, specifically, that refers to no local infrastructure. If you already have some in place, that's
certainly fine, but you can build a complete development environment entirely within the cloud. So
again, the idea is to have all of that development platform delivered by a cloud services provider. And
once again, it combines both Infrastructure-as-a-Service and Software-as-a-Service.

So you still implement the virtual machines that you need as servers. You still configure virtual
networks that are necessary to host those servers. Then on top of that, you add in the Software-as-a-
Service that is necessary, in this case, the development platform itself, the studios that your
developers use.

The clients themselves still access all of these services through the Internet, which allows the
developers to simply place all of their focus on application development instead of having to worry
about constructing and maintaining and managing the internal environment if all of this were to be
implemented in-house.

So once again, you just remove all of those concerns. Everything is implemented at the cloud services
provider, including the development environment itself, so they can literally get to work building
their applications within a matter of hours.

If you implement your subscription, let's just say on Monday of any given week, you could quite
literally be up and running fully functionally that same day, because it's really just a matter of
configuring the services you need through the cloud interface.

Now that said, it might still take a while to get everything 100% fully functional, but you can still get
started right away. As soon as your subscription is active you can begin implementing the resources
that you need for your developers to start building their applications. As for some of the key features
of PaaS, the services that are implemented effectively can complement your existing resources.

So as mentioned, you might already have some infrastructure in place in your local environment.
That's perfectly fine. But rather than having to purchase new equipment and new resources, you can
simply add the cloud services as the additional resource. So maybe you decide that you want to
upgrade or expand.

Well, you just don't have to purchase any of the infrastructure to do so. You still just keep the focus
on development. You extend your services into the cloud and you simply implement whatever
services are required through your subscription. And no additional local infrastructure is required to
complete your upgrade or your expansion.

As for some of the common PaaS offerings, they typically include processing infrastructure, storage
infrastructure, version control, and compiling and testing. So again, to try to put all of this together, if
you are a developer, then when you build an application, it typically requires backend components,
servers that can handle the processing and storage, such as databases, to store the data.
So the servers that are doing the processing and that storage would represent the infrastructure that
is required. So again, maybe this is present already in your on-premises environment, but if not, you
can implement this in the cloud.

Then, as I start to develop my software, I need to make sure that all of the changes that occur are
always tracked, particularly if you have a very large team, and this is where version control comes
into play, so that I can maintain a complete history of all changes from one version to the next, across
all developers who are making changes to the same overall product.

That in itself typically requires additional components within the software. Then compiling and
testing is putting everything together into an executable format, and, of course, testing it to make
sure that it functions as expected. So all of that can still be implemented through the software
services of the provider, in addition to the infrastructure.

So this is why I say, it combines both Infrastructure-as-a-Service and Software-as-a-Service, but
you're adding that extra component of building your own applications within here. The cloud service
provider is providing you with that complete platform. I can add more servers if I need more
processing.

I can add more storage at any point in time if I need larger databases or just more storage. I can add
in the versioning control software and the compiling and testing can then all be done within that
entire cloud-based environment because I can also create virtual machines on which to test the
installation of the software. So everything is there within the cloud environment.

In addition, not only are all of these resources available, but developers can also configure what kind
of underlying resources are needed within the specifications of the application. In other words, let's
imagine something like a web server farm where there might normally be three web servers that
handle some backend processing. That's perfectly fine.

You can define that within the specifications of the application itself so that three virtual servers are
required. But then, as the load increases on that specific application, you can also define that an
additional server be implemented to meet the increased demand. And that happens automatically.

The developers quite literally specify that they need a server with the following specifications,
whatever they might be. And if at a certain point the load becomes too heavy, that additional server
can be implemented through automated scripted processes to simply configure an additional virtual
machine to act as that extra server. And you can extend that type of approach into features such as
the storage as well.

If you need more storage, it can be allocated automatically. If you don't need as much, you can
deallocate the storage, and all of that can happen dynamically. Now again, that's up to the
developers with respect to how they define their applications and services, but that can all be done
so the developers themselves just don't have to worry about the infrastructure components. If they
need more storage, new storage is allocated, quite literally on demand.
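The scale-out and scale-in behavior described above amounts to a simple rule: declare a baseline, add capacity when measured load crosses an upper threshold, and release it when load falls below a lower one. The sketch below illustrates that rule; the baseline of three servers echoes the example above, while the thresholds and maximum are invented for the illustration.

```python
# Minimal sketch of a scale-out / scale-in rule for the web-farm example:
# the application's specification declares a baseline of three servers,
# and servers are added or released as average load crosses thresholds.
# Threshold values and the maximum are illustrative assumptions.

BASELINE_SERVERS = 3
MAX_SERVERS = 10
SCALE_OUT_LOAD = 0.75   # average utilization above this adds a server
SCALE_IN_LOAD = 0.30    # below this, an extra server is released

def desired_server_count(current, avg_load):
    if avg_load > SCALE_OUT_LOAD and current < MAX_SERVERS:
        return current + 1
    if avg_load < SCALE_IN_LOAD and current > BASELINE_SERVERS:
        return current - 1
    return current

print(desired_server_count(3, 0.90))  # 4: load spiked, add a server
print(desired_server_count(4, 0.20))  # 3: demand fell, release the extra server
print(desired_server_count(3, 0.20))  # 3: never drop below the declared baseline
```

The same pattern extends to storage: allocation grows on demand and shrinks when no longer needed, with the developers only declaring the rule, never touching the hardware.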

So that translates into many advantages, most notably the low infrastructure management that I just
mentioned. A developer can simply define in their applications that this kind of infrastructure is
required, and that's it. That infrastructure is allocated as per those definitions. The developers
themselves need not be concerned with managing that underlying infrastructure.

That's up to the provider. All of their services are still accessible through browsers. So just like you
and I as regular software users might use Software-as-a-Service, the developers can still access
everything that they need to develop their applications through a browser. And it's also still
implemented on a per-use model, so you only ever pay for the resources you actually consume.

On the downside, however, you are still dependent on the service provider and their level of service
availability. So if something does go down at the service provider, then of course it's up to them to
get it back up and running. But most providers these days dedicate substantial resources to ensuring
that they do maintain service availability, but you also have to consider the service resiliency with
respect to how quickly they are able to resume services in the event of a failure.

So while both of these considerations are very high on the priority list of most providers, they still
aren't really within your control. So that's always a consideration. And in some cases, there may be a
degree of service lock-in. Now, that's usually not to suggest that the provider will require you to sign
a contract for a certain number of years.

Rather, it just refers to the fact that if you do implement something within, let's just call them cloud
provider A, it's probably not going to be a particularly portable service. In other words, you probably
couldn't easily just pick it up and move it to provider B, so you can end up being locked in just
through the inherent complexity of the application itself, and there can be some ongoing support
requirements that should also be considered as well, which does come back to the service lock-in.

If your solution is very tightly integrated with a lot of other services through that provider, that,
again, can require you to remain with that provider for quite some time, which can result in some
ongoing support issues if, for example, the provider themselves discontinues a service on which your
application or service depends. So there are certainly some considerations in that regard as well. But
for any environment looking to implement a development platform without having to invest in all of
the local infrastructure, any type of Platform-as-a-Service solution can be a very viable way to go.

5. Video: Cloud Model Types (it_csap121_11_enus_05)

Learn how to differentiate between public, private, hybrid, and community cloud models.

 differentiate between public, private, hybrid, and community cloud model types

[Video description begins] Topic title: Cloud Model Types. Your host for this session is Aaron
Sampson. [Video description ends]

In this video, we'll examine the types of cloud models that are available, including public, private, hybrid, and community clouds. Now, we'll begin with the public cloud because
this is by far the most common type of cloud service available and it simply refers to the fact that the
resources of the cloud provider are publicly shared.

Anyone can make use of public cloud services. I can sign up with cloud provider A, you can sign up
with that exact same provider, or of course, a different provider, but within any given single provider
there would be what's referred to as multiple tenants. Now, you can just think of those as customers
but the idea is that if I'm using the resources of cloud provider A and you are also using those same
resources, we are considered to be tenants of the same provider.

Now the resources themselves are accessed over the Internet for all cloud subscribers. In other
words, none of the resources that you or I or anyone else makes use of are physically present within
your own organization. Everything resides at the public cloud provider, and we simply access those
resources using the public Internet.

So with respect to the design, if you will, we do see the public cloud provider at the top of this
graphic, then each individual tenant typically would represent a completely separate organization. So
tenant A would be a completely separate organization or company from tenant B, who would be yet
another completely separate organization or company from tenant C.

But we can all still access the resources of the cloud provider in a private fashion because it is in fact
incumbent upon the cloud provider to ensure that tenants are able to access resources in a private
manner, even though it is a public cloud. So in other words, my resources are not visible to you.

Likewise, your resources are not visible to me. So we may be using the same underlying hardware,
but all of the resources that are configured are maintained separately for each tenant. Now, the
private cloud can essentially be thought of as the exact opposite to the public in that the resources
are only privately shared.

Now, they are still accessible to other users, but you yourself entirely determine which users will be
able to access which resources, and the resources themselves can still be accessed over the Internet
but it could also be over a private or internal network. Now, the most common implementation of a
private cloud is when you have a very large data center that might be something like your corporate
headquarters, but then you might also have smaller branch offices.

So most of the infrastructure would be at that large data center. So for any given branch office that
needs a certain level of infrastructure or perhaps a software application, they can simply make use of
the infrastructure that exists at the primary data center, as opposed to having to implement new
infrastructure in those branch offices.

Now this, of course, will come down to the amount of infrastructure that you actually have in that
data center that can be made available to those branch locations. But on the assumption that you do
have enough, then you simply don't have to implement infrastructure at the branch offices.

You just allow them to access the resources in the corporate headquarters, in the exact same manner
as a public cloud tenant would access the resources of a public cloud provider. So again, that's
probably the most common implementation, but it really just comes down to the fact that the
private cloud is implemented and maintained internally. So, regardless of what the clients look like,
all of the resources exist within your own organization.

So what you don't have are clients out in the public Internet accessing those resources. It's all still the
internal users of your own organization.

Now, if you have a very large organization with multiple different divisions or multiple different
companies, that might be a little closer to a public cloud, but it's still all under the control of the
corporate headquarters or that primary data center, as to which resources are made available and
which entities are able to access those resources. So then, the hybrid cloud is effectively a
combination of both a public and a private, and you'll typically find a hybrid cloud when you need to
supplement your existing resources.

So let's go back to the private cloud for just a moment and let's just use the scenario whereby there
is a single primary data center acting as the corporate headquarters and maybe one additional
branch office, and that branch office is already using a private cloud configuration in that they are
using the resources of the primary data center. That's perfectly fine.

That, in and of itself, is just a private cloud. But maybe your organization is about to expand and two
or three more branch offices are going to be implemented. However, you don't have enough
resources at the primary data center to support those additional branch offices.

So you have two choices: you could implement local infrastructure in those new branch offices, but
since you already have cloud services configured, then the other option is to simply supplement your
existing primary data center with public cloud services. So rather than implementing infrastructure in
those branch offices, we just add more resources to the corporate headquarters by implementing a
public cloud service.

Then we simply make those resources available to those branch offices. So now you have both. The
hybrid cloud consists of both your existing private cloud and the public cloud subscription that you
used to supplement your own resources, to then make those supplemented resources available to
your new branch offices. Now, the community cloud is probably the least common but this typically
is implemented within business communities that have similar needs.

In other words, you want to share the infrastructure among a few different businesses, but it's still a controlled group of businesses. So you typically will implement a private cloud space within a public cloud provider, so you'll still get the resources from a public cloud, but they will be shared among just the members of the business community, effectively making this a private cloud space.

So in other words, it's not just your organization that is the tenant. It's your organization and let's say
three others that all have similar or common needs. So this makes it ideal for joint projects
because you can all share the cost of that subscription.

So again, in terms of the infrastructure, it is still a public cloud provider, but each organization would
be able to access the resources of that subscription, so that all of us can participate, we can all
communicate, and we can all ensure that we have access to the resources that are necessary to
complete the project.

Again, this is probably not nearly as common as any of the other models, but it's certainly something
that you still might encounter. Ultimately, it just comes down to what you need for your organization.
If you are simply looking to extend your own resources or to create an environment that essentially
has no infrastructure, for example, then you're probably looking at a public cloud.

If you already have a very robust internal infrastructure and you're looking to extend that to other
smaller locations, then you're probably looking at a private cloud, then the hybrid and the
community will certainly come down to very specific circumstances, but ultimately, there are models
to suit just about any type of need when it comes to cloud services.

6. Video: Shared Resources and File Synchronization (it_csap121_11_enus_06)


Upon completion of this video, you will be able to describe the differences between internal and
external sharing and file synchronization.

 describe the differences between internal and external sharing and file synchronization

[Video description begins] Topic title: Shared Resources and File Synchronization. Your host for this
session is Aaron Sampson. [Video description ends]

In this video, we'll take a look at cloud-based shared resources and file synchronization. But
beginning with shared resources, what this simply means is that any resource in any cloud
environment can be shared. It's just a matter of determining with whom that resource should be
shared.

And this begins with deciding if the resource is going to be shared internally or externally. Now, you
don't actually have to make the call of one over the other. You can implement both if you want to,
and I'll come back to that in greater detail in just a moment. But quite simply, if it's shared internally,
then only internal resources and/or users will have access.

If it's shared externally, then other users or applications outside of your organization can also have
access. So if we look at internal resources, again, you simply have any resource such as a document;
let's just go with that as a very simple example. Access to that document then, would be defined
within the confines of your cloud subscription.

In other words, it can be consumed only by other internal resources or users, or to put that another
way, there quite simply is no public access. No one outside of your organization is able to gain access
to that resource. Some other examples of internal resources might include data used only by cloud-
based virtual machines or perhaps a cloud database that drives a cloud-based application.

But regardless of what the resource is, in terms of access, if it is internally shared only, then only
entities within your cloud subscription can have access. But as mentioned, you can also share
resources externally, and just as the name suggests, these represent resources that can be accessed
by outside applications and services, but they can still be consumed by internal resources as well as
external.

So there is virtually no situation where you would only have external sharing. Now, that's not to
suggest that you might not create a resource that is only meant to be shared with external resources,
but anyone within the organization could still have access. So, this simply results in both public and
private access, and once again, this is quite simply up to you.

It depends on what your needs are, but both can be configured. Some examples of external services
might include media content that is delivered to your customers or perhaps business intelligence
that is consumed by some kind of application. Again, it doesn't really matter what the resource is.
These are just common examples.

But just to give you a better idea of an example of sharing with external services, you are in fact
watching it right now. You are watching a resource. This is media content that is being delivered to
you from the vendor. But while it's intended to be viewed by external clients, anyone within the
organization can also view and/or work with this resource.

So again, it's usually not just a matter of saying that this particular resource is only going to be
viewable or shareable with external entities. That might be its intention but anyone within the
organization would still have access, provided, of course, that they have permission to do so.

You can still control who has access, whether they are internal or external, by using permissions, but
again, the point is that sharing can be done internally or externally or both, depending on your
needs. Now, the other issue is synchronization, and this typically deals with documents that are
stored in the cloud. When you have any kind of cloud-based document, of course, users might want
to work with it on their local devices because they might not always be connected.

So I can save a copy of any given document on my local device and I can work with it when I am
disconnected, for example, when I'm traveling, but when I reconnect, then, whatever changes I have
made to the local copy can be synchronized back to the cloud-based copy so that, of course, both
copies remain consistent with each other.
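As a rough sketch of that idea (the `Copy` class and its simple version counter are illustrative inventions, not any real sync client), a last-writer-wins synchronization might look like this:

```python
from dataclasses import dataclass

@dataclass
class Copy:
    """One copy of a document (local or cloud), with a modification counter."""
    content: str
    version: int  # increments on every edit

def synchronize(local: Copy, cloud: Copy) -> str:
    """Last-writer-wins sync: whichever copy has the newer version
    overwrites the other, so both copies end up identical."""
    if local.version > cloud.version:
        cloud.content, cloud.version = local.content, local.version
    elif cloud.version > local.version:
        local.content, local.version = cloud.content, cloud.version
    return local.content  # both copies now match

local = Copy("draft v2 (edited offline)", version=2)
cloud = Copy("draft v1", version=1)
synchronize(local, cloud)
print(cloud.content)  # prints "draft v2 (edited offline)": the offline edits win
```

Real sync clients are considerably more careful (they detect conflicting edits on both sides rather than silently overwriting), but the reconcile-on-reconnect flow is the same.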

I don't want to have one version on my local device and a different version in the cloud, so
synchronization is typically implemented in all of these scenarios, if a resource such as a document is
able to be stored locally on user devices. Now, that is also something that can be configured to suit
your needs. For example, certain users might require the ability to store and work with local copies,
while other users maybe should not have the ability.

So that will come down to whatever your needs are as well, but it's certainly a feature that can be
enabled for anyone within the organization or outside of the organization if that document is also
externally shared. So it's always just a matter of which options will best suit your needs, but all cloud
resources that can be shared are shareable internally and/or externally. Then synchronization,
of course, is up to you, depending on the type of resource and who needs what type of access.

7. Video: Rapid Elasticity (it_csap121_11_enus_07)

Upon completion of this video, you will be able to describe what rapid elasticity is.

 describe rapid elasticity

[Video description begins] Topic title: Rapid Elasticity. Your host for this session is Aaron
Sampson. [Video description ends]

In this presentation, we'll examine a feature known as rapid elasticity, which may also be referred to
as dynamic provisioning. But this is effectively a feature of the cloud service provider as opposed to
something that you would implement in any type of solution.

And it's most commonly found in web applications, databases and storage services, but there are
certainly other types of services that can take advantage of rapid elasticity, but effectively, it simply
means that any given type of service will be able to respond very rapidly, as the name suggests, to
any kind of change in conditions when it comes to the demands on that service.

So if you think about just something like a database, for example, if you forget about cloud services
entirely and just think about your own in-house database, it probably exists on a particular server
and users, of course, connect to the database to access the information they need. That's fine, but
with respect to the server itself, how much memory does it have?

How much storage does it have? What kind of networking performance does it have? What kind of
throughput does it offer as more and more users connect to that database? In short, you might not
know. So in many cases, you quite literally have to guess when it comes to the resources of that
server.

So rapid elasticity effectively eliminates the need to estimate the resource usage for that database
server or any other type of resource. Now, it does so by implementing very scalable resource
provisioning. Now, in this context, provisioning can mean adding resources or taking them away,
which would officially be deprovisioning but it simply means that you can go in either direction.

We can add resources or we can take them away, and most notably, all of this is done automatically.
Adjustments are made on the fly, if you will, based on the workload on that particular service at any
particular point in time. So with that, there is some configuration that is required.

In other words, it doesn't just automatically figure out everything, but typically what you would do
would be to define thresholds. So in other words, if you cross a certain threshold in terms of the
number of users who are connected to a database, then you could increase the processing resources,
the memory resources or perhaps even access to redundant or replicated copies of that data.

Regardless of the resource, though, you simply define that if you cross this particular value, then
more of that resource will be allocated. Now, that's on the assumption that the demand is increasing.
But again, remember that the demand can also decrease at certain times, so you can do the exact
same thing in reverse.

As the demand drops you can effectively release some of those resources so that you aren't using
them for no reason. So once you configure these thresholds, effectively the process is quite
seamless. You don't have to do anything to respond to this change in condition. You just define how
much of any given resource will be allocated or deallocated based on the threshold that is being
crossed, and again, the direction; whether you are requiring more of a resource or less.
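A minimal sketch of that threshold logic follows; the function name, thresholds, and numbers are all made up for illustration (real providers expose this through their own autoscaling configuration rather than code like this):

```python
def scale_decision(connected_users: int, current_instances: int,
                   scale_up_at: int = 100, scale_down_at: int = 40,
                   min_instances: int = 1) -> int:
    """Return a new instance count based on per-instance load thresholds."""
    per_instance = connected_users / current_instances
    if per_instance > scale_up_at:
        return current_instances + 1          # provision one more instance
    if per_instance < scale_down_at and current_instances > min_instances:
        return current_instances - 1          # deprovision: release an instance
    return current_instances                  # within thresholds: no change

print(scale_decision(350, 3))  # prints 4: ~117 users each crosses the upper threshold
print(scale_decision(60, 3))   # prints 2: 20 users each falls below the lower threshold
```

Note the two thresholds run in both directions, matching the point above that provisioning and deprovisioning are both automatic once the values are defined.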

Now, on the subject of requiring more, when you are dealing with a cloud provider, from the
perspective of any single customer, resources are effectively limitless. Now, that's not to suggest that
you can't go overboard, but when it comes to cloud providers, they have tremendous resources at
their disposal.

So if you need more storage, for example, you can get more storage. If you need more processing,
you can get more processing. If you need more memory, you can get more memory. And it's very
unlikely that you'll encounter a situation where you simply cannot get more. So, of course, the
benefits from this include more responsive services.

Not only can they perform better when they need to, but you can reclaim those resources when they
aren't necessary, so in fact, you don't have to use up as many resources as you might if you had to
guess in something like an in-house scenario.

Your applications and services are much more flexible, you don't really have to worry about how they
will respond to changes in conditions, but it certainly could be the case that it will come at an
additional cost because in some cases it might actually be an add-on as a feature, but you also have
to bear in mind that you are getting the benefits of more responsive and flexible applications, which
will likely better suit the needs of your customers.

And since resources can be deprovisioned, it would certainly cost less than implementing a very
static solution where you overestimated or you allocated too much in terms of resources, because
then, you're simply paying for nothing. There are some other considerations, including the fact that
the requests for new resources can come from multiple sources, at any time, from anywhere.

So in other words, you might need to allocate and/or deallocate several different resources at the
same time. So this does introduce a little bit more of a management overhead. You might need to
monitor the resources that are being requested so that you get a better idea as to what is happening
under normal circumstances, on a day to day basis.

In other words, can you identify peaks where additional resources do tend to need to be allocated
and then vice versa? Are there times of day when they can be deallocated? Because this can allow
you to build a little bit of an audit trail, if you will, for billing purposes. In other words, you can match
up the areas where the increased resources were allocated to increases in your monthly payment.

And then similarly, of course, you would want to see the cost going down when those resources were
deallocated. In other words, you don't want to be paying for something that you aren't using. But that's the whole idea behind rapid elasticity: to ensure that you are only paying for what you need when you need it.

So in most cases, you do want to implement rapid elasticity, but it would come down to what the
service is and how often it's being used. If, for example, it's a fairly small service that just isn't being
accessed by that many people, then you might not see a lot of benefit. But if it's a very large, very
robust service that has a lot of dynamic situations whereby increases and decreases on the workload
change often, then rapid elasticity should certainly be considered as a solution to deal with those
changing conditions.

8. Video: High Availability (it_csap121_11_enus_08)


Upon completion of this video, you will be able to recognize the benefits of high availability cloud
solutions.

 recognize the benefits of high availability cloud solutions

[Video description begins] Topic title: High Availability. Your host for this session is Aaron
Sampson. [Video description ends]

In this presentation, we'll provide an overview of high availability for cloud-based services, which
refers to the ability of any type of service to operate continuously without a failure. Now, I'm going to
clarify that in just a moment with respect to what continuously operate really means, but this is
achieved by implementing redundancy.

Quite simply, you have more than one component involved in any given solution, which provides the
ability for the system or the service as a whole to survive the failure of any given component. Now,
high availability does generally refer to the amount of time any given service is available, but you also
have to bear in mind the amount of time it takes the system or the service to respond to your client
requests.

For example, everything in the system might be up and running but if your clients or customers are
having to wait an unacceptable amount of time to receive any kind of result, then really that system
cannot be considered to be highly available. So again, this is where the multiple components come
into play. But as mentioned, I'll get into the overall architecture in greater detail in just a moment.

But there are also some other considerations with respect to implementing high availability, but
most notably, it is the percentage of uptime. Now, I think you'll find that it's very rare for any type of
service provider to advertise that they would have 100% uptime because it just isn't feasible in the
real world. There are simply too many things that could go wrong with any given system or solution.

But you will find, and we'll come to this in a moment as well, that the advertised availability uptime
percentages are very high. But you do also have to consider how long it takes any given system or
service to recover in the event that it does fail. And of course, you also have to factor in standard
maintenance.

And in fact, your clients would really want to be assured that you are performing maintenance. So if
you were to advertise 100% uptime, then you really wouldn't leave yourself any time to perform
regular maintenance. So typically, values very close to 100% are offered or advertised, but again, I
would say that it would be very unlikely that you would ever see 100% being offered as the
percentage of uptime.

So with respect to the factors that might affect your availability, most notably, you have to try to
identify any single point of failure. In other words, if that single component, regardless of what it is,
were to fail, it would take down the entire system. Now, just as a very simple example, if you have,
let's just say, a database on a server and that server itself were to fail, then clearly that is a single
point of failure.

But, as we'll see in just a moment, you can implement multiple servers to enhance the availability.
But let's take it a step further and just imagine that if both of those servers were plugged into the
same wall outlet and you lost power or even just that circuit was tripped, then you would still lose
both of those servers.

So the power in that example becomes a single point of failure. Now, when you do have multiple
components, the process of one component failing and another assuming the services of the
component that did fail, is referred to as failover.

So going back to my example of the database server, you could implement two database servers, and
if one were to fail, then the other server assumes the services of the failed system. But of course, this
needs to be detected.

So there has to be some kind of a mechanism whereby the secondary or standby server becomes
aware of the failure of the primary server, and that can be implemented in a couple of different ways
but ideally, it should be automatic so that no kind of manual intervention has to take place for the
secondary server to start responding to client requests.
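The automatic detection just described is commonly built on heartbeats: the primary periodically signals that it is alive, and the standby promotes itself if the signals stop. Here is a hedged sketch; the class, timeout, and method names are all hypothetical, not any particular clustering product:

```python
import time

class StandbyMonitor:
    """A standby server that promotes itself if no heartbeat arrives in time."""
    def __init__(self, timeout_seconds: float = 5.0):
        self.timeout = timeout_seconds
        self.last_heartbeat = time.monotonic()
        self.active = False  # standby until a failure is detected

    def heartbeat(self) -> None:
        """Called whenever the primary reports that it is alive."""
        self.last_heartbeat = time.monotonic()

    def check(self) -> bool:
        """Promote to active if the primary has been silent too long."""
        if time.monotonic() - self.last_heartbeat > self.timeout:
            self.active = True  # automatic failover: start serving requests
        return self.active

monitor = StandbyMonitor(timeout_seconds=0.05)
print(monitor.check())   # False: primary still within its heartbeat window
time.sleep(0.1)          # primary misses its window
print(monitor.check())   # True: standby takes over with no manual intervention
```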

Otherwise, there could be quite a delay between the time that the primary server fails and someone
realizes it and then manually starts up the secondary server. Now, with respect to the multiple
systems that I just referred to, these are known as clusters, and these are quite simply just groups of
computers that all provide the same service.

So once again, just going back to those two database servers, they would both have identical copies
of the same databases on them, so that if that primary server does go down, then all of the exact
same information is on that secondary system. And of course, there would be some kind of a process
whereby any changes that are made to the primary server are replicated to the secondary server so
that when it does fail, there is as little data as possible missing, if you will, on the secondary server.

But high availability clusters aren't limited to just having multiple servers. You can also implement
load balancing, which allows you to automatically distribute client requests across many different servers. In
fact, in some cases you can have 5, 10, 15, 20 or more servers, so the load balancer itself becomes
the initial point of access from the perspective of all client requests.

They all arrive at the load balancer, then the load balancer determines where that particular request
should be sent based on how busy each server is behind the load balancer. So if, for example, there
are five servers, the load balancer will do its best to distribute the traffic evenly, about 20% each,
across all five servers.
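A least-busy routing decision like the one just described can be sketched in a few lines; the server names and in-flight counters here are purely illustrative:

```python
class LoadBalancer:
    """Routes each incoming request to the least-busy backend server."""
    def __init__(self, server_names):
        self.active_requests = {name: 0 for name in server_names}

    def route(self) -> str:
        """Pick the server with the fewest in-flight requests."""
        target = min(self.active_requests, key=self.active_requests.get)
        self.active_requests[target] += 1
        return target

    def finish(self, name: str) -> None:
        """Called when a server completes a request, freeing capacity."""
        self.active_requests[name] -= 1

lb = LoadBalancer(["db1", "db2", "db3", "db4", "db5"])
for _ in range(10):
    lb.route()
print(lb.active_requests)  # ten requests spread evenly: 2 per server
```

With five equally loaded servers, each ends up with roughly 20% of the traffic, exactly the even distribution described above.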

Now, in that example, if you only have a single load balancer, then the load balancer itself becomes a
point of failure. So if, for example, it became overwhelmed with too many requests, then that could
also take down the service. So in fact, you can have multiple load balancers in a tiered architecture,
to even further distribute the load.

But this does increase the complexity, and now you could end up having to deal with too many
points of failure, so there is a little bit of a balancing act here, if you will, but it's going to depend on
how robust the service is and how much overhead you want to deal with. But ultimately, a load
balancer is typically recommended in most situations because it does allow you to place many more
servers behind a load balancer so that you are in fact able to survive the failure of more than one of
them at any given time.

Now, another component of high availability is resiliency, which refers to the system's ability to
recover, and we did touch on this earlier and I mentioned I would come back to this, but it's not just
a matter of ensuring that the system is up as much as possible. But if and/or when it does fail, how
quickly can you get it back up and running?

So again, there's a little bit of a balance here because the more components you have, the less likely
it might be for the entire system as a whole to fail. But at the same time, the more components you
have, the more complex the system becomes as a whole and the more management and overhead
you introduce.

So as mentioned, there is a little bit of a balance there, but with respect to how much uptime you
would generally want to see or expect from a provider, as mentioned, it's typically not 100%, but it is
in fact very high; in many cases as much as 99.999% of the time.

This is often referred to as five 9s, and that level of uptime allows for only about five minutes of downtime per year. So typically, that's what customers would be looking for in terms of uptime, but at the same time, you as a provider are limiting yourself to only a few minutes per year for things like maintenance and just whatever might happen that is simply unexpected.
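The arithmetic behind those uptime percentages follows directly from the fraction of a year they leave for downtime; a quick sketch:

```python
def allowed_downtime_minutes_per_year(uptime_percent: float) -> float:
    """Minutes of downtime per year permitted by a given uptime percentage."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return minutes_per_year * (1 - uptime_percent / 100)

# 99% allows ~87.6 hours; 99.9% ~8.8 hours; five 9s only ~5.3 minutes
for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% uptime allows {allowed_downtime_minutes_per_year(pct):,.1f} min/year")
```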

So as mentioned, there is always a little bit of a balancing act when it comes to high availability. And
finally, there are two different models that are generally used when it comes to high availability. One
is referred to as an active-active deployment and the other an active-standby deployment.

Now, this will depend on the service and really the type of infrastructure that you're looking to
implement but let's just go with the example of multiple servers that are all servicing the same type
of requests, such as a database server. An active-active deployment means that if you have even just
two servers, both of them are actively servicing client requests.

Let's just say for the sake of argument that each one responds to about 50% of the requests, even
without a load balancer. OK? That's perfectly fine. However, in that scenario, if one server were to go
down, then the other server must be capable of assuming 100% of the workload because of course,
it was only servicing about 50% when both servers were up.

So each member of the cluster needs to be fully capable of supporting the entire workload on its
own in an active-active deployment. Now, that's still the case with an active-standby, because in this
configuration, there is only one server that is actively servicing the client requests.

So by definition, it must be capable of handling 100% of client requests. But then the other server is
simply in standby mode, so it does not do anything unless the primary server fails. Now, in both
models, active-active and active-standby, you can have more than two servers. That's perfectly fine.
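The difference between the two models can be sketched as a simple selection rule; the function and server names here are illustrative only, not how any real cluster manager is configured:

```python
def serving_servers(model: str, servers: list[str], failed: set[str]) -> list[str]:
    """Which servers answer client requests under each deployment model.

    active-active:  every healthy server shares the workload.
    active-standby: only the first healthy server serves; the rest wait.
    """
    healthy = [s for s in servers if s not in failed]
    if model == "active-active":
        return healthy
    if model == "active-standby":
        return healthy[:1]  # a standby is promoted only when those ahead of it fail
    raise ValueError(f"unknown model: {model}")

cluster = ["db1", "db2"]
print(serving_servers("active-active", cluster, failed=set()))    # ['db1', 'db2']
print(serving_servers("active-standby", cluster, failed=set()))   # ['db1']
print(serving_servers("active-standby", cluster, failed={"db1"})) # ['db2']
```

In both models, any server that ends up serving must be able to carry the full workload on its own, which is the sizing point made above.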

But again, it really comes down to what you feel the servers themselves are capable of doing. So if
you had, let's say, ten servers, but you don't think that there's any one of them that could handle the
entire workload, then you should probably go with an active-active deployment, because it's unlikely
that most of them would fail at the same time.

But with an active-standby, there is only ever one active system. You can still have nine of them on
standby, which certainly gives you a lot of security in terms of knowing that there will always be a
standby server, but there is still only one active server.

So again, it really comes down to what you feel is going to suit your needs the best, and perhaps just
the number of servers that you have that you want to dedicate to this service would be a
consideration. But in either case, you still get redundancy. There is still more than one server capable
of servicing client requests, and ultimately, that's what high availability is all about.

Once you have that implemented, it's really just a matter of determining how robust you want your
solution to be. Again, is it two servers? Is it ten servers? Are there load balancers? Are there multiple
load balancers? All of that is up to you. But for any type of service that would be considered to be
mission critical, then you should most certainly investigate implementing high availability solutions.

9. Video: Measured and Metered Services (it_csap121_11_enus_09)

After completing this video, you will be able to list considerations for using measured and metered
services.

 list considerations when using measured and metered services

[Video description begins] Topic title: Measured and Metered Services. Your host for this session is
Aaron Sampson. [Video description ends]

In this video, we'll examine measured and metered services, and we'll talk about the differences
between those two in just a moment, but with respect to anything that is measured, exactly as the
name indicates, it is quite simply an indication of how much something is being used.

So with respect to cloud services, if any given user accesses a service or a feature, then that usage
can be measured, for example, in terms of how long the user used that service. Now that is
certainly related to metering, but where measuring is concerned with how much of
something is used, metering is more concerned with how much you actually pay.

So the two are directly correlated. If you think about something like the power meter on your
home, that is what determines what your bill actually is each month, so that's metering. If you were
then to track when the lights were turned on and how often, that would be measuring. So clearly,
measuring feeds into metering: you used this much of a service, which is measuring; therefore, your
bill is this much, which is metering.
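The measuring-versus-metering relationship can be expressed as a tiny calculation: measuring collects the raw usage numbers, and metering applies a price to them. The usage figures and rates below are made-up illustration values, not any provider's actual pricing.

```python
# Measuring = recording how much of each resource was used.
# Metering  = applying a rate to those measurements to produce a bill.
# All usage figures and rates here are invented illustration values.

measured_usage = {            # what the measurement tools recorded
    "storage_gb_hours": 500,
    "cpu_hours": 120,
    "bandwidth_gb": 80,
}

rates = {                     # hypothetical price per unit
    "storage_gb_hours": 0.002,
    "cpu_hours": 0.05,
    "bandwidth_gb": 0.09,
}

def meter(usage, rates):
    """Turn measured usage into a metered (billed) amount per resource."""
    return {resource: round(amount * rates[resource], 2)
            for resource, amount in usage.items()}

bill = meter(measured_usage, rates)
total = round(sum(bill.values()), 2)
print(bill)   # per-resource charges
print(total)  # the number that shows up on the monthly invoice
```

The `measured_usage` dictionary plays the role of tracking when the lights were on; `meter` plays the role of the power meter that turns that into a bill.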

Now, with respect to measuring and metering, both really come down to the resources that your
servers consume, such as storage, processing power, and network bandwidth. Once again, measuring
means capturing the actual values: how much storage is required, how much processing power, and
how much network bandwidth is being consumed by those servers.

This then translates to the metered value, which determines your overall bill each month. In
terms of some of the key considerations for a measured service, it allows you to better control your
resource usage. And when you are dealing with cloud services, the cloud provider will
typically make these metering capabilities and measurement tools available to you by default.

So whereas some of these services might not be as easy to measure or meter in an internal
environment, it's almost always an option within a cloud-based environment, because, of course, it
determines your overall bill each month. So you get the usage details, which again is the
measurement which translates to the service charge tracking or the metering.

And this, of course, directly translates into several advantages, including better resource
management. You have a much better idea as to which services require the most resources and the
least. So this allows you to, in turn, optimize your resource utilization, and you can also generate
reports so that you can see historical information. You can track trends and help, in fact, determine
what might be required as you move forward.

So in fact, there are several types of reports available that can help you to gather insights, including
reports on overall server-based resources, reports on storage resources, user licensing, application
metering and then chargeback and showback reports. Now, chargeback versus showback is
somewhat similar to measuring versus metering.

With chargeback, each department or user is actually billed for the resources they consumed,
whereas showback simply reports those costs back for visibility without anyone being billed. As an
analogy, if you were to think about something like your mobile phone bill, if you have a family
package, it might break the bill down by each individual user, which would then show you how
much, in terms of a percentage, each user is responsible for. So again, it's just to break things down
and show you the details on a service-by-service or resource-by-resource basis.
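The phone-bill analogy above can be sketched as a small showback calculation: one shared bill broken down per user, as amounts and percentage shares. The users and charges below are made-up illustration values.

```python
# A rough sketch of the showback idea from the phone-bill analogy:
# break one shared bill down per user, as amounts and percentages.
# The users and charges below are invented illustration values.

per_user_charges = {          # hypothetical measured cost per family member
    "alice": 45.00,
    "bob": 30.00,
    "carol": 25.00,
}

def showback(charges):
    """Return each user's amount and percentage share of the total bill."""
    total = sum(charges.values())
    return {user: {"amount": amount,
                   "percent": round(100 * amount / total, 1)}
            for user, amount in charges.items()}

report = showback(per_user_charges)
print(report["alice"])  # {'amount': 45.0, 'percent': 45.0}
```

A chargeback system would take the same per-user breakdown one step further and actually invoice each user or department for their share.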

So ultimately, all of this can simply help you to make better decisions moving forward, by just getting
a better picture as to what's happening overall.

This can allow you to correlate the use of your resources to your business needs and, of course, make
adjustments accordingly. If you discover that a particular resource is not being used all that
often, then maybe you can reclaim some of its resources, allocate them somewhere else,
consolidate them into some other type of solution, or come up with a brand new solution if that's
what's indicated.

But of course, it would be difficult to make these decisions if you didn't know what was being used
and how much it was being used in the first place. So this is a tremendous advantage of all cloud-
based services, whereby you can simply be informed that this service is being used this much and it
is responsible for this much of your bill. Once you know that information, you can simply adjust
accordingly for better management and better utilization overall.

10. Video: Virtual Desktop (it_csap121_11_enus_10)

Upon completion of this video, you will be able to describe the types of virtual desktops and their
purposes.

 describe the types of virtual desktops and their purposes

[Video description begins] Topic title: Virtual Desktop. Your host for this session is Aaron
Sampson. [Video description ends]

In our final presentation for this course, we'll take a look at virtual desktops, which refer to server-
based virtual machines. Now in that regard, the server can be any particular type of server in any
location, and almost all servers these days support the ability to run virtual machines.

But with respect to the type of system being run, in a virtual desktop environment these are
actually client or desktop-level systems. Users then access those virtual machines through some kind
of remote connectivity software. Note that you aren't establishing a connection to the server and
then accessing the virtual machine on that server; you are connecting directly to the virtual
machine itself.

So from a user's perspective, the server itself is entirely transparent. You only see the virtual desktop
on that server. Now, once you make the connection, you can still interact with your own local device.

For example, you may be able to copy files from the virtual desktop to your actual desktop, or you
might be able to print documents from the virtual desktop to your own locally installed printer. This
also makes the virtual desktop accessible from a broad variety of devices, even if you just consider
one person.

They might work at a desktop system in the office and still connect to this virtual desktop, but then
they might have a laptop that they use when they travel or just take home with them and they could
still make the same connection to the same virtual desktop to be able to access the same
applications or the same services on that desktop.

They could even use mobile devices such as tablets or phones. So regardless of the local device
they're using, they can always establish this connection through to the virtual desktop and still see
the exact same environment. So with respect to some key features, there, of course, does need to be
virtualization software on the server that hosts these virtual machines.

But as mentioned earlier, almost every type of server these days offers the ability to support
virtualization, or if it doesn't support it inherently, you just have to install that software. Again, the
virtual machines run on the servers themselves, and a server can host as many desktops as its
resources allow; it just comes down to the resource requirements of the virtual machines running
on that system.

Now, if you implement this within your own internal environment, this is typically referred to as a
virtual desktop infrastructure, or VDI. But you can also set up the same kind of scenario in a cloud-
based environment, in which case it's referred to as Desktop-as-a-Service, or DaaS. Ultimately, it
doesn't really matter from the perspective of a user.

They still just establish a remote connection using some kind of remote connectivity client directly to
the virtual machine on those servers. So again, from the perspective of the client, it doesn't really
matter where the server is. Whether it's in the local environment or whether it's in the cloud. They
simply establish the remote connection and they see the virtual desktop.

So, for lack of a better word, the server that hosts the virtual desktop is bypassed, so to speak. Now,
there are a couple of different configurations that you can implement when considering a virtual
desktop infrastructure. The first is what's known as dedicated or persistent, and then the other is
referred to as stateless or non-persistent.

Now, dedicated simply means that you, as a user, will establish a connection through to the virtual
desktop, and you will always end up at the same virtual desktop so you can kind of think of it as your
particular environment. You can make changes to the desktop, and when you log back on again, you
would see those changes because it's the same desktop.

So of course, that's why it's called dedicated. You always get the same environment. Stateless,
however, refers to a pool of virtual machines that would likely all be configured identically and all
running the same application or service. So, from the perspective of a user, it doesn't really matter
which desktop they access.

They just make their connection and they end up on any one of the desktops. But if they connect
again later on, it could be a completely different desktop. So, it typically just comes down to what the
users need to use these virtual desktops for.

If it is for a very particular service that is exactly the same for everyone, then it's probably a little
simpler to create a stateless or a non-persistent pool, because the server will simply allocate any one
of the virtual machines to any given user, and it can balance the load a little bit, if you will.

But when you make connections to a dedicated or persistent system, then you always get that
same one, and one user could end up using more resources than another, so it might not be as
balanced an environment. But it always comes down to your needs. If user A always
needs to connect to virtual desktop A, then that's a dedicated configuration, and if that's what you
need, then that's what you go with.
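The difference between the two assignment models can be sketched as a tiny connection broker. This is invented pseudologic: `DesktopPool` and its methods are names made up for this example, and real VDI or DaaS brokers are far richer (resetting non-persistent desktops after logoff, tracking health, and so on).

```python
# Illustrative sketch of a VDI connection broker's two assignment modes.
# DesktopPool and its methods are invented names for this example only.

class DesktopPool:
    def __init__(self, vm_names):
        self.free = list(vm_names)       # identically configured VMs
        self.dedicated = {}              # user -> their persistent VM

    def connect_dedicated(self, user):
        """Persistent: the same user always lands on the same desktop."""
        if user not in self.dedicated:
            self.dedicated[user] = self.free.pop(0)
        return self.dedicated[user]

    def connect_stateless(self):
        """Non-persistent: any available desktop will do."""
        vm = self.free.pop(0)
        # A real broker would reset the VM and return it after logoff.
        self.free.append(vm)
        return vm

pool = DesktopPool(["vm1", "vm2", "vm3"])

# Dedicated: user A reconnects to the very same desktop every time.
print(pool.connect_dedicated("userA"))  # vm1
print(pool.connect_dedicated("userA"))  # vm1 again

# Stateless: consecutive connections may land on different desktops.
print(pool.connect_stateless())  # vm2
print(pool.connect_stateless())  # vm3
```

The stateless mode rotating through the free list is also the "balance the load a little bit" behavior mentioned above: no single desktop ends up serving every session.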

But you do have those two options. As for the advantages of virtual desktop environments, they
allow you to ensure that the data always resides on the server or in the cloud, because the
application itself is not running on any local devices. This inherently enhances security because that
data only resides in a single location.

Now you could, of course, have multiple servers, all hosting multiple virtual desktops, but you still get
a very centralized location for all of the data, and nothing resides on the local client systems. In
addition, if you are in an environment that does a lot of testing, this can be ideal because you only
need to install the application on the virtual machines.

You don't need to install it across all of your devices, so therefore, if there are any issues or problems,
they're contained to just the servers hosting those virtual desktops. You don't have to run down all of
the installations across all of your devices, to try to troubleshoot the problem. All you have to do is
fix it on the virtual desktop. Then, once it's corrected, everyone connecting into that virtual desktop
sees the correction.

So again, everything is very contained. This also applies to things like updates and patches and fixes
as well. If it's just simply a matter of applying an update, you need only apply it to the virtual
machines on the server. You don't have to worry about managing and implementing those updates
across all of your client systems.

And lastly, this also offers compatibility for legacy systems because again, all you need on the actual
client systems is the remote connectivity software. So if your system is becoming a little bit aged and
maybe it would not even support the application itself, as long as it supports the remote connection,
then you can still access that application or service because nothing is running locally except for the
remote client connection.

So if you are in a situation where you want to centralize your applications but still want to give
your users a familiar desktop environment, then virtual desktop infrastructure, or Desktop-as-a-
Service if it's in the cloud, can certainly be very useful for allowing those users to run centralized
applications in a familiar setting.

11. Video: Course Summary (it_csap121_11_enus_11)

In this video, we will summarize the key concepts covered in this course.

 summarize the key concepts covered in this course

[Video description begins] Topic title: Course Summary. [Video description ends]

So in this course, we've examined various cloud computing concepts. We did this by exploring IaaS,
SaaS and PaaS cloud computing services, cloud model types, internal and external sharing and file
synchronization, rapid elasticity and high availability, and measured and metered services, and types
of virtual desktops.

In our next course, we'll move on to explore how to set up and configure client-side virtualization.
