1 Solved by Purusottam Adhikari


Tribhuvan University

Institute of Science and Technology
Model Question, 2079

Bachelor Level / Fourth Year / Eighth Semester / Science          Full Marks: 60
Computer Science and Information Technology (CSC467)              Pass Marks: 24
(Introduction to Cloud Computing)                                 Time: 3 hours

Section A: Attempt any two questions. (2 × 10 = 20)
1. Define cloud. Describe the evolution of cloud. Mention the advantages of using cloud computing.
[10]
The cloud allows network-based access to communication tools such as email and calendars. WhatsApp, for example, runs on cloud-based infrastructure and is one example of cloud computing in communication: all messages and information are stored on the service provider's hardware.
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing is the use of networked infrastructure, software, and capacity to provide resources to users in an on-demand environment. It is an emerging consumption and delivery model that enables provisioning of standardised business and computing services through a shared infrastructure, in which the end user controls the interaction in order to accomplish a business task. Computing resources such as hardware, software, networks, storage, services, and interfaces are no longer confined within the four walls of the enterprise. Cloud computing is an emerging style of computing where applications, data, and resources are provided to users as services over the web.
A quick walkthrough of the history and evolution of cloud computing over the years:

1960s
John McCarthy, one of the renowned names in computer science, introduced the concept of time-sharing, which enabled enterprises to share expensive mainframes. This turned out to be a major contribution to the pioneering of the cloud computing concept and the establishment of the Internet.
1969
With the vision of a globally interconnected network, J.C.R. Licklider introduced the concepts of the "Galactic Network" and "Intergalactic Computer Network", which led to the development of the Advanced Research Projects Agency Network (ARPANET).
1970
By this era, it was possible to run multiple operating systems in isolated environments.

1997

Prof. Ramnath Chellappa introduced the concept of “Cloud Computing” in Dallas.

1999

Salesforce.com pioneered the delivery of enterprise applications through simple websites. In doing so, the firm also paved the way for other specialists to deliver applications via the Internet.

2003

The Virtual Machine Monitor (VMM), which allows multiple guest operating systems to run on a single physical device, paved the way for further major developments.



2006

Amazon started expanding into cloud services. With EC2 and the Simple Storage Service (S3), it introduced the pay-as-you-go model, which remains standard practice even today.

2013

With IaaS (Infrastructure-as-a-Service), the worldwide public cloud services market totalled £78bn, making it the fastest-growing market segment of that year.

The evolution of cloud computing can be divided into three basic phases:

 The Idea Phase: This phase began in the early 1960s with the emergence of utility and grid computing and lasted until the pre-Internet-bubble era. Joseph Carl Robnett Licklider is widely credited as a founding influence on cloud computing.
 The Pre-cloud Phase: The pre-cloud phase originated in 1999 and extended to 2006. In this phase, the Internet was used as the mechanism to provide Application as a Service.
 The Cloud Phase: The much-talked-about real cloud phase started in 2007, when the classification into IaaS, PaaS, and SaaS was formalized. The history of cloud computing has witnessed some very interesting breakthroughs launched by some of the leading computer/web organizations of the world.

Advantages of cloud computing


 Reduced Cost
 Organizations that want to reduce the cost of managing and maintaining IT infrastructure can shift to the resources of a cloud computing vendor. Using a cloud provider's services, an organization can keep its applications up to date without having to purchase and install them on its own systems.
 Flexibility
 The main reason for the popularity of cloud computing is flexibility: users can access data anywhere and anytime, whether from home or on holiday anywhere in the world. A user away from the office can connect through a virtual office quickly and easily. Applicable devices include laptops, desktops, smartphones, etc. with an Internet connection.
 Availability and Reliability
 Availability of cloud resources is high because the vendor keeps them available 24x7, and they are more reliable: the chances of failure are minimal, and disaster recovery is immediate. One can log in and access information from anywhere.
 Simplicity
 A user does not require training or deep technical knowledge to work in the cloud; with little knowledge of hardware and software, one can use cloud resources.
 Greener
 Cloud computing is naturally a green technology, since it enables resource sharing among users and thus avoids the need for many separate data centers that consume large amounts of power. Users can get anything from the cloud at any time and from anywhere.
 Centralized
 Because the system is centralized, you can easily apply patches and upgrades. This means your users always have access to the latest software versions.
 Mobility
 Cloud users do not need to carry a personal computer, because they can access their documents anytime, anywhere.
 Unlimited Storage Capacity
 Cloud computing supports virtually unlimited data storage. Users can store hundreds of petabytes (a petabyte is a million gigabytes), compared with a typical personal computer's capacity of 500 GB or 1 TB, so there is no need to worry about running out of space for your data.

2. Describe the services provided under cloud computing. What are the benefits of virtualization?
[6+4]
There are five commonly used categories in the spectrum of cloud offerings (the distinctions are not clear-cut, and each suits a different target audience):
● Platform-as-a-Service (PaaS) - provision of H/W, OS, framework, and database
● Software-as-a-Service (SaaS) - provision of H/W, OS, and special-purpose S/W
● Infrastructure-as-a-Service (IaaS) - provision of H/W, with the organization in control of the OS
● Storage-as-a-Service (STaaS) - provision of DB-like services, metered per gigabyte/month
● Desktop-as-a-Service (DaaS) - provision of a desktop environment within a browser

Software as a Service(SaaS)

Software-as-a-Service (SaaS) is a way of delivering services and applications over the Internet. Instead of installing and maintaining software, we simply access it via the Internet, freeing ourselves from complex software and hardware management. It removes the need to install and run applications on our own computers or in our own data centers, eliminating the expense of hardware and software maintenance.
SaaS provides a complete software solution that you purchase on a pay-as-you-go basis from a cloud
service provider. Most SaaS applications can be run directly from a web browser without any downloads
or installations required. The SaaS applications are sometimes called Web-based software, on-demand
software, or hosted software.
Advantages of SaaS
1. Cost-Effective: Pay only for what you use.
2. Reduced time: Users can run most SaaS apps directly from their web browser without needing to
download and install any software. This reduces the time spent in installation and configuration and
can reduce the issues that can get in the way of the software deployment.
3. Accessibility: We can access app data from anywhere.
4. Automatic updates: Rather than purchasing new software, customers rely on a SaaS provider to
automatically perform the updates.
5. Scalability: It allows the users to access the services and features on-demand.
Companies providing Software as a Service include Cloud9 Analytics, Salesforce.com, CloudSwitch, Microsoft Office 365, BigCommerce, Eloqua, Dropbox, and CloudTran.



Platform as a Service

PaaS is a category of cloud computing that provides a platform and environment to allow developers to
build applications and services over the internet. PaaS services are hosted in the cloud and accessed by
users simply via their web browser.
A PaaS provider hosts the hardware and software on its own infrastructure. As a result, PaaS frees users
from having to install in-house hardware and software to develop or run a new application. Thus, the
development and deployment of the application take place independent of the hardware.
The consumer does not manage or control the underlying cloud infrastructure including network, servers,
operating systems, or storage, but has control over the deployed applications and possibly configuration
settings for the application-hosting environment. To make it simple, consider an annual day function: you have two options, either to build a venue or to rent one, but the function itself is the same.
Advantages of PaaS:
1. Simple and convenient for users: It provides much of the infrastructure and other IT services, which
users can access anywhere via a web browser.
2. Cost-Effective: It charges for the services provided on a per-use basis thus eliminating the expenses
one may have for on-premises hardware and software.
3. Efficiently managing the lifecycle: It is designed to support the complete web application lifecycle:
building, testing, deploying, managing, and updating.
4. Efficiency: It allows for higher-level programming with reduced complexity thus, the overall
development of the application can be more effective.
Companies providing Platform as a Service include AWS Elastic Beanstalk, Salesforce, Windows Azure, Google App Engine, CloudBees, and IBM SmartCloud.

Infrastructure as a Service

Infrastructure as a Service (IaaS) is a service model that delivers computer infrastructure on an outsourced basis to support various operations. Typically, IaaS provides infrastructure, such as networking equipment, devices, databases, and web servers, to enterprises on an outsourced basis.
It is also known as Hardware as a Service (HaaS). IaaS customers pay on a per-user basis, typically by
the hour, week, or month. Some providers also charge customers based on the amount of virtual machine
space they use.
It provides the underlying operating systems, security, networking, and servers for developing applications and services and for deploying development tools, databases, etc.
Advantages of IaaS:
1. Cost-Effective: Eliminates capital expense and reduces ongoing cost; IaaS customers pay on a per-user basis, typically by the hour, week, or month.
2. Website hosting: Running websites using IaaS can be less expensive than traditional web hosting.
3. Security: The IaaS Cloud Provider may provide better security than your existing software.
4. Maintenance: There is no need to manage the underlying data center or to install new releases of the development or underlying software; this is all handled by the IaaS cloud provider.
Companies providing Infrastructure as a Service include Amazon Web Services, Bluestack, IBM, OpenStack, Rackspace, and VMware.

Benefits of Virtualization



1. Increased security:
The ability to control the execution of a guest in a completely transparent manner opens new possibilities for
delivering a secure, controlled execution environment.
2. Managed execution:
Provides sharing, aggregation, emulation, isolation etc.
3. Portability:
User workloads can be safely moved and executed on top of different virtual machines.

4. Slash your IT expenses

5. Reduce downtime and enhance resiliency in disaster recovery situations

6. Increase efficiency and productivity

7. Control independence and DevOps

8. Move to be more green-friendly (organizational and environmental)

3. What is MapReduce programming? Describe how enterprise batch processing is done using MapReduce. [4+6]
MapReduce is inspired by the map and reduce operations in functional languages such as Lisp. This model abstracts computation problems through two functions: map and reduce. All problems formulated in this way can be parallelized automatically. All data processed by MapReduce are in the form of key/value pairs. The execution happens in two phases. In the first phase, a map function is invoked once for each input key/value pair and can generate output key/value pairs as intermediate results. In the second phase, all the intermediate results are merged and grouped by key; the reduce function is called once for each key with its associated values and produces output values as final results. A map function takes a key/value pair as input and produces a list of key/value pairs as output; the types of the output key and value can differ from those of the input:

map :: (key1, value1) => list(key2, value2)

A reduce function takes a key and an associated value list as input and generates a list of new values as output:

reduce :: (key2, list(value2)) => list(value3)
MapReduce Execution
A MapReduce application is executed in a parallel manner through two phases. In the first phase, all map
operations can be executed independently of each other. In the second phase, each reduce operation may
depend on the outputs generated by any number of map operations. However, similar to map operations, all
reduce operations can be executed independently. From the perspective of dataflow, MapReduce execution
consists of m independent map tasks and r independent reduce tasks, each of which may be dependent on m
map tasks. Generally the intermediate results are partitioned into r pieces for r reduce tasks. The MapReduce
runtime system schedules map and reduce tasks to distributed resources. It manages many technical
problems: parallelization, concurrency control, network communication, and fault tolerance. Furthermore, it
performs several optimizations to decrease overhead involved in scheduling, network communication and
intermediate grouping of results.
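The two phases above can be sketched in plain Java. This is a minimal in-memory simulation of the execution model, not a real MapReduce runtime: the word-count example, class name, and input data are illustrative assumptions, and the grouping step stands in for the framework's shuffle.

```java
import java.util.*;

public class MiniMapReduce {
    // Map phase: invoked once per input record, emitting (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String word : line.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) out.add(Map.entry(word, 1));
        }
        return out;
    }

    // Reduce phase: invoked once per key, with all values grouped for that key.
    static int reduce(String key, List<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        List<String> inputPartitions = List.of("the quick brown fox", "the lazy dog the end");

        // Intermediate grouping by key (done by the MapReduce runtime in a real system).
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (String partition : inputPartitions)
            for (Map.Entry<String, Integer> kv : map(partition))
                grouped.computeIfAbsent(kv.getKey(), k -> new ArrayList<>()).add(kv.getValue());

        for (Map.Entry<String, List<Integer>> e : grouped.entrySet())
            System.out.println(e.getKey() + " " + reduce(e.getKey(), e.getValue()));
    }
}
```

In a real deployment the map calls would run on different nodes, the grouped pairs would be partitioned across r reduce tasks, and the runtime would handle fault tolerance and scheduling.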

Today, the volume of data is often too big for a single server – node – to process. Therefore, there was a need
to develop code that runs on multiple nodes. Writing distributed systems is an endless array of problems, so
people developed multiple frameworks to make our lives easier. MapReduce is a framework that allows the
user to write code that is executed on multiple nodes without having to worry about fault tolerance,
reliability, synchronization or availability. Batch processing is an automated job that does some computation,
usually done as a periodical job. It runs the processing code on a set of inputs, called a batch. Usually, the job
will read the batch data from a database and store the result in the same or different database. An example of
a batch processing job could be reading all the sale logs from an online shop for a single day and aggregating
it into statistics for that day (number of users per country, the average spent amount, etc.). Doing this as a
daily job could give insights into customer trends.

MapReduce is a programming model that was introduced in a white paper by Google in 2004. Today, it is
implemented in various data processing and storing systems (Hadoop, Spark, MongoDB, …) and it is a
foundational building block of most big data batch processing systems.

For MapReduce to be able to do computation on large amounts of data, it has to be a distributed model that
executes its code on multiple nodes. This allows the computation to handle larger amounts of data by adding
more machines – horizontal scaling. This is different from vertical scaling, which implies increasing the
performance of a single machine.

In order to decrease the duration of our distributed computation, MapReduce tries to reduce shuffling (moving) the data from one node to another by distributing the computation so that it is done on the same node where the data is stored. This way, the data stays on the same node, but the code is moved via the network. This is ideal because the code is much smaller than the data.

To run a MapReduce job, the user has to implement two functions, map and reduce, and those implemented
functions are distributed to nodes that contain the data by the MapReduce framework. Each node runs
(executes) the given functions on the data it has in order to minimize network traffic (shuffling data).



The computation performance of MapReduce comes at the cost of its expressivity. When writing a
MapReduce job we have to follow the strict interface (return and input data structure) of the map and the
reduce functions. The map phase generates key-value data pairs from the input data (partitions), which are
then grouped by key and used in the reduce phase by the reduce task. Everything except the interface of the
functions is programmable by the user.
Hadoop, along with its many other features, had the first open-source implementation of MapReduce. It also
has its own distributed file storage called HDFS. In Hadoop, the typical input into a MapReduce job is a
directory in HDFS. In order to increase parallelization, each directory is made up of smaller units called
partitions and each partition can be processed separately by a map task (the process that executes the map
function). This is hidden from the user, but it is important to be aware of it because the number of partitions
can affect the speed of execution.

The map task (mapper) is called once for every input partition, and its job is to extract key-value pairs from the input partition. The mapper can generate any number of key-value pairs from a single input.

The MapReduce framework collects all the key-value pairs produced by the mappers, arranges them into
groups with the same key and applies the reduce function. All the grouped values entering the reducers are
sorted by the framework. The reducer can produce output files which can serve as input into another
MapReduce job, thus enabling multiple MapReduce jobs to chain into a more complex data processing
pipeline.

Section B: Attempt any eight questions. (8 × 5 = 40)


4. Describe cloud service requirements. [5]
1. Efficiency / cost reduction
By using cloud infrastructure, you don’t have to spend huge amounts of money on purchasing and
maintaining equipment.
2. Data security
The cloud offers many advanced security features that ensure data is securely stored and handled. Cloud storage providers implement baseline protections for their platforms and the data they process, such as authentication, access control, and encryption.
3. Scalability
Different companies have different IT needs: a large enterprise of 1000+ employees won't have the same IT requirements as a start-up. Using the cloud is a great solution because it enables an enterprise to efficiently and quickly scale up or down according to business demands.
4. Mobility
Cloud computing allows mobile access to corporate data via smartphones and devices, which is a great way
to ensure that no one is ever left out of the loop. Staff with busy schedules, or who live a long way away
from the corporate office, can use this feature to keep instantly up-to-date with clients and coworkers.
5. Disaster recovery
Data loss is a major concern for all organizations, along with data security. Storing your data in the cloud ensures that it remains available even if your equipment, such as laptops or PCs, is damaged. Cloud-based services provide quick data recovery for all kinds of emergency scenarios.
6. Control
The cloud gives you complete visibility and control over your data. You can easily decide which users have what level of access to what data.
7. Market reach
Developing in the cloud enables users to get their applications to market quickly.
8. Automatic Software Updates
Cloud-based applications automatically refresh and update themselves.
5. Differentiate public cloud from private cloud. [5]
Public Cloud vs. Private Cloud:

1. Public: Cloud computing infrastructure is shared with the public by service providers over the internet; it supports multiple customers (enterprises).
   Private: Cloud computing infrastructure is provided to private organizations by service providers; it supports a single enterprise.
2. Public: Multi-tenancy: the data of many enterprises is stored in a shared environment but kept isolated, and shared as per rules, permissions, and security.
   Private: Single tenancy: the data of a single enterprise is stored.
3. Public: The cloud service provider offers all possible services and hardware, since the user base is the whole world and different people and organizations need different services and hardware; the services provided must be versatile.
   Private: Specific services and hardware, as per the needs of the enterprise, are available.
4. Public: It is hosted at the service provider's site.
   Private: It is hosted at the service provider's site or at the enterprise.
5. Public: It is connected to the public internet.
   Private: It only supports connectivity over a private network.
6. Public: Scalability is very high, and reliability is moderate.
   Private: Scalability is limited, and reliability is very high.
7. Public: The cloud service provider manages the cloud, and customers use it.
   Private: It is managed and used by a single enterprise.
8. Public: It is cheaper than the private cloud.
   Private: It is costlier than the public cloud.
9. Public: Security depends on the service provider.
   Private: It gives a high class of security.
10. Public: Performance is low to medium.
    Private: Performance is high.
11. Public: It has shared servers.
    Private: It has dedicated servers.
12. Public: Examples: Amazon Web Services (AWS), Google App Engine, etc.
    Private: Examples: Microsoft KVM, HP, Red Hat & VMware, etc.



6. Discuss the different types of hypervisors. [5]
A hypervisor is a form of virtualization software used in Cloud hosting to divide and allocate the resources
on various pieces of hardware. The program which provides partitioning, isolation, or abstraction is called
a virtualization hypervisor. The hypervisor is a hardware virtualization technique that allows multiple
guest operating systems (OS) to run on a single host system at the same time. A hypervisor is sometimes
also called a virtual machine manager(VMM).

Types of Hypervisor –
TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a “Native Hypervisor” or
“Bare metal hypervisor”. It does not require any base server operating system. It has direct access to
hardware resources. Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer, and
Microsoft Hyper-V hypervisor.

Pros & Cons of Type-1 Hypervisor:


Pros: Such hypervisors are very efficient because they have direct access to the physical hardware resources (CPU, memory, network, and physical storage). This also strengthens security, because there is no third-party layer in between that an attacker could compromise.
Cons: One problem with Type-1 hypervisors is that they usually need a dedicated separate machine to
perform their operation and to instruct different VMs and control the host hardware resources.

TYPE-2 Hypervisor:
A host operating system runs on the underlying host system; this type is also known as a "Hosted Hypervisor". Such hypervisors do not run directly on the underlying hardware; rather, they run as an application on a host system (physical machine). Basically, the software is installed on an operating system, and the hypervisor asks the operating system to make hardware calls. Examples of Type 2 hypervisors include VMware Player and Parallels Desktop. Hosted hypervisors are often found on endpoints such as PCs. The Type 2 hypervisor is very useful for engineers and security analysts (for checking malware, malicious source code, and newly developed applications).
Pros & Cons of Type-2 Hypervisor:
Pros: Such hypervisors allow quick and easy access to a guest operating system alongside the running host machine. They usually come with additional useful features for guest machines; such tools enhance coordination between the host machine and the guest machine.
Cons: Because there is no direct access to the physical hardware resources, these hypervisors lag behind Type 1 hypervisors in performance. There are also potential security risks: an attacker who exploits a weakness in the host operating system can also access the guest operating systems.

7. How is a thread different from a task? How is thread programming done? [3+2]
Here are some differences between a task and a thread.

1. The Thread class is used for creating and manipulating a thread. A Task represents an asynchronous operation and is part of the Task Parallel Library (in .NET), a set of APIs for running tasks asynchronously and in parallel.
2. The task can return a result. There is no direct mechanism to return the result from a thread.
3. Task supports cancellation through the use of cancellation tokens. But Thread doesn't.

9 Solved By Purusottam Adhikari


4. A task can have multiple processes happening at the same time. Threads can only have one task
running at a time.
5. Asynchronous code is easy to implement with tasks using the 'async' and 'await' keywords.
6. A new Thread() does not use the thread pool, whereas a Task does use thread-pool threads.
7. A Task is a higher level concept than Thread.

A thread can be created by implementing the Runnable interface and overriding the run() method. Then
a Thread object can be created and the start() method called. The Main thread in Java is the one that begins
executing when the program starts. All the child threads are spawned from the Main thread and it is the last
thread to finish execution.
A program that demonstrates this is given as follows:

class ThreadDemo implements Runnable {
    Thread t;

    ThreadDemo() {
        t = new Thread(this, "Thread");
        System.out.println("Child thread: " + t);
        t.start();
    }

    public void run() {
        try {
            System.out.println("Child Thread");
            Thread.sleep(50);
        } catch (InterruptedException e) {
            System.out.println("The child thread is interrupted.");
        }
        System.out.println("Exiting the child thread");
    }
}

public class Demo {
    public static void main(String args[]) {
        new ThreadDemo();
        try {
            System.out.println("Main Thread");
            Thread.sleep(100);
        } catch (InterruptedException e) {
            System.out.println("The Main thread is interrupted");
        }
        System.out.println("Exiting the Main thread");
    }
}
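The task/thread distinctions listed above refer to .NET's Task Parallel Library. The closest Java analogue of a result-returning, cancellable task is a Callable submitted to an ExecutorService, which returns a Future. A minimal sketch (the class name and values here are illustrative):

```java
import java.util.concurrent.*;

public class TaskDemo {
    public static void main(String[] args) throws Exception {
        // Thread-pool backed, like a Task (unlike new Thread()).
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Unlike a raw Thread, a Callable can return a result.
        Future<Integer> sum = pool.submit(() -> 1 + 2 + 3);

        // A Future supports cancellation, analogous to a cancellation token.
        Future<?> slow = pool.submit(() -> {
            try { Thread.sleep(10_000); } catch (InterruptedException ignored) { }
        });
        slow.cancel(true); // interrupts the underlying worker thread

        System.out.println("sum = " + sum.get()); // prints "sum = 6"
        pool.shutdown();
    }
}
```

This illustrates points 2, 3, and 6 of the list: the Future carries the result back, cancel() stops the slow task, and both run on pooled threads rather than dedicated ones.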
8. Discuss the cloud security issues. [5]
1. Privileged user access —inquire about who has specialized access to data, and about the hiring and
management of such administrators.
2. Regulatory compliance—make sure that the vendor is willing to undergo external audits and/or security
certifications.
3. Data location—does the provider allow for any control over the location of data?
4. Data segregation —make sure that encryption is available at all stages, and that these encryption schemes
were designed and tested by experienced professionals.
5. Recovery —Find out what will happen to data in the case of a disaster. Do they offer complete
restoration? If so, how long would that take?
6. Investigative support —Does the vendor have the ability to investigate any inappropriate or illegal
activity?
7. Long-term viability —What will happen to data if the company goes out of business? How will data be
returned, and in what format?

9. What is bucket in Amazon simple storage service? How addressing of a bucket is done? [2+3]
A bucket is a container for objects stored in Amazon S3. You can store any number of objects in a bucket
and can have up to 100 buckets in your account. To request an increase, visit the Service Quotas Console.

Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in
the DOC-EXAMPLE-BUCKET bucket in the US West (Oregon) Region, then it is addressable using the
URL https://DOC-EXAMPLE-BUCKET.s3.us-west-2.amazonaws.com/photos/puppy.jpg.

When you create a bucket, you enter a bucket name and choose the AWS Region where the bucket will
reside. After you create a bucket, you cannot change the name of the bucket or its Region. Bucket names
must follow the bucket naming rules. You can also configure a bucket to use S3 Versioning or other storage
management features.

Buckets also:

 Organize the Amazon S3 namespace at the highest level.


 Identify the account responsible for storage and data transfer charges.
 Provide access control options, such as bucket policies, access control lists (ACLs), and S3 Access Points,
that you can use to manage access to your Amazon S3 resources.
 Serve as the unit of aggregation for usage reporting.

First of all, we should create a bucket with a unique bucket name and a region. After that, versioning should be enabled; then we manage users, upload the file, choose the storage class, and finally set encryption, metadata, and tags.
An S3 bucket can be accessed through its URL, which takes one of two forms:

http://s3.amazonaws.com/[bucket_name]/
http://[bucket_name].s3.amazonaws.com/

10. Describe how cloud computing is used in business and consumer applications. [5]
Cloud computing innovations are likely to help the commercial and consumer sectors the most. On the one hand, the ability to convert capital expenses into operating costs makes the cloud an appealing alternative for any IT-centric business. On the other hand, the cloud's ubiquity in terms of accessing data and services makes it appealing to end users. Furthermore, because cloud technologies are elastic, they do not necessitate large upfront investments, allowing innovative ideas to be easily converted into products and services that can readily scale with demand.

CRM and ERP:


Cloud CRM applications constitute a great opportunity for small enterprises and start-ups to have fully functional CRM software without large up-front costs, by paying subscriptions. CRM is not an activity that requires specific needs, and it can be easily moved to the cloud. ERP solutions on the cloud are less mature and have to compete with well-established in-house solutions.
Productivity:
Productivity applications replicate in the cloud some of the most common tasks that we are used to
performing on our desktop: from document storage to office automation and complete desktop environments
hosted in the cloud.

Social networking:
Social networking applications have grown considerably in the last few years to become the most active
sites on the Web. To sustain their traffic and serve millions of users seamlessly, services such as Twitter and
Facebook have leveraged cloud computing technologies. The possibility of continuously adding capacity
while systems are running is the most attractive feature for social networks, which constantly increase their
user base.

Media applications
Media applications are a niche that has taken a considerable advantage from leveraging cloud computing
technologies. In particular, video-processing operations, such as encoding, transcoding, composition, and
rendering, are good candidates for a cloud-based environment. These are computationally intensive tasks that
can be easily offloaded to cloud computing infrastructures.
Multiplayer online gaming:
Online multiplayer gaming attracts millions of gamers around the world who share a common experience by
playing together in a virtual environment that extends beyond the boundaries of a normal LAN. Online
games support hundreds of players in the same session, made possible by the specific architecture used to
forward interactions, which is based on game log processing. Players update the game server hosting the
game session, and the server integrates all the updates into a log that is made available to all the players
through a TCP port. The client software used for the game connects to the log port and, by reading the log,
updates the local user interface with the actions of other players.

11. How virtual machine security is enforced? [5]


Virtualized security, or security virtualization, refers to security solutions that are software-based and
designed to work within a virtualized IT environment. This differs from traditional, hardware-based network
security, which is static and runs on devices such as traditional firewalls, routers, and switches.

Virtualized security can take the functions of traditional security hardware appliances (such as firewalls and
antivirus protection) and deploy them via software. In addition, virtualized security can also perform
additional security functions. These functions are only possible due to the advantages of virtualization, and
are designed to address the specific security needs of a virtualized environment.



For example, an enterprise can insert security controls (such as encryption) between the application layer and
the underlying infrastructure, or use strategies such as micro-segmentation to reduce the potential attack
surface.

Virtualized security can be implemented as an application directly on a bare metal hypervisor (a position it
can leverage to provide effective application monitoring) or as a hosted service on a virtual machine. In
either case, it can be quickly deployed where it is most effective, unlike physical security, which is tied to a
specific device.

12. Describe the types of services hosted in Aneka container. [5]


Aneka is a product of Manjrasoft. Aneka is used for developing, deploying, and managing cloud applications, and it can be integrated with existing cloud technologies. Aneka includes an extensible set of APIs associated with programming models such as MapReduce. These APIs support different cloud models: private, public, and hybrid.
Aneka framework:
Aneka is a software platform for developing cloud computing applications; cloud applications are executed in Aneka. Aneka is a pure PaaS solution for cloud computing. It is a cloud middleware product that can be deployed on a network of computers, a multicore server, datacenters, virtual cloud infrastructures, or a mixture of these.

The services hosted in the Aneka container can be classified into three major categories:


1. Fabric Services
2. Foundation Services
3. Application Services
1. Fabric services:
Fabric Services define the lowest level of the software stack representing the Aneka Container. They are the fundamental services of the Aneka Cloud, defining the basic infrastructure management features of the system, and they provide access to the resource-provisioning subsystem and to the monitoring facilities implemented in Aneka.



2. Foundation services:
Foundation Services are related to the logical management of the distributed system built on top of the infrastructure and provide supporting services for the execution of distributed applications.
3. Application services:
Application Services manage the execution of applications and constitute a layer that differentiates according
to the specific programming model used for developing distributed applications on top of Aneka.

