Mobile Computing Unit 3

1. Define Virtualization
• Virtualization is the creation of software-based (virtual) versions of computing resources such as servers, storage and networks, so that many workloads can share the same physical hardware.
• Virtual Machine: an instance of software that imitates the functions of a hardware-based computer. It consists of CPU, memory, storage, networking and an operating system to run applications.
2. What are the Types of Virtualization?
a. Desktop Virtualization - a method of simulating a user workstation so it can be accessed from a remotely connected device.
b. Server Virtualization - the process of dividing a physical server into multiple unique and isolated virtual servers by means of a software application. Each virtual server can run its own operating system independently.
c. Application Virtualization - software that allows users to access and use an application from a computer other than the one on which the application is installed.
d. Network Virtualization - Network Virtualization (NV) refers to abstracting network resources that were traditionally delivered in hardware into software.
3. What is a Hypervisor?

• It's the software required to run VMs. It abstracts (isolates) the VMs from the underlying hardware and provides the resources (CPU, memory, storage, networking) that VMs need to run applications.

Because a hypervisor allows several virtual machines to operate on a single physical server, it helps reduce:

• The space required by servers
• The energy used
• The maintenance requirements of the servers

4. What are the Benefits of Virtualization?

a. Costs less than real hardware resources
b. Virtual machines are easy to create and destroy
c. Allows installation of multiple types of OS on a single system
d. Allows automation and scaling
• Greater IT efficiencies.
• Reduced operating costs.
• Faster workload deployment.
• Increased application performance.
• Higher server availability.
• Eliminated server sprawl and complexity.

5. What are the Disadvantages of Virtualization?

• Running multiple VMs on one host can lead to unstable performance
• A VM running on a hypervisor is not as efficient as the host OS running directly on hardware
• Long boot-up process
• Data can be at risk
• Learning a new infrastructure
• High initial investment
6. Why is a Virtual Machine used?

• VMs are used to deploy applications (e.g. web servers, backend applications) by dynamically bringing up instances (computes) on hardware.
• Operational flexibility.
• Reduced overhead.
• Centralized management of diverse operating units, which increases efficiency and, ultimately, output.
• Disaster recovery.
7. What happens in case of a power failure in a Data Center / How to safeguard against power failures in a DC?

• Cloud DC - refers to Data Centers hosted on remote cloud platforms accessible over the Internet. This typically uses services provided by a cloud DC platform, for example Amazon AWS, Google Cloud Platform (GCP), Microsoft Azure or Oracle Cloud Infrastructure (OCI).
• Hybrid DC - refers to the mixed use of both on-prem and cloud Data Centers.
• Spreading workloads across on-prem and cloud Data Centers safeguards against a power failure at any one site, because the workload can fail over to another site.
8. Do we need dual Data Centers? If so, why?

• Because the computing load can be spread among multiple locations, power bills can be reduced at each location, particularly if the satellite sites are in areas with lower power costs.
• Dual Data Centers also help offer high uptime, make use of various cloud computing services, support a thorough disaster recovery plan, ensure regulatory compliance and much more.
9. How to Secure Data Centers
a. Use Firewall, VPN, Intrusion Detection System (IDS), DDoS Protection for Network
Security
b. Authentication, Authorization for User Security
c. Storage Encryption for Data Security
d. Endpoint Agents for Server Security
e. Enforce Monitoring and Alerting to detect threats
10. How does Containerization Differ from Virtualization?
Containerization: the process of packaging a software application together with its libraries, dependencies and configuration so that it can be deployed anywhere - on any OS or hardware.
Container - the package of an application and all of its dependencies. Containers are lightweight because they do not include an operating system; they use the host OS. Examples: Docker, LXC (Linux Containers).
Container Engine: the interface between the containers carrying applications and the operating system on which the containers are running.
Virtual Machine - an instance of software that imitates the functions of a hardware-based computer. It consists of CPU, memory, storage, networking and an operating system to run applications.
Hypervisor: the software required to run VMs. It abstracts (isolates) the VMs from the underlying hardware and provides the resources (CPU, memory, storage, networking) that VMs need to run applications.
Uses: VMs are used to deploy applications (e.g. web servers, backend applications) by dynamically bringing up instances (computes) on hardware.
11. What is Docker Hub?
A repository of Docker images (https://hub.docker.com/).
Docker Hub is the world's largest repository of container images, with an array of content sources including container community developers, open source projects and independent software vendors (ISVs) building and distributing their code in containers.
12. Quote a few Docker Commands
a. docker pull - pull a Docker image from Docker Hub
b. docker run - run a Docker container
c. docker images - list all Docker images available locally
d. docker ps -a - list all Docker container instances (running and stopped)
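The same operations can also be driven programmatically. Below is a minimal sketch using the Docker SDK for Python (the third-party docker package), assuming Docker is installed and the daemon is running; nginx and the port mapping are arbitrary example values, not from the notes.

```python
# Requires: pip install docker, plus a running Docker daemon.
import docker

client = docker.from_env()                     # connect to the local Docker daemon

client.images.pull("nginx:latest")             # like `docker pull nginx:latest`

container = client.containers.run(             # like `docker run -d -p 8080:80 nginx`
    "nginx:latest",
    detach=True,
    ports={"80/tcp": 8080},
)

print([img.tags for img in client.images.list()])           # like `docker images`
print([c.name for c in client.containers.list(all=True)])   # like `docker ps -a`

container.stop()                               # clean up the example container
container.remove()
```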
13. What are Kubernetes and Docker Swarm?
Both are container orchestration tools: they schedule, scale and manage containers across a cluster of hosts. Docker Swarm is a lightweight, easy-to-use orchestration tool with limited offerings compared to Kubernetes. In contrast, Kubernetes is complex but powerful and provides self-healing and auto-scaling capabilities out of the box.

14. What is the need for DevOps?

Composed of development (Dev) and operations (Ops), DevOps is the process and technology for delivering software applications and services at high velocity, from planning through to release, including post-release monitoring.
1. Shorter Development Cycles, Faster Innovation
2. Reduced Deployment Failures, Rollbacks, and Time to Recover
3. Improved Communication and Collaboration
4. Reduced Costs and IT Headcount
5. Increased Efficiencies
15. Describe DevOps pipeline and give example of certain DevOps tools
A DevOps pipeline is a set of automated processes and tools that allows developers and
operations professionals to collaborate on building and deploying code to a production
environment.
A DevOps tool is an application that helps automate the software development process. It mainly focuses on communication and collaboration between product management, software development, and operations professionals.
16. Name DevOps Tools and Describe its functions
a. Git : Version Control System tool
b. Jenkins : Continuous Integration tool
c. Selenium : Continuous Testing tool
d. Puppet, Chef, Ansible : Configuration Management and Deployment tools
e. Nagios : Continuous Monitoring tool
f. Docker : Containerization tool
17. What are the characteristics of Microservices?
Microservice - an approach to developing an application as a bundle of small, separate, loosely coupled services, where each service is accessible via an API (typically a REST API).
Microservices are small, each running in its own process, using lightweight communication mechanisms and built around business capabilities. Applications built this way are larger, but are still composed of small, separate, runnable processes using a share-nothing model, and they share many characteristics with microservices.
Characteristics (a minimal service sketch follows this list):
• Decentralized
• Designed for Business
• Multiple Components
• Failure Resistant
• Componentization via Services
• Infrastructure Automation
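As an illustration (not from the notes), here is a minimal sketch of one microservice built around a single business capability, running in its own process and exposed over a REST API, using Python and Flask; the service name, routes and data are hypothetical.

```python
# Hypothetical "inventory" microservice: one small, independently runnable
# process exposing its business capability over a REST API.
# Requires: pip install flask
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store for the sketch only; a real service would own its own database.
items = {"sku-1": {"sku": "sku-1", "stock": 12}}

@app.route("/items/<sku>", methods=["GET"])
def get_item(sku):
    item = items.get(sku)
    return (jsonify(item), 200) if item else (jsonify({"error": "not found"}), 404)

@app.route("/items", methods=["POST"])
def add_item():
    item = request.get_json()
    items[item["sku"]] = item
    return jsonify(item), 201

if __name__ == "__main__":
    # Each microservice runs in its own process on its own port.
    app.run(port=5001)
```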
18. Differentiate between Monolithic and Microservices
19. What is a REST API?
A REST API (also known as a RESTful API) is an application programming interface (API or web API) that conforms to the constraints of the REST architectural style and allows interaction with RESTful web services. REST stands for representational state transfer and was created by computer scientist Roy Fielding.
REST uses less bandwidth and is simple and flexible, making it well suited to internet usage. It is used to fetch information from, or send information to, a web service. All communication via a REST API uses HTTP requests.
In HTTP there are five methods commonly used in a REST-based architecture: POST, GET, PUT, PATCH, and DELETE.
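As a short illustration (not from the notes), the sketch below calls such an API from Python with the requests library; the base URL and payloads are placeholders that assume the hypothetical inventory service sketched earlier.

```python
# Requires: pip install requests. The endpoint below is a placeholder.
import requests

BASE = "http://localhost:5001"  # hypothetical service from the earlier sketch

# GET - fetch a resource
resp = requests.get(f"{BASE}/items/sku-1")
print(resp.status_code, resp.json())

# POST - create a new resource
requests.post(f"{BASE}/items", json={"sku": "sku-2", "stock": 5})

# PUT / PATCH - replace or partially update a resource (if the API supports them)
requests.put(f"{BASE}/items/sku-2", json={"sku": "sku-2", "stock": 7})
requests.patch(f"{BASE}/items/sku-2", json={"stock": 9})

# DELETE - remove a resource
requests.delete(f"{BASE}/items/sku-2")
```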
20. What is the use of Docker in a Microservice?

Create the image.
• The initial step is to get the base Docker image needed for the given microservice.
• Using a Dockerfile, we create an image for the service. The Dockerfile is used for:
o Installing the required apps and libraries.
o Adding the service to the image.
Deployment and running of the microservice.
• Let's say we have pushed our new image to Docker Hub and we provide the necessary access to the system where we want to run the service. Without access to our repository, the host won't be able to pull the image.
• docker-machine is a tool that installs docker-engine on hosts and manages those hosts with docker-machine commands.
• We can create a host using drivers like VirtualBox; in this case it would be AWS or DigitalOcean.
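A rough sketch of that build-push-run flow using the Docker SDK for Python; it assumes a Dockerfile exists in the current directory, and the image name, registry credentials and remote host address are placeholders, not real values.

```python
# Requires: pip install docker, a local Docker daemon, and a Dockerfile in ".".
# Image name, credentials and the remote host below are hypothetical placeholders.
import docker

local = docker.from_env()

# 1. Build the microservice image from the Dockerfile.
image, build_logs = local.images.build(path=".", tag="example-user/inventory-service:1.0")

# 2. Push the image to a registry (e.g. Docker Hub) so other hosts can pull it.
local.login(username="example-user", password="example-password")
local.images.push("example-user/inventory-service", tag="1.0")

# 3. On the target host (e.g. one created with docker-machine), pull and run it.
remote = docker.DockerClient(base_url="tcp://192.0.2.10:2376")  # placeholder address
remote.containers.run(
    "example-user/inventory-service:1.0",
    detach=True,
    ports={"5001/tcp": 80},
)
```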

21. What does Data and Code separation mean?

Code separation - a design principle for separating a computer program into distinct sections, where each section addresses a separate concern (a set of information that affects the code of a computer program).
Code - a system of rules to convert information - such as a letter, word, sound, image, or gesture - into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium.

Data and code separation is the practice of keeping "code" - instructions for some machine, whether a microprocessor, a virtual machine, or a scripting language - distinct from data. This is often done for security reasons, to prevent untrusted data from being executed as code (which might compromise a machine).
The degree of, and reasons for, separation range from hardware-level separation (e.g. to prevent stack overflows from corrupting the machine instructions, and possibly to allow greater cache efficiency, since code supposedly changes more slowly) to higher-level separation (e.g. keeping the data in databases to secure it better, and because data tends to outlive applications - in this view, the data is what changes more slowly).
22. How do modules within a Microservice architecture communicate with each other?
Because microservices are distributed, they communicate with each other through inter-service communication at the network level. Each microservice has its own instance and process, so services must interact using inter-service communication protocols such as HTTP, gRPC, or message brokers using the AMQP protocol.
The synchronous communication protocols can be HTTP or HTTPS. In synchronous communication, the client sends a request using the HTTP protocol and waits for a response from the service.
In asynchronous communication, the client sends a request but does not wait for a response from the service. The key point is that the client should not block a thread while waiting for a response. The most popular protocol for asynchronous communication is AMQP (Advanced Message Queuing Protocol).
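A small sketch of both styles in Python (not from the notes): synchronous HTTP with the requests library, and asynchronous messaging over AMQP with the pika client, assuming a RabbitMQ broker is reachable; the URL, queue name and payload are made up.

```python
# Requires: pip install requests pika, and a RabbitMQ broker for the AMQP part.
import json
import requests
import pika

# --- Synchronous: the caller blocks until the other service responds ---
resp = requests.get("http://orders-service.local/orders/42")  # placeholder URL
print(resp.status_code, resp.json())

# --- Asynchronous: publish a message and continue without waiting for a reply ---
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="order_events")            # placeholder queue name
channel.basic_publish(
    exchange="",
    routing_key="order_events",
    body=json.dumps({"order_id": 42, "status": "created"}),
)
connection.close()
```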
23. What are Pros and Cons of Microservices
24. Name few databases used in Microservices
a. MySQL, MongoDB
b. PostgreSQL, DynamoDB
25. What are characteristics of a cloud native applications

26. How do you scale a microservice-based system?

Scaling means getting resources to the parts of the system that need them. Because resources are finite in any system, it is best to direct them to the parts of the system that need them and not over- or under-utilize any of those resources.
X-Axis Scaling
X-axis scaling is also called horizontal scaling or cloning. The entire application is duplicated into multiple identical instances and requests are spread across them (for example, behind a load balancer). Normally, any web server application can be scaled this way.

Y-Axis Scaling
Y-axis scaling is functional decomposition: the application is broken down into small, independent business units (services), and each user request is redirected to the service responsible for that function. This method of breaking the application down into small independent business units is known as Y-axis scaling.

Z-Axis Scaling
X- and Y-axis scaling are fairly easy to understand. An application can also be scaled at the data or business level, which is called Z-axis scaling: each instance runs the same code, but a request is routed to the instance that owns that user's partition of the data (for example, sharding by customer ID). A DBaaS or Hadoop system can be considered to be Z-axis scaled.
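As an illustrative sketch only (not from the notes), the routing logic behind Z-axis scaling can be as simple as hashing a request attribute such as the customer ID to pick the shard that owns that customer's data; the shard addresses are placeholders.

```python
# Minimal Z-axis (sharding) routing sketch: every instance runs the same code,
# but each serves only a subset of customers. Shard addresses are placeholders.
import hashlib

SHARDS = [
    "http://inventory-shard-0.local",
    "http://inventory-shard-1.local",
    "http://inventory-shard-2.local",
]

def shard_for(customer_id: str) -> str:
    """Pick the shard that owns this customer's data (stable hash, not random)."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

if __name__ == "__main__":
    for cid in ["cust-101", "cust-202", "cust-303"]:
        print(cid, "->", shard_for(cid))
```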

Advantages of Scaling
• Cost - proper scaling of software reduces maintenance costs.
• Performance - due to loose coupling, a properly scaled system performs better than an unscaled one.
• Load distribution - using different technologies, we can easily manage the server load.

27. What does resilience of a cloud-native application mean?

Resiliency is the ability of your system to react to failure and still remain functional. It is not about avoiding failure, but about accepting failure and constructing your cloud-native services to respond to it, so that the system returns to a fully functioning state as quickly as possible.
A well-designed cloud-native application is able to survive and stay online even in the event of an infrastructure outage.

28. Describe a Jenkins pipeline

A Jenkins pipeline is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins. A continuous delivery pipeline is an automated expression of your process for getting software from version control right through to your users and customers.

Here are the reasons why you should use a Jenkins pipeline:

• A Jenkins pipeline is implemented as code, which allows multiple users to edit and execute the pipeline process.
• Pipelines are robust, so if your server undergoes an unforeseen restart, the pipeline is automatically resumed.
• You can pause the pipeline process and make it wait to resume until there is input from the user.
• Jenkins pipelines support big projects. You can run multiple jobs and even use pipelines in a loop.

Difference between Puppet, Ansible, Chef and SaltStack
