Unit 3 Virtualization Notes
1. Definition:
4. Historical Evolution:
5. Key Concepts:
a. Increased Security:
Performance Tuning:
3. Portability:
Hardware Virtualization:
o Virtual images can run on different machines with compatible VM
managers.
Programming-Level Virtualization:
o Application binaries (e.g., Java jars, .NET assemblies) run across platforms
without recompilation.
o Facilitates flexible development and simplified deployment.
Machine Reference Model
Key Layers in the Model:
1. ISA (Instruction Set Architecture):
o Example: Tells the CPU what instructions (like adding numbers) it can understand.
o Why it matters: Helps operating system developers know how to use the
computer's hardware.
2. ABI (Application Binary Interface):
o Example: Defines how programs use system functions like saving files.
o Why it matters: Makes programs work across different operating systems if they
follow the same rules.
3. API (Application Programming Interface):
o What it does: Lets applications talk to the operating system or other programs.
How It Works:
1. API: The application issues a request (for example, "save this file") through the API.
2. ABI: The operating system understands the command through the ABI.
3. ISA: The operating system converts the command into instructions the CPU can execute.
1. Nonprivileged Instructions:
o Safe tasks that don’t affect system settings (e.g., math calculations).
2. Privileged Instructions:
o Sensitive tasks that control hardware or system state (e.g., I/O operations); only
the operating system may execute them.
Execution Modes:
1. Supervisor (Kernel) Mode:
o Used by the operating system; all instructions, including privileged ones, are allowed.
2. User Mode:
o Used by applications; privileged instructions are blocked and trap to the operating system.
Modern CPUs have features like Intel VT and AMD Pacifica to help hypervisors run securely.
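The two execution modes can be sketched in miniature. In the toy loop below (the instruction names are invented for illustration, not a real instruction set), "safe" instructions run directly, while privileged ones trap to a hypervisor handler that touches only the VM's virtual state:

```python
# Toy model of the two CPU execution modes: nonprivileged instructions
# run "directly", privileged ones trap to a hypervisor handler.
PRIVILEGED = {"HALT", "SET_TIMER", "IO"}

def hypervisor_trap(instr, vm_state):
    # Emulate the privileged instruction against the VM's *virtual*
    # state instead of letting it touch the real hardware.
    vm_state.setdefault("trapped", []).append(instr)

def run(program, vm_state):
    for instr in program:
        if instr in PRIVILEGED:
            hypervisor_trap(instr, vm_state)   # switch to supervisor mode
        else:
            # Direct execution: the "safe" instruction just does its work.
            vm_state["acc"] = vm_state.get("acc", 0) + 1
    return vm_state

state = run(["ADD", "ADD", "SET_TIMER", "ADD"], {})
print(state)  # {'acc': 3, 'trapped': ['SET_TIMER']}
```

Hardware features such as Intel VT make this trap cheap by letting the CPU itself detect privileged instructions issued by a guest.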
Hardware-Level Virtualization
Hardware-level virtualization allows multiple operating systems (OS) to run on one physical
computer by pretending each OS has its own hardware. This is done using a special program called
a hypervisor.
How It Works:
1. Type I (Native/Bare-metal):
o The hypervisor runs directly on the hardware, with no host OS underneath
(e.g., VMware ESXi, Xen).
2. Type II (Hosted):
o The hypervisor runs as an application on top of a host operating system
(e.g., VirtualBox, VMware Workstation).
A hypervisor should satisfy three properties (Popek and Goldberg):
1. Equivalence:
o A program running inside a virtual machine should behave the same as it would
on the real machine.
2. Resource Control:
o The hypervisor must remain in complete control of the hardware resources.
3. Efficiency:
o Most tasks should run directly on the hardware without slowing down.
Why It Matters:
By managing hardware this way, virtualization powers cloud computing, data centers, and testing
environments.
Hardware virtualization allows running multiple operating systems (OS) on one computer using
special methods. Here are the main techniques:
1. Hardware-Assisted Virtualization:
What It Is: The computer’s hardware helps run virtual machines efficiently.
Why It Matters: Older computers relied on software alone, which was slow. Modern CPUs with
Intel VT and AMD-V have built-in virtualization support to run guest OSs faster and more safely.
Example: VirtualBox, VMware, and Hyper-V use hardware assistance for better
performance.
2. Full Virtualization:
What It Is: A virtual machine manager (VMM) creates a complete virtual version of the
computer’s hardware. The guest OS doesn’t need any changes and thinks it’s running on a
real computer.
Why It Matters: Full isolation improves security and allows different OS types to run
together. However, it can be slow without hardware assistance.
3. Paravirtualization:
What It Is: The guest OS is slightly modified to work better with the virtual machine. This
reduces performance problems.
Why It Matters: It’s faster than full virtualization because some tasks run directly on the
real hardware. However, it requires changing the guest OS, which isn’t always possible.
Example: Xen uses paravirtualization for Linux and special drivers for Windows.
4. Partial Virtualization:
What It Is: Only some parts of the hardware are virtualized, not the entire system.
Why It Matters: Applications can run in separate memory spaces, but full OS isolation
isn’t possible.
Example: Time-sharing systems that allow multiple users on the same computer.
How It Works:
The OS kernel (core part of the OS) creates a separate user space for each container, each
with its own:
o File system
o IP address
The OS manages system resources (CPU, memory) and ensures containers don’t interfere with
each other.
Key Features:
1. Isolation:
2. Resource Sharing:
o The OS controls how much CPU, memory, and storage each container can use.
3. Lightweight:
o Applications run directly on the OS, making it faster and more efficient.
Examples:
Advantages:
Efficient Use of Resources: Many apps can run on the same machine.
Limitations:
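The OS-level resource control described above can be sketched as simple bookkeeping: a manager grants each container a fixed share and refuses requests that exceed what is free. This is a toy model, not a real kernel API:

```python
# Toy model of OS-level resource control: the manager grants each
# container a share of CPU and memory and rejects over-allocation.
class ContainerManager:
    def __init__(self, total_cpu=100, total_mem=4096):
        self.free = {"cpu": total_cpu, "mem": total_mem}
        self.containers = {}

    def create(self, name, cpu, mem):
        if cpu > self.free["cpu"] or mem > self.free["mem"]:
            raise RuntimeError("insufficient resources")
        self.free["cpu"] -= cpu
        self.free["mem"] -= mem
        self.containers[name] = {"cpu": cpu, "mem": mem}

mgr = ContainerManager()
mgr.create("web", cpu=30, mem=1024)
mgr.create("db", cpu=50, mem=2048)
print(mgr.free)  # {'cpu': 20, 'mem': 1024}
```

Real kernels enforce these shares with mechanisms such as cgroups rather than a Python dictionary, but the accounting idea is the same.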
How It Works:
1. Compilation to Bytecode:
o The virtual machine (e.g., Java Virtual Machine or .NET Common Language
Runtime) reads and runs the bytecode.
Key Features:
1. Portability:
2. Managed Execution:
o The VM controls how the program runs, ensuring security and stability.
3. Security:
o The VM isolates programs, preventing them from accessing sensitive data or the
underlying hardware.
4. Ease of Deployment:
Examples:
o Runs programs written in Java and other supported languages (Python, Groovy).
3. Parrot VM:
Real-Life Analogy:
Virtual Machine: A chef who understands the recipe and cooks the dish, adjusting for
different kitchen setups (operating systems).
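Python itself illustrates programming-level virtualization: source code is compiled to bytecode, and the interpreter (a process virtual machine) executes it. The standard dis module makes the bytecode visible:

```python
import dis

# Python source is compiled to bytecode, which the Python VM executes;
# the same bytecode runs on any platform with a compatible interpreter.
def add(a, b):
    return a + b

ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)  # opcode names, ending with a return instruction
```

The Java Virtual Machine and the .NET CLR play the same role for Java bytecode and CIL, respectively.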
Advantages:
Limitations:
Application-Level Virtualization
Application-level virtualization allows applications to run on operating systems or devices
where they normally wouldn’t work. It creates a virtual environment that tricks the application
into thinking it’s running in its native environment, even when it's not installed there.
How It Works:
1. Application Isolation:
o The app runs in a self-contained environment with its own settings, libraries, and
files.
2. Emulation or Translation:
Techniques Used:
1. Interpretation:
o Every instruction from the app is read and executed one by one.
2. Binary Translation:
o The app’s instructions are translated into the host system’s instructions.
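Interpretation can be shown with a toy instruction set (the opcodes below are invented for illustration): every guest instruction is examined and executed one at a time by the host program.

```python
# A toy interpreter: each "guest" instruction is examined and executed
# one at a time by the host program -- the essence of interpretation.
def interpret(program):
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack

result = interpret([("PUSH", 2), ("PUSH", 3), ("ADD",)])
print(result)  # [5]
```

Binary translation avoids this per-instruction overhead by translating whole blocks of guest instructions into host instructions once, then reusing the translation.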
Key Benefits:
1. Run Unsupported Apps: Apps built for one operating system can run on another.
2. No Installation Needed: The app runs without being installed on the host system.
Real-Life Examples:
1. Wine:
2. CrossOver:
3. VMware ThinApp:
o Converts installed apps into portable packages that run on any system.
1. Storage Virtualization:
What it is:
Storage virtualization combines multiple physical storage devices into a single virtual storage
system. It makes it easier to manage data by allowing users to access storage through a logical
path instead of worrying about the physical location of the data.
Example:
Imagine having multiple hard drives, but you access them as one large storage unit. You don’t need
to know where the files are physically stored; you just access them through one system.
How it works:
One method is SAN (Storage Area Network), where storage devices are connected over
a high-speed network, allowing them to appear as a single system.
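The logical-to-physical mapping can be sketched as round-robin striping across several "disks". This is a toy model; real SANs use much richer mapping tables, but the point is the same: callers address a single logical volume and never see the physical layout.

```python
# Toy storage virtualization: one logical block address space striped
# across several "physical" devices (here, plain bytearrays).
class LogicalVolume:
    def __init__(self, devices, block_size=4):
        self.devices = devices          # list of bytearrays
        self.block_size = block_size

    def _locate(self, block):
        # Round-robin striping: logical block i lives on device i % n.
        dev = self.devices[block % len(self.devices)]
        offset = (block // len(self.devices)) * self.block_size
        return dev, offset

    def write(self, block, data):
        dev, off = self._locate(block)
        dev[off:off + self.block_size] = data

    def read(self, block):
        dev, off = self._locate(block)
        return bytes(dev[off:off + self.block_size])

disks = [bytearray(64) for _ in range(3)]
vol = LogicalVolume(disks)
vol.write(0, b"ABCD")
vol.write(1, b"EFGH")
print(vol.read(1))  # data comes back regardless of which disk holds it
```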
2. Network Virtualization:
What it is:
Network virtualization combines physical networks into a single virtual network. It can also create
virtual networks within an operating system, allowing more flexible and efficient management of
network resources.
Example:
A VLAN lets devices communicate as if they were on the same local network, even if they’re
physically spread out.
3. Desktop Virtualization:
What it is:
Desktop virtualization allows users to access a desktop environment remotely, from any device, by
connecting to a virtual desktop stored on a remote server.
Key Points:
Remote Access: Users can access their desktop and applications from anywhere, not just
from the physical computer.
Cloud-Based: The desktop is stored in a remote server or data center, ensuring high
availability and persistence of data.
How it works:
When you connect to your virtual desktop, the system loads your personalized desktop from the
server, so you can work on it just as if it were on your local machine.
Example:
Tools like Windows Remote Desktop, VNC, and Citrix XenDesktop allow access to remote
desktops.
4. Application Server Virtualization:
What it is:
Application server virtualization combines multiple application servers into a single virtual server,
offering services as if they were hosted by a single server. This helps ensure better performance,
load balancing, and high availability.
Example:
Instead of managing multiple servers for different applications, virtualization makes them appear
as one, providing seamless service.
What it is: Virtualizing storage resources, allowing cloud providers to offer scalable
storage services that can be divided into smaller slices and allocated as needed.
Benefit: Storage can be dynamically adjusted and provided to users in small, flexible
units, improving efficiency and resource management.
What it is: Recreating an entire desktop environment in the cloud, allowing users to
access their desktop from anywhere via the internet, just like accessing a remote
computer.
Benefit: Users can work on the same desktop environment from different devices,
ensuring high availability and persistence of their work, all managed by the cloud provider.
1. Customizable Environments:
3. Manageability:
o What it is: Virtualizing physical hardware to create multiple virtual machines, each
running its own operating system.
o How it helps: It allows multiple virtual machines to run on the same physical
server, maximizing resource use and providing flexibility for users.
2. Server Consolidation:
o What it is: Combining multiple virtual machines onto fewer physical servers,
making better use of available resources.
o Benefit: This reduces waste and allows cloud providers to save energy and costs
by using fewer physical servers.
3. Virtual Machine Migration (Live Migration):
o What it is: Moving virtual machines between physical servers with little to no
downtime.
o Benefit: Ensures that virtual machines can continue running even if the physical
hardware needs maintenance or if resources need to be reallocated.
Advantages of Virtualization:
o What it means: Virtualization lets you easily divide and manage resources (like
memory or processing power) between different virtual systems. A program
controls how much resource each system gets.
o Why it’s useful: This makes managing and optimizing resources easier, especially
when you want to reduce energy use or improve performance in a system that
handles many tasks.
3. Portability:
o Why it’s useful: This makes it easy to "carry" your work, since you can transfer a
virtual machine (or its files) from one computer to another, just like moving files
between folders.
o What it means: Since virtual machines (VMs) are portable and easy to manage,
companies can reduce the number of physical machines needed. Fewer physical
machines mean lower maintenance costs and simpler management.
o Why it’s useful: With fewer physical machines to maintain, businesses save
money and time on maintenance, and also reduce energy use.
o Why it’s useful: It enables server consolidation, where several virtual systems run
on one physical machine, making the system more efficient and reducing wasted
resources. This also helps save energy, which is better for the environment.
Disadvantages of Virtualization:
1. Performance Degradation:
o What it means: Virtualization adds an extra layer between the virtual machine
(guest) and the real hardware (host), which can slow down performance. This is
because the virtualization software has to manage and control the virtual systems,
which introduces delays.
o Why it’s a problem: The virtual machine may experience slower processing,
especially when running complex tasks like managing virtual processors, handling
memory, or running privileged commands (which require special access). This extra
workload can slow down the overall system.
o How it's improving: New technologies like paravirtualization and better hardware
are making virtualization faster, but performance issues still exist, especially for
tasks that need a lot of resources.
o What it means: Virtualization can sometimes result in inefficient use of the host's
resources because some features of the host system may not be available to the
virtual machine. For example, a virtual machine may not have access to the full
capabilities of the hardware like specific device drivers or advanced graphical
features.
o Why it’s a problem: Some virtualized environments may not provide the best
user experience. For example, earlier versions of Java had limited graphical
capabilities compared to native applications, which made apps look less polished.
o Example: In hardware virtualization, the virtual machine might only have a basic
graphic card instead of a high-performance one, leading to lower-quality graphics.
o What it means: Virtualization creates new security risks. Malicious software can
exploit the fact that a virtual machine is running on top of a host system. Since the
virtual machine is often isolated from the host, malware can sneak into the system
in ways that were harder before virtualization.
o Why it’s a problem: Some types of malware can hide within a virtual machine
and gain control over the host system. These "rootkits" can manipulate the virtual
machine manager to extract sensitive data or control the entire system.
o Example: Malware like BluePill installs a thin hypervisor beneath the running
operating system and takes control of it to steal information. Similarly, SubVirt infects the guest OS and
then takes over the host when the virtual machine is restarted.
o How it's improving: New hardware support from Intel and AMD (like Intel VT and
AMD Pacifica) is improving security, but virtualization can still be a target for
hackers looking to exploit weak points.
Full virtualization is a technology that allows multiple operating systems (OS) to run on a single
physical computer by replicating the underlying hardware. The guest OS runs as if it has its own
dedicated hardware, without needing any modification.
o Examples: VMware Workstation (for Windows) and VMware Fusion (for Mac OS X).
2. Type I Hypervisors (for Servers):
1. Direct Execution:
2. Binary Translation:
o Sensitive tasks are translated into safer instructions. This enables unmodified guest
OSs like Windows to run smoothly.
1. CPU Virtualization:
o VMware virtualizes the CPU using direct execution for most instructions and binary
translation for sensitive ones.
2. Memory Virtualization:
3. Device Virtualization:
o VMware virtualizes devices such as keyboards, network cards, disks, and USB
controllers.
VMware Player:
VMware ACE:
VMware ThinApp:
o Isolates applications to prevent software conflicts.
Server Virtualization
o Manage virtual machines directly on server hardware. ESXi has a smaller OS layer
for better efficiency.
VMware vSphere:
o Manages virtual servers and provides services like storage, networking, and
application migration.
VMware vCenter:
VMware vCloud:
VMware vFabric:
VMware Zimbra:
Scientific applications are programs used by researchers and academics for tasks like data
analysis, simulations, and solving complex problems. Cloud computing has become popular for
running these applications because it offers:
o MapReduce: A simple model for processing large datasets. Widely used for data-
heavy scientific tasks.
These features make cloud computing an essential tool for advancing scientific research efficiently
and cost-effectively.
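The MapReduce model mentioned above can be sketched in a few lines: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase folds each group into a result. Word count is the classic example:

```python
from collections import defaultdict

# Minimal MapReduce sketch: map emits (key, value) pairs, a shuffle
# groups them by key, and reduce folds each group into a result.
def map_phase(docs):
    for doc in docs:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["a b a", "b c"])))
print(counts)  # {'a': 2, 'b': 2, 'c': 1}
```

In a real deployment each phase runs in parallel across many machines; here all three run in one process to show the data flow.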
Healthcare uses computer technology for many tasks, including helping doctors diagnose diseases.
One important application is ECG data analysis using cloud computing.
What is ECG?
Doctors analyze this wave pattern to detect heart problems like arrhythmias.
Cloud computing enables remote monitoring and quick analysis of ECG data, ensuring timely
medical attention. Here’s how it works:
1. Wearable Devices:
o Patients wear devices with ECG sensors that monitor their heartbeats.
2. Data Transmission:
o The wearable device sends the data to the patient’s mobile phone.
o The phone forwards the data to a cloud-based web service for analysis.
3. Cloud Infrastructure:
o The web service stores ECG data using Amazon S3 (cloud storage).
o The data is processed using cloud servers managed by Aneka and a workflow
engine.
o If dangerous heart conditions are detected, doctors and emergency services are
notified immediately.
1. Elasticity:
o Cloud systems can automatically increase or decrease the number of servers based
on how many ECG requests need processing.
2. Accessibility (Ubiquity):
3. Cost Savings:
o Cloud services are paid for based on usage, reducing the need for expensive in-
house computer systems.
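The elasticity property can be sketched as a function from pending work to server count; the thresholds below are illustrative, not taken from any real ECG system:

```python
# Toy autoscaling rule: pick a server count from the request backlog,
# kept within fixed minimum and maximum bounds.
def scale(pending_requests, per_server=10, min_servers=1, max_servers=50):
    needed = -(-pending_requests // per_server)   # ceiling division
    return max(min_servers, min(max_servers, needed))

print(scale(0))      # 1  (never below the minimum)
print(scale(95))     # 10
print(scale(10000))  # 50 (capped at the maximum)
```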
Protein structure prediction is essential in biology, especially for research like drug development. It
involves finding the 3D shape of a protein from its gene sequence, which is important because a
protein's shape determines its function in the body. However, this process is computationally
intensive due to the large number of possible structures that need evaluation.
The Jeeva project uses cloud computing for protein structure prediction through a web portal
powered by Aneka (a cloud computing platform). Here’s how it works:
o The system uses Support Vector Machines (SVMs) to predict protein structures.
o The prediction process is like pattern recognition, classifying proteins into three
categories (E, H, C).
3. Task Execution:
o The entire prediction process is translated into a task graph and sent to Aneka for
processing.
o Once complete, results are displayed through the web portal for researchers to use.
1. Scalability:
o Cloud systems can expand or reduce computing power based on the workload.
2. Cost-Effectiveness:
o Researchers pay only for what they use, making this approach much cheaper than
owning a supercomputer.
3. Accessibility:
Gene expression data analysis is used in biology to measure how active thousands of genes are at
once. This helps scientists understand how treatments affect cells and is crucial for cancer
diagnosis and drug development.
What Is Cancer?
Cancer happens when certain genes mutate, causing uncontrolled cell growth.
Identifying which genes are mutated helps doctors diagnose and treat cancer more
effectively.
Gene expression profiling helps classify cancerous tumors based on their genetic activity.
However, gene datasets are very large (thousands of genes), while sample sizes are
usually small, making analysis challenging.
Scientists use learning classifiers, which are computer models that classify gene data using
rules.
o Designed for gene datasets with many genes and limited samples.
Since analyzing thousands of genes takes a lot of computing power, scientists use Cloud-CoXCS, a
cloud-based version of CoXCS powered by Aneka.
1. Parallel Processing:
o Different parts of the gene dataset are processed at the same time using cloud
servers.
3. Final Results:
o The system combines results from different cloud servers to provide a final
classification of the gene data.
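The split/process/combine pattern above can be sketched with a thread pool standing in for cloud nodes. Here classify_partition is a hypothetical placeholder for a real classifier such as CoXCS:

```python
from concurrent.futures import ThreadPoolExecutor

# Partitions of the dataset are classified in parallel, then the
# partial results are merged into one final classification.
def classify_partition(genes):
    # Hypothetical stand-in for a real learning classifier.
    return {g: ("up" if g % 2 else "down") for g in genes}

def parallel_classify(dataset, workers=4):
    parts = [dataset[i::workers] for i in range(workers)]
    merged = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(classify_partition, parts):
            merged.update(partial)
    return merged

labels = parallel_classify(list(range(6)))
print(labels)
```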
1. Scalability:
3. Cost-Effectiveness:
o Scientists only pay for the resources used, making it cheaper than owning powerful
computers.
By using cloud computing, researchers can analyze gene expression data faster and more
accurately, helping improve cancer diagnosis and treatment.
Geoscience: Satellite Image Processing
Geoscience involves studying Earth using data collected from satellites, sensors, and other
devices. This creates massive amounts of data, especially images from satellites, which need
special processing to be useful for tasks like weather forecasting, natural disaster management,
farming, and urban planning.
What Is GIS?
o Raw images from satellites are sent from ground stations to cloud-based computing
systems.
o SaaS (Software as a Service): Provides services like map creation and data
visualization.
o PaaS (Platform as a Service): Manages data import and image processing tasks.
3. Example Project:
1. Scalability:
o Ground stations avoid overloading their own computers by sending tasks to the
cloud.
3. Cost-Efficiency:
4. Faster Results:
By using cloud computing, geoscience researchers can handle vast amounts of satellite data
efficiently, enabling faster, smarter, and more cost-effective decision-making in areas like
agriculture, disaster response, and environmental management.
Business and consumer applications :
Definition:
CRM helps businesses manage relationships with customers by keeping track of
customer information, sales, and interactions.
Definition:
ERP is a complete business management system that integrates various business
functions like:
Less Popular: Large businesses often already have in-house ERP systems.
Transition Costs: Moving from in-house ERP to cloud ERP can be costly and
complex.
Example:
A car manufacturing company uses ERP to manage its supply chain, track production
progress, and handle employee payroll.
Key Difference:
Cloud CRM is more popular because it’s easier to adopt and affordable for businesses of
all sizes, while cloud ERP is less common due to its complexity and the difficulty of
switching from existing systems.
Salesforce.com Overview
1. Force.com Platform
o Initially designed for CRM, Force.com has evolved to support a variety of
cloud applications.
o It provides a scalable infrastructure that can handle different
applications and their needs.
2. Metadata Architecture
o Instead of storing business logic and data in fixed components, Salesforce
stores metadata (descriptions of data and logic) in a central place called
the Force.com store.
o This provides flexibility and scalability because applications don't depend
on specific components.
o The runtime engine fetches this metadata to execute the logic and
processes, which means all applications share a common structure.
3. Search Engine
o Salesforce includes a full-text search engine that helps users quickly
find data, even in large datasets.
o The search engine is constantly updated in the background as users
interact with the platform.
4. Customization Options
o Users can customize their CRM application in multiple ways:
Force.com Framework: Visual tools to define data or core
application structure.
APIs: Programmatic APIs allow developers to integrate using
popular programming languages.
APEX: A Java-like language that lets developers write scripts and
triggers to automate or customize processes.
1. Deployment Options
o On-Premises: Dynamics CRM can be installed and hosted on the
company’s own servers.
o Online: The online version is hosted in Microsoft’s data centers and
offered as a subscription service. This version is highly available, with a
99.9% Service Level Agreement (SLA) that guarantees uptime and
provides bonus credits if the service does not meet the agreement.
2. CRM Instances and Database
o Each CRM instance is deployed on a separate database to ensure data
isolation and security for different customers.
3. Core Features
o Marketing: Tools for managing marketing campaigns and customer
interactions.
o Sales: Sales automation features to help businesses track and close deals.
o Customer Relationship Management: Tools to manage customer
relationships effectively, improving customer service and satisfaction.
4. Access and Integration
o Dynamics CRM can be accessed via a web browser or programmatically
through SOAP and RESTful Web services. This makes it easy to
integrate with other Microsoft products and custom business applications.
5. Extensibility with Plugins
o The platform supports plugins, which are custom-developed pieces of
code that add specific functionalities. These plugins can be triggered by
events in the system (e.g., when a new customer is added).
6. Windows Azure Integration
o Dynamics CRM can integrate with Windows Azure, Microsoft’s cloud
platform, to develop and add new features, improving scalability and
performance.
NetSuite Overview
Datacenters:
NetSuite's services are powered by two large datacenters located on the East and West
coasts of the United States. These are connected by redundant links to ensure
continuous availability.
Uptime Guarantee:
NetSuite guarantees 99.5% uptime, ensuring high availability of its services.
Productivity:
Cloud computing offers the benefit of accessing data anytime, anywhere, and from
any device with an internet connection. Document storage is one of the most common
applications of this technology. Before cloud computing, online storage solutions existed
but were not as popular. With cloud technologies, these solutions have become more
advanced, user-friendly, and widely accessible.
Dropbox:
What is Dropbox?
o Dropbox is a popular cloud storage service that allows users to
synchronize files across multiple devices and platforms.
o Users can store documents, images, and other files in Dropbox's cloud
storage.
Key Features:
o Free Storage: Dropbox offers a certain amount of free storage to users.
o Synchronization: Files are stored in a special Dropbox folder on users'
devices. Any changes made to files in this folder are automatically
synchronized across all devices where Dropbox is installed, ensuring the
latest version is always available.
o Access: Users can access their files either through:
A web browser, or
By installing the Dropbox client on their devices, which creates a
special folder.
o Platform Availability: Dropbox works across multiple platforms,
including Windows, Mac, Linux, and mobile devices (iOS and Android).
o Seamless Integration: The service works seamlessly across all devices,
with no manual syncing required.
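The "upload only what changed" idea behind sync services can be sketched with content hashes. This is a simplification (the real Dropbox protocol also chunks files and deduplicates blocks), but the core test is the same:

```python
import hashlib

# Sync-by-content-hash in miniature: a file is re-uploaded only when
# its hash differs from the last version the server has seen.
def needs_sync(local_bytes, remote_hash):
    return hashlib.sha256(local_bytes).hexdigest() != remote_hash

v1 = b"draft 1"
h1 = hashlib.sha256(v1).hexdigest()
print(needs_sync(v1, h1))          # False: unchanged, skip the upload
print(needs_sync(b"draft 2", h1))  # True: changed, upload the new copy
```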
iCloud:
What is iCloud?
o iCloud is a cloud-based document-sharing and synchronization service
provided by Apple for its iOS and Mac devices.
Key Features:
o Automatic Synchronization: iCloud automatically syncs documents,
photos, and videos across all your Apple devices without requiring any
action from the user. For example, photos taken on an iPhone will
automatically appear in iPhoto on your Mac.
o Transparent Process: Unlike Dropbox, which requires users to interact
with a special folder, iCloud works in the background, keeping everything
in sync.
o iOS and Mac Focused: iCloud is designed primarily for Apple devices
(iPhones, iPads, and Macs) and works seamlessly within Apple's
ecosystem. Currently, there is no web interface for iCloud, meaning it’s
limited to Apple products only.
Google Docs is a cloud-based office suite that provides essential office automation tools
and enables collaborative editing online. It is delivered as a Software-as-a-Service
(SaaS), meaning that users can access it through the web without the need for
installation.
Ubiquitous Access: Access your documents from any device, at any time.
Elasticity: The service can scale to accommodate increasing numbers of users.
No Installation or Maintenance: Users don’t need to worry about installation or
software maintenance—everything is handled by Google.
Core Functionalities as a Service: Google Docs provides essential office tools
as a service, without the need for users to install or manage them.
Conclusion: Google Docs exemplifies what cloud computing can offer to end users:
easy access, collaboration, and elimination of installation and maintenance
costs, all delivered seamlessly through the cloud.
1. EyeOS
Architecture:
o Server Side: EyeOS stores user profiles and data. The server handles user
login and manages the desktop environment and applications.
o Client Side: Users access EyeOS through a web browser, where all the
necessary JavaScript libraries are loaded to create the desktop interface
and run the applications.
o AJAX Communication: Applications within the EyeOS desktop interact
with the server using AJAX, allowing real-time updates and operations like
document editing, file management, and communication (email and chat).
Customization: EyeOS allows for the development of new applications using its
API. Applications are created with server-side PHP and JavaScript files that handle
both functionality and user interaction.
Deployment: Individual users can use EyeOS via the web, and organizations can
set up a private EyeOS cloud to manage employees’ desktop environments
centrally.
2. XIOS/3
Architecture:
o Client-Side: XIOS/3 relies on the client to render the user interface,
manage processes, and bind XML data to user interface components.
o Server-Side: The server handles the core functions, such as managing
transactions for collaborative document editing and the logic behind
installed applications.
Development Environment (XIDE): XIOS/3 provides an environment for
developers called XIDE (Xcerion Integrated Development Environment). This tool
allows users to quickly develop applications using a visual interface and XML
documents to define business logic. Developers can create applications that
interact with data via XML Web services.
Open-Source: XIOS/3 is open-source, and third-party developers can contribute
applications to a marketplace, expanding the functionality of the XIOS/3 desktop.
Focus on Collaboration: XIOS/3 simplifies collaboration by integrating services
and applications using XML Web services. It enables users to easily share and edit
documents and applications in real time.
Facebook is one of the largest social networking platforms in the world, with over 800
million users. To support its massive growth, Facebook has built a robust, scalable cloud
infrastructure, enabling it to add capacity quickly while maintaining high performance.
Here’s a look at how Facebook’s infrastructure works:
1. Scalable Infrastructure
Data Centers: Facebook operates two primary data centers that are optimized to
reduce costs and minimize environmental impact. These data centers are built
using inexpensive hardware but are carefully designed to be efficient.
Cloud Platform: Facebook's infrastructure supports its core social network and
provides APIs that allow third-party applications to integrate with Facebook’s
services. This helps deliver additional features like social games, quizzes, and
other services developed by external developers.
2. Technology Stack
5. Performance Optimization
Caching: One of the key strategies Facebook uses to ensure high performance is
caching. Data that is frequently requested is stored temporarily in memory,
reducing the time it takes to retrieve it from the database. This caching process
helps to speed up access to user data, improving the overall user experience.
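The caching strategy can be shown in miniature with Python's built-in lru_cache: the first lookup pays the full cost, and repeats are served from memory. The fetch_profile function is a hypothetical stand-in for a database query, not part of any real Facebook API:

```python
import functools
import time

# Remember the results of expensive lookups so repeated requests are
# served from memory instead of being recomputed.
@functools.lru_cache(maxsize=1024)
def fetch_profile(user_id):
    time.sleep(0.01)   # simulate a slow database round-trip
    return (user_id, f"user{user_id}")

fetch_profile(42)   # miss: pays the "database" cost
fetch_profile(42)   # hit: served from the in-memory cache
print(fetch_profile.cache_info())  # hits=1, misses=1
```

Facebook's production caching layer (memcached) applies the same idea across a fleet of dedicated cache servers rather than inside one process.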
Cloud-Based Media Applications: Animoto and Maya Rendering with
Aneka
Animoto is a popular cloud-based media application that allows users to easily create
videos from images, music, and video clips. The platform provides a simple, user-friendly
interface that enables users to:
The core value of Animoto lies in its ability to generate visually appealing videos quickly
and effortlessly. The service uses an AI-driven engine that automatically selects
animation and transition effects based on the content of the images and music. This
means that users only need to organize the content, and the system handles the creative
process. If users are unsatisfied with the initial result, they can render the video again,
with the AI engine creating a different version.
Free and Premium Plans: Users can create 30-second videos for free. For
longer videos and more templates, users must subscribe to a paid plan.
Amazon EC2: Used for the web front-end and worker nodes that process video
rendering tasks.
Amazon S3: Used for storing images, music, and videos.
Amazon SQS (Simple Queue Service): Manages communication between
different system components.
Rightscale: A cloud management tool that auto-scales the system by monitoring
load and adjusting the number of worker nodes based on demand.
The system is designed to handle high scalability and performance, using up to 4,000
EC2 servers during peak times. The architecture ensures that the system can process
requests without losing data, though users may experience temporary delays during
rendering.
Cloud computing is also playing a significant role in industries like engineering and movie
production, where rendering complex 3D models is a critical part of the design process.
Rendering such models is computationally expensive, especially when dealing with large
numbers of frames in high-quality 3D images. Cloud computing enables companies to
speed up this process by leveraging scalable resources.
One such application is the Maya Rendering with Aneka solution used by the GoFront
Group (a division of China Southern Railway). This group is responsible for designing
high-speed electric locomotives, metro cars, and other transportation vehicles. The
design and prototype testing of these vehicles often require high-quality, 3D renderings,
which are crucial for identifying and solving design issues.
Cloud Solution: To solve this challenge, the GoFront Group implemented a private
cloud solution for rendering tasks, using Aneka. Aneka is a cloud-based platform that
allows businesses to leverage distributed computing resources for demanding
computational tasks. By converting the department's network of desktop computers into
a desktop cloud, GoFront was able to significantly speed up the rendering process.
Aneka provides the necessary computing power to handle large-scale rendering tasks
efficiently, reducing the time spent on each iteration.
Cloud-Based Video Encoding and Multiplayer Online Gaming
Video encoding and transcoding are crucial processes for converting videos into
different formats to make them accessible across a variety of devices and platforms.
These processes are computationally intensive, requiring significant processing power
and storage capacity. Traditional encoding solutions often involve high upfront costs and
lack flexibility in handling different formats. With the rise of cloud technologies, services
like Encoding.com make video encoding and transcoding more accessible and scalable.
Cloud Integration: The service integrates with both Amazon Web Services
(AWS) (EC2, S3, and CloudFront) and Rackspace (Cloud Servers, Cloud Files,
and Limelight CDN), enabling flexible and scalable transcoding operations.
Multiple Access Methods: Users can interact with the service through:
o The Encoding.com website
o Web service XML APIs
o Desktop applications
o Watched folders
Customization: Users specify the video source, destination format, and target
location for the transcoded video. The service also supports additional video-
editing operations, such as inserting thumbnails, watermarks, or logos, and it
extends to audio and image conversion.
Pricing Models: Encoding.com offers various pricing models to suit different
needs:
o Monthly subscription
o Pay-as-you-go (by batches)
o Special pricing for high volumes
Performance and Scalability: With more than 2,000 customers and over 10 million
videos processed, Encoding.com provides reliable performance backed by its cloud
infrastructure, allowing users to scale their transcoding needs seamlessly without the
need for dedicated hardware.
Titan Inc. (now Xfire), a gaming company based in California, implemented a cloud-
based solution to offload the game log processing for its portal to a private Aneka
Cloud. The prototype allowed the company to: