Cloud Computing
UNIT- I
Pros:
- Rapid Deployment
- Low Cost
Cons:
- Not much freedom
- Choices of tools are limited
HPC Architecture:
HPC cluster: An HPC cluster consists of hundreds or thousands of compute servers
that are networked together.
HPC node: Each server is called a node. The nodes in each cluster work in parallel
with each other, boosting processing speed to deliver high-performance computing.
Multicore processors have two or more processor cores on the same integrated
circuit chip. Early on in practical applications, multiple cores were used
independently of each other.
Concurrency isn’t as much of an issue if cores are not working in tandem on the
same problem. Supercomputers and high-performance computing (HPC) saw
multiple cores first.
1.3 Parallel Computing:
Most conventional computers have SISD (single-instruction, single-data)
architecture, where all the instructions and data to be processed have to be
stored in primary memory.
Single-instruction, multiple-data (SIMD):
- Single instruction: All processing units execute the same instruction at any
given clock cycle.
- Multiple data: Each processing unit can operate on a different data element.
• This type of machine typically has an instruction dispatcher, a very high-bandwidth
internal network, and a very large array of very small-capacity instruction units.
• Best suited for specialized problems characterized by a high degree of regularity, such
as image processing.
• Synchronous (lockstep) and deterministic execution.
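The SIMD model described above can be illustrated (as a sketch, not on real vector hardware) with NumPy, where a single operation is applied element-wise to a whole array at once, mirroring "same instruction, different data element per processing unit":

```python
import numpy as np

# One instruction ("add 10") applied to many data elements in lockstep,
# mirroring the SIMD model: the same operation, a different datum per lane.
pixels = np.array([12, 48, 200, 255, 0, 99], dtype=np.int32)
brightened = pixels + 10  # applied element-wise across the whole array

print(brightened.tolist())  # [22, 58, 210, 265, 10, 109]
```

This is exactly the "high degree of regularity" SIMD favours: image-processing kernels apply an identical operation to every pixel.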
Multiple-instruction, single-data (MISD) :
• An MISD computer is a multiprocessor machine capable of executing different
instructions on its processing elements, all of them operating on the same data set.
• Each processing unit operates on the data independently via independent instruction
streams.
• Few actual examples of this class of parallel computer have ever existed. One is the
experimental Carnegie-Mellon C.mmp computer (1971).
The computing resources in most of the organizations are underutilized but are
necessary for certain operations.
The idea of grid computing is to make use of such unutilized computing power by
the needy organizations, and thereby the return on investment (ROI) on
computing investments can be increased.
Several machines on a network collaborate under a common protocol and work as
a single virtual supercomputer to get complex tasks done. This offers powerful
virtualization by creating a single system image that grants users and applications
seamless access to IT capabilities.
A typical grid computing network consists of three machine types:
- Control node/server: A control node is a server or a group of servers that
administers the entire network and maintains the record of resources in the
network pool.
- Provider/grid node: A provider or grid node is a computer that contributes its
resources to the network resource pool.
- User: A user refers to the computer that uses the resources on the network to
complete the task.
Grid computing operates by running specialized software on every computer
involved in the grid network. The software coordinates and manages all the tasks
of the grid. Fundamentally, the software segregates the main task into subtasks
and assigns the subtasks to each computer. This allows all the computers to work
simultaneously on their respective subtasks. Upon completion of the subtasks, the
outputs of all computers are aggregated to complete the larger main task.
In grid computing, each computing task is broken into small fragments and
distributed across computing nodes for efficient execution. Each fragment is
processed in parallel, and, as a result, a complex task is accomplished in less time.
Let’s consider this equation:
X = (4 x 7) + (3 x 9) + (2 x 5)
Typically, on a desktop computer, the steps needed here to calculate the value of X
may look like this:
Step 1: X = 28 + (3 x 9) + (2 x 5)
Step 2: X = 28 + 27 + (2 x 5)
Step 3: X = 28 + 27 + 10
Step 4: X = 65
However, in a grid computing setup, the steps are different as three processors or
computers calculate different pieces of the equation separately and combine them
later. The steps look like this:
Step 1: X = 28 + 27 + 10
Step 2: X = 65
As seen above, grid computing combines the involved steps due to the multiplicity of
available resources. This implies fewer steps and shorter timeframes.
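The split-and-aggregate scheme above can be sketched in Python with `concurrent.futures`, treating each product as a subtask handed to its own worker (a stand-in for a grid node) while the main program plays the control node that sums the results:

```python
from concurrent.futures import ThreadPoolExecutor
from operator import mul

# Subtasks: each (a, b) pair is one fragment of the larger computation,
# assigned to its own worker (a stand-in for a grid node).
subtasks = [(4, 7), (3, 9), (2, 5)]

with ThreadPoolExecutor(max_workers=3) as pool:
    # Workers compute their products in parallel: 28, 27, 10
    partial_results = list(pool.map(mul, *zip(*subtasks)))

# The "control node" aggregates the subtask outputs into the final answer.
x = sum(partial_results)
print(x)  # 65
```

In a real grid the workers would be separate machines coordinated by grid middleware; threads are used here only to keep the sketch self-contained.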
Advantages
- Can solve larger, more complex problems in a shorter time
- Easier to collaborate with other organizations
- Makes better use of existing hardware
Disadvantages
- Grid software and standards are still evolving
- Learning curve to get started
- Non-interactive job submission
Public Cloud
Public clouds are managed by third parties that provide cloud services over the
internet to the public; these services are available under pay-as-you-go billing
models. The fundamental characteristic of public clouds is multitenancy. A public
cloud is meant to serve multiple users, not a single customer. Each user requires a
virtual computing environment that is separated, and most likely isolated, from
other users.
Private cloud
Private clouds are distributed systems that work on private infrastructure and provide
the users with dynamic provisioning of computing resources. Instead of a pay-as-you-
go model, private clouds may use other schemes that manage cloud usage and bill
the different departments or sections of an enterprise proportionally. Private cloud
providers include HP Data Centers, Ubuntu, Elastic-Private cloud, Microsoft, etc.
Hybrid cloud:
A hybrid cloud is a heterogeneous distributed system formed by combining facilities
of the public cloud and private cloud. For this reason, they are also
called heterogeneous clouds.
A major drawback of private deployments is the inability to scale on-demand and
efficiently address peak loads. Here public clouds are needed. Hence, a hybrid cloud
takes advantage of both public and private clouds.
Community cloud:
Community clouds are distributed systems created by integrating the services of
different clouds to address the specific needs of an industry, a community, or a
business sector.
1. Software as a Service(SaaS)
Software-as-a-Service (SaaS) is a way of delivering services and applications over the
Internet. Instead of installing and maintaining software, we simply access it via the
Internet, freeing ourselves from the complex software and hardware management.
SaaS provides a complete software solution that you purchase on a pay-as-you-
go basis from a cloud service provider. The SaaS applications are sometimes
called Web-based software, on-demand software, or hosted software.
Advantages of SaaS
Advantages of IaaS:
1. Cost-Effective: Eliminates capital expense and reduces ongoing costs; IaaS
customers pay on a per-user basis, typically by the hour, week, or month.
2. Website hosting: Running websites using IaaS can be less expensive than
traditional web hosting.
3. Security: The IaaS Cloud Provider may provide better security than your existing
software.
4. Maintenance: There is no need to manage the underlying data center or the
introduction of new releases of the development or underlying software. This is all
handled by the IaaS Cloud Provider.
Advantages of XaaS: As this is a combined service, so it has all the advantages of every
type of cloud service.
Advantages of FaaS:
- Highly Scalable: Auto scaling is done by the provider depending upon the
demand.
- Cost-Effective: Pay only for the number of events executed.
- Code Simplification: FaaS allows users to write and deploy code as small,
independent functions instead of uploading an entire application at once.
- Maintenance: Maintaining the code is enough; there is no need to worry about
the servers.
- Flexibility: Functions can be written in any programming language.
Disadvantages of FaaS:
- Less control over the system.
Engineering Biocomputers :
The behavior of biologically derived computational systems such as these relies on
the particular molecules that make up the system, which are primarily proteins but
may also include DNA molecules.
Nanobiotechnology provides the means to synthesize the multiple chemical
components necessary to create such a system.
The chemical nature of a protein is dictated by its sequence of amino acids—the
chemical building blocks of proteins. This sequence is in turn dictated by a specific
sequence of DNA nucleotides—the building blocks of DNA molecules.
Proteins are manufactured in biological systems through the translation
of nucleotide sequences by biological molecules called ribosomes, which assemble
individual amino acids into polypeptides that form functional proteins based on the
nucleotide sequence that the ribosome interprets.
• Computing technologies are the technologies that are used to manage, process,
and communicate data.
• Wireless simply means without any wire, i.e., connecting with other devices without
any physical connection.
• Wireless computing is transferring data or information between computers or
devices that are not physically connected to each other but have a wireless
network connection.
• For example, mobile devices, Wi-Fi, wireless printers and scanners, etc. Mobile
devices are not physically connected, yet they can still transfer data.
• Such devices are handheld, but communication between the various resources
takes place wirelessly.
• Mobile communication for voice applications (e.g., cellular phones) is widely
established throughout the world and is witnessing very rapid growth in all its
dimensions.
• An extension of this technology is the ability to send and receive data across
various cellular networks using small devices such as smartphones.
• A mobile computing device does not require a fixed physical connection to
transfer data or information between devices.
• For example laptops, tablets, smartphones, etc.
• Mobile computing allows transferring of the data/information, audio, video, or any
other document without any connection to the base or central network.
• These computing devices are the most widely used technologies nowadays.
• There are some wireless/mobile computing technologies such as:
1. Global System for Mobile Communications (GSM)
2. Code-Division Multiple Access (CDMA)
3. Wireless in Local Loop (WLL)
4. General Packet Radio Service (GPRS)
5. Short Message Service (SMS)
Mobile communication can be divided in the following four types:
1. Fixed and Wired
2. Fixed and Wireless
3. Mobile and Wired
4. Mobile and Wireless
1. Fixed and Wired: In Fixed and Wired configuration, the devices are fixed at a
position, and they are connected through a physical link to communicate with other
devices.
For Example, Desktop Computer.
2. Fixed and Wireless: In Fixed and Wireless configuration, the devices are fixed at a
position, and they are connected through a wireless link to make communication
with other devices.
For Example, Communication Towers, Wi-Fi router
3. Mobile and Wired: In the Mobile and Wired configuration, some devices are wired
and some are mobile; together they communicate with other devices.
For Example, Laptops.
4. Mobile and Wireless: In Mobile and Wireless configuration, the devices can
communicate with each other irrespective of their position. They can also connect to
any network without the use of any wired device.
For Example, WiFi Dongle.
A bit of data is represented by a single atom that is in one of two states, denoted by
|0> and |1>. A single bit of this form is known as a qubit.
A physical implementation of a qubit could use the two energy levels of an atom: an
excited state representing |1> and a ground state representing |0>.
1. Superposition
Given two states, a quantum particle exists in both states at the same time.
Alternatively, we may say that the particle exists in any combination of the two
states.
The particle's state is always changing, but it can be programmed such that, for
example, it is in one state 30% of the time and in the other state 70% of the time.
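The 30%/70% example can be mimicked with a classical simulation of measurement statistics (a sketch only; it does not reproduce real quantum behavior). A qubit a|0> + b|1> yields |0> with probability |a|^2 and |1> with probability |b|^2:

```python
import math
import random

# Qubit state a|0> + b|1>; measurement probabilities are |a|^2 and |b|^2.
a = math.sqrt(0.3)  # amplitude of |0>
b = math.sqrt(0.7)  # amplitude of |1>
assert abs(a**2 + b**2 - 1.0) < 1e-9  # amplitudes must be normalised

random.seed(42)  # fixed seed so the sketch is repeatable
shots = 10_000
ones = sum(random.random() < b**2 for _ in range(shots))
print(f"measured |1> in {ones / shots:.1%} of shots")  # close to 70%
```

Each "shot" models preparing the same superposition and measuring it once; only the statistics over many shots reveal the 30/70 split.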
2. Entanglement
Two quantum particles can form a single system and influence each other.
Measurements from one can be correlated with measurements from the other.
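The correlated measurements can be mimicked with a toy classical simulation of the Bell state (|00> + |11>)/sqrt(2): each joint measurement randomly yields 0 or 1, but the two particles always agree (a sketch of the statistics, not of the underlying physics):

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

def measure_bell_pair():
    """Toy model of measuring the Bell state (|00> + |11>)/sqrt(2):
    the joint state collapses to 00 or 11 with equal probability, so
    the two particles' outcomes are perfectly correlated."""
    outcome = random.choice([0, 1])
    return outcome, outcome  # particle A, particle B

results = [measure_bell_pair() for _ in range(1000)]
print(all(a == b for a, b in results))  # True: outcomes always match
```

Individually each particle looks random (roughly half 0s, half 1s), yet the pair is never found in disagreement; that correlation is what entanglement contributes.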
3. Quantum Interference:
Trying to measure the current state of a quantum particle leads to a collapse; that
is, the measured state is one of the two states, not something in between.
External interference influences the probability of the particle collapsing to one
state or the other.
Quantum computing systems must therefore be protected from external
interference.
• Moore’s Law states that the number of transistors on a computer chip doubles
every eighteen months.
• Traditional transistors can no longer keep up.
• Too many transistors will slow down processor speeds.
• Transistors have physical size limits.
• Metallic wires limit the speed of transmission.
• Resistance per unit length in the chip increases as wires shrink, causing more
power usage and excess heating.
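The doubling claim can be turned into simple arithmetic: after t months the transistor count is roughly N0 * 2^(t/18). A minimal sketch, using a hypothetical starting count of one million transistors and counting only completed 18-month periods:

```python
# Hypothetical illustration of Moore's Law: count doubles every 18 months.
def transistors(initial: int, months: int, doubling_period: int = 18) -> int:
    """Projected transistor count after `months`, assuming one exact
    doubling per completed `doubling_period` (an idealised model)."""
    return initial * 2 ** (months // doubling_period)

# Starting from a hypothetical 1,000,000 transistors:
for years in (0, 3, 6):
    print(years, "years ->", transistors(1_000_000, years * 12))
# 36 months = two doubling periods -> x4; 72 months = four -> x16
```

The physical limits listed above are exactly why this idealised exponential cannot continue indefinitely with traditional transistors.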
• Optical computing system uses the photons in visible light or infrared beams,
rather than electric current, to perform digital computations.
• An electric current flows at only about 10% of the speed of light.
• This limits the rate at which data can be exchanged over long distances and is
one of the factors that led to the evolution of optical fiber.
• An optical computer could be developed that performs operations 10 or more
times faster than a conventional electronic computer.
• Nanocomputing refers to computing systems that are constructed from nanoscale
components.
• Nanotechnology is science, engineering, and technology conducted at the
nanoscale, which is about 1 to 100 nanometers.
• Nanoscience and nanotechnology are the study and application of extremely small
things and can be used across all the other science fields, such as chemistry,
biology, physics, materials science, and engineering.
• One nanometer is a billionth of a meter, or 10^-9 meters.
• Nanoscience and nanotechnology involve the ability to see and to control individual
atoms and molecules.
• Everything on Earth is made up of atoms—the food we eat, the clothes we wear, the
buildings and houses we live in, and our own bodies.
• Applications span interdisciplinary areas, ranging from computing and medicine
to stain-resistant textiles and suntan lotions.
A quantum nanocomputer would work by storing data in the form of atomic quantum
states or spin.
• Technology of this kind is already under development in the form of single-electron
memory (SEM) and quantum dots.
• The energy state of an electron within an atom, represented by the electron energy
level or shell, can theoretically represent one, two, four, eight, or even 16 bits of
data.
• The main problem with this technology is instability.
• Instantaneous electron energy states are difficult to predict and even more difficult
to control.
• An electron can easily fall to a lower energy state, emitting a photon; conversely, a
photon striking an atom can cause one of its electrons to jump to a higher energy
state.
Advantages
- High computing performance
- Low power computing
- Easily portable and flexible
- Faster processing
- Lighter and small computer devices
- Noise Immunity
Disadvantages
- It is very expensive, and developing it can cost a lot of money.
- It is also quite difficult to manufacture.
- Because these particles are very small, health problems can arise from the
inhalation of these minute particles.
Applications
- Breaking Ciphers
- Statistical Analysis
- Factoring large numbers
- Solving problems in theoretical physics
- Solving optimization problems in many variables