Cloud Computing


Department of Computer Science Engineering

Name of the faculty : Mrs. M SilpaRaj


Name of the Subject : CLOUD COMPUTING (CS714PE)
Class and Section : VI B. Tech CSE
Semester : I (ODD Semester)

UNIT- I : Topics to be covered

1. Introduction to computing paradigms
2. High performance computing
3. Parallel computing
4. Distributed computing
5. Cluster computing
6. Grid computing
7. Cloud computing
8. Bio computing
9. Mobile computing
10. Quantum computing
11. Optical computing
12. Nano computing

UNIT – I
NOTES
Cloud Computing : It is the use of remote servers on the internet to store, manage and
process data rather than a local server or your personal computer.
Service Providers :
– Google App Engine
– Windows Azure Platform
– Amazon Web Services (AWS)

Pros :
– Rapid deployment
– Low cost

Cons :
– Not much freedom
– Limited choice of tools

1. Introduction to computing paradigms

 A computing paradigm is the technique of linking two or more computers into a network.
 Paradigm : a model or example that shows how something works.
 Computing : any activity that uses computers.
 Computing paradigm : a model or example that shows how computers work.
 Computing is the process of utilizing computer technology to complete a task.
 Computing may involve computer hardware and/or software, but it must involve some form of a computer system.
 Computing includes :
- Designing, developing and building hardware and software systems.
- Processing, structuring and managing various kinds of information.
- Doing scientific research on and with computers.
 In the domain of computing, many different standard practices are followed, based on inventions and technological advancements. The various computing paradigms are:
1. Parallel Computing
2. Distributed Computing
3. Cluster Computing
4. Grid Computing
5. Cloud Computing
6. Bio Computing
7. Mobile Computing
8. Quantum Computing
9. Optical Computing
10. Nano Computing
1.2 High-Performance Computing :

- High performance computing (HPC) is the ability to process data and perform complex calculations at high speeds.
- As technologies like the Internet of Things (IoT), artificial intelligence (AI), and 3-D imaging evolve, the size and amount of data that organizations have to work with are growing exponentially.
- For many purposes, such as streaming a live sporting event, tracking a developing storm, testing new products, or analyzing stock trends, the ability to process data in real time is crucial.
- To keep a step ahead of the competition, organizations need lightning-fast, highly reliable IT infrastructure to process, store, and analyze massive amounts of data.

Main components of HPC :

 To build a high performance computing architecture, compute servers are networked together into a cluster.
 Software programs and algorithms are run simultaneously on the servers in the cluster.
 The cluster is networked to the data storage to capture the output.
 Together, these components operate seamlessly to complete a diverse set of tasks.

HPC Architecture :
 HPC cluster: An HPC cluster consists of hundreds or thousands of compute servers
that are networked together.
 HPC Node: Each server is called a node. The nodes in each cluster work in parallel
with each other, boosting processing speed to deliver high performance computing.
 Multicore processors have two or more processor cores on the same integrated chip. Early on in practical applications, multiple cores were used independently of each other.
 Concurrency isn’t as much of an issue if cores are not working in tandem on the
same problem. Supercomputers and high-performance computing (HPC) saw
multiple cores first.
1.3 Parallel Computing :

- Parallel computing is a type of computing where multiple computer systems are used simultaneously.
- A problem is broken into sub-problems, which are then further broken down into instructions.
- The instructions from each sub-problem are executed concurrently on different processors.
- Parallel computing is also one of the facets of HPC.
- Here, a set of processors works cooperatively to solve a computational problem. These processor machines or CPUs are mostly of homogeneous type.
- Therefore, this definition is the same as that of HPC and is broad enough to include supercomputers that have hundreds or thousands of processors interconnected with other resources.
- One can distinguish between conventional (also known as serial, sequential, or von Neumann) computers and parallel computers by the way applications are executed.

- In serial or sequential computers, the following apply:
 A program runs on a single computer/processor machine having a single CPU.
 A problem is broken down into a discrete series of instructions.
 Instructions are executed one after another.

- In parallel computing, since there is simultaneous use of multiple processor machines, the following apply (a minimal sketch follows this list):
 It is run using multiple processors (multiple CPUs).
 A problem is broken down into discrete parts that can be solved concurrently.
 Each part is further broken down into a series of instructions.
 Instructions from each part are executed simultaneously on different processors.
 An overall control/coordination mechanism is employed.
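A minimal sketch of this decomposition using Python's standard multiprocessing module (the worker function, chunking scheme, and worker count are illustrative choices, not part of the notes): the problem is broken into four parts, the parts execute simultaneously on different processors, and an overall coordination step combines the partial results.

```python
from multiprocessing import Pool

def solve_part(chunk):
    # Each part is itself a series of instructions executed on one processor;
    # here the "instructions" just sum the squares of a slice of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Break the problem into four discrete parts that can be solved concurrently.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partial_results = pool.map(solve_part, chunks)  # parts run simultaneously
    print(sum(partial_results))  # overall control/coordination step
```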
 M.J. Flynn proposed a classification for the organization of a computer system by
the number of instructions and data items that are manipulated simultaneously.
They are
1. Single-instruction, single-data (SISD)
2. Single-instruction, multiple-data (SIMD)
3. Multiple-instruction, single-data (MISD)
4. Multiple-instruction, multiple-data (MIMD)
Single-instruction, single-data (SISD) :
 Single instruction: Only one instruction stream is executed by the CPU during one clock cycle.
 Single data stream: Only one data stream is used as input during one clock cycle.
 A SISD computing system is a uniprocessor machine capable of executing a single instruction operating on a single data stream.
 Most conventional computers have SISD architecture, where all the instructions and data to be processed are stored in primary memory.

Single-instruction, multiple-data (SIMD) :


• A SIMD system is a multiprocessor machine capable of executing the same instruction on all the CPUs but operating on different data streams.
• A type of parallel computer
- Single instruction: All processing units execute the same instruction at any given
clock cycle

- Multiple data: Each processing unit can operate on a different data element
• This type of machine typically has an instruction dispatcher, a very high-bandwidth
internal network, and a very large array of very small-capacity instruction units.
• Best suited for specialized problems characterized by a high degree of regularity, such
as image processing.
• Synchronous (lockstep) and deterministic execution.
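The SIMD idea can be illustrated with NumPy, whose array operations apply a single instruction stream across many data elements at once (and are dispatched to hardware SIMD units where available); the "image" values below are arbitrary placeholders.

```python
import numpy as np

# One instruction (scale and offset) operates on every pixel in lockstep,
# the regular, data-parallel pattern that SIMD machines favor.
pixels = np.arange(12, dtype=np.float32).reshape(3, 4)
brightened = pixels * 1.5 + 10.0  # same instruction, multiple data elements
print(brightened)
```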
Multiple-instruction, single-data (MISD) :
• An MISD computing system is a multiprocessor machine capable of executing different instructions on its processing elements, but all of them operating on the same data set.

• A single data stream is fed into multiple processing units.

• Each processing unit operates on the data independently via independent instruction
streams.

• Few actual examples of this class of parallel computer have ever existed. One is the experimental Carnegie-Mellon C.mmp computer (1971).

• Example: multiple cryptography algorithms attempting to crack a single coded message.

Multiple-instruction, multiple-data (MIMD) :

 An MIMD computing system is a multiprocessor machine capable of executing multiple instructions on multiple data sets: each processing element has its own instruction stream and its own data stream.
 Currently, the most common type of parallel computer.
 Most modern computers fall into this category.
 Multiple Instruction: every processor may be executing a different instruction stream
 Multiple Data: every processor may be working with a different data stream
 Execution can be synchronous or asynchronous, deterministic or non-deterministic
 Examples: most current supercomputers, networked parallel computer "grids" and
multi-processor SMP computers - including some types of PCs.
1.4 Distributed Computing :

 A distributed system is a collection of autonomous computers that are interconnected with each other and cooperate, thereby sharing resources such as printers and databases.
 Distributed computing is also a computing system that consists of multiple computers or processor machines connected through a network, which can be homogeneous or heterogeneous, but which runs as a single system.
 The connectivity can be such that the CPUs in a distributed system are physically close together and connected by a local network, or they can be geographically distant and connected by a wide area network.

 The heterogeneity in a distributed system supports any number of possible configurations of the processor machines, such as mainframes, PCs, workstations, and minicomputers. The goal of distributed computing is to make such a network work as a single computer. Distributed computing systems are advantageous over centralized systems because they support the following characteristic features:
1. Scalability: The ability of the system to be easily expanded by adding more machines as needed (and vice versa), without affecting the existing setup.
2. Redundancy or replication: Several machines can provide the same services, so that even if one is unavailable (or fails), work does not stop, because other similar computing resources are available.

Motivational factors for distributed computing


• There are four major reasons for building distributed systems: resource sharing,
computation speedup, reliability, and communication.
- Resource Sharing : If a number of different sites (with different capabilities)
are connected to one another, then a user at one site may be able to use
the resources available at another. For example, a user at site A may be
using a laser printer located at site B. Meanwhile, a user at B may access a
file that resides at A.
- Computation Speedup : If a particular computation can be partitioned into
subcomputations that can run concurrently, then a distributed system
allows us to distribute the subcomputations among the various sites; the
subcomputations can be run concurrently and thus provide computation
speedup.
- Reliability : If one site fails in a distributed system, the remaining sites can continue operating, giving the system better reliability. If the system is composed of multiple large autonomous installations (that is, general-purpose computers), the failure of one of them should not affect the rest.
- Communication : When several sites are connected to one another by a
communication network, the users at different sites have the opportunity to
exchange information.
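A toy sketch of such message passing, using TCP sockets between two "sites" that here run on one machine (in a real distributed system the peers would be on different hosts; the address, port, and message contents are illustrative assumptions):

```python
import socket
import threading

def site_b(ready):
    # Site B offers a service and replies to messages from other sites.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 50007))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    msg = conn.recv(1024).decode()
    conn.sendall(f"ack from site B: {msg}".encode())
    conn.close()
    srv.close()

ready = threading.Event()
threading.Thread(target=site_b, args=(ready,)).start()
ready.wait()

# Site A exchanges information with site B over the network.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 50007))
cli.sendall(b"print job from site A")
print(cli.recv(1024).decode())
cli.close()
```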

Parallel Computing vs. Distributed Computing

- Parallel computing is a computation type in which multiple processors execute multiple tasks simultaneously. Distributed computing is a computation type in which networked computers communicate and coordinate the work through message passing to achieve a common goal.
- Parallel computing occurs on one computer. Distributed computing occurs between multiple computers.
- In parallel computing, multiple processors perform the processing. In distributed computing, computers rely on message passing.
- In parallel computing, all processors share a single master clock for synchronization. In distributed computing there is no global clock; synchronization algorithms are used instead.
- In parallel computing, computers can have shared memory or distributed memory. In distributed computing, each computer has its own memory.

1.5 Cluster Computing :

 A cluster computing system consists of a set of the same or similar type of processor machines connected using a dedicated network infrastructure.
 All processor machines share resources such as a common home directory and have software such as a message passing interface (MPI) implementation installed to allow programs to be run across all nodes simultaneously (a minimal MPI sketch appears below).
 This is also a kind of HPC category. The individual computers in a cluster are referred to as nodes.
 A cluster is regarded as HPC because the individual nodes can work together to solve a problem larger than any one computer can easily solve.
 The nodes need to communicate with one another in order to work cooperatively and meaningfully together to solve the problem at hand.
 If the processor machines in a cluster are of heterogeneous types, such clusters form a subtype and are still mostly in the experimental or research stage.
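A minimal sketch of running one program across all nodes with MPI (this assumes the mpi4py package and an MPI runtime are installed on the cluster; the partial-sum workload is an illustrative choice):

```python
# Run across nodes with, e.g.: mpiexec -n 4 python cluster_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this node's id within the cluster job
size = comm.Get_size()   # total number of cooperating nodes

# Each node computes a partial sum of 0..99; message passing combines them.
local = sum(range(rank, 100, size))
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("total =", total)  # 4950
```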

Cluster Computing- Types


1. High performance (HP) clusters : HP clusters use computer clusters and supercomputers to solve advanced computational problems.
2. Load-balancing clusters : Incoming requests are distributed for resources among several nodes running similar programs or having similar content. This prevents any single node from receiving a disproportionate share of the workload (see the round-robin sketch after this list). This type of distribution is generally used in a web-hosting environment.
3. High Availability (HA) clusters : HA clusters are designed to maintain redundant nodes that can act as backup systems in case any failure occurs. They provide consistent computing services for business activities, complicated databases, customer services such as e-commerce websites, and network file distribution.
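A round-robin dispatcher is the simplest form of the load-balancing idea above; this toy sketch (node names and request count are made up) just cycles incoming requests across the nodes so no single node is overloaded:

```python
from itertools import cycle

# Requests are handed to node1, node2, node3, node1, ... in turn.
nodes = cycle(["node1", "node2", "node3"])
for request_id in range(7):
    print(f"request {request_id} -> {next(nodes)}")
```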

Cluster Computing- Classification


1. Open Cluster : Every node needs an IP address, and the nodes are accessible through the internet or web. This type of cluster raises enhanced security concerns.
2. Close Cluster : The nodes are hidden behind the gateway node, which provides increased protection. They need fewer IP addresses and are good for computational tasks.

Cluster Computing- Design Objectives


 Scalability: Clustering of computers is based on the concept of modular growth.
To scale a cluster from hundreds of uniprocessor nodes to a supercluster with
10,000 multicore nodes is a nontrivial task.
 Packaging: Cluster nodes can be packaged in a compact or a slack fashion.
 Control: Cluster can be controlled or managed in a centralized or decentralized
fashion.
 Homogeneity: A homogeneous cluster uses nodes from the same platform, that is,
the same processor architecture and the same operating system; often, the nodes
are from the same vendors. A heterogeneous cluster uses nodes of different
platforms.
 Security: Intracluster communication can be either exposed or enclosed.

Cluster Computing- Advantages & Disadvantages


Advantages :
- High performance
- Easy to manage
- Scalable
- Expandability
- Availability
- Flexibility

Disadvantages :
- High cost
- Problems in finding faults
- More space is needed

Cluster Computing- Applications


 Various complex computational problems can be solved.
 It can be used in the applications of aerodynamics, astrophysics and in data
mining.
 Weather forecasting.
 Image Rendering.
 Various e-commerce applications.
 Earthquake Simulation.
 Petroleum reservoir simulation.

1.6 Grid Computing :

The computing resources in most organizations are underutilized but are necessary for certain operations.
 The idea of grid computing is to make such unutilized computing power available to needy organizations, thereby increasing the return on investment (ROI) on computing investments.
 Several machines on a network collaborate under a common protocol and work as a single virtual supercomputer to get complex tasks done. This offers powerful virtualization by creating a single system image that grants users and applications seamless access to IT capabilities.
 A typical grid computing network consists of three machine types:
- Control node/server: A control node is a server or a group of servers that administers the entire network and maintains the record of resources in the network pool.
- Provider/grid node: A provider or grid node is a computer that contributes its resources to the network resource pool.
- User: A user refers to a computer that uses the resources on the network to complete its task.
 Grid computing operates by running specialized software on every computer
involved in the grid network. The software coordinates and manages all the tasks
of the grid. Fundamentally, the software segregates the main task into subtasks
and assigns the subtasks to each computer. This allows all the computers to work
simultaneously on their respective subtasks. Upon completion of the subtasks, the
outputs of all computers are aggregated to complete the larger main task.
 In grid computing, each computing task is broken into small fragments and
distributed across computing nodes for efficient execution. Each fragment is
processed in parallel, and, as a result, a complex task is accomplished in less time.
Let’s consider this equation:
X = (4 x 7) + (3 x 9) + (2 x 5)
 Typically, on a desktop computer, the steps needed here to calculate the value of X
may look like this:
Step 1: X = 28 + (3 x 9) + (2 x 5)
Step 2: X = 28 + 27 + (2 x 5)
Step 3: X = 28 + 27 + 10
Step 4: X = 65
 However, in a grid computing setup, the steps are different as three processors or
computers calculate different pieces of the equation separately and combine them
later. The steps look like this:
Step 1: X = 28 + 27 + 10
Step 2: X = 65
 As seen above, grid computing combines the involved steps due to the multiplicity of
available resources. This implies fewer steps and shorter timeframes.
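The worked example above can be mimicked with Python's concurrent.futures, where each product is computed by a separate worker process and the results are combined afterwards (the three-worker setup stands in for three grid nodes):

```python
from concurrent.futures import ProcessPoolExecutor
from operator import mul

if __name__ == "__main__":
    pairs = [(4, 7), (3, 9), (2, 5)]
    with ProcessPoolExecutor(max_workers=3) as pool:
        # Step 1: the three multiplications run in parallel -> 28, 27, 10
        products = list(pool.map(mul, *zip(*pairs)))
    # Step 2: the partial outputs are aggregated -> X = 65
    print("X =", sum(products))
```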

Grid Computing- Applications


• General Applications of Grid Computing
• Distributed Supercomputing
• High-throughput Supercomputing
• On-demand Supercomputing
• Data-intensive Supercomputing
• Collaborative Supercomputing

• Applications of Grid Computing Across Different Sectors


• Movie Industry
• Gaming Industry
• Life Sciences
• Engineering and Design
• Government

Advantages
 Can solve larger, more complex problems in a shorter time
 Easier to collaborate with other organizations
 Make better use of existing hardware
Disadvantages
 Grid software and standards are still evolving
 Learning curve to get started
 Non-interactive job submission

 Grid computing is more popular due to the following reasons:


1. Its ability to make use of unused computing power makes it a cost-effective solution (reducing investments, only recurring costs).
2. As a way to solve problems in line with any HPC-based application.
3. Enables heterogeneous resources of computers to work cooperatively and
collaboratively to solve a scientific problem.
 Grid services provide :
- Access control,
- Security,
- Access to data including digital libraries and databases,
- Access to large-scale interactive and long-term storage facilities.

Cluster computing vs. Grid computing

- The computers in cluster computing are co-located and are connected by high-speed network cables. The computers in grid computing can be present at different locations and are usually connected by the internet.
- A cluster computing network is prepared using a centralized network topology. A grid computing network is distributed and has a decentralized network topology.
- In cluster computing, a centralized server controls task scheduling. In grid computing, multiple servers can exist, and each node behaves independently without the need for centralized scheduling.

1.7 Cloud Computing :

 A cloud is a parallel and distributed computing system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources, based on service-level agreements (SLAs) established through negotiation between the service provider and consumers.

• Key characteristics of cloud computing:
i. the illusion of infinite computing resources;
ii. the elimination of an up-front commitment by cloud users; and
iii. the ability to pay for use as needed.
 There are basically 5 essential characteristics of Cloud Computing :
1. On-demand self-service: The cloud computing services do not require any human administrators; users themselves are able to provision, monitor and manage computing resources as needed.
2. Broad network access: The computing services are generally provided over standard networks to heterogeneous devices.
3. Rapid elasticity: The computing services should have IT resources that are able to scale out and in quickly, on an as-needed basis. Whenever the user requires a service it is provided, and it is scaled in as soon as the requirement is over (a toy autoscaling sketch follows this list).
4. Resource pooling: The IT resources (e.g., networks, servers, storage, applications, and services) are shared across multiple applications and tenants in an uncommitted manner. Multiple clients are served from the same physical resources.
5. Measured service: Resource utilization is tracked for each application and tenant, providing both the user and the resource provider with an account of what has been used. This is done for various reasons, such as monitoring, billing, and effective use of resources.
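Rapid elasticity and measured service can be pictured with a toy autoscaling loop; the thresholds, hourly loads, and billing unit below are illustrative assumptions, not any provider's actual policy:

```python
def scale(instances, load_per_instance, high=70, low=30):
    # Scale out when each instance is overloaded; scale in when demand drops.
    if load_per_instance > high:
        return instances + 1
    if load_per_instance < low and instances > 1:
        return instances - 1
    return instances

instances, billed = 1, 0
for hourly_load in [20, 90, 95, 60, 25, 10]:
    instances = scale(instances, hourly_load / instances)
    billed += instances  # measured service: pay only for what is used
print("instance-hours billed:", billed)
```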

Types of Cloud Computing


Cloud computing is Internet-based computing in which there are four different types
of cloud. They are :
1. Public cloud
2. Private cloud
3. Hybrid cloud
4. Community cloud

Public Cloud
 Public clouds are managed by third parties which provide cloud services over the
internet to the public, these services are available as pay-as-you-go billing models.
 The fundamental characteristic of public clouds is multitenancy. A public cloud is meant to serve multiple users, not a single customer. Each user requires a virtual computing environment that is separated, and most likely isolated, from other users.

Private cloud
 Private clouds are distributed systems that work on private infrastructure and provide the users with dynamic provisioning of computing resources. Instead of a pay-as-you-go model, private clouds may use other schemes that manage the usage of the cloud and proportionally bill the different departments or sections of an enterprise. Private cloud providers include HP Data Centers, Ubuntu, Elastic-Private cloud, Microsoft, etc.

 The advantages of using a private cloud are as follows:


1. Customer information protection: In the private cloud security concerns are
less since customer data and other sensitive information do not flow out of
private infrastructure.
2. Infrastructure ensuring SLAs: Private cloud provides specific operations such
as appropriate clustering, data replication, system monitoring, and maintenance,
disaster recovery, and other uptime services.
3. Compliance with standard procedures and operations: Specific procedures
have to be put in place when deploying and executing applications according to
third-party compliance standards. This is not possible in the case of the public
cloud.
 Disadvantages of using a private cloud are:
1. Restricted area of operations: A private cloud is accessible only within a particular area, so the area of accessibility is restricted.
2. Expertise required: Since the cloud is operated in-house, skilled people are required to manage and operate the cloud services.

Hybrid cloud:
 A hybrid cloud is a heterogeneous distributed system formed by combining facilities
of the public cloud and private cloud. For this reason, they are also
called heterogeneous clouds.
 A major drawback of private deployments is the inability to scale on-demand and
efficiently address peak loads. Here public clouds are needed. Hence, a hybrid cloud
takes advantage of both public and private clouds.

 Advantages of using a hybrid cloud are:
1) Cost : It is available at lower cost than other clouds because it is formed from a distributed system.
2) Speed : It is efficiently fast at lower cost, and it reduces the latency of the data transfer process.
3) Security : Most important is security. A hybrid cloud can be kept safe and secure because sensitive workloads can remain on the private part of the distributed system network.

Community cloud:
 Community clouds are distributed systems created by integrating the services of different clouds to address the specific needs of an industry, a community, or a business sector. However, sharing responsibilities among the organizations is difficult.
 In the community cloud, the infrastructure is shared between organizations that have shared concerns or tasks. The cloud may be managed by an organization or a third party.

 Sectors that use community clouds are:


1. Media industry: Media companies are looking for quick, simple, low-cost ways
for increasing the efficiency of content generation. Most media productions
involve an extended ecosystem of partners. In particular, the creation of digital
content is the outcome of a collaborative process that includes the movement of
large data, massive compute-intensive rendering tasks, and complex workflow
executions.
2. Healthcare industry: In the healthcare industry community clouds are used to
share information and knowledge on the global level with sensitive data in the
private infrastructure.
3. Energy and core industries: In these sectors, the community cloud is used to cluster a set of solutions that collectively address the management, deployment, and orchestration of services and operations.
4. Scientific research: Organizations with common interests in science share a large distributed infrastructure for scientific computing.

Cloud Computing Services


Most cloud computing services fall into five broad categories:
1. Software as a service (SaaS)
2. Platform as a service (PaaS)
3. Infrastructure as a service (IaaS)
4. Anything/Everything as a service (XaaS)
5. Function as a Service (FaaS)

1. Software as a Service(SaaS)
Software-as-a-Service (SaaS) is a way of delivering services and applications over the
Internet. Instead of installing and maintaining software, we simply access it via the
Internet, freeing ourselves from the complex software and hardware management.
 SaaS provides a complete software solution that you purchase on a pay-as-you-
go basis from a cloud service provider. The SaaS applications are sometimes
called Web-based software, on-demand software, or hosted software.

Advantages of SaaS

1. Cost-Effective: Pay only for what you use.


2. Reduced time: Users can run most SaaS apps directly from their web browser
without needing to download and install any software. This reduces the time spent
in installation and configuration and can reduce the issues that can get in the way
of the software deployment.
3. Accessibility: App data can be accessed from anywhere.
4. Automatic updates: Rather than purchasing new software, customers rely on a
SaaS provider to automatically perform the updates.
5. Scalability: It allows the users to access the services and features on-demand.

2. Platform as a Service (PaaS)


PaaS is a category of cloud computing that provides a platform and environment to
allow developers to build applications and services over the internet. A PaaS provider
hosts the hardware and software on its own infrastructure. As a result, the
development and deployment of the application take place independent of the
hardware.
Advantages of PaaS:
1. Simple and convenient for users: It provides much of the infrastructure and other
IT services, which users can access anywhere via a web browser.
2. Cost-Effective: It charges for the services provided on a per-use basis thus
eliminating the expenses one may have for on-premises hardware and software.
3. Efficiently managing the lifecycle: It is designed to support the complete web
application lifecycle: building, testing, deploying, managing, and updating.
4. Efficiency: It allows for higher-level programming with reduced complexity thus,
the overall development of the application can be more effective.

3. Infrastructure as a Service (IaaS)


Infrastructure as a service (IaaS) is a service model that delivers computer
infrastructure on an outsourced basis to support various operations. Typically IaaS is
a service where infrastructure is provided as outsourcing to enterprises such as
networking equipment, devices, database, and web servers.
It is also known as Hardware as a Service (HaaS).

Advantages of IaaS:
1. Cost-Effective: Eliminates capital expense and reduces ongoing cost and IaaS
customers pay on a per-user basis, typically by the hour, week, or month.
2. Website hosting: Running websites using IaaS can be less expensive than
traditional web hosting.
3. Security: The IaaS Cloud Provider may provide better security than your existing
software.
4. Maintenance: There is no need to manage the underlying data center or the
introduction of new releases of the development or underlying software. This is all
handled by the IaaS Cloud Provider.

4. Anything as a Service (XaaS)


It is also known as Everything as a Service. Most of the cloud service providers
nowadays offer anything as a service that is a compilation of all of the above services
including some additional services.

Advantages of XaaS: As this is a combined service, so it has all the advantages of every
type of cloud service.

5. Function as a Service (FaaS)


FaaS is a type of cloud computing service. It provides a platform for its users or customers to develop, compute, run and deploy code or entire applications as functions. It allows the user to develop code and update it at any time without worrying about the maintenance of the underlying infrastructure. The developed code is executed in response to a specific event. It is similar to PaaS.
 FaaS provides auto-scaling up and scaling down depending upon the demand. PaaS also provides scalability, but there users have to configure the scaling parameters depending upon the demand.
Advantages of FaaS :

 Highly scalable: Auto-scaling is done by the provider depending upon the demand.
 Cost-effective: Pay only for the number of events executed.
 Code simplification: FaaS lets users write and upload code as independent functions, rather than deploying an entire application at once.
 Maintaining the code is enough; there is no need to worry about the servers.
 Functions can be written in any programming language.
 A limitation is less control over the underlying system. A sketch of a FaaS-style function follows.
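This sketch is written in the style of AWS Lambda's Python handler interface (the event fields and greeting logic are illustrative assumptions); the provider invokes the handler in response to an event and handles scaling and servers itself:

```python
import json

def handler(event, context):
    # Executed in response to a specific event; no server management needed.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```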

1.8 Bio Computing :

• Bio computing systems use the concepts of biologically derived or simulated


molecules that perform computational processes in order to solve a problem.
• Bio computing provides the theoretical background and practical tools for
scientists to explore proteins and DNA.
• The bio computing scientist works on inventing techniques suitable for various applications in biology.
• Bio computing gives a better understanding of life and the molecular causes of
certain diseases.
• Bio computing is defined as the process of building computers that use biological
materials, mimic biological organisms or are used to study biological organisms.

Bio Computing-Biochemical computers :


 Biochemical computers use the immense variety of feedback loops that are
characteristic of biological chemical reactions in order to achieve computational
functionality.
 Feedback loops in biological systems take many forms, and many different factors
can provide both positive and negative feedback to a particular biochemical process,
causing either an increase in chemical output or a decrease in chemical output,
respectively.
 Such factors may include the quantity of catalytic enzymes present, the amount of
reactants present, the amount of products present, and the presence of molecules
that bind to and thus alter the chemical reactivity of any of the aforementioned
factors.

Bio Computing-Bioelectronic computers:


 Biocomputers can also be constructed in order to perform electronic computing.
 Again, like both biomechanical and biochemical computers, computations are
performed by interpreting a specific output that is based upon an initial set of
conditions that serve as input.
 In bioelectronic computers, the measured output is the nature of the electrical
conductivity that is observed in the bioelectronic computer.
 This output comprises specifically designed biomolecules that conduct electricity in
highly specific manners based upon the initial conditions that serve as the input of
the bioelectronic system.

Engineering Biocomputers :
 The behavior of biologically derived computational systems such as these relies on
the particular molecules that make up the system, which are primarily proteins but
may also include DNA molecules.
 Nanobiotechnology provides the means to synthesize the multiple chemical
components necessary to create such a system.
 The chemical nature of a protein is dictated by its sequence of amino acids—the
chemical building blocks of proteins. This sequence is in turn dictated by a specific
sequence of DNA nucleotides—the building blocks of DNA molecules.
 Proteins are manufactured in biological systems through the translation
of nucleotide sequences by biological molecules called ribosomes, which assemble
individual amino acids into polypeptides that form functional proteins based on the
nucleotide sequence that the ribosome interprets.

1.9 Mobile Computing :

• Computing Technologies are the technologies that are used to manage, process,
and communicate the data.
• Wireless simply means without any wire i.e. connecting with other devices without
any physical connection.
• Wireless computing is transferring the data or information between computers or
devices that are not physically connected to each other and having a “wireless
network connection”.
• For example: mobile devices, Wi-Fi, wireless printers and scanners, etc. Mobile devices are not physically connected, yet data can still be transferred.
• A mobile device is handheld, but communication takes place between various resources wirelessly.
• Mobile communication for voice applications (e.g., cellular phones) is widely established throughout the world and is witnessing very rapid growth in all its dimensions.
• An extension of this technology is the ability to send and receive data across various cellular networks using small devices such as smartphones.
• A mobile computing device does not require a fixed physical connection to transfer data or information between devices.
• For example laptops, tablets, smartphones, etc.
• Mobile computing allows transferring of the data/information, audio, video, or any
other document without any connection to the base or central network.
• These computing devices are the most widely used technologies nowadays.
• There are some wireless/mobile computing technologies such as:
1. Global System for Mobile Communications (GSM)
2. Code-Division Multiple Access (CDMA)
3. Wireless in Local Loop (WLL)
4. General Packet Radio Service (GPRS)
5. Short Message Service (SMS)
Mobile communication can be divided in the following four types:
1. Fixed and Wired
2. Fixed and Wireless
3. Mobile and Wired
4. Mobile and Wireless

1. Fixed and Wired: In Fixed and Wired configuration, the devices are fixed at a
position, and they are connected through a physical link to communicate with other
devices.
For Example, Desktop Computer.
2. Fixed and Wireless: In Fixed and Wireless configuration, the devices are fixed at a
position, and they are connected through a wireless link to make communication
with other devices.
For Example, Communication Towers, Wi-Fi router
3. Mobile and Wired: In Mobile and Wired configuration, some devices are wired, and
some are mobile. They altogether make communication with other devices.
For Example, Laptops.
4. Mobile and Wireless: In Mobile and Wireless configuration, the devices can
communicate with each other irrespective of their position. They can also connect to
any network without the use of any wired device.
For Example, WiFi Dongle.

1.10 Quantum Computing :


 Quantum computing is a modern way of computing that is based on the science of
quantum mechanics and its unbelievable phenomena.
 It is a beautiful combination of physics, mathematics, computer science and
information theory.
 It provides high computational power, less energy consumption and exponential
speed over classical computers by controlling the behavior of small physical objects
i.e. microscopic particles like atoms, electrons, photons, etc.
 Small-scale quantum computers have recently been developed.
 This development is heading towards a great future due to their high potential
capabilities and advancements in ongoing research.

Quantum Computing- Data Representation


• A Qubit can exist in a Superposition, carrying the value of both 1 and 0 at the same
time.
• This opens up a whole new world of possibilities for solving complex problems that
require lots of computational power.

 A bit of data is represented by a single atom that is in one of two states, denoted by |0> and |1>. A single bit of this form is known as a qubit.
 A physical implementation of a qubit could use the two energy levels of an atom: an excited state representing |1> and a ground state representing |0>.

Quantum Computing- Properties :

1. Superposition
 Given two states, a quantum particle exists in both states at the same time.
 Alternatively, we may say that the particle exists in any combination of the two
states.
 The particle's state is always changing but it can be programmed such that, for
example, 30% of the time it's in one state and 70% in the other state.
2. Entanglement
 Two quantum particles can form a single system and influence each other. Measurements on one can be correlated with measurements on the other.

3. Quantum Interference:
 Trying to measure the current state of a quantum particle leads to a collapse; that is, the measured state is one of the two states, not something in between.
 External interference influences the probability of the particle collapsing to one state or the other.
 Quantum computing systems must therefore be protected from external interference.
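The 30%/70% superposition mentioned above, and its collapse on measurement, can be simulated numerically; this NumPy sketch is a classical simulation of a single qubit, not real quantum hardware:

```python
import numpy as np

# |psi> = alpha|0> + beta|1>, with |alpha|^2 = 0.3 and |beta|^2 = 0.7.
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)
state = np.array([alpha, beta])

# Measurement collapses the superposition to exactly one of the two states.
probs = np.abs(state) ** 2            # -> [0.3, 0.7]
outcome = np.random.choice([0, 1], p=probs)
print(f"measured |{outcome}>")
```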

1.11 Optical Computing :

• Moore’s Law states that the number of transistors on a computer chip doubles
every eighteen months.
• Traditional transistors can no longer keep up.
• Too many transistors will slow down processor speeds.
• Transistors have physical size limits.
• Metallic wires limit the speed of transmission.
• Resistance per unit length in the chip increases, causing more power usage and excess heating.
• Optical computing system uses the photons in visible light or infrared beams,
rather than electric current, to perform digital computations.
• An electric current flows at only about 10% of the speed of light.
• This limits the rate of data exchanged over long distances and is one of the factors
that led to the evolution of optical fiber
• A computer can be developed that can perform operations 10 or more times faster
than a conventional electronic computer.

Optical Computing: Basic Path of Information Through an Optical Computer


 Information gets sent in from keyboard, mouse, or other external sources and goes
to the processor.
 Processor then sends the information through logic gates and switches to be
programmed.
 The information is then sent through different fiber optic cables depending on its final location.
 Some information will be sent to the holographic memory, where it will then be
saved.
 After information is saved and the program would like to use it, the program sends
a command to the processor, which then sends a command to receive the
information.
 The program receives the information and sends a signal back to the processor to
tell it that the task is complete.
 The main Optical components required for computing in an Optical Computer are:
1. VCSEL (Vertical Cavity Surface Emitting Micro Laser)
2. Spatial Light Modulators.
3. Optical Logical Gates.
4. Smart Pixels.

1. Vertical Cavity Surface Emitting Micro Laser (VCSEL) :


- VCSEL (pronounced "vixel") is a semiconductor vertical cavity surface emitting laser diode that emits light in a cylindrical beam vertically from the surface of a fabricated wafer.
- Rather than having reflective ends, a VCSEL has several layers of partially reflective mirrors above and below the active layer.
- Layers of semiconductors with differing compositions create these mirrors, and each mirror reflects a narrow range of wavelengths back into the cavity in order to cause light emission at just one wavelength.
2. Spatial Light Modulators (SLM) :
 SLMs play an important role in several technical areas where the control of light on a pixel-by-pixel basis is a key element, such as optical processing and displays.
 For display purposes the desire is to have as many pixels as possible in as small and
cheap a device as possible.
3. Wavelength Division Multiplexing (WDM) :
 Wavelength division multiplexing is a method of sending many different wavelengths
down the same optical fiber.
 WDM can transmit up to 32 wavelengths through a single fiber, but this cannot meet the bandwidth requirements of present-day communication systems.
 Nowadays DWDM (dense wavelength division multiplexing) is used, which can transmit up to 1000 wavelengths through a single fiber, thereby improving bandwidth efficiency.

4. Smart Pixels Technology (SPT) :


 Smart pixel technology is a relatively new approach to integrating electronic circuitry
and optoelectronic devices in a common framework.
 Here, the electronic circuitry provides complex functionality and programmability.
 While the optoelectronic devices provide high-speed switching and compatibility with
existing optical media.
 Arrays of these smart pixels leverage the parallelism of optics for interconnections as
well as computation.

Optical Computing: Advantages & Limitations


Advantages
- Small size
- Increased speed
- Low heating
- Reconfigurable
- Scalable for larger or small networks
- More complex functions done faster
- Applications for Artificial Intelligence
- Less power consumption (500 microwatts per interconnect length vs. 10 mW for electrical)
Limitations
- Optical fibers on a chip are wider than electrical traces.
- Crystals need 1 mm of length and are much larger than current transistors.
- Software is needed to design and run the computers.

Optical Computing: Applications


 We currently use DWDM(dense wavelength division multiplexing) fiber optics for
data transfers between cities.
 Corning is making fiber optics to the home available for high speed internet
connection.
 Optical Spatial Filters are used for medical imaging, using Fourier Analysis to
sharpen an image, such as an X-Ray.

1.12 Nano Computing :

• Nano computing refers to computing systems that are constructed from nano scale
components.
• Nanotechnology is science, engineering, and technology conducted at the
nanoscale, which is about 1 to 100 nanometers.
• Nanoscience and nanotechnology are the study and application of extremely small
things and can be used across all the other science fields, such as chemistry,
biology, physics, materials science, and engineering.
• One nanometer is a billionth of a meter, or 10⁻⁹ of a meter.
• Nanoscience and nanotechnology involve the ability to see and to control individual
atoms and molecules.
• Everything on Earth is made up of atoms—the food we eat, the clothes we wear, the
buildings and houses we live in, and our own bodies.
• Applications are interdisciplinary, ranging from computing and medicine to stain-resistant textiles and suntan lotions.

Nano Computing: Nano Computers


- A nanocomputer is a computer whose physical dimensions are microscopic.
- The field of nanocomputing is part of the emerging field of nanotechnology.
- Several types of nanocomputers have been suggested or proposed by
researchers and futurists.
1. Electronic nano computers
2. Chemical and biochemical nano computers
3. Mechanical nano computers
4. Quantum nano computer

Nano Computing: Types

Electronic nanocomputers would operate in a manner similar to the way present-day microcomputers work.
• The main difference is one of physical scale.
• More and more transistors are squeezed into silicon chips with each passing year; witness the evolution of integrated circuits (ICs) capable of ever-increasing storage capacity and processing power.
• The ultimate limit to the number of transistors per unit volume is imposed by the
atomic structure of matter.
• Most engineers agree that technology has not yet come close to pushing this limit.
• In the electronic sense, the term nanocomputer is relative.
• By 1970s standards, today's ordinary microprocessors might be called nano
devices.

Chemical and biochemical nanocomputers would store and process information in terms of chemical structures and interactions.
• Biochemical nanocomputers already exist in nature; they are manifest in all living things. But these systems are largely uncontrollable by humans.
• We cannot, for example, program a tree to calculate the digits of pi, or program an antibody to fight a particular disease (although medical science has come close to this ideal in the formulation of vaccines, antibiotics, and antiviral medications).
• The development of a true chemical nanocomputer will likely proceed along lines
similar to genetic engineering.
• Engineers must figure out how to get individual atoms and molecules to perform
controllable calculations and data storage tasks.

Mechanical nanocomputers would use tiny moving components called nanogears to encode information.
• Such a machine is reminiscent of Charles Babbage's analytical engine of the 19th century.
• For this reason, mechanical nanocomputer technology has sparked controversy;
some researchers consider it unworkable.
• All the problems inherent in Babbage's apparatus, according to the naysayers, are
magnified a millionfold in a mechanical nanocomputer.
• Nevertheless, some futurists are optimistic about the technology, and have even
proposed the evolution of nanorobots that could operate, or be controlled by,
mechanical nanocomputers.

A quantum nanocomputer would work by storing data in the form of atomic quantum
states or spin.
• Technology of this kind is already under development in the form of single-electron
memory (SEM) and quantum dots.
• The energy state of an electron within an atom, represented by the electron energy
level or shell, can theoretically represent one, two, four, eight, or even 16 bits of
data.
• The main problem with this technology is instability.
• Instantaneous electron energy states are difficult to predict and even more difficult
to control.
• An electron can easily fall to a lower energy state, emitting a photon; conversely, a photon striking an atom can cause one of its electrons to jump to a higher energy state.

Nano Computing: Advantages & Disadvantages

Advantages
- High computing performance
- Low power computing
- Easily portable
- Flexible
- Faster processing
- Lighter and smaller computer devices
- Noise immunity

Disadvantages
- It is very expensive, and developing it can cost a lot of money.
- It is also quite difficult to manufacture.
- Because the particles are very small, problems can arise from the inhalation of these minute particles.

Nano Computing: Applications

- Breaking ciphers
- Statistical Analysis
- Factoring large numbers
- Solving problems in theoretical physics
- Solving optimization problems in many variables

Classical Computing vs. Nano Computing

- Classical computing is used by large-scale, multi-purpose computers and devices. Nano computing is used for the representation and manipulation of data by computers smaller than a microcomputer.
- In classical computing, information is stored in bits. In nano computing, information is stored in quantum dots or spins.
- In classical computing there are a discrete number of possible states, 0 or 1. In nano computing there are an infinite, continuous number of possible states, realized in nanoscale structures including biomolecules such as DNA and proteins.
- Classical calculations are deterministic: repeating the same input results in the same output. Nano-scale calculations are probabilistic: there are multiple possible outputs for the same input.
- In classical computing, data processing is carried out by logic gates in sequential order. In nano computing, data processing is carried out by computers smaller than a microcomputer.
- Classical operations are defined by Boolean algebra. Nano computing operations are defined by solid-state quantum bit operations.
- Classical circuit behavior is defined by classical physics. Nano computing circuit behavior is defined by very small electronic devices and molecules and their fabrication.
