
LUCY MWELU

DCS-01-0121/2023

RESEARCH ON COMPUTING SYSTEM MODELS

1. CENTRALIZED COMPUTING
Centralized computing is computing done from a central location using terminals attached to a
central computer. The centralized computer can control all peripherals directly if they are
physically connected to the central computer. Alternatively, the terminals can connect to the
central computer over the network if they have the capability.
Centralized computing refers to a system where all processing and data storage is handled by a
single, central device or system. This central device is responsible for processing all requests and
managing all data, and all other devices in the system are connected to it and rely on it for their
computing needs.
One example of a centralized computing system is a traditional mainframe system, where a
central mainframe computer handles all processing and data storage for the system. In this type
of system, users access the mainframe through terminals or other devices that are connected to
it.

Components of Centralized Computing

There are several key components of a Centralized Computing System:


 Central Device or System: The central device or system handles all processing and data storage
for the system.
 Clients: The clients are devices or systems that request and receive services from the central
device or system.
 Network: The network connects the central device or system and the clients, allowing them to
communicate and exchange data.
The architecture of a centralized computing system is typically a client-server architecture,
where the central device or system acts as a server and the other devices in the system act as
clients.
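This request-and-respond relationship can be sketched in a few lines. The sketch below is a hypothetical, single-process simulation (the class and method names are invented for illustration): one central object holds all data and does all processing, and every client merely forwards requests to it.

```python
# Minimal sketch of a centralized client-server model (hypothetical names).
# A single CentralServer holds all data and does all processing; clients
# keep no state of their own and simply forward requests to the server.

class CentralServer:
    def __init__(self):
        self._store = {}          # all data lives on the central device

    def handle(self, request):
        op, key, *rest = request
        if op == "PUT":
            self._store[key] = rest[0]
            return "OK"
        if op == "GET":
            return self._store.get(key)
        return "ERROR"

class Client:
    def __init__(self, server):
        self.server = server      # every client depends on the one server

    def put(self, key, value):
        return self.server.handle(("PUT", key, value))

    def get(self, key):
        return self.server.handle(("GET", key))

server = CentralServer()
alice, bob = Client(server), Client(server)
alice.put("report", "Q3 figures")
print(bob.get("report"))          # both clients see the shared central data
```

Note how the single `server` object is also the single point of failure discussed later: if it is unavailable, no client can do anything.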

Centralized Computing Infrastructure


It can be helpful to compare centralized computing to a client/server architecture. In that model, IT staff connect client PCs to a central server. Each client PC typically acts as a terminal with little or no computing capacity of its own: a visual display, basic input devices, and a minimal CPU.
IT staff connect the client PCs over the network to a central server that performs the computation. The central server has massive computing resources and expansive storage, and many such servers also offer advanced computing features. Access to applications, storage, computation, web access, and security is provided only through the central server. In a centralized computing infrastructure, the administrator also manages all client nodes from the central server interface.
Early computers were not provided with separate terminals; they had built-in input and output devices. Experts later discovered that separate terminals would let multiple people use the computer at the same time. This saved organizations a tidy sum, as early computers were very expensive to purchase, manufacture, and maintain.
Characteristics
There are several characteristics that define a Centralized Computing System:
 Single Central Device or System: All processing and data storage is handled by a single, central
device or system.
 Client-Server Architecture: The central device or system acts as a server, while other devices in
the system act as clients that request and receive services from the server.
 Shared Resources: The central device or system manages and controls access to shared
resources, such as data storage and processing power.
 Vertical Scaling: Scaling a centralized computing system typically involves adding more
resources to the central device or system, such as additional memory or processing power. This
can be done through hardware upgrades or by adding additional devices to the system.

Some Advantages of the Centralized System are:


 Simplicity: Centralized systems are relatively simple, as all processing and data storage is
handled by a single device or system.
 Cost: Centralized systems may be cheaper to set up and maintain, as they require fewer devices
or systems.
 Performance: Centralized systems may offer faster performance, as all processing and data
storage is handled by a single, powerful device or system.

Limitations to Centralized Computing System:


 Single Point of Failure: If the central device or system fails, the entire system may go down, as
all processing and data storage relies on the central device.
 Limited Scalability: Centralized systems may be limited in their ability to scale, as they are
dependent on the capabilities of the central device or system.
 Limited Flexibility: It may be more difficult to reconfigure or adapt a centralized system to meet
changing needs or requirements.

Applications
Centralized Computing Systems have a number of applications, including:
 Mainframe Systems: Traditional mainframe systems are a type of centralized computing system
that is used in a variety of industries, including finance, healthcare, and government.
 Client-Server Systems: Client-server systems are a type of centralized computing system that is
used in a variety of applications, including business applications, web applications, and more.
 Network Servers: Network servers are a type of centralized computing system that is used to
manage and control access to shared resources, such as data storage and printing, on a network.

Some of the centralized computing models in use today among many businesses are as
follows.
DISK-LESS NODE MODEL
The disk-less node model is a mix of centralized computing and traditional computing. In
this model, some applications (e.g., web browsers) run locally, while a few applications,
such as business-critical systems, use the terminal server. You can implement this model
by running remote desktop software on a standard desktop computer.
HOSTED COMPUTING MODEL
This is a later version of centralized computing. The hosted computing model resolves many
of the problems posed by conventional distributed computing systems by centralizing the
processing and storage aspects: both happen on powerful server hardware in a data center
instead of a local office.
This eases the burden on organizations, as they are spared the hassle of owning and
maintaining an IT system. These services are generally available on a subscription basis and
delivered through an application service provider (ASP).

2. NETWORKED COMPUTING
Network computing refers to using computers and other devices as part of a linked network
rather than as unconnected, stand-alone devices. As computing technology has improved over
the last few decades, network computing has become increasingly prevalent, especially with
the emergence of affordable and relatively simple consumer products such as wireless
routers, which turn the typical home computer setup into a local area network.
Advantages of networked computing

 Information Sharing: People can share information freely across networks. Whether it is
files, emails, or instant messaging, networking saves time and resources compared to
traditional methods like postal services.
 Collaboration: Networks allow multiple users to log in simultaneously from different
locations. This global collaboration enhances teamwork and productivity.
 Cost-Effective: The cost of joining a computer network has decreased over time. Modern
devices like Chromebooks provide internet access and network capabilities at an
affordable price.
 Offline Data Storage: Computer networking data can be stored offline, protecting it from
online threats. This flexibility ensures data security.
 Ease of Connection: Anyone with basic computer skills can connect to a network. Simple
prompts or shortcuts make joining a network accessible to all.
Disadvantages of networked computing
 Expensive Setup: The initial setup of a network can be costly. This includes the cost of
cables, equipment, and network infrastructure. High-speed cables and reliable hardware
can contribute to the overall expense.
 Maintenance Challenges: Managing a large network can be complex and requires
specialized training. Organizations often need to employ network managers to handle
maintenance tasks, troubleshoot issues, and ensure smooth operation.
 Security Vulnerabilities: Computer networks are susceptible to security breaches.
Unauthorized access, data leaks, and cyberattacks pose significant risks. Implementing
robust security measures is crucial to safeguard sensitive information.
 Network Congestion: As more devices connect to a network, congestion can occur.
Increased traffic affects performance, leading to slower data transfer speeds and delays
in communication.
 Dependency on Infrastructure: Organizations heavily rely on network infrastructure. If
the network experiences downtime due to hardware failures or other issues, it can
disrupt operations and productivity.
 Compatibility Issues: Integrating different devices and operating systems within a
network can be challenging. Ensuring compatibility and seamless communication
between diverse components requires careful planning.
 Bandwidth Limitations: Networks have finite bandwidth. When multiple users access
resources simultaneously, it can lead to reduced data speeds and inefficient resource
allocation.
 Power Consumption: Running network devices, servers, and switches consumes
electricity. Organizations need to consider power costs and energy-efficient solutions.
 Human Error Risk: Human errors during network configuration or maintenance can
cause disruptions. Proper training and protocols are essential to minimize such risks.
 Support Resources: Organizations must allocate resources for network support,
troubleshooting, and upgrades. Without adequate support, network issues can escalate.

3. CLOUD COMPUTING
Origins of cloud computing
The origins of cloud computing technology go back to the early 1960s, when Dr. Joseph Carl
Robnett Licklider, an American computer scientist and psychologist known as the "father of
cloud computing", introduced the earliest ideas of global networking in a series of memos
discussing an Intergalactic Computer Network. However, it wasn’t until the early 2000s that
modern cloud infrastructure for business emerged.

In 2002, Amazon Web Services started offering cloud-based storage and computing services. In
2006, it introduced Elastic Compute Cloud (EC2), an offering that allowed users to rent
virtual computers to run their applications. That same year, Google introduced the Google
Apps suite (now called Google Workspace), a collection of SaaS productivity applications. In
2009, Microsoft launched its first SaaS application, Microsoft Office 2011. Today, Gartner
predicts that worldwide end-user spending on the public cloud will total USD 679 billion and
is projected to exceed USD 1 trillion in 2027.

What is cloud computing?


Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to
a shared pool of configurable computing resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction.
Cloud computing is the on-demand access of computing resources—physical servers or virtual
servers, data storage, networking capabilities, application development tools, software, AI-
powered analytic tools and more—over the internet with pay-per-use pricing.
The cloud computing model offers customers greater flexibility and scalability compared to
traditional on-premises infrastructure.
Cloud computing plays a pivotal role in our everyday lives, whether accessing a cloud application
like Google Gmail, streaming a movie on Netflix or playing a cloud-hosted video game.
Cloud computing has also become indispensable in business settings, from small startups to
global enterprises. Its many business applications include enabling remote work by making data
and applications accessible from anywhere, creating the framework for seamless omnichannel
customer engagement and providing the vast computing power and other resources needed to
take advantage of cutting-edge technologies like generative AI and quantum computing.
A cloud services provider (CSP) manages cloud-based technology services hosted at a
remote data center and typically makes these resources available for a pay-as-you-go or
monthly subscription fee.
Characteristics of cloud computing
 On-demand self-service
Users can access computing services via the cloud when they need to without interaction from
the service provider. The computing services should be fully on-demand so that users have
control and agility to meet their evolving needs.
 Broad network access
Cloud computing services are widely available via the network through users’ preferred tools
(e.g., laptops, desktops, smartphones, etc.).
 Resource pooling
One of the most attractive elements of cloud computing is the pooling of resources to deliver
computing services at scale. Resources, such as storage, memory, processing, and network
bandwidth, are pooled and assigned to multiple consumers based on demand.
 Rapid elasticity
Successful resource allocation requires elasticity. Resources must be assigned accurately and
quickly with the ability to absorb significant increases and decreases in demand without service
interruption or quality degradation.
 Measured service
Following the utility model, cloud computing services are measured and metered. This
measurement allows the service provider (and consumer) to track usage and gauge costs
according to their demand on resources.
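As a toy illustration of the measured-service idea, the sketch below meters three resources and bills them pay-per-use. The resource names and unit rates are invented for the example, not any provider's actual pricing.

```python
# Toy illustration of measured service: usage is metered per resource and
# billed pay-per-use. Rates below are invented for the example.

RATES = {"cpu_hours": 0.05, "gb_stored": 0.02, "gb_transferred": 0.09}

def monthly_bill(usage):
    """Sum metered usage * unit rate for each resource used."""
    return round(sum(usage[r] * RATES[r] for r in usage), 2)

bill = monthly_bill({"cpu_hours": 720, "gb_stored": 50, "gb_transferred": 100})
print(bill)  # 720*0.05 + 50*0.02 + 100*0.09 = 36 + 1 + 9 = 46.0
```

The point of the utility model is visible in the arithmetic: a customer who uses half the resources pays half the bill, with no fixed capacity cost.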
Cloud computing components
The following are a few of the most integral components of today’s modern cloud computing
architecture.

Data centers
CSPs own and operate remote data centers that house physical or bare metal servers, cloud
storage systems and other physical hardware that create the underlying infrastructure and
provide the physical foundation for cloud computing.

Networking capabilities
In cloud computing, high-speed networking connections are crucial. Typically, an internet
connection known as a wide-area network (WAN) connects front-end users (for example, client-
side interface made visible through web-enabled devices) with back-end functions (for example,
data centers and cloud-based applications and services). Other advanced cloud computing
networking technologies, including load balancers, content delivery networks (CDNs) and
software-defined networking (SDN), are also incorporated to ensure data flows quickly, easily
and securely between front-end users and back-end resources.
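One simple policy such a load balancer might apply is round-robin, cycling incoming requests across the back-end pool. The following is a minimal sketch with invented server names, not a production balancer:

```python
# Hedged sketch: round-robin load balancing, one simple policy a cloud
# load balancer might use. Backend names are invented for illustration.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, backends):
        self._next = cycle(backends)   # endless rotation over the pool

    def route(self, request):
        backend = next(self._next)     # pick the next server in turn
        return f"{request} -> {backend}"

lb = RoundRobinBalancer(["srv-a", "srv-b", "srv-c"])
for i in range(4):
    print(lb.route(f"req{i}"))
# cycles srv-a, srv-b, srv-c, then wraps back to srv-a
```

Real balancers layer health checks, weighting and session affinity on top of a base policy like this one.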

Virtualization
Cloud computing relies heavily on the virtualization of IT infrastructure—servers, operating
system software, networking and other infrastructure that’s abstracted using special software so
that it can be pooled and divided irrespective of physical hardware boundaries. For example, a
single hardware server can be divided into multiple virtual servers. Virtualization enables cloud
providers to make maximum use of their data center resources.
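The pool-and-divide idea can be sketched as a toy allocator that carves virtual servers out of one physical host's CPU and memory. This is illustrative only; a real hypervisor handles far more (scheduling, isolation, overcommit):

```python
# Illustrative only: dividing one physical server's resources among
# virtual servers, the way a hypervisor pools and partitions hardware.
# Host and VM sizes are invented for the example.

def partition(physical, vm_requests):
    """Place VMs on a physical host while capacity remains."""
    placed, free = [], dict(physical)
    for name, need in vm_requests:
        if all(free[r] >= need[r] for r in need):
            for r in need:
                free[r] -= need[r]   # reserve the VM's share of the host
            placed.append(name)
    return placed, free

host = {"cpus": 16, "ram_gb": 64}
vms = [("vm1", {"cpus": 4, "ram_gb": 16}),
       ("vm2", {"cpus": 8, "ram_gb": 32}),
       ("vm3", {"cpus": 8, "ram_gb": 32})]   # won't fit: host is nearly full
placed, free = partition(host, vms)
print(placed, free)   # ['vm1', 'vm2'] {'cpus': 4, 'ram_gb': 16}
```

The leftover capacity shown at the end is exactly what providers try to minimize: packing VMs tightly is how virtualization makes "maximum use" of data center resources.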

Architecture Of Cloud Computing


Cloud computing architecture refers to the components and sub-components required for cloud
computing. These components typically refer to:
1. Front end (fat client, thin client)
2. Back-end platforms (servers, storage)
3. Cloud-based delivery and a network (Internet, intranet, intercloud)
1. Front End (User Interaction Enhancement)
The user interface of cloud computing consists of two kinds of clients. Thin clients use web
browsers, providing portable, lightweight access, while fat clients use many local
functionalities to offer a richer user experience.

2. Back-end Platforms (Cloud Computing Engine)


The core of cloud computing is the back-end platform: several servers that provide
processing and storage. The servers manage application logic, while the storage systems
provide effective data handling. Together, these back-end platforms supply the processing
power and the capacity to manage and store data behind the cloud.

3. Cloud-Based Delivery and Network


On-demand access to computing resources is provided over the Internet, an intranet, or an
intercloud. The Internet provides global accessibility, an intranet supports internal
communication of services within an organization, and an intercloud enables interoperability
across various cloud services. This dynamic network connectivity is an essential component
of cloud computing architecture, guaranteeing easy access and data transfer.

Types of Cloud Computing Services


The following are the types of cloud computing services:

 Infrastructure as a Service (IaaS)
 Platform as a Service (PaaS)
 Software as a Service (SaaS)
 Serverless Computing
1. IaaS (Infrastructure-as-a-Service)
IaaS (Infrastructure-as-a-Service) provides on-demand access to fundamental computing
resources—physical and virtual servers, networking and storage—over the internet on a
pay-as-you-go basis. IaaS enables end users to scale and shrink resources on an as-
needed basis, reducing the need for high up-front capital expenditures or unnecessary
on-premises or "owned" infrastructure and for overbuying resources to accommodate
periodic spikes in usage.

According to a Business Research Company report, the IaaS market is predicted to grow
rapidly in the next few years, reaching $212.34 billion in 2028 at a compound annual growth
rate (CAGR) of 14.2%.

2. PaaS (Platform-as-a-Service)
PaaS (Platform-as-a-Service) provides software developers with an on-demand platform
—hardware, complete software stack, infrastructure and development tools—for
running, developing and managing applications without the cost, complexity and
inflexibility of maintaining that platform on-premises. With PaaS, the cloud provider
hosts everything at their data center. These include servers, networks, storage,
operating system software, middleware and databases. Developers simply pick from a
menu to spin up servers and environments they need to run, build, test, deploy,
maintain, update and scale applications.

Today, PaaS is typically built around containers, a virtualized compute model one step
removed from virtual servers. Containers virtualize the operating system, enabling
developers to package the application with only the operating system services it needs
to run on any platform without modification and the need for middleware.

Red Hat® OpenShift® is a popular PaaS built around Docker containers and Kubernetes,
an open source container orchestration solution that automates deployment, scaling,
load balancing and more for container-based applications.

3. SaaS (Software-as-a-Service)
SaaS (Software-as-a-Service), also known as cloud-based software or cloud applications,
is application software hosted in the cloud. Users access SaaS through a web browser, a
dedicated desktop client or an API that integrates with a desktop or mobile operating
system. Cloud service providers offer SaaS based on a monthly or annual subscription
fee. They may also provide these services through pay-per-usage pricing.

In addition to the cost savings, time-to-value and scalability benefits of cloud, SaaS
offers the following:

Automatic upgrades: With SaaS, users get new features as soon as the cloud service
provider adds them, without orchestrating an on-premises upgrade.
Protection from data loss: Because SaaS stores application data in the cloud with the
application, users don’t lose data if their device crashes or breaks.
SaaS is the primary delivery model for most commercial software today. Hundreds of
SaaS solutions exist, from focused industry and broad administrative software (for example,
Salesforce) to robust enterprise database and artificial intelligence (AI) software.
According to an International Data Corporation (IDC) survey, SaaS applications represent
the largest cloud computing segment, accounting for more than 48% of the $778 billion
worldwide cloud software revenue.

4. Serverless computing
Serverless computing, or simply serverless, is a cloud computing model that offloads all
the back-end infrastructure management tasks, including provisioning, scaling,
scheduling and patching to the cloud provider. This frees developers to focus all their
time and effort on the code and business logic specific to their applications.

Moreover, serverless runs application code on a per-request basis only and automatically
scales the supporting infrastructure up and down in response to the number of requests.
With serverless, customers pay only for the resources used when the application runs; they
never pay for idle capacity.

FaaS, or Function-as-a-Service, is often confused with serverless computing when, in
fact, it’s a subset of serverless. FaaS allows developers to run portions of application
code (called functions) in response to specific events. Everything besides the code—
physical hardware, virtual machine (VM) operating system and web server software
management—is provisioned automatically by the cloud service provider in real time as
the code runs and is spun back down once the execution is complete. Billing starts when
execution starts and stops when execution stops.
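A FaaS function can be as small as a single handler the platform invokes once per event. The sketch below mirrors common FaaS conventions but is illustrative, not any specific vendor's API:

```python
# Hypothetical FaaS-style function: the platform calls handler(event)
# once per request; provisioning, scaling and billing are the provider's
# job. The event shape and field names here are invented for illustration.

def handler(event):
    """Run only when an event arrives; no process sits idle in between."""
    name = event.get("name", "world")
    return {"status": 200, "body": f"Hello, {name}!"}

# The platform would invoke this on each incoming event:
print(handler({"name": "cloud"}))   # {'status': 200, 'body': 'Hello, cloud!'}
```

Because the function holds no state between invocations, the provider is free to spin instances up and down per request, which is what makes per-execution billing possible.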
What is a Cloud Deployment Model?
Cloud Deployment Model functions as a virtual computing environment with a
deployment architecture that varies depending on the amount of data you want to store
and who has access to the infrastructure.
Types of Cloud Computing Deployment Models
The cloud deployment model identifies the specific type of cloud environment based on
ownership, scale, and access, as well as the cloud’s nature and purpose. The location of
the servers you’re utilizing and who controls them are defined by a cloud deployment
model. It specifies how your cloud infrastructure will look, what you can change, and
whether you will be given services or will have to create everything yourself.
Relationships between the infrastructure and your users are also defined by cloud
deployment types. Different types of cloud computing deployment models are
described below:
Public cloud
A public cloud is a type of cloud computing in which a cloud service provider makes computing
resources available to users over the public internet. These include SaaS applications, individual
virtual machines (VMs), bare metal computing hardware, complete enterprise-grade
infrastructures and development platforms. These resources might be accessible for free or
according to subscription-based or pay-per-usage pricing models.
The public cloud provider owns, manages and assumes all responsibility for the data centers,
hardware and infrastructure on which its customers’ workloads run. It typically provides high-
bandwidth network connectivity to ensure high performance and rapid access to applications
and data.
Public cloud is a multi-tenant environment where all customers pool and share the cloud
provider’s data center infrastructure and other resources. In the world of the leading public
cloud vendors, such as Amazon Web Services (AWS), Google Cloud, IBM Cloud®, Microsoft Azure
and Oracle Cloud, these customers can number in the millions.

Private cloud
A private cloud is a cloud environment where all cloud infrastructure and computing resources
are dedicated to one customer only. Private cloud combines many benefits of cloud computing
—including elasticity, scalability and ease of service delivery—with the access control, security
and resource customization of on-premises infrastructure.
A private cloud is typically hosted on-premises in the customer’s data center. However, it can
also be hosted on an independent cloud provider’s infrastructure or built on rented
infrastructure housed in an offsite data center.
Many companies choose a private cloud over a public cloud environment to meet their
regulatory compliance requirements. Entities like government agencies, healthcare
organizations and financial institutions often opt for private cloud settings for workloads that
deal with confidential documents, personally identifiable information (PII), intellectual property,
medical records, financial data or other sensitive data.

Hybrid cloud
A hybrid cloud is just what it sounds like: a combination of public cloud, private cloud and on-
premises environments. Specifically (and ideally), a hybrid cloud connects a combination of
these three environments into a single, flexible infrastructure for running the organization’s
applications and workloads.
At first, organizations turned to hybrid cloud computing models primarily to migrate portions of
their on-premises data into private cloud infrastructure and then connect that infrastructure to
public cloud infrastructure hosted off-premises by cloud vendors. This process was done
through a packaged hybrid cloud solution like Red Hat® OpenShift® or middleware and IT
management tools to create a "single pane of glass." Teams and administrators rely on this
unified dashboard to view their applications, networks and systems.
Today, hybrid cloud architecture has expanded beyond physical connectivity and cloud
migration to offer a flexible, secure and cost-effective environment that supports the portability
and automated deployment of workloads across multiple environments. This feature enables an
organization to meet its technical and business objectives more effectively and cost-efficiently
than with a public or private cloud alone.

Multicloud
Multicloud uses two or more clouds from two or more different cloud providers. A multicloud
environment can be as simple as email SaaS from one vendor and image editing SaaS from
another. But when enterprises talk about multicloud, they typically refer to using multiple cloud
services—including SaaS, PaaS and IaaS services—from two or more leading public cloud
providers.
Organizations choose multicloud to avoid vendor lock-in, to have more services to select from
and to access more innovation. With multicloud, organizations can choose and customize a
unique set of cloud features and services to meet their business needs. This freedom of choice
includes selecting “best-of-breed” technologies from any CSP, as needed or as they emerge,
rather than being locked into offerings from a single vendor. For example, an organization may
choose AWS for its global reach in web hosting, IBM Cloud for data analytics and machine
learning platforms and Microsoft Azure for its security features.
A multicloud environment also reduces exposure to licensing, security and compatibility issues
that can result from "shadow IT"— any software, hardware or IT resource used on an enterprise
network without the IT department’s approval and often without IT’s knowledge or oversight.

The modern hybrid multicloud


Today, most enterprise organizations use a hybrid multicloud model. Apart from the flexibility to
choose the most cost-effective cloud service, hybrid multicloud offers the most control over
workload deployment, enabling organizations to operate more efficiently, improve performance
and optimize costs. According to an IBM® Institute for Business Value study, the value derived
from a full hybrid multicloud platform technology and operating model at scale is two-and-a-half
times the value derived from a single-platform, single-cloud vendor approach.
Yet the modern hybrid multicloud model comes with more complexity. The more clouds you use
—each with its own management tools, data transmission rates and security protocols—the
more difficult it can be to manage your environment. With over 97% of enterprises operating on
more than one cloud and most organizations running 10 or more clouds, a hybrid cloud
management approach has become crucial. Hybrid multicloud management platforms provide
visibility across multiple provider clouds through a central dashboard where development teams
can see their projects and deployments, operations teams can monitor clusters and nodes and
the cybersecurity staff can monitor for threats.
Benefits of cloud computing
Compared to traditional on-premises IT that involves a company owning and
maintaining physical data centers and servers to access computing power, data storage
and other resources (and depending on the cloud services you select), cloud computing
offers many benefits, including the following:

Cost-effectiveness
Cloud computing lets you offload some or all of the expense and effort of purchasing,
installing, configuring and managing mainframe computers and other on-premises
infrastructure. You pay only for cloud-based infrastructure and other computing
resources as you use them.

Increased speed and agility


With cloud computing, your organization can use enterprise applications in minutes
instead of waiting weeks or months for IT to respond to a request, purchase and
configure supporting hardware and install software. This capability empowers users—
specifically DevOps and other development teams—to leverage cloud-based software and
supporting infrastructure.

Unlimited scalability
Cloud computing provides elasticity and self-service provisioning, so instead of
purchasing excess capacity that sits unused during slow periods, you can scale capacity
up and down in response to spikes and dips in traffic. You can also use your cloud
provider’s global network to spread your applications closer to users worldwide.
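The scale-up/scale-down decision can be sketched as choosing just enough instances for the current request rate, within configured bounds. All numbers below are invented for illustration:

```python
# Sketch of elastic scaling: pick just enough instances for current
# traffic instead of buying peak capacity up front. Capacities and
# bounds are invented for the example.
import math

def instances_needed(requests_per_sec, per_instance_capacity=100,
                     min_instances=1, max_instances=20):
    """Instances required for the load, clamped to configured bounds."""
    n = math.ceil(requests_per_sec / per_instance_capacity)
    return max(min_instances, min(n, max_instances))

for load in (30, 250, 5000):
    print(load, "->", instances_needed(load))
# 30 -> 1, 250 -> 3, 5000 -> 20 (capped at the configured maximum)
```

With fixed on-premises capacity you would have to provision for the 5000-request spike permanently; elasticity lets the fleet shrink back to one instance when traffic dips.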

Enhanced strategic value


Cloud computing enables organizations to use various technologies and the most up-to-
date innovations to gain a competitive edge. For instance, in retail, banking and other
customer-facing industries, generative AI-powered virtual assistants deployed over the
cloud can deliver better customer response time and free up teams to focus on higher-
level work. In manufacturing, teams can collaborate and use cloud-based software to
monitor real-time data across logistics and supply chain processes.

Cloud computing components


The following are a few of the most integral components of today’s modern cloud
computing architecture.

Data centers
CSPs own and operate remote data centers that house physical or bare metal servers,
cloud storage systems and other physical hardware that create the underlying
infrastructure and provide the physical foundation for cloud computing.

Networking capabilities
In cloud computing, high-speed networking connections are crucial. Typically, an
internet connection known as a wide-area network (WAN) connects front-end users (for
example, client-side interface made visible through web-enabled devices) with back-end
functions (for example, data centers and cloud-based applications and services). Other
advanced cloud computing networking technologies, including load balancers, content
delivery networks (CDNs) and software-defined networking (SDN), are also incorporated
to ensure data flows quickly, easily and securely between front-end users and back-end
resources.
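The load-balancing role mentioned above can be sketched as a simple round-robin dispatcher. The server names are hypothetical, and a real load balancer would also weigh server health and current load:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Dispatches each incoming request to the next back-end server in turn."""

    def __init__(self, servers):
        self._servers = cycle(servers)

    def route(self, request):
        # The request payload is ignored in this sketch; production balancers
        # also consult health checks and per-server load.
        return next(self._servers)

balancer = RoundRobinBalancer(["backend-1", "backend-2", "backend-3"])
assignments = [balancer.route(f"req-{i}") for i in range(6)]
```

Six consecutive requests cycle evenly through the three back ends, which is why round robin is the default strategy in many load balancers.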

Virtualization
Cloud computing relies heavily on the virtualization of IT infrastructure—servers,
operating system software, networking and other infrastructure that’s abstracted using
special software so that it can be pooled and divided irrespective of physical hardware
boundaries. For example, a single hardware server can be divided into multiple virtual
servers. Virtualization enables cloud providers to make maximum use of their data
center resources.
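The idea of dividing a single hardware server into multiple virtual servers can be illustrated with a toy capacity model. The VM names and sizes are made up, and a real hypervisor schedules far more than CPU and RAM:

```python
def partition_server(total_vcpus, total_ram_gb, vm_specs):
    """Allocate virtual servers from one physical host, skipping requests
    that exceed the remaining capacity (a toy model of hypervisor placement)."""
    placed, free_cpu, free_ram = [], total_vcpus, total_ram_gb
    for name, vcpus, ram in vm_specs:
        if vcpus <= free_cpu and ram <= free_ram:
            placed.append(name)
            free_cpu -= vcpus
            free_ram -= ram
    return placed, free_cpu, free_ram

# Hypothetical host with 16 vCPUs and 64 GB RAM.
vms = [("vm-a", 4, 16), ("vm-b", 8, 32), ("vm-c", 8, 32)]
placed, cpu_left, ram_left = partition_server(16, 64, vms)
```

Here vm-c cannot fit once vm-a and vm-b are placed, showing how virtualization lets a provider pack workloads up to, but not beyond, the physical hardware's limits.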

Characteristics of Cloud Computing


The following are the characteristics of Cloud Computing:
1. Scalability: With Cloud hosting, it is easy to grow and shrink the number and size of servers
based on the need. This is done by either increasing or decreasing the resources in the cloud.
This ability to alter plans due to fluctuations in business size and needs is a superb benefit of
cloud computing, especially when experiencing a sudden growth in demand.
2. Save Money: An advantage of cloud computing is the reduction in hardware costs. Instead of
purchasing in-house equipment, hardware needs are left to the vendor. For companies that are
growing rapidly, new hardware can be large, expensive, and inconvenient. Cloud computing
alleviates these issues because resources can be acquired quickly and easily. Even better, the
cost of repairing or replacing equipment is passed to the vendors. Along with purchase costs,
off-site hardware cuts internal power costs and saves space. Large data centers can take up
precious office space and produce a large amount of heat. Moving to cloud applications or
storage can help maximize space and significantly cut energy expenditures.
3. Reliability: Rather than being hosted on a single physical server, hosting is
delivered on a virtual partition that draws its resources, such as disk space, from an extensive
network of underlying physical servers. If one server goes offline, availability is unaffected,
as the virtual servers continue to pull resources from the remaining network of servers.
4. Physical Security: The underlying physical servers are still housed within data centers and so
benefit from the security measures that those facilities implement to prevent people from
accessing or disrupting them on-site.
5. Outsourced Management: While you manage your business, someone else manages your
computing infrastructure, so you do not need to worry about maintenance or hardware degradation.
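The scalability characteristic above, growing and shrinking the number of servers with demand, follows a simple proportional rule at its core. This sketch uses made-up utilization figures and bounds; real autoscalers add cooldown periods and multiple metrics:

```python
import math

def scale(current_servers, cpu_utilization, target=0.6,
          min_servers=1, max_servers=10):
    """Return the server count that brings average CPU utilization back
    toward the target: the core rule behind elastic scaling."""
    desired = math.ceil(current_servers * cpu_utilization / target)
    return max(min_servers, min(max_servers, desired))

spike = scale(current_servers=4, cpu_utilization=0.9)   # traffic spike: scale out
lull = scale(current_servers=4, cpu_utilization=0.15)   # quiet period: scale in
```

During the spike the fleet grows from 4 to 6 servers, and during the lull it shrinks to 1, mirroring the "grow and shrink based on need" behavior described above.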
Top Reasons to Switch from On-premise to Cloud Computing
The following are the Top reasons to switch from on-premise to cloud computing:
1. Reduces cost: Long-term cost reduction is one of the main advantages of cloud computing; on
average, companies can save around 15% of total costs by migrating to the cloud. Using cloud
servers, businesses cut costs because they no longer need a dedicated technical support staff to
address server issues. Well-known case studies, such as Coca-Cola and Pinterest, illustrate the
cost-cutting benefits of cloud servers.
2. More storage: The cloud provides additional servers, storage space, and computing power so that
software and applications can execute as quickly and efficiently as possible. Many cloud storage
tools are available, such as Dropbox, OneDrive, Google Drive, and iCloud Drive.
3. Better work-life balance for employees: Cloud computing can improve both the work and
personal lives of an enterprise's employees. With on-premises servers, employees may have to work
even on holidays to keep the servers secure, maintained, and functioning properly. With cloud
storage this is no longer the case: the workload is comparatively lower, and employees get ample
time for their personal lives.
Top leading Cloud Computing companies
1. Amazon Web Services (AWS)
One of the most successful cloud-based businesses is Amazon Web Services (AWS),
an Infrastructure as a Service (IaaS) offering in which customers rent virtual computers
on Amazon’s infrastructure.
2. Microsoft Azure Cloud Platform
Microsoft created the Azure platform, which enables .NET Framework applications
to run over the internet as an alternative platform for Microsoft developers. This is a
classic Platform as a Service (PaaS).
3. Google Cloud Platform (GCP)
Google built a worldwide network of data centers to serve its search engine, which captured
much of the world’s advertising revenue. Using that revenue, Google offers software to users
free of charge on top of this infrastructure. This is called Software as a Service (SaaS).
Advantages of Cloud Computing
The following are main advantages of Cloud Computing:
1. Cost Efficiency: Cloud computing offers flexible, pay-as-you-go pricing, which reduces
capital expenditure on infrastructure, particularly for small and medium-sized businesses.
2. Flexibility and Scalability: Cloud services make it easy to scale resources up or down based on
demand, so businesses can handle varying workloads efficiently without investing heavily in
hardware that sits idle during periods of low demand.
3. Collaboration and Accessibility: Cloud computing provides easy access to data and applications
from anywhere over the internet. Teams in different locations can collaborate on shared
documents and projects in real time, resulting in higher-quality, more productive output.
4. Automatic Maintenance and Updates: The cloud provider manages the underlying infrastructure
and automatically applies software updates as new versions are released. This guarantees that
companies always have access to the newest technologies and can focus entirely on business
operations and innovation.
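The pay-as-you-go cost advantage above can be made concrete with a toy comparison. All figures here are hypothetical, chosen only to show the structure of the two cost models:

```python
def pay_as_you_go_cost(hours_used, rate_per_hour):
    """Cloud billing: pay only for the compute hours actually consumed."""
    return hours_used * rate_per_hour

def on_premise_cost(hardware_capex, monthly_opex, months):
    """On-premises: fixed hardware purchase plus ongoing operating costs,
    paid regardless of how much the equipment is used."""
    return hardware_capex + monthly_opex * months

# Hypothetical figures for a small, lightly used workload over one year.
cloud_total = pay_as_you_go_cost(hours_used=2000, rate_per_hour=0.25)
onprem_total = on_premise_cost(hardware_capex=5000, monthly_opex=50, months=12)
```

For a lightly used workload the usage-based bill stays well below the fixed capital outlay; the comparison naturally reverses for workloads that run at high utilization around the clock, which is why the pricing model requires ongoing monitoring.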
Disadvantages of Cloud Computing
The following are the main disadvantages of Cloud Computing:
1. Security Concerns: Storing sensitive data on external servers raises security concerns,
which is one of the main drawbacks of cloud computing.
2. Downtime and Reliability: Although cloud services are usually dependable, they can suffer
unexpected interruptions and downtime caused by server problems, network issues, or
maintenance disruptions at the cloud provider. These outages can negatively affect business
operations and prevent users from accessing their applications.
3. Dependency on Internet Connectivity: Cloud computing services rely heavily on internet
connectivity; users need a stable, high-speed connection to access and use cloud resources.
In regions with limited connectivity, users may struggle to reach their data and applications.
4. Cost Management Complexity: The pay-as-you-go pricing model is a key benefit of cloud
services, but it also introduces cost management complexity. Without careful monitoring and
resource optimization, organizations can end up with unexpected costs as their usage scales.
Understanding and controlling cloud usage requires ongoing attention.
Cloud Sustainability
The following are some of the key points of cloud sustainability:
 Energy Efficiency: Cloud providers optimize data center operations to minimize energy
consumption and improve efficiency.
 Renewable Energy: Providers are increasingly adopting renewable energy sources, such as solar
and wind power, for their data centers to reduce carbon emissions.
 Virtualization: Server virtualization enables better utilization of hardware resources, reducing
the need for physical servers and lowering energy consumption.
Cloud Security
Cloud security refers to the measures and practices designed to protect data,
applications, and infrastructure in cloud computing environments. The following are
some best practices for cloud security:
 Data Encryption: Encryption is essential for securing data stored in the cloud. It ensures that
data remains unreadable to unauthorized users even if it is intercepted.
 Access Control: Implementing strict access controls and authentication mechanisms helps
ensure that only authorized users can access sensitive data and resources in the cloud.
 Multi-Factor Authentication (MFA): MFA adds an extra layer of security by requiring users to
provide multiple forms of verification, such as passwords, biometrics, or security tokens, before
gaining access to cloud services.
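The one-time verification codes behind MFA can be sketched with the HMAC-based algorithm (HOTP, RFC 4226) that most authenticator apps build on; the time-based variant (TOTP) simply uses the current 30-second interval as the counter. The secret below is the standard RFC test key, not a real credential:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226): HMAC the counter with the
    shared secret, dynamically truncate to 31 bits, and keep the low digits."""
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224".
code = hotp(b"12345678901234567890", 0)
```

Because the server and the user's device share the secret and the counter, both can compute the same short-lived code, giving the extra verification factor described above without transmitting the secret itself.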
Use Cases Of Cloud Computing
Cloud computing provides many use cases across industries and various applications:
1. Scalable Infrastructure: Infrastructure as a Service (IaaS) enables organizations to scale
computing resources based on demand without investing in physical hardware.
2. Efficient Application Development: Platform as a Service (PaaS) simplifies application
development, offering tools and environments for building, deploying, and managing
applications.
3. Streamlined Software Access: Software as a Service (SaaS) provides subscription-based access
to software applications over the internet, reducing the need for local installation and
maintenance.
4. Data Analytics: Cloud-based platforms facilitate big data analytics, allowing organizations to
process and derive insights from large datasets efficiently.
5. Disaster Recovery: Cloud-based disaster recovery solutions offer cost-effective data replication
and backup, ensuring quick recovery in case of system failures or disasters.

4. UBIQUITOUS COMPUTING

History
Ubiquitous computing was first pioneered at the Olivetti Research Laboratory in
Cambridge, England, where the Active Badge, a "clip-on computer" the size of an
employee ID card, was created, enabling the company to track the location of people in
a building, as well as the objects to which they were attached.

Ubiquitous computing, also called pervasive computing, is the growing trend of
embedding computational capability (generally in the form of microprocessors) into
everyday objects to make them communicate effectively and perform useful tasks in a
way that minimizes the end user's need to interact with computers as computers.
Pervasive computing devices are network-connected and constantly available.
Unlike desktop computing, pervasive computing can occur with any device, at any time,
in any place and in any data format across any network and can hand tasks from one
computer to another as, for example, a user moves from his car to his office. Pervasive
computing devices have evolved to include:
 laptops;
 notebooks;
 smartphones;
 tablets;
 wearable devices; and
 sensors (for example, on fleet management and pipeline components, lighting systems, appliances).

Often considered the successor to mobile computing, ubiquitous computing generally
involves wireless communication and networking technologies, mobile
devices, embedded systems, wearable computers, radio frequency ID (RFID)
tags, middleware and software agents. Internet capabilities, voice recognition and
artificial intelligence (AI) are often also included.
How ubiquitous computing is used
Pervasive computing applications have been designed for consumer use and to help
people do their jobs.
An example of pervasive computing is an Apple Watch that alerts the user to a phone
call and allows the call to be completed through the watch. Another example is when a
registered user of Audible, Amazon's audiobook service, starts a book using the
Audible app on a smartphone on the train and continues listening to it through
Amazon Echo at home.
More examples of pervasive computing:
Human-computer interaction (HCI) is enabled through smart speakers like the Amazon
Echo, Google Assistant, or Apple HomePod, which enable communication with the
system.
Voice-activated self-driving cars make commuting easier and potentially save time and
energy.
Smart locks that use the newest technologies keep the owner informed while securing
the home.
Voice-activated smart clocks and lightbulbs that promote energy efficiency.

An environment in which devices, present everywhere, are capable of some form of
computing can be considered a ubiquitous computing environment. Industries spending
money on research and development (R&D) for ubiquitous computing include the
following:
 Energy
 Entertainment
 Healthcare
 Logistics
 Military
Importance
Because pervasive computing systems are capable of collecting, processing and
communicating data, they can adapt to the data's context and activity. That means, in
essence, a network that can understand its surroundings and improve the human
experience and quality of life.
Pervasive computing and the internet of things
The internet of things (IoT) has largely evolved out of pervasive computing. Though
some argue there is little or no difference, IoT is likely more in line with pervasive
computing rather than Weiser's original view of ubiquitous computing.
Like pervasive computing, IoT-connected devices communicate and provide notifications
about usage. The vision of pervasive computing is computing power widely dispersed
throughout daily life in everyday objects. IoT is on its way to providing this vision and
turning common objects into connected devices, yet, as of now, requires a great deal of
configuration and human-computer interaction -- something Weiser's ubiquitous
computing does not.
IoT can employ wireless sensor networks. These sensor networks collect data from
devices' individual sensors before relaying it to an IoT server. In one application of
the technology, such as collecting data on how much water is leaking from a city's
water mains, it may be useful to aggregate data within the wireless sensor network first. In
other cases, such as wearable computing devices like an Apple Watch, the data is
better sent directly to a centralized server on the internet for collection and
processing.
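The water-main scenario above, where data is aggregated inside the sensor network before reaching the IoT server, can be sketched as a gateway that relays per-sensor averages. The sensor IDs and readings are invented for illustration:

```python
def aggregate_readings(readings):
    """A sensor-network gateway pre-aggregates raw readings into one average
    per sensor, so only a compact summary is relayed to the IoT server."""
    summary = {}
    for sensor_id, value in readings:
        stats = summary.setdefault(sensor_id, {"count": 0, "total": 0.0})
        stats["count"] += 1
        stats["total"] += value
    return {sid: s["total"] / s["count"] for sid, s in summary.items()}

# Hypothetical water-main flow readings (liters/min) from two sensors.
raw = [("main-7", 4.0), ("main-7", 6.0), ("main-9", 1.5)]
relayed = aggregate_readings(raw)
```

Three raw readings collapse into two summary values, which is the bandwidth saving that makes in-network aggregation attractive for battery-powered sensor deployments.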
Ubiquitous computing advantages
The following are some advantages of ubiquitous computing:

 Reduced service costs;
 Increased industrial efficiency and scheduling;
 Quicker reaction times in healthcare;
 More convenient personal financial transactions;
 More precise targeted advertising; and more.
Because it integrates sensors, networking technologies, and data analytics to
track and report on several things, including consumer preferences, production
procedures, and traffic patterns, ubiquitous computing benefits people.

Challenges of Ubiquitous Computing


One of the most serious issues that ubiquitous computing faces is privacy.
Protecting system security, privacy, and safety is critical in ubiquitous
computing.
It's also worth noting that, despite progress in ubiquitous computing, the sector
continues to confront challenges in areas like human-machine interfaces
and data security, as well as technical impediments that cause concerns with
availability and reliability.
Despite the rapid proliferation of smart devices today, making ubiquitous
computing available to everyone, with comprehensive infrastructure and ease
of use, is a difficult undertaking. Older people and individuals living in rural
areas are still at a disadvantage, which must be addressed if ubiquitous
computing is to be adopted in a healthy way.

Characteristics of ubiquitous computing


Important characteristics of ubiquitous computing include:

 Taking into account the human element and applying the paradigm in a setting
where people are involved rather than computers.
 The use of low-cost processors lowers memory and storage needs.
 Real-time characteristics captured.
 Computers that are fully linked and always available.
 Concentrate on many-to-many relationships in the environment rather than
one-to-one, many-to-one, or one-to-many relationships, as well as the idea of
technology, which is always present.
 Includes characteristics of the local/global, social/personal, public/private,
invisible/visible, and considers both the generation and transmission of
knowledge.
 Utilizes Internet convergence, wireless technologies, and modern electronics.
 Increased monitoring, potential limitations on, and meddling with user privacy.
 The degree of reliability of the various pieces of used equipment.
Ubiquitous computing layers
Layer 1: This layer, known as task management, examines user tasks, context,
and index. Additionally, it handles the intricate dependencies that come with
the region.
Layer 2: This layer, known as environment management, keeps track of
resources and their capabilities, service requirements, and user-level statuses of
certain capabilities.
Layer 3: This layer, known as the environment layer, keeps track of vital
resources and regulates their reliability.
Sentient Computing

Sentient computing is a type of ubiquitous computing that employs sensors to
detect and react to its surroundings. The sensors are frequently used to build a
world model that allows location-aware or context-aware apps to be built.

Sentient computers gather data from a variety of sources, including background
processes, data pushed to the user, the user's location, time of day, current
speed and average speed over time, previous behaviors such as clicks and
subscriptions, and the user's friends on another platform.

5. DISTRIBUTED COMPUTING

A distributed system is a collection of physically separated servers and data storage that reside across
multiple systems worldwide. These components collaborate and communicate with the objective of
being a single, unified system with powerful computing capabilities.

Examples of Distributed Systems

 The internet (World Wide Web)

 Telecommunication networks with multiple antennas, amplifiers, and other networking devices
that appear as a single system to end-users‍

Distributed Computing Definition: What is Distributed Cloud?

In a distributed cloud, the public cloud infrastructure utilizes multiple data centers to store and run
applications and services. A distributed cloud computing architecture, also known as a distributed
computing architecture, is made up of distributed systems and clouds.
Examples of Distributed Computing

 Content Delivery Networks (CDNs) that utilize geographically separated regions to serve end-
users faster.

 Ridge Cloud is a distributed cloud that can be deployed in any location to give its end users
hyper-low latency.

Distributed Computing vs. Cloud Computing

What is the role of distributed computing in cloud computing? Distributed computing and cloud
computing are not mutually exclusive. Distributed computing is essentially a variant of cloud computing
that operates on a distributed cloud network.

Distributed Cloud vs. Edge Computing

Edge computing is a type of cloud computing that works with data centers or PoPs placed near end-
users. With data centers located physically close to the source of the network traffic, applications serve
users’ requests faster.

Distributed clouds utilize resources spread over a network, irrespective of where they have users.

Cloud architects combine these two approaches to build performance-oriented cloud computing
networks that serve global network traffic with maximum uptime.

How Does Distributed Computing Work?

Distributed computing connects hardware and software resources to accomplish many things, including:

 Collaborating to achieve a single goal through optional resource sharing;
 Managing access rights according to user authority level;
 Keeping resources open for further development;
 Achieving concurrency so multiple machines can work on a single process;
 Ensuring that all computing resources are scalable and can operate faster when working with
multiple machines;
 Detecting errors in connected components so that the network stays fault-tolerant.

Advanced distributed systems include automated processes and APIs to help them perform better.

From the customization perspective, distributed clouds provide businesses with the ability to connect
their on-premises systems to the cloud computing stack so that they can transform their entire IT
infrastructure without discarding old setups. They can extend existing infrastructure through
comparatively fewer modifications.

The cloud service provider controls the application upgrades, security, reliability, adherence to
standards, governance, and disaster recovery mechanism for the distributed infrastructure.

What Are the Advantages of Distributed Cloud Computing?

Distributed computing systems are becoming a basic service that all cloud services providers offer their
clients. Here is a quick list of its advantages:

Ultimate Scalability

All nodes or components of the distributed network are independent computers. You can easily add or
remove systems from the network without resource straining or downtime.

Improved Fault Tolerance

Distributed systems form a unified network with the architecture allowing any node to enter or exit at
any time. As a result, fault-tolerant distributed systems have a higher degree of reliability.

Boosted Performance and Agility

Distributed clouds allow multiple machines to work on the same process. As a result of this load
balancing, processing speed and cost-effectiveness of operations can significantly improve.

Lower Latency
As resources are globally present, businesses can select cloud-based servers near end-users to
reduce latency and speed up request processing. Companies reap the benefit of localized workloads
together with the convenience of a unified public cloud.

Helpful in Compliance Implementation

For both industry compliance and regional compliance, distributed cloud infrastructure enables
businesses to utilize local or country-based resources across different geographies. This way, they are
able to comply with varying data privacy rules, such as GDPR in Europe or CCPA in California.

Four Types of Distributed Systems

Broadly, we divide distributed cloud systems into four models:

Client-Server Model

In this model, the client directly fetches data from the server and then formats the data and renders it
for the end-user. To modify this data, end-users directly submit their edits back to the server.

An example of this model is Amazon, which stores customer information. When a customer updates
their address or phone number, the client sends the change to the server, and the server updates
the information in the database.
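The address-update flow just described can be sketched in miniature, with the server holding the authoritative record and the client submitting edits back to it. The customer ID and fields are hypothetical, and a real deployment would use HTTP requests and a database rather than in-process calls:

```python
class Server:
    """Holds the authoritative customer records (the server role)."""

    def __init__(self):
        self._db = {"cust-42": {"phone": "555-0100"}}

    def handle_update(self, customer_id, field, value):
        # The server alone mutates the data store, then confirms the change.
        self._db[customer_id][field] = value
        return {"status": "ok", "record": dict(self._db[customer_id])}

class Client:
    """Formats data for the end user and submits edits back to the server."""

    def __init__(self, server):
        self._server = server

    def update_phone(self, customer_id, new_phone):
        return self._server.handle_update(customer_id, "phone", new_phone)

server = Server()
client = Client(server)
response = client.update_phone("cust-42", "555-0199")
```

All state lives on the server; the client only renders data and forwards edits, which is exactly the division of labor the client-server model prescribes.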

Three-Tier Model

The three-tier model introduces an additional tier between client and server: the agent tier.

This tier holds the client data and frees the client from needing to manage its own information. The
client can access its data through a web application. As a result, the client application’s and the user’s
work is reduced and is easier to automate.
An example is a cloud storage space with the ability to store files and a document editor. Such a storage
solution makes files available anywhere through the internet, saving the user from the effort of
managing data on a local machine

Multi-Tier Model

Enterprises need business logic to interact with backend data tiers and frontend presentation tiers.
This logic enables requests to multiple enterprise network services to be sent easily, which is why
large organizations prefer the n-tier, or multi-tier, distributed computing model.

An example is an enterprise network with n-tiers that collaborates when a user publishes a social media
post to multiple platforms. The post itself goes from the data tier to the presentation tier.

Peer-to-Peer Model

Unlike hierarchical client-server models, this model consists of peers. Each peer acts as a client
or a server, depending on the request it is processing. Peers share their computing power, decision-
making power, and capabilities to work in collaboration.

An example is blockchain nodes collaboratively working to make decisions regarding adding, deleting,
and updating data in the network.

Applications of Distributed Computing

CDNs

CDNs locate resources across geographies so users can access the nearest copy to fulfill their requests
faster. Industries such as streaming and video surveillance get maximum benefits from such
deployments.
If a customer in Seattle clicks a link to a video, the distributed network funnels the request to a local
CDN in Washington, allowing the customer to load and watch the video faster.
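The Seattle example above boils down to picking the geographically nearest edge location. This sketch uses invented edge names and coordinates, and compares squared latitude/longitude differences, which ignores the earth's curvature but is enough to illustrate the routing decision:

```python
def nearest_edge(user_location, edge_servers):
    """Route a request to the closest CDN edge by a simple squared-distance
    comparison over (latitude, longitude) pairs."""
    def dist2(a, b):
        # Flat-plane approximation; real CDNs use anycast/geo-IP routing.
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(edge_servers,
               key=lambda name: dist2(user_location, edge_servers[name]))

# Hypothetical edge locations: a Seattle user lands on the Washington PoP.
edges = {
    "us-west-wa": (47.6, -122.3),
    "us-east-va": (38.9, -77.0),
    "eu-west-ie": (53.3, -6.2),
}
choice = nearest_edge((47.61, -122.33), edges)
```

The Seattle coordinates fall closest to the Washington point of presence, so the video is served from the local copy rather than crossing the country.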

Real-time or Performance-driven Systems

As real-time applications (that process data in a time-critical manner) must perform efficient data
fetching, distributed machines greatly help such systems to work faster.

Multiplayer games with heavy graphics data (such as PUBG and Fortnite), applications with payment
options, and torrenting apps are three examples of real-time applications where distributing cloud
computing can improve user experience.

Distributed Computing with Ridge

Using the distributed cloud platform by Ridge, companies can build a customized distributed system that
has the agility of edge computing and the power of distributed computing.

As an alternative to the traditional public cloud model, Ridge Cloud enables application owners to utilize
a global network of service providers instead of relying on the availability of computing resources in a
specific location.

And by facilitating interoperability with existing infrastructure, enterprises are empowered to deploy
and infinitely scale applications anywhere they need.

What is the difference between distributed systems and distributed computing?

A distributed system is a networked collection of independent machines that can collaborate remotely
for a single goal. In contrast, distributed computing is the cloud-based technology that enables this
distributed system to operate, collaborate, and communicate.

Why do we need distributed computing?

Distributed computing results in highly fault-tolerant systems that are reliable and
performance-driven. Distributed systems allow real-time applications to execute quickly and serve
end-users' requests promptly.

What is the difference between parallel and distributed computing?

Parallel and distributed computing differ in how they function. While distributed computing requires
nodes to communicate and collaborate on a task, parallel computing does not require communication.
Rather, it focuses on concurrent processing and shared memory.
For example, a parallel computing implementation could comprise four different sensors set to
capture medical images. The final image takes input from each sensor separately and produces a
combination of those variants.
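The four-sensor example can be sketched with concurrent workers whose partial results are fused afterward. Threads stand in for whatever parallel hardware acquires the images, and the tiny 2x2 "images" are placeholders for real sensor data:

```python
from concurrent.futures import ThreadPoolExecutor

def read_sensor(sensor_id):
    """Stand-in for acquiring one sensor's image; returns a tiny 2x2 'image'
    filled with the sensor's ID so the fusion step is easy to verify."""
    return [[sensor_id, sensor_id], [sensor_id, sensor_id]]

def combine(images):
    """Fuse the per-sensor images by averaging each pixel position."""
    n = len(images)
    return [[sum(img[r][c] for img in images) / n for c in range(2)]
            for r in range(2)]

# The four sensor reads run concurrently; combining them is the serial step.
with ThreadPoolExecutor(max_workers=4) as pool:
    images = list(pool.map(read_sensor, [1, 2, 3, 4]))
final_image = combine(images)
```

The acquisition step is embarrassingly parallel (no communication between workers), while the combine step runs once over shared results, matching the shared-memory, concurrent-processing character of parallel computing described above.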

