Research On Computing System Models
DCS-01-0121/2023
1. CENTRALIZED COMPUTING
Centralized computing is computing done from a central location using terminals attached to a
central computer. The centralized computer can control all peripherals directly if they are
physically connected to the central computer. Alternatively, the terminals can connect to the
central computer over the network if they have the capability.
Centralized computing refers to a system where all processing and data storage are handled by a
single, central device or system. This central device is responsible for processing all requests and
managing all data; all other devices in the system are connected to it and rely on it for their
computing needs.
One example of a centralized computing system is a traditional mainframe system, where a
central mainframe computer handles all processing and data storage for the system. In this type
of system, users access the mainframe through terminals or other devices that are connected to
it.
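To make the pattern concrete, here is a minimal Python sketch of centralized computing, in which one central server process performs all processing and "dumb" terminals only forward requests and display results. The host, port, and uppercase transformation are illustrative assumptions, not part of any real system.

import socket

def central_server(host="127.0.0.1", port=9000):
    # All computation happens here, on the central machine.
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024).decode()
                result = request.upper()  # stand-in for real processing
                conn.sendall(result.encode())

def terminal(text, host="127.0.0.1", port=9000):
    # A "dumb terminal": no local processing, just I/O to the server.
    with socket.create_connection((host, port)) as conn:
        conn.sendall(text.encode())
        return conn.recv(1024).decode()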
Applications
Centralized Computing Systems have a number of applications, including:
Mainframe Systems: Traditional mainframe systems are a type of centralized computing system
that is used in a variety of industries, including finance, healthcare, and government.
Client-Server Systems: Client-server systems are a type of centralized computing system that is
used in a variety of applications, including business applications, web applications, and more.
Network Servers: Network servers are a type of centralized computing system that is used to
manage and control access to shared resources, such as data storage and printing, on a network.
Some of the centralized computing models in vogue today among many businesses are as
follows.
DISK-LESS NODE MODEL
The disk-less node model is a mix between centralized computing and traditional desktop
computing. In this model, some applications (e.g., web browsers) run locally, while a few
applications, such as business-critical systems, run on the terminal server. You can implement this
model by running remote desktop software on a standard desktop computer.
HOSTED COMPUTING MODEL
This is a later evolution of centralized computing. The hosted computing model resolves many
problems posed by conventional distributed computing systems by centralizing the processing
and storage aspects: storage happens on powerful server hardware in a data center instead of a
local office.
This eases the responsibility and stress on organizations, as they are spared the hassle of owning
and maintaining an IT system. These services are generally available on a subscription basis and
delivered through an application service provider (ASP).
2. NETWORKED COMPUTING
Network computing refers to the use of computers and other devices in a linked network, rather
than as unconnected, stand-alone devices. As computing technology has progressed during the
last few decades, network computing has become more frequent, especially with the creation of
cheap and relatively simple consumer products such as wireless routers, which turn the typical
home computer setup into a local area network.
Advantages of networked computing
Information Sharing: People can share information freely across networks. Whether it is
files, emails, or instant messaging, networking saves time and resources compared to
traditional methods like postal services.
Collaboration: Networks allow multiple users to log in simultaneously from different
locations. This global collaboration enhances teamwork and productivity.
Cost-Effective: The cost of joining a computer network has decreased over time. Modern
devices like Chromebooks provide internet access and network capabilities at an
affordable price.
Offline Data Storage: Data on a network can also be backed up offline, protecting it from
online threats and adding a layer of data security.
Ease of Connection: Anyone with basic computer skills can connect to a network. Simple
prompts or shortcuts make joining a network accessible to all.
Disadvantages of networked computing
Expensive Setup: The initial setup of a network can be costly. This includes the cost of
cables, equipment, and network infrastructure. High-speed cables and reliable hardware
can contribute to the overall expense.
Maintenance Challenges: Managing a large network can be complex and requires
specialized training. Organizations often need to employ network managers to handle
maintenance tasks, troubleshoot issues, and ensure smooth operation.
Security Vulnerabilities: Computer networks are susceptible to security breaches.
Unauthorized access, data leaks, and cyberattacks pose significant risks. Implementing
robust security measures is crucial to safeguard sensitive information.
Network Congestion: As more devices connect to a network, congestion can occur.
Increased traffic affects performance, leading to slower data transfer speeds and delays
in communication.
Dependency on Infrastructure: Organizations heavily rely on network infrastructure. If
the network experiences downtime due to hardware failures or other issues, it can
disrupt operations and productivity.
Compatibility Issues: Integrating different devices and operating systems within a
network can be challenging. Ensuring compatibility and seamless communication
between diverse components requires careful planning.
Bandwidth Limitations: Networks have finite bandwidth. When multiple users access
resources simultaneously, it can lead to reduced data speeds and inefficient resource
allocation.
Power Consumption: Running network devices, servers, and switches consumes
electricity. Organizations need to consider power costs and energy-efficient solutions.
Human Error Risk: Human errors during network configuration or maintenance can
cause disruptions. Proper training and protocols are essential to minimize such risks.
Support Resources: Organizations must allocate resources for network support,
troubleshooting, and upgrades. Without adequate support, network issues can escalate.
3. CLOUD COMPUTING
Origins of cloud computing
The origins of cloud computing technology go back to the early 1960s when Dr. Joseph Carl
Robnett Licklider, an American computer scientist and psychologist known as the "father of
cloud computing", introduced the earliest ideas of global networking in a series of memos
discussing an Intergalactic Computer Network. However, it wasn’t until the early 2000s that
modern cloud infrastructure for business emerged.
In 2002, Amazon Web Services started cloud-based storage and computing services. In 2006, it
introduced Elastic Compute Cloud (EC2), an offering that allowed users to rent virtual computers
to run their applications. That same year, Google introduced the Google Apps suite (now called
Google Workspace), a collection of SaaS productivity applications. In 2009, Microsoft launched its
first SaaS application, Microsoft Office 2011. Today, Gartner predicts worldwide end-user
spending on the public cloud will total USD 679 billion in 2024 and exceed USD 1 trillion
by 2027.
Cloud computing components
Data centers
CSPs own and operate remote data centers that house physical or bare metal servers, cloud
storage systems and other physical hardware that create the underlying infrastructure and
provide the physical foundation for cloud computing.
Networking capabilities
In cloud computing, high-speed networking connections are crucial. Typically, an internet
connection known as a wide-area network (WAN) connects front-end users (for example, client-
side interface made visible through web-enabled devices) with back-end functions (for example,
data centers and cloud-based applications and services). Other advanced cloud computing
networking technologies, including load balancers, content delivery networks (CDNs) and
software-defined networking (SDN), are also incorporated to ensure data flows quickly, easily
and securely between front-end users and back-end resources.
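As a small illustration of one of these technologies, the following Python sketch shows the round-robin strategy a simple load balancer might use to spread requests evenly across back-end servers. The server addresses are hypothetical placeholders.

import itertools

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical back ends
pool = itertools.cycle(backends)                 # endless rotation

def route(request_id):
    # Assign each incoming request to the next back end in rotation.
    server = next(pool)
    return f"request {request_id} -> {server}"

for i in range(5):
    print(route(i))  # requests 0..4 cycle through the three servers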
Virtualization
Cloud computing relies heavily on the virtualization of IT infrastructure—servers, operating
system software, networking and other infrastructure that’s abstracted using special software so
that it can be pooled and divided irrespective of physical hardware boundaries. For example, a
single hardware server can be divided into multiple virtual servers. Virtualization enables cloud
providers to make maximum use of their data center resources.
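For example, the virtual servers carved out of a single physical host can be enumerated programmatically. The sketch below uses the libvirt Python bindings, assuming they are installed and a local QEMU/KVM hypervisor is running; on a machine without a hypervisor it will simply fail to connect.

import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
for dom in conn.listAllDomains():      # each domain is one virtual server
    state = "running" if dom.isActive() else "stopped"
    print(f"{dom.name()}: {state}")
conn.close()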
Cloud computing services
1. IaaS (Infrastructure-as-a-Service)
IaaS (Infrastructure-as-a-Service) provides on-demand access to fundamental computing
resources—physical and virtual servers, networking and storage—over the internet on a
pay-as-you-go basis. According to a Business Research Company report, the IaaS market is
predicted to grow rapidly in the next few years, reaching $212.34 billion in 2028 at a
compound annual growth rate (CAGR) of 14.2%.
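To make the IaaS idea concrete, the hedged Python sketch below rents a single virtual server from a public cloud using the AWS boto3 SDK. It assumes boto3 is installed and AWS credentials are configured; the AMI ID and instance type are placeholders.

import boto3

# Provision one virtual server on demand (the essence of IaaS).
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.micro",          # small general-purpose size
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])  # ID of the rented server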
2. PaaS (Platform-as-a-Service)
PaaS (Platform-as-a-Service) provides software developers with an on-demand platform
—hardware, complete software stack, infrastructure and development tools—for
running, developing and managing applications without the cost, complexity and
inflexibility of maintaining that platform on-premises. With PaaS, the cloud provider
hosts everything at its data center. This includes servers, networks, storage,
operating system software, middleware and databases. Developers simply pick from a
menu to spin up servers and environments they need to run, build, test, deploy,
maintain, update and scale applications.
Today, PaaS is typically built around containers, a virtualized compute model one step
removed from virtual servers. Containers virtualize the operating system, enabling
developers to package the application with only the operating system services it needs
to run on any platform, without modification and without the need for middleware.
Red Hat® OpenShift® is a popular PaaS built around Docker containers and Kubernetes,
an open source container orchestration solution that automates deployment, scaling,
load balancing and more for container-based applications.
3. SaaS (Software-as-a-Service)
SaaS (Software-as-a-Service), also known as cloud-based software or cloud applications,
is application software hosted in the cloud. Users access SaaS through a web browser, a
dedicated desktop client or an API that integrates with a desktop or mobile operating
system. Cloud service providers offer SaaS based on a monthly or annual subscription
fee. They may also provide these services through pay-per-usage pricing.
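As an illustration of API-based access, the short Python sketch below queries a hypothetical SaaS REST endpoint with a subscription token. The URL, endpoint, and token are placeholders, not a real provider's API.

import requests

API_URL = "https://api.example-saas.com/v1/reports"  # placeholder endpoint
TOKEN = "YOUR_SUBSCRIPTION_TOKEN"                      # placeholder token

# The provider hosts and runs the software; the user only consumes it.
resp = requests.get(API_URL, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print(resp.json())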
In addition to the cost savings, time-to-value and scalability benefits of cloud, SaaS
offers the following:
Automatic upgrades: With SaaS, users get new features as soon as the cloud service
provider adds them, without orchestrating an on-premises upgrade.
Protection from data loss: Because SaaS stores application data in the cloud with the
application, users don’t lose data if their device crashes or breaks.
SaaS is the primary delivery model for most commercial software today. Hundreds of
SaaS solutions exist, from focused industry and broad administrative applications (for
example, Salesforce) to robust enterprise database and artificial intelligence (AI) software.
According to an International Data Corporation (IDC) survey, SaaS applications represent
the largest cloud computing segment, accounting for more than 48% of the $778 billion
worldwide cloud software revenue.
4. Serverless computing
Serverless computing, or simply serverless, is a cloud computing model that offloads all
the back-end infrastructure management tasks, including provisioning, scaling,
scheduling and patching to the cloud provider. This frees developers to focus all their
time and effort on the code and business logic specific to their applications.
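For example, on a function-as-a-service platform such as AWS Lambda, the developer supplies only a handler like the Python sketch below; provisioning, scaling, scheduling and patching of the machine it runs on are the provider's responsibility. The event fields shown are illustrative.

import json

def lambda_handler(event, context):
    # Business logic only; there is no server code here at all. The
    # platform invokes this handler on demand and scales it automatically.
    name = event.get("name", "world")  # 'event' carries the request payload
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }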
Private cloud
A private cloud is a cloud environment where all cloud infrastructure and computing resources
are dedicated to one customer only. Private cloud combines many benefits of cloud computing
—including elasticity, scalability and ease of service delivery—with the access control, security
and resource customization of on-premises infrastructure.
A private cloud is typically hosted on-premises in the customer’s data center. However, it can
also be hosted on an independent cloud provider’s infrastructure or built on rented
infrastructure housed in an offsite data center.
Many companies choose a private cloud over a public cloud environment to meet their
regulatory compliance requirements. Entities like government agencies, healthcare
organizations and financial institutions often opt for private cloud settings for workloads that
deal with confidential documents, personally identifiable information (PII), intellectual property,
medical records, financial data or other sensitive data.
Hybrid cloud
A hybrid cloud is just what it sounds like: a combination of public cloud, private cloud and on-
premises environments. Specifically (and ideally), a hybrid cloud connects a combination of
these three environments into a single, flexible infrastructure for running the organization’s
applications and workloads.
At first, organizations turned to hybrid cloud computing models primarily to migrate portions of
their on-premises data into private cloud infrastructure and then connect that infrastructure to
public cloud infrastructure hosted off-premises by cloud vendors. This process was done
through a packaged hybrid cloud solution like Red Hat® OpenShift® or middleware and IT
management tools to create a "single pane of glass." Teams and administrators rely on this
unified dashboard to view their applications, networks and systems.
Today, hybrid cloud architecture has expanded beyond physical connectivity and cloud
migration to offer a flexible, secure and cost-effective environment that supports the portability
and automated deployment of workloads across multiple environments. This feature enables an
organization to meet its technical and business objectives more effectively and cost-efficiently
than with a public or private cloud alone.
Multicloud
Multicloud uses two or more clouds from two or more different cloud providers. A multicloud
environment can be as simple as email SaaS from one vendor and image editing SaaS from
another. But when enterprises talk about multicloud, they typically refer to using multiple cloud
services—including SaaS, PaaS and IaaS services—from two or more leading public cloud
providers.
Organizations choose multicloud to avoid vendor lock-in, to have more services to select from
and to access more innovation. With multicloud, organizations can choose and customize a
unique set of cloud features and services to meet their business needs. This freedom of choice
includes selecting “best-of-breed” technologies from any CSP, as needed or as they emerge,
rather than being locked into offerings from a single vendor. For example, an organization may
choose AWS for its global reach with web-hosting, IBM Cloud for data analytics and machine
learning platforms and Microsoft Azure for its security features.
A multicloud environment also reduces exposure to licensing, security and compatibility issues
that can result from "shadow IT"— any software, hardware or IT resource used on an enterprise
network without the IT department’s approval and often without IT’s knowledge or oversight.
Benefits of cloud computing
Cost-effectiveness
Cloud computing lets you offload some or all of the expense and effort of purchasing,
installing, configuring and managing mainframe computers and other on-premises
infrastructure. You pay only for cloud-based infrastructure and other computing
resources as you use them.
Unlimited scalability
Cloud computing provides elasticity and self-service provisioning, so instead of
purchasing excess capacity that sits unused during slow periods, you can scale capacity
up and down in response to spikes and dips in traffic. You can also use your cloud
provider’s global network to spread your applications closer to users worldwide.
4. UBIQUITOUS COMPUTING
History
Ubiquitous computing was first pioneered at the Olivetti Research Laboratory in
Cambridge, England, where the Active Badge, a "clip-on computer" the size of an
employee ID card, was created, enabling the company to track the location of people in
a building, as well as the objects to which they were attached.
Key characteristics of ubiquitous computing
Takes the human element into account, applying the paradigm in settings where
people, rather than computers, are central.
Uses low-cost processors, reducing memory and storage requirements.
Captures real-time attributes.
Relies on computers that are fully linked and always available.
Concentrates on many-to-many relationships in the environment, rather than
one-to-one, many-to-one, or one-to-many relationships, together with the idea of
technology that is always present.
Includes characteristics of the local/global, social/personal, public/private, and
invisible/visible, and considers both the generation and transmission of
knowledge.
Utilizes internet convergence, wireless technologies, and modern electronics.
Raises concerns about increased monitoring of, potential limitations on, and
meddling with user privacy.
Depends on the reliability of the various pieces of equipment used.
Ubiquitous computing layers
Layer 1: This layer, known as task management, examines user tasks, context,
and index. Additionally, it handles the intricate dependencies that come with
the region.
Layer 2: This layer, known as environment management, keeps track of
resources and their capabilities, service requirements, and user-level statuses of
certain capabilities.
Layer 3: This layer, known as the environment layer, keeps track of vital
resources and regulates their reliability.
Sentient Computing
Sentient computing is a form of ubiquitous computing in which sensors allow the
system to perceive its environment and react to it; the Active Badge described
above is an early example.
5. DISTRIBUTED COMPUTING
A distributed system is a collection of physically separated servers and data storage that reside across
multiple systems worldwide. These components collaborate and communicate with the objective of
acting as a single, unified system with powerful computing capabilities. Telecommunication networks,
for example, combine multiple antennas, amplifiers, and other networking devices yet appear as a
single system to end users.
In a distributed cloud, the public cloud infrastructure utilizes multiple data centers to store and run
applications and services. A distributed cloud computing architecture, also known as a distributed
computing architecture, is made up of distributed systems and clouds.
Examples of Distributed Computing
Content Delivery Networks (CDNs) that utilize geographically separated regions to serve end-
users faster.
Ridge Cloud is a distributed cloud that can be deployed in any location to give its end users
hyper-low latency.
What is the role of distributed computing in cloud computing? Distributed computing and cloud
computing are not mutually exclusive. Distributed computing is essentially a variant of cloud computing
that operates on a distributed cloud network.
Edge computing is a type of cloud computing that works with data centers or PoPs placed near end-
users. With data centers located physically close to the source of the network traffic, applications serve
users’ requests faster.
Distributed clouds utilize resources spread over a network, irrespective of where they have users.
Cloud architects combine these two approaches to build performance-oriented cloud computing
networks that serve global network traffic with maximum uptime.
Distributed computing connects hardware and software resources to accomplish many things, including:
Ensuring that all computing resources are scalable and can operate faster when working with
multiple machines
Advanced distributed systems also include automated processes and APIs to help them perform better.
From the customization perspective, distributed clouds provide businesses with the ability to connect
their on-premises systems to the cloud computing stack so that they can transform their entire IT
infrastructure without discarding old setups. They can extend existing infrastructure through
comparatively fewer modifications.
The cloud service provider controls the application upgrades, security, reliability, adherence to
standards, governance, and disaster recovery mechanism for the distributed infrastructure.
Distributed computing systems are becoming a basic service that all cloud services providers offer their
clients. Here is a quick list of its advantages:
Ultimate Scalability
All nodes or components of the distributed network are independent computers. You can easily add or
remove systems from the network without straining resources or incurring downtime.
Improved Fault Tolerance
Distributed systems form a unified network with the architecture allowing any node to enter or exit at
any time. As a result, fault-tolerant distributed systems have a higher degree of reliability.
Boosted Performance
Distributed clouds allow multiple machines to work on the same process. As a result of this load
balancing, processing speed and cost-effectiveness of operations can significantly improve.
Lower Latency
As resources are globally present, businesses can select cloud-based servers near end-users to
reduce latency and speed up request processing. Companies reap the benefit of localized workloads
together with the convenience of a unified public cloud.
Easier Compliance
For both industry compliance and regional compliance, distributed cloud infrastructure enables
businesses to utilize local or country-based resources across different geographies. This way, they are
able to comply with varying data privacy rules, such as GDPR in Europe or CCPA in California.
Client-Server Model
In this model, the client directly fetches data from the server and then formats the data and renders it
for the end-user. To modify this data, end-users directly submit their edits back to the server.
An example of this model is Amazon which stores customer information. When a customer updates
their address or phone number, the client sends this to the server, and the server updates the
information in the database.
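A minimal Python sketch of this flow, with an in-memory dictionary standing in for the server's database; the customer record and field names are illustrative:

class Server:
    # The server owns and manages all the data.
    def __init__(self):
        self._db = {"cust-1": {"name": "Ada", "phone": "555-0100"}}

    def fetch(self, cust_id):
        return dict(self._db[cust_id])  # hand the client a copy

    def update(self, cust_id, field, value):
        self._db[cust_id][field] = value  # the server applies the edit

class Client:
    # The client fetches data, formats it, and renders it for the user.
    def __init__(self, server):
        self.server = server

    def show(self, cust_id):
        record = self.server.fetch(cust_id)
        print(f"{record['name']} ({record['phone']})")

    def edit(self, cust_id, field, value):
        self.server.update(cust_id, field, value)  # edit goes back to the server

server = Server()
client = Client(server)
client.show("cust-1")                       # Ada (555-0100)
client.edit("cust-1", "phone", "555-0199")  # user updates their phone number
client.show("cust-1")                       # Ada (555-0199)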
Three-Tier Model
The three-tier model introduces an additional tier between client and server: the agent tier.
This tier holds the client data and frees the client from needing to manage its own information. The
client can access its data through a web application. As a result, the client application’s and the user’s
work is reduced and is easier to automate.
An example is a cloud storage space with the ability to store files and a document editor. Such a storage
solution makes files available anywhere through the internet, saving the user from the effort of
managing data on a local machine.
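A minimal Python sketch of the three tiers, with the middle (agent) tier managing the client's data on its behalf; the class and field names are illustrative:

class DataTier:
    # Back-end storage (e.g., a database).
    def __init__(self):
        self.files = {}

class AgentTier:
    # Middle tier: holds and manages client data on the client's behalf.
    def __init__(self, data_tier):
        self.data = data_tier

    def save(self, user, name, content):
        self.data.files[(user, name)] = content

    def load(self, user, name):
        return self.data.files[(user, name)]

class ClientTier:
    # Presentation tier: the web app the user actually sees.
    def __init__(self, agent):
        self.agent = agent

    def open_document(self, user, name):
        return self.agent.load(user, name)  # no local data management

agent = AgentTier(DataTier())
ui = ClientTier(agent)
agent.save("alice", "notes.txt", "cloud-stored text")
print(ui.open_document("alice", "notes.txt"))  # available anywhere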
Multi-Tier Model
Enterprises need business logic to interact with back-end data tiers and with front-end presentation tiers.
This logic makes it easy to send requests to multiple enterprise network services. That is why large
organizations prefer the n-tier, or multi-tier, distributed computing model.
An example is an enterprise network with n-tiers that collaborates when a user publishes a social media
post to multiple platforms. The post itself goes from the data tier to the presentation tier.
Peer-to-Peer Model
Unlike hierarchical client and server models, this model is composed of peers. Each peer acts as a client
or a server, depending on the request it is processing. Peers share their computing power, decision-
making power, and capabilities in order to work in collaboration.
An example is blockchain nodes collaboratively working to make decisions regarding adding, deleting,
and updating data in the network.
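A minimal Python sketch of the peer-to-peer idea, with each peer able to act as both client and server; the ledger entries and peer names are illustrative, not a real blockchain protocol:

class Peer:
    def __init__(self, name):
        self.name = name
        self.ledger = []  # each peer keeps its own copy of the data

    def handle(self, entry):
        # Act as a server: accept an entry from another peer.
        self.ledger.append(entry)

    def broadcast(self, entry, peers):
        # Act as a client: send an entry to every other peer.
        self.handle(entry)
        for p in peers:
            p.handle(entry)

a, b, c = Peer("a"), Peer("b"), Peer("c")
a.broadcast("tx-1", [b, c])   # a acts as client, b and c as servers
print(b.ledger, c.ledger)     # every peer now holds the same data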
CDNs
CDNs locate resources across geographies so users can access the nearest copy to fulfill their requests
faster. Industries such as streaming and video surveillance get maximum benefits from such
deployments.
If a customer in Seattle clicks a link to a video, the distributed network funnels the request to a local
CDN in Washington, allowing the customer to load and watch the video faster.
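A toy Python sketch of that routing decision, picking the edge location nearest the user; the regions and coordinates are illustrative placeholders:

EDGE_LOCATIONS = {
    "washington": (47.6, -122.3),
    "virginia": (38.9, -77.0),
    "frankfurt": (50.1, 8.7),
}

def nearest_edge(user_lat, user_lon):
    # Pick the edge location with the smallest (squared) distance.
    return min(
        EDGE_LOCATIONS,
        key=lambda r: (EDGE_LOCATIONS[r][0] - user_lat) ** 2
                    + (EDGE_LOCATIONS[r][1] - user_lon) ** 2,
    )

print(nearest_edge(47.6, -122.3))  # a Seattle user is routed to washington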
As real-time applications (that process data in a time-critical manner) must perform efficient data
fetching, distributed machines greatly help such systems to work faster.
Multiplayer games with heavy graphics data (such as PUBG and Fortnite), applications with payment
options, and torrenting apps are three examples of real-time applications where distributing cloud
computing can improve user experience.
Using the distributed cloud platform by Ridge, companies can build a customized distributed system that
has the agility of edge computing and the power of distributed computing.
As an alternative to the traditional public cloud model, Ridge Cloud enables application owners to utilize
a global network of service providers instead of relying on the availability of computing resources in a
specific location.
And by facilitating interoperability with existing infrastructure, Ridge Cloud empowers enterprises to
deploy and infinitely scale applications anywhere they need.
A distributed system is a networked collection of independent machines that can collaborate remotely
for a single goal. In contrast, distributed computing is the cloud-based technology that enables this
distributed system to operate, collaborate, and communicate.
Distributed computing results in the development of highly fault-tolerant systems that are reliable and
performance-driven. Distributed systems allow real-time applications to execute fast and serve end-
users' requests quickly.
Parallel and distributed computing differ in how they function. While distributed computing requires
nodes to communicate and collaborate on a task, parallel computing does not require communication.
Rather, it focuses on concurrent processing and shared memory.
For example, a parallel computing implementation could take readings from four different medical
imaging sensors, process each sensor's input concurrently, and combine the separate results into a
single final image.
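A minimal Python sketch of that parallel pattern: four simulated sensor readings are processed concurrently by a pool of workers and then combined into one result. The doubling step is a stand-in for real image processing.

from multiprocessing import Pool

def process_sensor(reading):
    # Stand-in for per-sensor image processing.
    return reading * 2

if __name__ == "__main__":
    sensor_readings = [1, 2, 3, 4]  # simulated inputs, one per sensor
    with Pool(processes=4) as pool:
        partial_results = pool.map(process_sensor, sensor_readings)
    combined = sum(partial_results)  # combine into the final "image"
    print(combined)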
REFERENCES
https://phoenixnap.com/glossary/network-computing
https://www.techopedia.com/definition/23619/network-computing
https://www.geeksforgeeks.org/what-is-centralized-computing/
https://www.baselinemag.com/cloud-computing/centralized-computing-comeback
https://www.techopedia.com/definition/26507/centralized-computing
https://www.geeksforgeeks.org/cloud-computing/
https://www.ibm.com/topics/cloud-computing
https://www.lucidchart.com/blog/cloud-computing-basics
https://cloud.google.com/learn/paas-vs-iaas-vs-saas
https://www.techopedia.com/definition/22702/ubiquitous-computing
https://www.techtarget.com/iotagenda/definition/pervasive-computing-ubiquitous-computing
https://www.webopedia.com/definitions/ubiquitous-computing/
https://www.ridge.co/blog/what-is-distributed-computing/