UNIT- 2

What is Web Architecture?
Web architecture refers to the overall structure of a website or web application, including the
way it is designed, implemented, and deployed. It involves the use of technologies and
protocols such as HTML, CSS, JavaScript, and HTTP to build and deliver web pages and
applications to users.

Web architecture consists of several components, including the client, the server, the network,
and the database.

• The client is the web browser or application that the user interacts with.
• The server is the computer or group of computers that host the website or web application.
• The network is the infrastructure that connects the client and the server, such as the internet.
• The database is a collection of data that is used to store and retrieve information for the
website or web application.

What is Client-Server Architecture?


Client server architecture is a computing model in which the server hosts, delivers, and
manages most of the resources and services requested by the client.

Two parties are involved:

• A server provides the requested services. A server may serve multiple
clients at the same time, while a client is in contact with one server at a time.

• Clients request services.


How does client server architecture work?
• The user enters the uniform resource locator (URL) of the website or file, and
the browser sends a request to the domain name system (DNS) server.
• The DNS server looks up the address of the web server and responds with
the IP address of the web server.
• The browser then sends an HTTP or HTTPS request to the web server's IP
address provided by the DNS server.
• The server sends back the necessary files of the website.
• Finally, the browser renders the files and the website is displayed.
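The resolution-and-request steps above can be sketched with the Python standard library. This is a minimal illustration, not a browser: the host names are arbitrary, and only the loopback name is actually resolved so no external network is needed.

```python
import socket

def resolve(host: str) -> str:
    """Steps 1-2: ask DNS for the web server's IP address."""
    return socket.gethostbyname(host)

def build_request(host: str, path: str = "/") -> bytes:
    """Step 3: the HTTP request the browser would send to that IP."""
    return (f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
            "Connection: close\r\n\r\n").encode()

# Resolving the loopback name needs no external network access.
ip = resolve("localhost")
request = build_request("example.com")
print(ip)                                 # typically 127.0.0.1
print(request.decode().splitlines()[0])   # GET / HTTP/1.1
```

A real browser would then open a TCP connection to the resolved IP, send the request bytes, and render the files returned in the response (steps 4-5).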

Types of client server architecture


The functionality of client-server architecture is organized into various tiers or layers.

1-tier architecture: All layers run on a single system, and the data is usually
stored in the local system or on a shared drive.

1-tier architecture consists of several layers, such as the presentation layer,
business layer, and data layer, combined in a single software package.
What Does Two-Tier Client/Server Mean?

The two-tier architecture is based on the client-server model. In 2-tier
client-server architecture, the whole application logic is divided into two
layers: the client-application tier and the database tier.

The client-application tier communicates directly with the database server.
Data or information transfer between the two components is fast due to the
absence of middleware.
Three-tiered client/server architecture


A common design of client/server systems uses three tiers:

1. A client that interacts with the user
2. An application server that contains the business logic of the application
3. A resource manager or database that stores data
Building blocks of fast and scalable data access Concepts:

As the network grows, there are two main challenges:

• Scaling access to the app server
• Scaling access to the database
In a highly scalable application design, the app (or web) server is typically kept
minimal and often embodies a shared-nothing architecture. This makes the app server
layer of the system horizontally scalable.
As a result of this design, the heavy lifting is pushed down the stack to the database server
and supporting services; it's at this layer where the real scaling and performance challenges
come into play.

The building blocks of a scalable data access layer are:


• Caches
• Proxies
• Indexes
• Load balancers
• Queues

Caches
Caches take advantage of the locality of reference principle: recently requested
data is likely to be requested again. A cache is like short-term memory: it has a
limited amount of space, but is typically faster than the original data source and
contains the most recently accessed items.
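As a rough sketch of this "short-term memory" behavior, a fixed-capacity least-recently-used (LRU) cache in front of a slow data source might look like this. The `slow_fetch` function and the key names are hypothetical stand-ins for an expensive database read.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

def slow_fetch(key):
    return f"data-for-{key}"  # pretend this is an expensive DB read

cache = LRUCache(capacity=2)

def get_data(key):
    value = cache.get(key)
    if value is None:          # cache miss: go to the origin
        value = slow_fetch(key)
        cache.put(key, value)
    return value

get_data("a"); get_data("b"); get_data("a")
get_data("c")                  # evicts "b", the least recently used
print(cache.get("b"))          # None: "b" was evicted
```

The cache keeps only the most recently accessed items, so repeated requests for "hot" data never touch the slow origin.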

How can a cache be used to make data access faster in an API example? In
this case, there are a couple of places you can insert a cache.

Cache placement

• Request Node: collocate the cache with the node that requests
the data
o Pros
▪ Each time a request is made, the node can quickly
return cached data if it exists, avoiding any network
hops
▪ Often in-memory and very fast
o Cons
▪ When you have multiple request nodes that are
load balanced, you may have to cache the same
item on all the nodes

• Global Cache: central cache used by all request nodes


o Pros
▪ A given item will be cached only once
▪ Multiple requests for an item can be collapsed into
one request to the backend
o Cons
▪ Easy to overwhelm a single cache as the number of
clients and requests increase
o Types
▪ Reverse proxy cache: cache is responsible for
retrieval on cache miss (more common, handles its
own eviction)
▪ Cache as a service: request nodes are responsible
for retrieval on cache miss (typically used when the
request nodes understand the eviction strategy or
hot spots better than the cache)
• Distributed Cache: each of the nodes that make up the cache
own part of the cached data; divided using a consistent
hashing function
o Pros
▪ Cache space and load capacity can be increased by
scaling out (increasing the number of nodes)
o Cons
▪ Node failure must be handled or intentionally
ignored
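The distributed-cache idea above can be sketched with a consistent hashing ring. This is an illustrative Python sketch, not a production implementation; the `cache-*` node names are made up.

```python
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, replicas=100):
        # Each node gets `replicas` points on the ring to spread load evenly.
        self.ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes for i in range(replicas)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        # Walk clockwise to the first node point at or after the key's hash.
        idx = bisect.bisect(self.keys, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache-a", "cache-b", "cache-c"])
before = {k: ring.node_for(k) for k in ("user:1", "user:2", "user:3")}

# Scaling out: adding a node remaps only the keys that move to it.
bigger = HashRing(["cache-a", "cache-b", "cache-c", "cache-d"])
after = {k: bigger.node_for(k) for k in before}
print(before, after)
```

Because each key's owner is determined by the hash ring rather than a fixed table, adding or removing a cache node disturbs only a fraction of the keys.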

Proxies:
Proxies are used to filter requests, log requests, or sometimes transform requests (by
adding/removing headers, encrypting/decrypting, or compressing them).

A proxy server is an intermediate piece of hardware/software that receives
requests from clients and relays them to the backend origin servers.
Proxies are also immensely helpful when coordinating requests from multiple
servers, providing opportunities to optimize request traffic from a system-wide
perspective.

One way to use a proxy to speed up data access is to collapse the same (or
similar) requests together into one request, and then return the single result to
the requesting clients. This is known as collapsed forwarding.
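Collapsed forwarding can be sketched as follows: concurrent requests for the same key share a single backend fetch instead of each hitting the origin. This thread-based Python sketch is illustrative; `origin_fetch` and the counters are hypothetical.

```python
import threading

origin_calls = 0
in_flight = {}          # key -> Event signalled when the result is ready
results = {}
lock = threading.Lock()

def origin_fetch(key):
    global origin_calls
    origin_calls += 1   # count how often the backend is actually hit
    return f"value-for-{key}"

def get(key):
    with lock:
        event = in_flight.get(key)
        if event is None:         # first requester becomes the "leader"
            event = threading.Event()
            in_flight[key] = event
            leader = True
        else:
            leader = False        # someone else is already fetching
    if leader:
        results[key] = origin_fetch(key)
        event.set()
    else:
        event.wait()              # wait for the leader's result
    return results[key]

threads = [threading.Thread(target=get, args=("item",)) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(origin_calls)   # 1: five requests collapsed into one backend call
```

Five concurrent requests produce a single origin fetch; the other four simply wait for and reuse the leader's result.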

Another great way to use the proxy is to not just collapse requests for the same
data, but also to collapse requests for data that is spatially close together in the
origin store (consecutively on disk). Employing such a strategy maximizes data
locality for the requests, which can result in decreased request latency.

Indexes:
Indexes are helpful in the data access layers above the database.
Consider a system which is backed by multiple database clusters.
Creating an index that maps keys to the database responsible for
those keys would eliminate the need to query multiple databases.
An index can be used like a table of contents that directs you to the location
where your data lives.

Multiple layers of indexes

Once the correct cluster has been identified, another index layer
may identify the node within the cluster, and so on. This leads to the
point that creating multiple layers of indexes is often worth the
increased write latency. The figure from the chapter illustrates how
multiple indexes can guide reads to the correct data.
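A layered index lookup might be sketched like this, assuming a hypothetical first index from key prefixes to clusters and a second from (cluster, shard) pairs to nodes; all names are made up.

```python
cluster_index = {            # layer 1: key prefix -> database cluster
    "user": "cluster-1",
    "order": "cluster-2",
}
node_index = {               # layer 2: (cluster, shard) -> node
    ("cluster-1", 0): "node-1a",
    ("cluster-1", 1): "node-1b",
    ("cluster-2", 0): "node-2a",
}

def locate(key: str) -> str:
    """Walk the index layers like a table of contents to find the data."""
    prefix, _, ident = key.partition(":")
    cluster = cluster_index[prefix]          # first index lookup
    # cluster-1 is sharded in two; the others keep one shard (illustrative)
    shard = int(ident) % 2 if cluster == "cluster-1" else 0
    return node_index[(cluster, shard)]      # second index lookup

print(locate("user:41"))     # node-1b
print(locate("order:7"))     # node-2a
```

Each read consults two small in-memory maps instead of querying every database, at the cost of keeping both maps up to date on writes.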
Load balancers: Load balancers are a principal part of any
architecture; their main role is to distribute load across a set of
nodes responsible for servicing requests. This allows multiple nodes
to transparently service the same function in a system.

Like caches, load balancers are placed in many strategic places
throughout an architecture.
Their main purpose is to handle a lot of simultaneous connections and route
those connections to one of the request nodes, allowing the system to scale to
service more requests by just adding nodes.

Load balancers can be implemented as software or hardware appliances. One
open-source software load balancer that has received wide adoption is HAProxy.

In a distributed system, load balancers are often found at the very front of the
system, such that all incoming requests are routed accordingly. In a complex
distributed system, many load balancers may be used.
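A software load balancer's core routing decision can be sketched with a simple round-robin strategy. The node names are made up, and real balancers such as HAProxy offer many more algorithms plus health checks.

```python
import itertools

class RoundRobinBalancer:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._cycle = itertools.cycle(self.nodes)

    def route(self, request):
        node = next(self._cycle)   # pick the next node in rotation
        return f"{node} handled {request}"

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
for i in range(4):
    print(lb.route(f"req-{i}"))
# req-0 -> app-1, req-1 -> app-2, req-2 -> app-3, req-3 -> app-1
```

Scaling out is then just a matter of adding a node to the rotation; clients only ever see the balancer's address.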
Queues: When systems are simple, with minimal processing loads and small
databases, writes can be predictably fast; however, in more complex systems
writes can take an almost non-deterministically long time. For example, data may
have to be written several places on different servers or indexes, or the system
could just be under high load. In the cases where writes, or any task for that
matter, may take a long time, achieving performance and availability requires
building asynchrony into the system; a common way to do that is with queues.

Synchronous requests: Imagine a system where each client requests a task to
be remotely serviced. Each client sends its request to the server, where the
server completes the task as quickly as possible and returns the result to the
respective client. In small systems where one server (or logical service) can
service incoming clients as fast as requests arrive, this situation works just
fine. However, when the server receives more requests than it can handle, each
client is forced to wait for the other clients' requests to complete before a
response can be generated.

This kind of synchronous behavior can severely degrade client performance; the
client is forced to wait, effectively performing zero work, until its request can be
answered.
Queues enable clients to work in an asynchronous manner, providing a strategic
abstraction of a client's request and its response. On the other hand, in a
synchronous system, there is no differentiation between request and reply, and
they therefore cannot be managed separately. In an asynchronous system the
client requests a task, the service responds with a message acknowledging the
task was received, and then the client can periodically check the status of the
task, only requesting the result once it has completed. While the client is waiting
for an asynchronous request to be completed it is free to perform other work,
even making asynchronous requests of other services. The latter is an example of
how queues and messages are leveraged in distributed systems.
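The asynchronous request/acknowledge/poll pattern above can be sketched with an in-process queue and a worker thread. All names here are illustrative, and a real distributed system would use a message broker rather than `queue.Queue`.

```python
import queue
import threading

tasks = queue.Queue()
status = {}        # task_id -> "pending" | "done"
results = {}

def worker():
    while True:
        task_id, payload = tasks.get()
        if task_id is None:                  # sentinel: shut down
            break
        results[task_id] = payload.upper()   # the "slow" work
        status[task_id] = "done"
        tasks.task_done()

def submit(task_id, payload):
    status[task_id] = "pending"   # acknowledge receipt immediately
    tasks.put((task_id, payload))
    return task_id                # client is now free to do other work

t = threading.Thread(target=worker)
t.start()
submit("t1", "hello")
tasks.join()                          # here: wait until the queue drains
print(status["t1"], results["t1"])    # done HELLO
tasks.put((None, None))               # stop the worker
t.join()
```

The client gets an acknowledgement (the task id) at submit time and can check `status` later, rather than blocking until the work completes.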
Web Application architecture (WAA) :

Web application architecture describes the relationships between
databases, servers, and applications in a system. It determines how the
functionality and logic of a system are distributed between the server side and
the client side.

• Client-side (front-end): the code that's stored in the browser and displayed
to the user. Users interact with the client side of the application.
• Server-side (back-end): the code that the application runs on the server and uses to
communicate with the hardware.

A web app architecture presents a layout of all the software components (such
as databases, applications, and middleware) and how they interact with each other. It
defines how the data is delivered through HTTP and ensures that the client side and
the backend server can understand each other.

Web Application Architecture Components

Web application architecture is made up of various components, which can
be divided into two areas.
1. User Interface App Components: As the name suggests, this category relates to
the user interface/experience. Here, the role of the web page covers display,
dashboards, logs, notifications, statistics, configuration settings, etc., and it has
nothing to do with the functionality or working of the web application.
2. Structural Components: This category is mainly concerned with the functionality
of the web application with which a user interacts, the control, and the database
storage. As the name suggests it is much more about the structural part of the web
application. This structural part comprises…
• The web browser or client
• The web application server
• The database server

Web Application Three Tier Architecture Layers

Web application architectural patterns are separated into many different layers or
tiers which is called Multi- or Three-Tier Architecture. You can easily replace and
upgrade each layer independently.

Presentation Layer: This layer is accessible to the client via a browser and includes
user interface components and UI process components. These UI components are built
with HTML, CSS, and JavaScript (and its frameworks or libraries), where each
plays a different role in building the user interface.

Business Layer: It is also referred to as the business logic, domain logic, or
application layer. It accepts the user's request from the browser, processes it, and
regulates the routes through which the data will be accessed. The whole workflow is
encoded in this layer.
Example- booking a hotel on a website. A traveller will go through a sequence of
events to book the hotel room and the whole workflow will be taken care of by the
business logic.
Persistence/ Data Layer: It is also referred to as a storage or data access layer. This
layer collects all the data calls and provides access to the persistent storage of an
application. The business layer is closely attached to the persistence layer, so the logic
knows which database to talk to and the process of retrieving data becomes more
optimized. A server and a database management system software exist in data storage
infrastructure which is used to communicate with the database itself, applications, and
user interfaces to retrieve data and parse it. We can store the data in hardware servers
or in the cloud.

Types of Web Application Architecture


It is always good practice to select the most appropriate architecture by
considering certain aspects: pay attention to the app logic, functionalities,
features, and business requirements. Here is a division of web application
architecture.

• Single-page application architecture

Single-page application architecture was introduced to overcome the
limitations of traditional page loading and achieve smooth performance,
along with an innovative and interactive user experience. Instead of
loading a new page for every request, a single web page is loaded once,
and the requested data is then reloaded on the same page with
dynamically updated content.

The rest of the web page remains untouched. Development happens on the
client side using a JavaScript framework, with the entire logic shifted
to the front end.

• Microservice architecture

Microservice architecture is becoming a popular alternative to both
service-oriented architecture and monolithic architecture. Its services
are loosely coupled for development, testing, maintenance, and
deployment.

These services communicate with other services with the help of APIs
to solve complex business problems.

• Serverless architecture

Serverless architecture is a design pattern in which applications run
without manual intervention on the servers; server management is handled
by third-party cloud providers such as Amazon and Microsoft. This lets
you focus more on the quality of the product while reducing complexity,
making it highly scalable and reliable.

• Progressive web applications

Google introduced progressive web applications in a 2015 update: web
apps that offer native-like functionality with enhanced reliability and
capabilities. They also offer the convenience of easy installation.
