System Design

Uploaded by Ali Bin Anwar

System Design:

It is the process of designing the elements of a system: the architecture, modules and components, the interfaces between those components, and the data that flows through the system.

Types of system design:

I. High level design (HLD)

Describes the main components to be developed: system architecture, database design, services and processes, and the relationships between the various modules and features.

II. Low level design (LLD)

Describes the design of each element mentioned in the HLD, such as classes, interfaces, relationships between classes, and the actual logic of each component.

Monolithic Architecture:

The frontend, backend, and DB are written and deployed in one place; all components and functionality live in a single codebase. It has less complexity and is easy to understand; it is also known as a centralized architecture.

It is hard to scale, and a single error can bring down the whole system.

Distributed system:

A collection of individual systems connected through a network that share common resources.

For example, we run the DB on different machines and connect them, and each machine should have a replica (an extra machine holding a copy of its data) that acts as a backup in case of machine failure.

It is complex, needs additional management, is harder to secure, and needs a load balancer.

Latency:

In a monolithic system there is only computational delay.

Latency = network delay + computational delay

We can use a cache layer or a CDN to reduce latency.

Content delivery network:

A geographically distributed network of proxy servers whose objective is to serve content to users more rapidly.

Caching:

Caching is the process of storing information for a set period of time on a computer.

Throughput:

The amount of data transmitted per unit of time; it is the flow rate of the process. Measured in bps (bits per second).
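The definition above is simple arithmetic, and a small sketch makes the units concrete (the function name and values here are just for illustration):

```python
def throughput_bps(bytes_transferred: int, seconds: float) -> float:
    """Throughput in bits per second: total bits divided by elapsed time."""
    return (bytes_transferred * 8) / seconds

# Transferring 1 MB (1,000,000 bytes) in 2 seconds:
# 8,000,000 bits / 2 s = 4,000,000 bps (4 Mbps)
print(throughput_bps(1_000_000, 2.0))
```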

Availability:

We make replicas of each resource in a distributed system; if any failure occurs, the system stays available because resources are distributed and replicated.

Consistency:

Sync between different systems.

A monolithic system has stronger consistency because resources are not shared, so there is nothing to sync with other systems.

CAP Theorem:

C = consistency

A = availability

P = partition tolerance

It is possible to fully attain only two of these properties; the third is compromised, so the system designer has to choose which one to give up.

CA: partition tolerance is compromised; availability and consistency are achievable in a monolithic system.

In a distributed system, P is the most important property, so we have to compromise on C or A.

Lamport logical clock:

A logical clock orders events by sequence number rather than wall-clock time. When a distributed system spans different regions, it is important to order events consistently regardless of each region's local time.
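A minimal sketch of the Lamport clock rules: tick on every local event, and on receive jump past the sender's timestamp (the class and node names are illustrative):

```python
class LamportClock:
    """Logical clock: counts events, not wall time."""
    def __init__(self):
        self.time = 0

    def tick(self) -> int:
        """Local event: advance the clock by one."""
        self.time += 1
        return self.time

    def send(self) -> int:
        """Sending a message is an event; the timestamp travels with it."""
        return self.tick()

    def receive(self, msg_time: int) -> int:
        """On receive, jump past the sender's timestamp."""
        self.time = max(self.time, msg_time) + 1
        return self.time

# Two nodes exchanging one message:
a, b = LamportClock(), LamportClock()
a.tick()         # a = 1 (local event)
t = a.send()     # a = 2, timestamp 2 goes with the message
b.receive(t)     # b = max(0, 2) + 1 = 3
```

This guarantees that if event X happened before event Y, X's timestamp is smaller, even across machines with unsynchronized physical clocks.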

Horizontal scaling:

When the load on a server increases, we add more machines: we build a distributed system and spread it across different machines and locations.

Vertical scaling:

When the load on a server increases, we upgrade the configuration (CPU, RAM) of the machine on which the server is running.

Redundancy:

The duplication of nodes: if any node or component fails, another server remains available to customers.

Active redundancy:

All servers work at the same time; requests are distributed among them by a load balancer.

Passive redundancy:

Only one server is in a working state; if it fails, another one starts.
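Passive failover can be sketched as "try the primary, fall back to the standby" (the `master`/`slave` callables below are hypothetical stand-ins for real network calls):

```python
def call_with_failover(servers, request):
    """Try each server in order; fall back to the next one on failure."""
    last_error = None
    for server in servers:
        try:
            return server(request)
        except ConnectionError as err:
            last_error = err   # this server is down: promote the next one
    raise last_error           # every replica failed

def master(req):
    raise ConnectionError("master is down")

def slave(req):
    return f"handled by slave: {req}"

print(call_with_failover([master, slave], "GET /"))
```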

Replication:

Redundancy + sync.

Duplication in real time: when data changes on one server, it changes on the other servers at the same time.

In active replication, different machines work at the same time, managed by a load balancer.

In passive replication, a master server handles changes and propagates them to slave servers; if the master dies, a slave becomes the master.

Load balancer:

The process of efficiently distributing network traffic across the nodes of a distributed system.

The servers sit behind a virtual IP; the load balancer is a program that distributes requests to the different server IPs.

A load balancer ensures high scalability and availability and monitors the health of the servers. It is used in microservices/distributed systems.

Load balancing algorithms:

 Round robin (1st request to the 1st server, 2nd to the 2nd, and so on)

 Weighted round robin (each server has a capacity limit; when it is reached, requests move to the next server)

 IP hash (a hash function decides which server IP a request is redirected to)

 Source IP hash (the source IP is compared with the nearest server location to decide where to redirect)

 Least response time (redirect to the server with the minimum response time)

 Least connections (redirect to the server with the least load, i.e. the fewest active connections)
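Two of the algorithms above are short enough to sketch directly; the server names are placeholders:

```python
import itertools

class RoundRobin:
    """Cycle through servers: 1st request -> 1st server, 2nd -> 2nd, ..."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

class LeastConnections:
    """Send each request to the server holding the fewest open connections."""
    def __init__(self, servers):
        self.connections = {s: 0 for s in servers}

    def pick(self):
        server = min(self.connections, key=self.connections.get)
        self.connections[server] += 1
        return server

    def release(self, server):
        """Call when a request finishes so the count stays accurate."""
        self.connections[server] -= 1

rr = RoundRobin(["s1", "s2", "s3"])
print([rr.pick() for _ in range(4)])   # ['s1', 's2', 's3', 's1']
```

Round robin is stateless and fair when requests are uniform; least connections adapts when some requests take much longer than others.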

Caching:

Primary memory (RAM) is used to store data temporarily. A cache is used as a layer between the DB and the server to reduce latency. When data is loaded from the DB for the first time, it is stored in the cache, and we give cache entries a time limit so they are removed automatically after a while. We use a cache where reads are extensive or the site serves static content.
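The read path described above is the cache-aside pattern with a TTL. A minimal sketch (the `fake_db` loader is a hypothetical stand-in for a real DB query):

```python
import time

class TTLCache:
    """Cache layer between the server and the DB: entries expire after
    `ttl_seconds` so stale data is dropped automatically."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}   # key -> (value, expiry timestamp)

    def get(self, key, load_from_db):
        """Cache-aside read: check the cache first, fall back to the DB."""
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]                       # cache hit
        value = load_from_db(key)                 # cache miss: go to the DB
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

db_reads = []
def fake_db(key):
    db_reads.append(key)
    return key.upper()

cache = TTLCache(ttl_seconds=60)
cache.get("user:1", fake_db)      # miss: reads the DB
cache.get("user:1", fake_db)      # hit: served from memory
print(len(db_reads))              # 1
```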

Types of caches:

 In-memory / local cache

 Distributed cache

CDN:

It is also a type of cache: we store cached content on the CDN and serve it from there instead of hitting the DB.

Cache eviction techniques:

The process of deleting entries from the cache.

LRU (least recently used): deletes the data that has not been used for the longest time.

MRU (most recently used): the most recently used data is deleted.

LFU (least frequently used): the least-used data is deleted; the most-used data remains.

FIFO: the first cached data is deleted first.

LIFO: the last cached data is deleted first.

RR: random replacement; a random entry is deleted.
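LRU, the most common of these policies, has a compact sketch using `collections.OrderedDict` (a hash map that remembers insertion order):

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used eviction: when the cache is full, drop the
    entry that has gone unused the longest."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")            # "a" is now most recently used
cache.put("c", 3)         # evicts "b", the least recently used
print(cache.get("b"))     # None
```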


File-based storage system:

Data is stored in the form of files (e.g. txt), but security and inconsistency problems occur.

RDBMS:

Software that performs operations on a relational DB.

It provides low data redundancy, data concurrency, data searching, and data integrity.

Horizontal scaling is hard in a relational DB, so we use non-relational systems.

NoSQL / non-relational:

Key-value stores like Redis for caching, document stores like MongoDB, columnar DBs for data analysis or machine learning, and graph DBs (e.g. Neo4j) for social-network data.

Polyglot persistence:

Using different types of DB for different requirements within the same app.

Normalization in DB:

Splitting data into multiple tables to reduce redundancy.

Denormalization:

Merging data from multiple tables into a single table.

It makes read operations faster, increases data availability, and eases management.

Indexing:

A way to implement binary search in a DB. Indexing allocates separate memory in which a specific column's data is kept sorted in order.

Used in read-intensive DBs.

For write-intensive workloads, indexing should be avoided because maintaining the index increases write time.
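The "binary search over a sorted copy of the column" idea can be sketched with the stdlib `bisect` module (the data here is made up):

```python
import bisect

# A sorted "index" on one column lets lookups run in O(log n)
# instead of scanning every row.
index = sorted([42, 7, 19, 85, 3, 61])   # [3, 7, 19, 42, 61, 85]

def index_contains(sorted_keys, key) -> bool:
    """Binary search over the sorted index."""
    pos = bisect.bisect_left(sorted_keys, key)
    return pos < len(sorted_keys) and sorted_keys[pos] == key

print(index_contains(index, 42))   # True
print(index_contains(index, 40))   # False
```

This also shows the write cost: every insert must keep `index` sorted, which is why heavy-write tables suffer from having many indexes.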
Synchronous communication (blocking call):

If one app sends a request to another app, it waits for the response and does no other work until it arrives. It is a sequential process in which each step must be completed in order.

Used to achieve consistency between DBs.

Asynchronous communication (non-blocking):

Things do not have to run in sequence. Steps that other tasks do not depend on can be done out of order.

Message-based communication:

Point-to-point (P2P): a producer sends a message to a broker (queue), and the broker forwards it to the consumer.

It is achieved through tools like RabbitMQ or Kafka.
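The producer/queue/consumer shape can be sketched in-process with the stdlib `queue` module; this is only a stand-in for a real broker like RabbitMQ or Kafka, which add durability and networking:

```python
import queue
import threading

# Point-to-point messaging: the producer puts messages on a queue and
# moves on; a consumer thread drains the queue independently.
broker = queue.Queue()
delivered = []

def consumer():
    while True:
        msg = broker.get()
        if msg is None:          # sentinel: shut down the consumer
            break
        delivered.append(msg)

worker = threading.Thread(target=consumer)
worker.start()

broker.put("order placed")       # the producer is not blocked waiting
broker.put("payment received")
broker.put(None)
worker.join()
print(delivered)
```

The key property shown: the producer never waits for the consumer, which is exactly the decoupling that makes message-based systems resilient to slow or temporarily offline consumers.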

Webserver:

A tool to run a web application: hardware, software, or both that stores the software's component files and runs software to exchange data with users.

Communication patterns:

Can be implemented through:

 Push:

The client opens a connection with the server and keeps it active; the server pushes data whenever new data arrives, with no need to request again and again.

e.g. the like feature on Facebook: it automatically shows the like count increasing without the user refreshing or requesting.

 Pull/polling:

The client requests, then the server responds.

If there is more traffic than the server can handle, the response rate drops, which causes a bad user experience.

 Long polling:

The client requests, and the server holds the request open and responds after a delay, once data is available or load allows.

 Socket:

A continuous, frequently used connection between two nodes, like WhatsApp.

 Server-sent events:

A long-lived connection: the client subscribes to the server's stream, and the server sends a message to the client on each event until the client unsubscribes.

REST (representational state transfer):

It is a style or standard that provides these properties:

o Language independent (responds to any request regardless of the client's language)

o Fast

o Enables communication over the network

o Lightweight
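The language-independence point comes from exchanging plain HTTP and JSON. A minimal sketch with the stdlib `http.server` (the `/users/1` route and payload are made up):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class UserAPI(BaseHTTPRequestHandler):
    """Minimal REST-style endpoint: GET /users/1 returns JSON, which any
    client in any language can consume over HTTP."""
    def do_GET(self):
        if self.path == "/users/1":
            body = json.dumps({"id": 1, "name": "Ali"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):   # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), UserAPI)   # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/users/1"
response = json.loads(urllib.request.urlopen(url).read())
print(response)                                   # {'id': 1, 'name': 'Ali'}
server.shutdown()
```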

SOA (service-oriented architecture): (divides an application into components)

It is a style of architecture that promotes loose coupling and granular applications to make the components reusable.

Services can share data storage.

Used in agile development.

Selective scaling (the most-used APIs can be scaled independently of the others).

Different stacks can be used for different services.

Microservices architecture:

An evolved version of SOA. Software components are loosely coupled; every service is independent of the other services and has its own storage.

Highly scalable.

Authorization: (what can you do)

Authentication: (who are you)

OAuth:

Authentication through Google or another provider's API.

Proxy:

Hardware/software between the client and the server that provides intermediary services.

Forward proxy: (hides the client's identity)

The client sends a request and the proxy forwards it to the main server, but the server does not know who the client is.

e.g. a VPN: our request goes to the VPN, the VPN forwards it to a blocked site, the blocked site responds to the VPN, and the VPN forwards the response to us.

Reverse proxy: (hides the server's identity)

If we have multiple servers in a distributed system, we use this proxy to make the user feel there is only one server.

Provides load balancing.
