
Computer Networks and Distributed Systems
1.Computer Networks
1.1. Basics of Data Transmission
 A computer network is a set of interconnected and autonomous computers.
 Interconnected computers can share information between them. Autonomous computers cannot
be forcibly started or stopped by other computers, as they are not part of a master/slave
relationship.
 In computer networks, users are aware of the multiple autonomous computers. In distributed
systems, by contrast, users are not aware of the existence of multiple autonomous systems.
 The core objective of computer networks is to share resources (including files and any other
hardware or software resources) effectively among multiple users. To improve data availability,
multiple copies of crucial contents are stored on different computers, meaning that replicated
copies can be used if one is not available due to hardware issues. Moreover, costs are drastically
reduced as many small autonomous computers can cost less than one large server or mainframe
computer, with equal or close performance.
 Data are represented as analog or digital signals, but analog data such as audio and video can
also be stored digitally.
 In the context of computer networks, the term “data” refers to a collection of binary digits
(bits), which may represent a character, number, image, audio, or video content.
 Data transmission is the act of transmitting and receiving data between two or more devices.
These devices are also called nodes, and include computers, mobile phones, printers, radio, and
television.
 The technique of converting analog signals to digital is called digitization and can be achieved
by sampling the analog signal at discrete time intervals.
 Frequency refers to the number of cycles per unit of time. It is calculated as the inverse of the
duration of one cycle and is measured in hertz (Hz).
o Assume the duration of one cycle is four milliseconds (ms). One millisecond is 1/1,000th
of a second, so four milliseconds is 4/1,000th of a second, which is equal to 0.004
seconds. The frequency is therefore 1/0.004 s = 250 Hz.

o Now, let us assume the duration of one cycle is 2 microseconds. One microsecond is 1/10^6
of a second, so 2 microseconds is 2/10^6 of a second, giving a frequency of
1/(2 × 10^-6 s) = 500 kHz.

Conversely, when the frequency of a signal is given, the duration of one cycle can be
calculated as the inverse of the frequency.
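As a quick check of the examples above, here is a short illustrative snippet (Python, not part of the original notes) computing frequency as the inverse of the period:

```python
def frequency_hz(period_s: float) -> float:
    """Frequency is the inverse of the duration of one cycle (the period)."""
    return 1.0 / period_s

print(round(frequency_hz(0.004)))  # 250    (4 ms per cycle -> 250 Hz)
print(round(frequency_hz(2e-6)))   # 500000 (2 us per cycle -> 500 kHz)
```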

 Effective data transmission should ensure that the data are delivered to the intended
destination, that they are not corrupted during transmission, and that they arrive within a
specified time.
 There are five entities involved in any data transmission:
o Content: The content is to be transmitted or received.
o Transmitter: The device that transmits the content or data is called a transmitter.
o Receiver: The device that receives the content or data is called a receiver.
o Communication media: This is the channel or path through which the transmitter and
receiver communicate. Communication media can be either wired or wireless.
o Protocol: The set of rules governing data transmission is called a protocol, which is the
understanding between the communicating devices.

 The original version of ASCII uses seven bits to represent 128 (2^7) characters. The latest
version of ASCII uses eight bits to represent 256 (2^8) characters.
 Binary Coded Decimal uses six bits to represent characters. The two most significant bits
represent the zone and the four least significant bits represent either alphabets, numbers, or
special symbols.
 EBCDIC is an eight-bit code. The four most significant bits represent the zone and the four
least significant bits represent numbers (0 to 9) or English letters (A-Z).
 Unicode is available in three formats: UTF-8, UTF-16, and UTF-32. A widely used format is
UTF-16, which uses 16 bits to represent a character. Unicode is compatible with ASCII.
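The points above about character encodings can be verified directly. A small illustrative Python snippet (the notes themselves contain no code):

```python
# ASCII compatibility: the Unicode code point of 'A' equals its ASCII code.
print(ord("A"))                      # 65

# UTF-8 is ASCII-compatible: 'A' encodes to the single ASCII byte 0x41.
print("A".encode("utf-8"))           # b'A'

# Non-ASCII characters need more UTF-8 bytes ('é' takes two).
print(len("é".encode("utf-8")))      # 2

# UTF-16 uses 16 bits (2 bytes) per character in the Basic Multilingual Plane.
print(len("A".encode("utf-16-le")))  # 2
```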
 Images are represented as a two-dimensional (2-D) array of pixels. The number of pixels used
to represent an image varies according to the image resolution. As the resolution of the image
increases, so does the number of pixels.
 Audio is a continuous analog signal involving recording and transmitting voice or music.
Analog audio can be converted to digital to get it processed by a computer.

 Video can be a discrete or continuous signal. Discrete video is created by showing the human
eye a rapid sequence of images with only minor changes between them. The human brain cannot
distinguish the individual images and perceives them as a motion picture or video.
 Data transmission and reception can occur in two ways: serial and parallel transmission.
o Serial transmission: a device transmits and receives one bit at a time. The speed of
data transmission is measured as the bit rate (bits per second). A common serial
standard is the universal serial bus (USB).
o Parallel transmission: the device transmits and receives 8 or 16 bits at a time. The speed of
data transmission is measured in bytes per second. The most common parallel port
standard is the DB25F connector.
 In half-duplex transmissions, both devices are able to transmit and receive data, but not
simultaneously. If one device is transmitting, the other should be receiving, and vice versa, as
the transmitting device uses the total channel bandwidth.
 In full-duplex transmissions, both devices can transmit and receive data simultaneously, such as
on a telephone or mobile network.

1.2. OSI Reference Model


 The OSI provides a layered approach for the network devices to communicate. The OSI
reference model was introduced to provide standardization of the protocols used in the various
layers. Each layer performs a well-defined function and represents a level of abstraction.
 The OSI model includes seven layers:
o Physical: The physical layer is concerned with the electrical and mechanical details of the
interface and the communication channel. The physical layer should make sure that bits
are intact (“1” should be received as “1” and “0” should be received as “0”) during
transmission from the sender to the receiver. The physical layer also specifies the
transmission methodology (simplex, half-duplex, or full-duplex), as well as the topology
(bus, star, ring, or hybrid) used for communication.
o Data link: The data link layer divides the packets received from the network layer into
frames. Node-to-node delivery of data frames is performed at the data link layer, with
computers in the network being identified using two addresses—a medium access control
(MAC) address (otherwise referred to as the physical address) and an IP address
(otherwise referred to as the logical address). MAC addresses are part of the NIC and are
used to identify a device locally (i.e., within a network), whereas IP addresses are used to
identify a device globally (i.e., between different networks or on the internet). It also
performs node-to-node error and flow control.
o Network: The key functions of the network layer are forwarding and routing. Forwarding
is when the source and destination systems are connected to the same router. Data packets
are received from one port and forwarded to another system through another port of the
same router. Routing is the process of moving the data packet through several routers,
each with a table dictating how the packets are routed. These tables can be either static or
dynamic. In static routing, the routes are “wired into” the network. With dynamic routing,
the table entries are dynamically updated according to the network load. The network layer
is responsible for handling network congestion, as well as for translating the logical
address to the physical address.
o Transport: The primary task of the transport layer is to ensure end-to-end delivery of
complete messages. It ensures that the data are received by the right application in the
destination system. The transport layer also performs segmentation and reassembly. The
transport layer at the sender divides the data received from the session layer into smaller
units called segments, each identified by a segment number. Upon arrival, the transport
layer at the destination reassembles the segments based on these numbers. They are also
used to identify any segments that get lost during transmission, so they can be
retransmitted by the sender. Transport layers also perform end-to-end error and flow
control, as opposed to data link layers, which perform node-to-node error and flow
control. The transport layer can provide either a connection-oriented or a connectionless
service. With connection-oriented service, a logical connection is established between the
transport layer at both the sender and receiver prior to data transmission. Once this
connection is established, the sender can transmit any number of data segments to the
receiver, which the receiver then acknowledges. After receiving the acknowledgement, the
sender transmits the next data segment and the connection is terminated upon completion.
This connection-oriented model provides a reliable data delivery mechanism. The
connectionless model provides unreliable data delivery, used, for example, during online
video streaming.
o Session: The session layer establishes a session between the applications running on the
sender and the receiver, performing authentication and ensuring security. To manage
one-way-at-a-time traffic flow, it uses a concept called token management, in which the
session layer provides a token that only one system at a time can possess. The system
currently in possession of the token can send data, while the other receives. The session
layer also performs dialog control.
o Presentation: The presentation layer deals with the syntax and semantics of information
communicated between two systems. The key functions of the presentation layer are
translation, encryption, and compression.
o Application: The application layer is the topmost layer of the OSI model and provides the
user interface. It also offers multiple services, such as remote file access, mail services,
shared database management, and directory services.

 In a single computer, each layer receives services from the layer immediately below it and
offers services to the layer immediately above it. For example, layer five receives services from
layer four and offers services to layer six.
 In the case of communication between two computers, layer “k” on the first computer
communicates with layer “k” on the other. This is logical communication, not physical.
Physical communication happens through the layers. For example, layer three of computer A
can logically communicate with layer three of computer B if computer A and computer B are
interconnected. However, the actual flow of data will be from layer three of computer A to
layer two and then to layer one of computer A. From layer one of computer A, the data will be
sent to layer one of computer B, then on to layer two, and finally layer three of computer B.

 Encapsulation: Refers to the lower layer receiving packets from the higher layer, adding its own
header and trailer (optional), and passing the packet on to the next lower layer.
 Decapsulation: Happens at the receiver, the process running at each layer performs the actions
as specified in the header/trailer by the corresponding layer of the sender, removes the
header/trailer, and passes on the data packet to the layer immediately above.
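A toy sketch of encapsulation and decapsulation (illustrative Python only; the header strings "TCP|", "IP|", "ETH|" and the trailer "|FCS" are made-up placeholders, not real protocol formats):

```python
def encapsulate(data: bytes) -> bytes:
    segment = b"TCP|" + data              # transport layer adds its header
    packet  = b"IP|" + segment            # network layer adds its header
    frame   = b"ETH|" + packet + b"|FCS"  # data link adds header and trailer
    return frame

def decapsulate(frame: bytes) -> bytes:
    # The receiver strips each header/trailer in reverse order,
    # passing the payload to the layer immediately above.
    packet  = frame.removeprefix(b"ETH|").removesuffix(b"|FCS")
    segment = packet.removeprefix(b"IP|")
    return segment.removeprefix(b"TCP|")

print(decapsulate(encapsulate(b"hello")))  # b'hello'
```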

1.3. Network Topologies
 A topology refers to how computers are arranged in a network. There are five common
topologies:
o Bus: In bus topologies, two or more computers are connected with a single long cable.
This provides a multipoint connection, meaning that multiple devices can share a single
link, all connected to one backbone cable.
o Ring: In ring topologies, each device is only connected with the adjacent system. Every
device is attached to a repeater, a device used to regenerate the signal. Data are transmitted
in only one direction and keep moving in the ring from one system to another. If a device
receives data for which it is not the destination, the signal is regenerated and passed along
the ring until it reaches the intended destination.
o Star: In star topologies, each device is connected to a centralized controller called a hub.
The individual devices are not connected directly to each other; instead, each has a
point-to-point (dedicated link) connection to the hub. If one device wants to communicate
with another, it sends the data to the hub, which retransmits them to the actual
destination.
o Mesh: Through a point-to-point (dedicated link) connection, every device in the network
is connected. Mesh topology eliminates the problem of congestion or traffic load because
each link carries its own traffic. It also ensures privacy and security as data are transmitted
through dedicated links between the intended users. The number of cables needed for a full
mesh of n computers is n(n − 1)/2.
o Hybrid: A hybrid topology refers to a network being created by combining any of the four
previously mentioned topologies.
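The mesh cable-count formula can be expressed as a one-line function (illustrative Python):

```python
def mesh_links(n: int) -> int:
    """Dedicated point-to-point links needed for a full mesh of n devices."""
    return n * (n - 1) // 2

print(mesh_links(5))  # 10: five fully meshed devices need ten cables
```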

2.TCP/IP and Internet
2.1. Origin and Structure of the internet
 Packet-switched network: Each data packet may take a different path and arrive at the
destination in a different order; packets are reassembled at the destination. Because packets
can follow different paths, if one path is disconnected, packets can still be transmitted
through another.
 Circuit-switched network: A dedicated path (connection or circuit) is established before the
transmission begins. All packets are transmitted through the same path in a sequence. After
transmitting all packets, the connection is severed. It would be more suitable for voice or video
communication between two parties, where a connection must be established before actual
communication.
 Domain Name System: The DNS is a look-up table used to map website addresses or computer
names to their respective IP addresses.
 A device is said to be on the internet if it has an IP address, runs the TCP/IP protocol stack,
and can send IP packets to other devices on the internet.
 Web browsers are client-side software used to access websites hosted on the WWW.
 Modem: A modulator-demodulator (modem) converts digital data from a computer to analog
and sends them through the telephone network, as well as taking analog data from telephone
lines, converting them to digital, and sending them to a computer.
 The maximum internet speed offered by a 3G network is 7.2 Mbps, the maximum speed of 4G
networks is 100 Mbps, and that of 5G mobile networks is 20 Gbps.
 Two or more ISPs connect with each other to exchange packets, with the connecting points
referred to as “Internet eXchange Points” (IXP).

2.2. TCP/IP Protocol Stack
 The TCP/IP protocol model is the model used in the current internet architecture.
 The TCP/IP reference model only includes four layers:
o Link: provides an interface between nodes and communication links, specifying the
behaviour of the transmission media, in order to meet the requirements of the
connectionless and unreliable internet layer. The link layer does not define any specific
protocols as all standard and proprietary protocols can be used. The appropriate outgoing
link is chosen for the packet to reach its ultimate destination, but not by the link layer. This
decision is made in the internet layer of the router, after consulting the routing table.
o Internet: The primary protocol at the internet layer is IP, and data packets within this
layer are referred to as datagrams. The main task of the IP protocol is to enable nodes to
send datagrams on any network, ensuring their delivery from one node to another. Every
computer in a network has two addresses: physical and logical. The physical address,
otherwise known as the Medium Access Control (MAC) address, comes with the Network
Interface Cards (NIC), which are interface cards connected to the computer and used for
providing networking options. The logical address is the IP address, used to uniquely
identify a computer on the network or internet.
o Transport: The transport layer is directly above the internet layer and uses Transmission
Control Protocol (TCP) and User Datagram Protocol (UDP) as its main protocols. TCP
protocol provides a reliable (error-free) and connection-oriented service. Before
communicating, two nodes set up a connection through which communication takes place.
The TCP divides incoming data from higher layers into messages called segments, passing
each on to the internet layer. The TCP layer at the receiving node reassembles the packet
to ensure in-order delivery, as well as provide flow control (ensuring that a fast sender
cannot flood a slow receiver by sending messages at a rate that it cannot handle). IP
ensures that the packet will reach the right destination node, whereas UDP ensures that the
packet will reach the right process on the destination node.
o Application: The application layer is the topmost layer of the TCP/IP protocol stack and
includes many high-level protocols, such as hypertext transfer protocol (HTTP), Simple
Mail Transfer Protocol (SMTP), Real Time Transport Protocol (RTP), Domain Naming
System (DNS), Telnet, and File Transfer Protocol (FTP). HTTP is used to fetch web pages
from the WWW; SMTP protocol to compose, send, and receive email; and RTP to
transmit real-time video or audio content. DNS is used to map the host (node) name to its
corresponding IP address, Telnet for remote login, and FTP to transfer files from one node
to another. VoIP is a relatively new concept, through which voice messages can be
transmitted via IP, as opposed to a traditional telephone network.


2.3. Selected IP-Based Protocols and Services


 Packets can be delivered either through connection-oriented or connectionless services. In
connection-oriented services, a connection is established between source and destination, with
all packets following a fixed path to their destination. In comparison, connectionless services
(such as IP) have no connection establishment, so packets may follow different paths to arrive
at their destinations.
 Header length (HLEN): This is a 4-bit field, defining the length of the datagram header in
4-byte words.
 Services: This is an 8-bit field, originally called service type. The Internet Engineering Task
Force (IETF) has since renamed it “differentiated services.” The first three bits are called
precedence, the next four are called type of service (TOS), and the last bit is not used.
 Precedence: These three bits represent the priority of the datagram, with values from zero to
seven. When a datagram needs to be discarded, the one with the lowest precedence is discarded
first.
 TOS: This is a 4-bit field, referring to the five different types of services applications can
request from the network.
 Total length: This is a 16-bit field defining the total length of the datagram (header plus
data). The maximum possible total length is 2^16 − 1 = 65,535 bytes.
 Time to live (TTL): The lifetime of a datagram is limited. This field originally held a
timestamp, decremented as the datagram passed through each router and discarded when it
reached zero; this required synchronized clocks. Today the field is used as a hop count: each
router the datagram passes through decrements the value by one, and the datagram is discarded
when the value reaches zero.
 Protocol: This is an 8-bit field specifying the protocol at the higher layers using the services of
IPv4. The protocols in these higher layers can include TCP, UDP, SCTP, ICMP, and IGMP,
among others.
 Checksum: This field is used to detect errors occurring during datagram transmission.
 Source IP address: This is a 32-bit field, representing the source node’s IPv4 address.
 Destination IP address: This is a 32-bit field, representing the destination node’s IPv4 address.
 The ARP is used to convert the IP (logical) address to a MAC (physical) address. The sender
knows the IP address of the destination system and creates an ARP request message to obtain
the MAC address.
 RARP is used to convert the MAC address to an IP address. Every host or router is assigned
one or more IP addresses, used to uniquely identify a node in the network.
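The IPv4 header fields described above can be picked out of raw bytes. The following illustrative Python snippet parses a hand-crafted 20-byte header (the addresses and identification values are made up for demonstration):

```python
import struct

# Hand-crafted 20-byte IPv4 header (no options, so HLEN = 5 words).
header = bytes([
    0x45, 0x00, 0x00, 0x54,  # version=4 | HLEN=5, services, total length=84
    0x00, 0x01, 0x00, 0x00,  # identification, flags + fragment offset
    0x40, 0x06, 0x00, 0x00,  # TTL=64, protocol=6 (TCP), header checksum
    0xC0, 0xA8, 0x00, 0x01,  # source IP 192.168.0.1
    0xC0, 0xA8, 0x00, 0x02,  # destination IP 192.168.0.2
])

version = header[0] >> 4             # high nibble of the first byte
hlen_bytes = (header[0] & 0x0F) * 4  # HLEN is counted in 4-byte words
total_length, = struct.unpack("!H", header[2:4])  # 16-bit big-endian field
ttl, protocol = header[8], header[9]

print(version, hlen_bytes, total_length, ttl, protocol)  # 4 20 84 64 6
```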

3.Communication and Coordination


3.1. Basic Concepts

 A process is a program under execution. It differs from a program in the sense that a program is
a passive entity, whereas a process is active. When a program resides on the disk, it is called a
program; when it is loaded into the memory and starts executing, it is called a process.
 For every process, a separate block of memory is allocated by the operating system, which
includes:
o Text section: contains the actual program code.
o Stack section: contains the function parameters, return address, and local variables.
o Data section: holds the global variables.
o Heap section: contains memory for dynamic memory allocation.
 A process may be in any of these five states during its execution:
o Newborn state: The process has just been created.
o Ready state: The process is in the ready queue and waiting for the processor to be
assigned.
o Running state: The process is being executed by the processor.
o Waiting state: The process is waiting for an event to be completed (I/O operation,
arbitrary wait, or sleep time).
o Terminated state: The process has finished execution.
 Interrupt: An interrupt is a signal raised by software or hardware to get an immediate response
from the processor. Once an interrupt occurs, the processor will temporarily halt the job it is
executing and serve the interrupt by executing the appropriate Interrupt Service Routine (ISR).

 The state information of a process is represented through the Process Control Block (PCB), or
Task Control Block (TCB). Typically, PCBs include the process state; the value of the program
counter (a special register which holds the address of the next instruction to be executed); and
the value of CPU registers, among other information.
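A PCB can be sketched as a simple record (illustrative Python; a real operating system stores considerably more state than this):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str             # "newborn", "ready", "running", "waiting", "terminated"
    program_counter: int   # address of the next instruction to be executed
    registers: dict = field(default_factory=dict)  # saved CPU register values

pcb = PCB(pid=42, state="ready", program_counter=0x4000)
print(pcb.state)  # ready
```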

3.2. Concurrency, Semaphores, and Deadlock


 Processes can execute either concurrently or in parallel. In the case of concurrent
execution, the processor context switches from one process to another. Concurrency creates the
illusion of the processes being executed simultaneously; however, in reality, the processor is
quickly context switching from one process to another, which the user cannot sense.
 In case of parallel execution, multiple processors or cores are available in the system, with each
executing a different process.

 Mutual exclusion: While one process is in its critical section for a shared resource, mutual
exclusion means no other process is allowed to enter the critical section for the same shared
resource.
 A semaphore K is an integer variable, accessed through only two standard atomic operations:
wait() and signal().
 Generally, operating systems support two types of semaphores:
o Binary: is used when only one instance of the shared resource is available.
o Counting: is used when there are multiple instances of a shared resource available.
 For binary semaphore, the initial count value is 1, and for the counting semaphore, the initial
count value is equal to the available number of instances of the resource.
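The wait()/signal() operations map directly onto acquire()/release() in most thread libraries. An illustrative Python sketch of a counting semaphore guarding two instances of a shared resource:

```python
import threading

# Counting semaphore initialised to the number of resource instances (here, 2).
printer_pool = threading.Semaphore(2)
log = []

def use_printer(job):
    printer_pool.acquire()      # wait(): blocks when no instance is free
    try:
        log.append(job)         # critical section: use one printer instance
    finally:
        printer_pool.release()  # signal(): return the instance to the pool

threads = [threading.Thread(target=use_printer, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))  # [0, 1, 2, 3, 4]
```

A binary semaphore is the same idea with an initial count of 1, so only one process can hold the resource at a time.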
 In a multiprogramming system, two or more processes waiting for a shared resource may never
get the resource at all and will be waiting indefinitely. This is an issue called deadlock.
 There are four necessary conditions for a system to be considered deadlocked:
o Mutual exclusion: There is a non-shareable resource currently being held by process P[i].
If another process P[j] wishes to use the same resource, it needs to wait for it to be
released by P[i].
o Hold and wait: There are some processes in the system holding resources while waiting
for other resources.
o Non-pre-emption: Resources cannot be forcibly removed from a process and can only be
released voluntarily.
o Circular wait: Consider one set of processes (P1, P2, P3, ..., Pn) and another of
resources (R1, R2, R3, ..., Rn). P1 is holding R1, P2 is holding R2, and so on, until we
reach Pn holding Rn. P1 is waiting for R2, P2 is waiting for R3, and so on, until we reach
Pn−1 waiting for Rn and Pn waiting for R1, closing the cycle.
 To calculate the completion time of P1 using round-robin scheduling with a time quantum of 4
msec (burst times: P1 = 12 msec, P2 = 8 msec, P3 = 4 msec), trace the schedule:
o P1 runs from 0–4, P2 from 4–8, and P3 from 8–12; P3 completes at 12 msec.
o P1 runs from 12–16 and P2 from 16–20; P2 completes at 20 msec.
o P1 runs its final quantum from 20–24.
o Therefore, P1 will be completed after 24 msec.
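The schedule traced above can be reproduced by a small simulator (illustrative Python; all processes are assumed to arrive at time 0):

```python
from collections import deque

def completion_times(bursts: dict, quantum: int) -> dict:
    """Simulate round-robin scheduling and return each process's finish time."""
    remaining = dict(bursts)
    queue = deque(bursts)          # FIFO ready queue, in arrival order
    done, clock = {}, 0
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        clock += run
        remaining[p] -= run
        if remaining[p] == 0:
            done[p] = clock        # process finishes at the current clock time
        else:
            queue.append(p)        # unfinished: back to the end of the queue
    return done

print(completion_times({"P1": 12, "P2": 8, "P3": 4}, 4))
# {'P3': 12, 'P2': 20, 'P1': 24}
```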

3.3. Remote Procedure Call (RPC)


 RPC is a method by which communication between processes running on different systems can
be achieved using procedure calls, as though they were running in the local system.
 When the process in machine A (the client) calls a procedure in the remote machine B, the
request is first sent to a process called the client stub, which is running in the client machine.
The client stub converts the procedure call to a message by extracting the parameters and sends
it to machine B using the system call send. Upon sending, the client stub calls receive and it
remains in a blocked state until receiving a response from machine B. Once the message
reaches machine B, machine B’s operating system forwards the message to the server stub, the
counterpart of the client stub in machine A. The server stub converts the message to a
procedure call, calling the appropriate procedure in the local system (machine B). The
procedure is then executed and the results are passed on to the server stub, which converts the
result to a message and sends it to machine A using the send system call. Upon sending, the
server stub also makes the receive system call and is in a blocked state until it receives a
message back from the client. Once the message (result) reaches machine A, the operating
system forwards it to the client stub, which can then leave its blocked state. The client stub
receives the result and forwards it to the calling process. All these entities, such as the
client and server stubs, are transparent to the calling process: to the caller, the remote
procedure call looks just like a local procedure call.
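The stub machinery described above is exactly what RPC libraries generate for you. A minimal illustrative sketch using Python's standard xmlrpc module (the port is chosen automatically, and both "machines" run in one process here purely for demonstration):

```python
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

# "Machine B" (server): register a procedure; the library acts as the server stub.
def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)  # 0 = any free port
server.register_function(add)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Machine A" (client): the proxy plays the client stub, marshalling the call
# into a message and blocking until the reply comes back.
proxy = ServerProxy(f"http://localhost:{port}")
result = proxy.add(2, 3)  # looks like a local call, runs remotely
print(result)             # 5
server.shutdown()
```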
 RPCs hide the underlying message passing, while also requiring the server and client to work
synchronously: the client is blocked until a reply is received from the server, and vice versa.

3.4. Message-Oriented Communication


 Message-oriented communication: This addresses issues such as whether the communication
should be
o Persistent: A message transmitted by the sender is stored in the network buffers until it
reaches its destination; the sender and receiver need not both be running for the
transmission to succeed.
o Transient: No buffers are available; therefore, the receiver must be running to receive the
message.
o Synchronous: The sender is blocked until its message reaches the destination successfully.
o Asynchronous: The sender can continue with its next job immediately after sending the
message.
 Simple transient message-oriented communication is achieved through socket programming.
 A socket is a logical port (endpoint) to which applications write and from which they read data.
On the server end, the socket is created with the system call socket, creating a communication
end point for the transport protocols Transmission Control Protocol/User Datagram Protocol
(TCP/UDP). The system call bind binds the socket with an IP address and port number, on
which the server will receive all messages.
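The socket/bind sequence above can be demonstrated end-to-end in a few lines (illustrative Python; server and client run in one process, and a single recv() suffices only because the message is tiny):

```python
import socket
import threading

# Server: create a TCP socket, bind it to an address and port, then listen.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP endpoint
srv.bind(("localhost", 0))  # bind: attach an IP address and port (0 = any free port)
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()                 # block until a client connects
    conn.sendall(conn.recv(1024).upper())  # echo the message in upper case
    conn.close()

threading.Thread(target=serve, daemon=True).start()

# Client: connect to the server's port and exchange one message.
cli = socket.create_connection(("localhost", port))
cli.sendall(b"hello")
reply = cli.recv(1024)
print(reply)  # b'HELLO'
cli.close()
srv.close()
```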

3.5. CORBA
 CORBA stands for Common ORB Architecture, with ORB standing for Object Request Broker.
ORB is an object-oriented representation of RPCs and provides a mechanism for invoking
operations on an object in a remote process running in the same or a different machine.
 CORBA is used to enable different stand-alone applications to communicate with each other
and, for this reason, is also known as middleware or integration software. CORBA is a
distributed middleware, which means that it allows applications to communicate with each
other even if they are running in different computers, operating systems, or CPU types, or are
implemented using different languages.
 CORBA can also be called object-oriented, distributed middleware as the CORBA clients call
server objects as opposed to server processes.
 As CORBA allows communication between objects from different programming languages,
there is a requirement for an Interface Definition Language (IDL). This provides a universal interface
which objects from different programming languages can use to communicate with each other.
 The interfaces defined through IDL can be either static or dynamic. Static interfaces are
defined at compile time, whereas dynamic interfaces are not yet known at that point. The static
interfaces are represented on the client side through stubs. The Dynamic Invocation Interface
(DII) enables clients to use dynamic CORBA objects, which are only available during the run
time.

3.6. EJB
 EJB stands for Enterprise JavaBean and is used to develop robust and secure distributed
applications. An application is said to be distributed if it can be executed on a different
hardware platform, operating system, or CPU type (among others).
 An EJB container, otherwise referred to as an application server, is required to run an EJB
application.
 Two types of JavaBeans exist:
o Session: Session beans are created by the clients, with a typical lifetime equal to that of a
single Client-Server session. Session beans are used to perform simple calculations or
access the database on behalf of the client and cannot be recovered during a system crash
or network failure. Three types of session beans are available:
1. Stateless: the state of a client between multiple method calls (requests) is not
maintained. The stateless session bean uses three important annotations: @Stateless,
@PostConstruct, and @PreDestroy. The class is annotated as @Stateless and may
include methods that can be called from a remote location. The method annotated
@PostConstruct is called only once, immediately after the bean object is
initialized. The method annotated @PreDestroy is also called only once, just
before the bean is removed from the application.
2. Stateful: the state of the client is stored across multiple method calls (requests).
3. Singleton: one instance per application is shared across multiple clients.
o Entity: Persistent data objects, such as data stored in the database, are represented as entity
beans, with every instance identified through a primary key. Database transactions can be
carried out using entity beans, which are recoverable in the event of a system crash or
network failure as they are persistent
4.Distributed Systems Architecture
4.1. Client-Server Systems and Distributed Applications
 The two major entities of a Client-Server system are the client and the server. Clients are
single-user systems that request services and are normally equipped with a Graphical User
Interface (GUI) to provide a user-friendly experience for the end user. Servers are the
systems providing one or more services to the clients, typically including file, print, database,
and email services, with each server named according to the service it provides. The Client-
Server model of computing is also called distributed computing, since the users, applications,
and resources are distributed across the network.
 The set of functions and programs that allows clients and servers to communicate with each
other is known as the Application Programming Interface (API). The set of software, APIs, and
driver software that enhances the communication between the client and the server is called
middleware.
 Distributed applications are software systems that run on a distributed or cloud network, as
opposed to a dedicated individual server.
 In a distributed application, system resources, such as storage space, processing power, and
input/output (I/O) devices, are decentralized (distributed) across multiple systems, making the
application robust against attacks.
 The core features of the distributed applications are:
o Resource sharing: The resources are shared across multiple systems, so a single point of
failure will not bring down the entire system.
o Openness: This is the ability to access remote resources in the same way as local
resources. Users can share the resources and publish their interfaces for uniform access.
o Concurrency control: Consistency is ensured when multiple clients try to access a shared
resource at the same time for read, write, or update operations.
o Scalability: The ability of the system to remain efficient despite increases in the number of
resources or users.
o Fault tolerance: This is the ability of the system to continue operating and providing basic
services despite the failure of a few components.
4.2. Service Orientation: SOA, Web Services, and Microservices
 Service-oriented architecture (SOA) is used for building business applications that are
sustainable, reusable, and extensible.
 SOA provides standard methodology and enables the separation of business logic from
computer logic, making it much easier for business and software developers to use business
paradigms while communicating.
 SOA also allows services (not only web services, but also high-level business services) to be
reused across multiple departments, thereby avoiding duplicates.
 Web services are software using standard web interfaces to communicate with other software
(with web interfaces).
 XML is a markup language that allows programmers to define data so they are understandable to
programs written in other languages, and it is used to standardize the commands and data
exchanged between different programs. WSDL (Web Services Description Language) describes the
operations and data a web service exposes, and SOAP (Simple Object Access Protocol) is the
XML-based messaging protocol that allows different software components to communicate with
each other.
 The core purpose of microservice architecture is to create software systems that include
multiple modules with specific objectives, with each having well-defined interfaces and
operations.
 Monolithic applications are developed as a single autonomous entity—if small modifications
are to be done, then the entire software is updated and deployed as a new version. This is also
applicable for scaling—as there is no option for scaling a specific function, the entire software
must be updated and deployed as a new version. Therefore, it takes a lot of time to change a
monolithic application, which in turn impacts the entire system.
 Applications developed using microservices are scalable and flexible, with individual
components or services able to be developed with different programming languages and using
different storage techniques.
4.3. Cloud Applications
 Cloud computing refers to computing services offered by third parties, which enable utility
computing. These services can be provided and scaled dynamically according to requirements.
Computing resources can be provided like any other utility, such as electricity, telephone, or
water, with users only paying for the amount and duration of use. The individuals and
organizations providing cloud services are simply referred to as cloud service providers.
 Three types of cloud are available: public, private, and hybrid. Public clouds are managed by an
organization and are open for public use. Information technology (IT) infrastructure shared
within an organization is referred to as a private cloud, with hybrid clouds combining the
features of both public and private clouds.
 Cloud services can be classified as
o Infrastructure as a Service (IaaS): offers computing, networking, and storage resources on
demand, billed according to usage. The user need not build their own
infrastructure and invest money in storage or networking, but rather can access these
services from the cloud service provider as required. Examples of such providers include
Amazon Web Services (AWS) and Microsoft Azure.
o Platform as a Service (PaaS): offers hardware and software resources to users via the
internet. These tools are available from service providers and users are able to access
services on demand. These hardware and software tools are used to create applications, for
example, AWS Elastic Beanstalk.
o Software as a Service (SaaS): SaaS is a distributed model, allowing cloud service
providers to host application software, such as Google Workspace and Dropbox, which
can be accessed by users over the internet.
 The core principles of cloud computing are
o Pooling of computing resources: Cloud service providers maintain a set of pooled
computing resources, either externally purchased or internally available within the
organization. Users can subscribe to these resources and are charged according to their
usage. Costs incurred by an organization can be classified as capital expense (CAPEX)
and operational expense (OPEX), with CAPEX representing costs incurred in building the
necessary hardware and software infrastructure and OPEX representing the operational
costs involved in developing and maintaining the software.
o Virtualization of computing resources: physical servers are partitioned into several virtual
servers, which can run an operating system and other applications. Virtualization is one of
the key concepts for the evolution of cloud technology. When the user requests a physical
server, the cloud service provider provides a virtual instance.
o Elastic scaling: The cloud environment allows applications to scale resources up or down
dynamically according to load conditions.
o Automatic creation and deletion of virtual machines: Cloud computing enables us to
provision new resources as needed. These resources will be made available online for a
short period (a few minutes). Once peak demand has passed, these resources can be de-
provisioned and made offline. Billing will only be done for the duration the resource was
utilized.
o Usage-based billing: Cloud infrastructures work on a pay-as-you-go model, with resources
typically accessed and paid at an hourly rate, based on usage. Organizations need not raise
funds or wait for approvals to build infrastructure and can use cloud resources at minimal
cost.
4.4. Distributed Database Systems
 An integrated collection of databases distributed physically across a computer network is
referred to as a distributed database.
 A Distributed Database Management System (DDBMS) is software that manages a distributed
database so that users are not aware of its distributed nature.
 A database system is said to be distributed if its data are distributed across multiple sites and if
there is a common interface to access them.
 DDBS offers better availability, meaning that if a site fails, this does not cause the failure of all
the others, as data are available from other sites. In the case of centralized databases, the failure
of a single point causes the entire system to fail.
 Some of the best-known distributed DBMSs are INGRES/Star, Oracle, and IBM’s distributed
DBMS products.
 Distributed database architecture based on the Client-Server model is otherwise known as a
database server model. This model includes clients, application servers, database servers, and
databases. The application server executes application programs and the database server
executes database management functions. This is a three-tier model.
 The advantage of the Client-Server model is that it makes data management easier, increases
data reliability and availability, and enhances performance by the tight integration of database
systems and dedicated operating systems. The drawback of this model is the communication
overhead involved, with every query needing to pass through two servers—an application and a
database server.
 The distributed database architecture based on the peer-to-peer model provides massive
distribution (the database can be distributed across multiple sites), heterogeneity, and
autonomy.
 In a DDBS, a relation can be fragmented:
o Horizontally (horizontal fragmentation): This is the concept of fragmenting a relation into
several smaller relations, with each of the smaller relations having a subset of the tuples
(rows) from the original.
o Vertically (vertical fragmentation): This is the concept of fragmenting a relation into
several smaller relations, with each of the smaller relations having a subset of the
attributes (columns) of the original.
Both kinds of fragments can be distributed across different sites, with some being replicated
across sites. Local databases should be considered fragments of the integrated database.
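The two fragmentation styles above can be sketched in a few lines of Python, using a hypothetical Employee relation represented as a list of dictionaries (the table, attribute names, and values are all illustrative assumptions, not from the source):

```python
# Sketch: horizontal and vertical fragmentation of a relation.
# The Employee relation below is purely illustrative.

employees = [
    {"id": 1, "name": "Alice", "dept": "Sales", "salary": 50000},
    {"id": 2, "name": "Bob",   "dept": "IT",    "salary": 60000},
    {"id": 3, "name": "Carol", "dept": "Sales", "salary": 55000},
]

# Horizontal fragmentation: each fragment holds a subset of the tuples (rows),
# e.g., each site stores only the employees of its own department.
sales_site = [t for t in employees if t["dept"] == "Sales"]
it_site    = [t for t in employees if t["dept"] == "IT"]

# Vertical fragmentation: each fragment holds a subset of the attributes
# (columns). The primary key "id" is kept in every fragment so the original
# relation can be reconstructed by joining the fragments on it.
public_frag  = [{"id": t["id"], "name": t["name"]}     for t in employees]
payroll_frag = [{"id": t["id"], "salary": t["salary"]} for t in employees]

print(len(sales_site), len(it_site))   # 2 1
print(sorted(public_frag[0].keys()))   # ['id', 'name']
```

Note how the union of the horizontal fragments, or the join of the vertical fragments on `id`, recovers the original relation — the correctness condition for any fragmentation scheme.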
 The DDBMS supports four types of transparency:
o Distribution transparency: A relation can be fragmented and distributed across multiple
sites. Hiding these details from the user is called distribution transparency.
o Replication transparency: Some fragments of a relation can be stored across multiple sites.
Hiding the replication details from the user is called replication transparency.
o Location transparency: When a user issues a query, the result of the query may be fetched
from the databases of one or more sites. Hiding the site details of the result from the user
is called location transparency.
o Transaction transparency: When the user issues a query to the DDBMS, the query may be
processed by coordinating with multiple databases of various sites to ensure consistency
and integrity. Hiding the synchronization details from the users is called transaction
transparency. Every transaction ensures database consistency and integrity by
synchronizing with multiple databases without the user’s knowledge. A global transaction
accessing data from multiple sites divides the transactions into several smaller transactions
to access the specific sites.
4.5. High-Performance Computing Cluster
 A cluster is defined as a group of whole computers interconnected as a single computing
resource. These are independent computers that can run on their own, with each in the cluster
referred to as a node. Each node may have more than one processor. The core benefits of cluster
computing are:
o Absolute scalability: This is the ability to build large clusters that provide better
performance in terms of time and space even when compared with the largest possible
standalone computers.
o Incremental scalability: This is the ability to add new computers or nodes to the existing
cluster. A user can start with a small cluster with a limited number of nodes and extend
it depending on demand.
o High availability: This is the ability to offer continuous service despite the failure of one
or more nodes. The clusters contain several nodes, meaning that failure of one will not
affect the services offered. In most cases, the software is also equipped with fault-tolerant
features.
o Superior price/performance: This is the ability to build a cluster with equal or greater
computing power than a powerful stand-alone machine, at a much lower price.
 Generally, there are two ways to classify clusters based on networking style, depending on
whether the computers in a cluster share access to the same disk or not. Nodes in a cluster can
be interconnected through either a high-speed link or shared disk.
 RAID (Redundant Array of Independent Disks) is a mass storage device containing multiple
disks, with contents replicated in multiple places. This ensures high availability of data, so
the cluster cannot be compromised by a single point of failure.
 In terms of functionality, clusters can be classified as either passive standby or active
secondary. In cases of passive standby, one computer handles all the processing load, while the
other remains inactive in standby mode, taking over in the event of the primary computer
failing.
 In cases of active secondary, the secondary server can also be used for processing. Active
secondary can be further classified into three types:
o Separate servers: Every node in the cluster has its own disk. Data are continuously copied
from the primary server to the secondary server to ensure high availability. However, this
is achieved at the cost of network and server overhead due to continuous copying.
o Shared nothing: This method is also called servers connected to disks. In this case, a disk is
separated into two volumes and shared between two computers, with each volume owned
by a single server. If one of the servers fails, the other is given ownership of its volume.
o Shared disk: More than one server can share a disk at the same time, with all parts able
to be accessed by both servers. A locking scheme should be introduced to ensure that data
can only be accessed by one server at a time.
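The locking scheme mentioned above can be sketched with Python threads standing in for the servers and a `threading.Lock` standing in for the disk-access lock (a toy model of the idea, not a real cluster implementation):

```python
# Sketch: a locking scheme so that only one "server" at a time accesses the
# shared disk. Threads model the servers; a list models the shared disk.
import threading

disk_lock = threading.Lock()
shared_disk = []  # stands in for the shared disk contents

def server_write(server_name: str, block: int) -> None:
    # Acquire the lock before touching the shared disk; it is released
    # automatically when the "with" block exits.
    with disk_lock:
        shared_disk.append((server_name, block))

threads = [
    threading.Thread(target=server_write, args=(f"server-{i % 2}", i))
    for i in range(100)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared_disk))  # 100 -- no writes lost despite concurrent access
```

Without the lock, two servers could interleave a multi-step disk update and corrupt the shared state; the lock serializes access exactly as the text requires.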
 Failures are handled in clusters using two approaches:
o Highly available: ensures that all resources are available with high probability. However, if
hardware resources such as processing units or disks fail, the queries currently being
executed are lost. A lost query is executed by another computer if the user retries the
same query. Partially executed queries may be lost, as cluster operating systems do not
provide any guarantees for them.
o A fault-tolerant system: Uses redundant shared disks, backs up uncommitted transactions,
and commits completed transactions, ensuring that all resources are always available.
 The concept of moving applications and data resources from failed systems to alternate systems
is called failover, with the moving of applications and resources back to the original system
after its repair known as failback.
4.6. Distributed Ledger Technologies
 A ledger is a book or digital content used to record financial transactions.
 DLT has created a new world order in the field of financial technology (FinTech) by providing
robust, foolproof, secure, scalable, and reliable financial services.
 In DLT, there is no central database or control facility. The transactions and their details are
recorded in multiple locations at the same time. The consensus mechanism is used in DLT to
ensure that all nodes have identical copies of the transactions, with the objective of ensuring:
o Unified agreement: This is the ability to ensure that the node’s status is current and
updated according to the latest agreement. Every transaction should be agreed upon by the
majority (51 percent) of the participants.
o Preventing double-spending: This is the ability to ensure that only one valid entry is
included in the public ledger for any given transaction. Double-spending is the concept of
using the same digital currency for more than one transaction, which is illegal.
o Enabling self-regulation: This is the ability to build a trustless system based on the self-
regulation of the individuals involved. The system should be built in such a way that
desired user behaviour is rewarded through incentives and undesirable behaviour is
penalized, thus ensuring the system resources are best utilized.
o Equality: This is the ability to ensure that participants are treated equally and without
discrimination. One way to ensure fairness is by making the source code open so users can
check the protocol’s fairness.
o Fault tolerance: This is the ability of the system to keep operating and reach consensus
even when some nodes fail or misbehave, so that the failure of individual participants
does not halt the ledger.
 The first consensus protocol for cryptocurrency was developed for Bitcoin (a digital currency),
ensuring that the stakeholders effectively arrive at a consensus. This protocol is called proof of
work (PoW), with a set of transactions being called a block.
 A block should be validated by at least 51 percent of the participants to be accepted. This is a
two-step process, involving the verification of the hash code and solving a complex
cryptographic puzzle associated with the block. Every block is digitally signed by passing it
through a hash function. The most commonly used hash algorithm is SHA-256. The first miner
to validate the hash code and solve the cryptographic puzzle is rewarded.
 The other consensus protocols are proof of stake (PoS), proof of elapsed time (PoET), proof of
space (PoSpace), proof of retrievability (PoR), and Practical Byzantine Fault Tolerance (PBFT).
 Some of the commonly used DLTs are Blockchain, Tangle, Corda, Ethereum, and Hyperledger
Fabric.
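The proof-of-work idea described above can be sketched in a few lines: miners search for a nonce whose SHA-256 hash over the block data meets a difficulty target, and any participant can verify the result with a single hash. The block data and the four-zero difficulty are illustrative assumptions; real systems use far harder targets.

```python
# Sketch of proof of work: find a nonce so that SHA-256(block data + nonce)
# starts with a given number of zero hex digits. Mining is expensive;
# verification is a single hash computation.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

block = "tx1;tx2;tx3"          # stand-in for a block of transactions
nonce, digest = mine(block)

# Any participant verifies the block cheaply by recomputing one hash:
check = hashlib.sha256(f"{block}{nonce}".encode()).hexdigest()
print(digest.startswith("0000"), check == digest)  # True True
```

The asymmetry between mining (many hash attempts) and verification (one hash) is what lets the majority of participants validate a block quickly while making it costly to forge one.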
5.Mobile Computing
5.1. Fundamentals, Techniques, and Protocols of Mobile
Computing
 Data are physically represented as signals, which are typically functions of time and space.
Users of wireless systems communicate with each other by transmitting and receiving
electromagnetic signals. Bits are converted to signals and signals are converted back to bit
streams by the physical layer of the International Organization for Standardization/Open
Systems Interconnection (ISO/OSI) reference model. Wireless systems use radio waves for
communication at frequencies ranging from 30 Hz to 300 GHz.
 A signal shifted to the left or right of the original (unshifted) signal is said to have a
phase shift.
 Any periodic signal can be constructed using the sine and cosine functions.
 Both wired and wireless media have limited bandwidth, meaning that, in reality, periodic
signals can be constructed with a limited number of sine and cosine functions.
 Wireless communication requires no physical medium for the transport of electromagnetic
waves. Antennae are used to couple electromagnetic waves from the transmitter to the outside
world and vice versa.
 To calculate the length of an antenna: if the wavelength of the signal is λ, the antenna
length is λ/2.
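This λ/2 rule can be combined with the standard relation λ = c / f to size an antenna for a given carrier frequency. The 900 MHz example below is an illustrative assumption (a typical GSM band), not a value from the source:

```python
# Sketch: wavelength and half-wave antenna length for a given carrier
# frequency, using lambda = c / f and antenna length = lambda / 2.
C = 299_792_458  # speed of light in m/s

def half_wave_antenna_m(freq_hz: float) -> float:
    wavelength = C / freq_hz
    return wavelength / 2

# A 900 MHz carrier corresponds to an antenna of roughly 16.7 cm:
length = half_wave_antenna_m(900e6)
print(round(length * 100, 1), "cm")  # 16.7 cm
```

This also shows why higher-frequency systems can use physically smaller antennas: doubling the frequency halves the required antenna length.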
 In wired networks, the signals travel along a fixed path through cables such as copper wire,
coaxial cable, or fiber-optic cable. The communication channels in wired networks exhibit
the same characteristics at every point in the cable, making signal strength precisely
determinable, with signal power at the receiver end depending on the cable length.
 The strength of the wireless signal decreases with increasing distance. Based on its strength and
distance from the source, a wireless signal can be categorized as:
o Transmission range: the distance between a sender and a receiver at which the receiver
can clearly receive the sender's signal and return communication without error.
o Detection range: the distance between sender and receiver at which the receiver can
receive the sender’s signal with some additional (background) noise, from which the
signal can still be differentiated. However, the receiver cannot establish communication
with the sender due to the high error rate.
o Interference range: the distance between sender and receiver at which the sender’s signal
can no longer be detected by the receiver, but may still interfere with other signals,
adding to the background noise.
 Free space loss is when the strength of a transmitted radio signal decreases, even in a vacuum.
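Free space loss is commonly quantified by the standard free-space path loss (FSPL) formula, FSPL(dB) = 20·log₁₀(d) + 20·log₁₀(f) + 20·log₁₀(4π/c). The formula itself is a well-known result (not stated in the source), and the 900 MHz figures below are illustrative:

```python
# Sketch: free-space path loss (FSPL) in dB for distance d (metres) and
# frequency f (hertz), using the standard formula
# FSPL = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c).
import math

C = 299_792_458  # speed of light in m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

# Doubling the distance always adds 20*log10(2) ~ 6.02 dB of loss:
print(round(fspl_db(2000, 900e6) - fspl_db(1000, 900e6), 2))  # 6.02
```

The "+6 dB per doubling of distance" behaviour is a useful rule of thumb for why signal strength falls off so quickly even in a vacuum.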
 Low frequency radio waves can penetrate objects, with the degree of penetration depending on
how low the frequency is.
 The propagation behaviour of radio waves varies depending on their frequency; three basic
propagation behaviours can be observed.
o Ground waves: Radio waves with frequency less than 2 MHz. These waves follow the
Earth’s surface and can be used for long distance communication, e.g., submarine or radio
communication.
o Sky waves: have frequencies ranging from 2 to 30 MHz and are often used by amateur
radio stations and international broadcasts for communication.
o High frequency waves (>30 MHz): used by cordless telephone systems, satellite systems,
and mobile phones.
 Radio signals travel in a straight line through free space, much like light. The line-of-
sight (LOS) is the straight-line communication path between the transmitter and receiver.
 A loss of signal strength is called attenuation and is measured in decibels (dB). As the
frequency of the wireless signal increases, it behaves more like light and can be easily blocked
by objects such as trees, walls, or trucks, referred to as blocking or shadowing.
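Since attenuation is measured in decibels, it can be computed as 10·log₁₀ of the ratio of sent to received power. The milliwatt figures below are illustrative assumptions:

```python
# Sketch: attenuation in decibels, dB = 10 * log10(P_sent / P_received).
import math

def attenuation_db(p_sent_mw: float, p_received_mw: float) -> float:
    return 10 * math.log10(p_sent_mw / p_received_mw)

# A signal sent at 100 mW and received at 1 mW has been attenuated by 20 dB:
print(attenuation_db(100, 1))  # 20.0
```

Because the scale is logarithmic, losses along a path simply add up in dB, which is why link budgets are worked out in decibels rather than raw power ratios.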
 Radio waves are reflected when signals hit a large obstacle such as a mountain, building, or the
Earth’s surface. During reflection, the direction of the wave is changed, with part of the signal
being absorbed by the object, causing a reduction in signal strength. Refraction of radio waves
occurs when signals are transmitted through a dense medium, upon which they tend to bend.
When a wireless signal hits an object larger than the signal wavelength, this results in
shadowing and reflection.
 Scattering converts an incoming signal to several weaker outgoing signals.
 When radio waves hit at the edge of an object, they may deflect and get propagated in different
directions, which is called diffraction.
 If an LOS exists, then the signal may follow that path. If not, the signal may reach its
destination through multiple paths via reflection and scattering, which is referred to as
multipath propagation.
 To compensate for disturbances (distortion), the sender sends a known training sequence to the
receiver. The receiver compares it with the original training sequence and programs an
equalizer to compensate for any distortion (disturbances).
 A substantial change in received signal strength over a short period of time is called
short-term fading and may occur when the receiver is moving.
 The gradual reduction of received signal strength over time is called long-term fading, the
impact of which can be compensated for by increasing or decreasing the transmission power.
 Modulation refers to varying the amplitude, frequency, or phase of a signal, in accordance with
the original input signal.
 Digital modulation is used to transmit a digital signal through an analog medium, for example,
a modulator-demodulator (modem) used to connect the old analog telephone system with a
computer. The wireless medium does not support digital transmission directly, so a binary
bit stream (digital signal) must be converted to an analog signal before transmission.
 The three most commonly used techniques to convert digital signals to analog are amplitude
shift keying (ASK), frequency shift keying (FSK), and phase shift keying (PSK).
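The three keying schemes can be illustrated by generating one symbol period of carrier samples per bit and varying exactly one carrier property in each scheme. The sample count and frequency choices are illustrative assumptions:

```python
# Sketch: one symbol period of ASK-, FSK-, and PSK-modulated carrier samples
# for a single bit, showing which property of the carrier each scheme varies.
import math

def modulate(bit: int, scheme: str, samples: int = 8) -> list[float]:
    out = []
    for n in range(samples):
        t = n / samples
        if scheme == "ASK":    # vary amplitude: 0 -> silent, 1 -> full carrier
            out.append(bit * math.sin(2 * math.pi * t))
        elif scheme == "FSK":  # vary frequency: 0 -> f, 1 -> 2f
            f = 2 if bit else 1
            out.append(math.sin(2 * math.pi * f * t))
        elif scheme == "PSK":  # vary phase: 0 -> 0 rad, 1 -> pi rad
            phase = math.pi if bit else 0
            out.append(math.sin(2 * math.pi * t + phase))
    return out

# ASK with bit 0 transmits nothing; PSK with bit 1 inverts the carrier:
print(modulate(0, "ASK") == [0.0] * 8)  # True
```

The receiver's demodulator reverses the mapping, recovering the bit from the observed amplitude, frequency, or phase of the incoming analog signal.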
 Generally, modulation occurs at the transmission end and demodulation at the receiver end.
 Handover is the concept of changing the channel frequency or base station to which a mobile
station is assigned while a call is in progress.
 Hard handover is also known as “break before make”, meaning the resources currently assigned
to a mobile station are released before new resources are allocated.
 Soft handover is also known as “make before break”, meaning resources currently assigned to a
mobile station are not released until new resources have been allocated.
5.2. Mobile Internet and Its Applications
 The technology that enables the delivery of data or information on devices with small screens
and limited memory or processing power is called Wireless Application Protocol (WAP).
 WAP is built on top of existing internet standards, such as hypertext transfer protocol (HTTP),
hypertext markup language (HTML), extensible markup language (XML), and internet protocol
(IP).
 The WAP architecture is made up of six layers: bearer services, transport, security,
transaction, session, and application.
 Only the transport layer deals with the physical network-dependent issues.
 The equivalent protocols in the wired internet are the Transmission Control Protocol (TCP),
User Datagram Protocol (UDP), and Internet Protocol (IP).
 The three primary services offered by the Wireless Transaction Protocol (WTP) are TR-Invoke,
TR-Result, and TR-Abort. New transactions are invoked using TR-Invoke, the result of the most
recently completed transaction is shared by invoking TR-Result, and an ongoing transaction can
be aborted by calling TR-Abort.
 WTP offers three classes (Class 0, Class 1, and Class 2) of message transfer services, with
Class 0 being unreliable and Class 1 and Class 2 being reliable. Class 0 and Class 1 do not
return result messages, whereas Class 2 returns one.
 The transaction layer communicates with the next higher layer through the Transaction Service
Access Point (TR-SAP) interface.
 The session layer uses the Wireless Session Protocol (WSP), which establishes a session
between the mobile client and server.
 The session layer communicates with the higher layers using the Session Service Access Point
(S-SAP) interface.
 The equivalent protocol for session layer and transaction layer functionalities in wired internet
is HTTP.
 The application layer uses Wireless Application Environment (WAE), which provides a
framework to integrate the World Wide Web (WWW) with cellular or mobile applications.
 Some of the major issues addressed in this layer are data formats for handheld mobile devices,
interfaces to telephony applications, special markup languages, and scripting languages.
 The application layer interacts with the applications through the Application Service Access
Point (A-SAP) interface. The equivalent tools in wired internet are HTML and Java.
5.3. Mobile Communication Networks
 The Global System for Mobile Communications (GSM) is the most well-known and successful
digital mobile telecommunication system in the world.
 The communication from the mobile station (MS, i.e., the mobile phone) to the base station
(BS) is called the uplink, and the communication from the BS to the MS is called the
downlink.
 The three subsystems of GSM architecture are the radio subsystem (RSS), the network and
switching subsystem (NSS), and the operation subsystem (OSS).
 An MS consists of hardware and software and includes a subscriber identity module (SIM),
which contains all user-related data that is relevant to GSM.
 An MS can be identified through the international mobile equipment identity (IMEI) and
personalized using the SIM.
 Functionalities such as the worldwide localization of users, roaming across countries, and
accounting are performed by the NSS.
 The NSS includes the mobile switching centre (MSC), the Home Location Register (HLR), and
the Visitor Location Register (VLR).
 The operation subsystem (OSS) includes the essential tasks for network operation and
maintenance and accesses other entities through SS7 signalling. The OSS includes:
o Operation and maintenance centre (OMC): The “O” interface enables the OMC to monitor
and control other network entities. Typical tasks include accounting, security
management, network status reporting, and traffic monitoring.
o Authentication centre (AuC): The AuC is used to protect user identity and data
transmission on the vulnerable radio interface and is located in a highly confidential part
of the HLR. It includes the encryption keys and the authentication algorithms.
o Equipment Identity Register (EIR): The EIR is the database for all IMEIs and includes all
device identifications registered for a particular network. An MS can be stolen and used with a
valid SIM; however, the moment the user reports the theft of the MS, the network
service provider can lock it. The EIR contains a blacklist of stolen (or locked)
devices, which cannot be used by anyone. However, this list is not synchronized across
different service providers; therefore, a locked device or MS can still be used in another
network. The white list is the list of valid IMEIs and the gray list is the list of
malfunctioning devices.
6.Network Security
6.1. Introduction to Network Security
 Network security refers to the set of policies, processes, and practices that preserve the
confidentiality, integrity, and availability of the data, information, software, firmware, and
hardware of the network. The three key aspects of security are often referred to as the CIA
triad:
o Confidentiality: Confidentiality ensures that system resources (hardware, software,
information, and data) are only accessed by authorized persons. It also includes protection
of proprietary information and personal privacy.
o Integrity: Integrity ensures that system resources cannot be tampered with (modified or
destroyed) through unauthorized access.
o Availability: Availability refers to the ability to reliably access system resources when
needed.
 Encryption is the process of converting plaintext to ciphertext.
 There are generally two types of attacks on networks:
o Passive: In a passive attack, the attacker does not modify any information communicated
between the sender and the receiver, merely observing the communication channel and
potentially capturing confidential information exchanged between sender and receiver.
o Active: In an active attack, the attacker will attempt to modify information communicated
between the sender and receiver. There are generally four types of active attacks:
1. Masquerade: Involves an entity or person pretending to be somebody else.
2. Replay: Involves the passive capture of a data unit and its subsequent retransmission
to gain access to resources for which the attacker would otherwise lack authorization.
3. Modification of messages: Refers to the ability of the attacker to modify, delay, or
reorder part of a genuine message.
4. Denial of Service (DoS): The attacker prevents a legitimate user from accessing the
communication facility, e.g., messages to a particular destination may be suppressed,
or the attacker may overload the network with messages to degrade performance.
 Cryptography is the art of secret writing, which allows only the sender and receiver of a
message to read and understand its contents.
 In the basic version, the sender encrypts the message using an encryption algorithm and a
secret key, with the resultant ciphertext sent to the receiver. The receiver then decrypts the
ciphertext using a decryption algorithm and the same secret key used by the sender, recovering
the plaintext.
 The basic version of cryptography uses a common shared secret key at both the sender and
receiver end, only known to the respective sender and the receiver.
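The shared-key flow described above can be illustrated with a deliberately insecure toy cipher; XOR with a repeating key stands in for a real encryption algorithm purely to show that both sides use the same secret:

```python
# Toy illustration of symmetric (shared-key) encryption: sender and receiver
# hold the same secret key, and decryption inverts encryption. XOR with a
# repeating key is used ONLY for illustration -- it is not secure.
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so one function both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret"                              # known only to sender and receiver
plaintext = b"meet me at noon"
ciphertext = xor_crypt(plaintext, key)       # sender side
recovered = xor_crypt(ciphertext, key)       # receiver side, same key
assert recovered == plaintext
assert ciphertext != plaintext
```

A production system would use a vetted algorithm such as AES instead of XOR, but the key-sharing structure is the same.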
6.2. Authentication in Distributed Systems
 Generally, authentication is achieved by using a special one-way function called a hash
function, i.e., a function that is easy to compute in one direction but practically
impossible to invert. A typical example is password checking: the system stores only the
hash of each password and, at login, compares the hash of the entered password with the
stored hash.
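The password-checking idea can be sketched with Python's standard library; the password, salt handling, and function names are illustrative:

```python
# Sketch of one-way password verification: the server stores only a salted
# hash, and login hashes the entered password and compares. Recovering the
# password from the stored hash is computationally infeasible.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 applies the hash many times to slow down brute-force guessing.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored = hash_password("hunter2", salt)      # what the database keeps

def check_login(entered: str) -> bool:
    return hmac.compare_digest(hash_password(entered, salt), stored)

assert check_login("hunter2")      # correct password verifies
assert not check_login("hunter3")  # wrong password is rejected
```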
 Two types of authentication protocols are available for distributed systems:
o Authentication based on symmetric cryptography (SYM): Symmetric cryptography is also
known as secret key cryptography and involves a common secret key shared between, and
known only by, the sender and receiver. One primary disadvantage of the basic SYM
authentication protocol is its vulnerability to replay attacks, i.e., an attacker could
pretend to be S (masquerade) by storing the message msg’ and later sending (replaying) it
to R. Replay attacks can be mitigated by using nonces or time stamps.
o Authentication based on asymmetric cryptography (ASY): Asymmetric cryptography is also
known as public key cryptography; every user has two keys, one private and one public. A
user’s public key is known to other users, but a user’s private key is known only to them
and is never shared. In the ASY protocol, the message is encrypted using the sender’s
private key and can only be decrypted using the sender’s public key. Since the sender’s
private key is known only to the sender, only the genuine sender can perform the
encryption, while anyone can verify the sender by decrypting the message with the sender’s
public key.
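The ASY idea, sign with the private key and verify with the public key, can be sketched with a textbook RSA construction. The tiny primes below are for illustration only (real deployments use 2048-bit keys and padding schemes):

```python
# Toy RSA-style signature: the private exponent d signs, the public
# exponent e verifies. Numbers are classroom-sized and NOT secure.
import hashlib

p, q = 61, 53                  # toy primes (never this small in practice)
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse of e)

def digest(message: bytes) -> int:
    # Reduce a SHA-256 digest into the toy modulus range.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(digest(message), d, n)        # uses the PRIVATE key

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest(message)  # uses the PUBLIC key

sig = sign(b"hello")
assert verify(b"hello", sig)       # genuine message verifies
assert not verify(b"hello!", sig)  # any tampering breaks verification
```

Anyone holding the public key (e, n) can check the signature, but only the holder of d could have produced it, which is exactly the authentication property described above.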
6.3. Secure Internet Protocols
 Hypertext transfer protocol (HTTP) with security features is called HTTPS, and provides
encrypted communication.
 SFTP is commonly used in FTP clients and servers, usually to share files over the internet and
upload web pages to a web server. SFTP ensures security by encrypting commands and data
and uses SSH to send files over the internet.
 SSH is used to log in to remote computers in a secure way. Typically, it is used to provide
secure access to a file transfer server, a virtual private network (VPN), or an email
server. SSH is also called a “tunnelling protocol” because it effectively establishes a secure
tunnel through the internet cloud.
 IPSec is a suite of protocols, standardized by the IETF with contributions from companies
such as Microsoft and Cisco, that provides secure, encrypted communication between nodes
on the internet. IPSec protects against data
theft and data corruption, as well as providing protection from attacks by untrusted nodes. The
IPSec suite contains two protocols, the internet key exchange protocol (IKE) and IPSec itself.
IKE is used to securely exchange keys between two nodes, and IPSec ensures security services
such as authentication, confidentiality, and protection against replay attacks.
 SSL uses both public key (asymmetric) cryptography and symmetric key cryptography for
communication. Asymmetric cryptography is used for SSL handshakes and symmetric key
cryptography for actual data transfers.
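On the client side, the TLS/SSL setup behind HTTPS can be seen in Python's standard `ssl` module. A minimal sketch; the commented connection code is illustrative:

```python
# Sketch of client-side TLS/SSL setup. The default context enables the
# certificate verification and hostname checking that give HTTPS its
# server-authentication guarantee.
import ssl

context = ssl.create_default_context()

# Both checks are on by default for client-side contexts.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# The context would then wrap an ordinary TCP socket, e.g.:
#   import socket
#   with socket.create_connection(("example.com", 443)) as raw:
#       with context.wrap_socket(raw, server_hostname="example.com") as tls:
#           tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
```

The handshake performed inside `wrap_socket` uses asymmetric cryptography to authenticate the server and agree on a session key; the bulk data transfer that follows uses symmetric cryptography, exactly as described above.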
6.4. Security and Data Protection in Mobile Systems
 A “safe” system protects the system from errors created by trusted users, whereas a “secure”
system provides system protection from untrusted users and intruders.
 The security policy of a mobile system should ensure services such as authentication,
confidentiality, access control, nonrepudiation, availability, and integrity.
 Digital mobile systems provide security through encryption, generally either symmetric or
asymmetric. A commonly used symmetric key algorithm is the Data Encryption Standard
(DES), developed by IBM. Its successor, Triple DES (3DES), applies DES three times with a
larger effective key; this increased key size provides better encryption and makes the
cipher more difficult to break.
 Stream cipher: Is a symmetric key algorithm, combining plaintext with a pseudo-random bit
stream to create the ciphertext.
 The Global System for Mobile Communication (GSM) uses an A5 encryption algorithm to
encrypt data communicated between the mobile and base stations. The A5 algorithm is a stream
cipher algorithm and uses a randomly generated 64-bit symmetric key.
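The stream-cipher structure, combining the plaintext with a key-derived pseudo-random bit stream, can be sketched as follows. The keystream here is built from SHA-256 in a counter-mode fashion purely for illustration; real systems use vetted stream ciphers such as A5 (GSM) or ChaCha20:

```python
# Sketch of a stream cipher: key and nonce seed a pseudo-random byte
# stream, which is XORed with the plaintext. Decryption applies the same
# keystream again. Illustrative construction, not a vetted cipher.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        # Each counter value yields a fresh 32-byte keystream block.
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])

def stream_crypt(data: bytes, key: bytes, nonce: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream(key, nonce, len(data))))

key, nonce = b"64-bit-key", b"nonce-01"
ct = stream_crypt(b"base station uplink", key, nonce)
assert stream_crypt(ct, key, nonce) == b"base station uplink"  # round trip
```

Note that a nonce must never be reused with the same key: two messages encrypted under the same keystream leak their XOR.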
 The most commonly used asymmetric encryption technique is the RSA algorithm.
 Asymmetric encryption techniques consume a lot of CPU time and are therefore impractical
for encrypting an entire message on a mobile phone.
 Cellular networks are infrastructure-based, meaning all communication takes place through the
infrastructure (base station). Communication does not take place directly between mobile
stations, but rather between the mobile station and access point. On the other hand, in ad-hoc
wireless networks, each node can communicate directly with the other without using any
infrastructure.
 In ad-hoc networks, the nodes within the communication range can communicate directly.
Nodes not within the communication range can be communicated with by relaying messages
through other nodes. As the wireless nodes keep changing position, the topology of an ad-hoc
network changes frequently.
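Multi-hop relaying can be sketched as a shortest-path search over radio links; the node positions, range, and names below are made up for illustration:

```python
# Sketch of multi-hop relaying in an ad-hoc network: nodes within radio
# range are neighbours, and a message to a distant node is forwarded hop
# by hop. A breadth-first search finds a shortest relay path.
import math
from collections import deque

positions = {"A": (0, 0), "B": (4, 0), "C": (8, 0), "D": (12, 0)}
RADIO_RANGE = 5.0  # illustrative communication range

def neighbours(node):
    x, y = positions[node]
    return [m for m, (mx, my) in positions.items()
            if m != node and math.hypot(mx - x, my - y) <= RADIO_RANGE]

def route(src, dst):
    queue, paths = deque([src]), {src: [src]}
    while queue:
        cur = queue.popleft()
        if cur == dst:
            return paths[cur]
        for nxt in neighbours(cur):
            if nxt not in paths:
                paths[nxt] = paths[cur] + [nxt]
                queue.append(nxt)
    return None  # destination unreachable

# A cannot reach D directly, so B and C relay the message.
assert route("A", "D") == ["A", "B", "C", "D"]
```

Because nodes move, real ad-hoc routing protocols must recompute such paths continuously as the topology changes.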
 Ad-hoc networks can be deployed quickly with minimal cost, but are vulnerable to attacks,
requiring proper security mechanisms to be in place. Ad-hoc networks are more vulnerable than
cellular networks, due to their lack of infrastructure, meaning that there are threats even to the
basic network structure. The security mechanisms used in ad-hoc networks should ensure that
the basic network structure is protected, along with offering general security services such as
authentication, confidentiality, integrity, availability, and nonrepudiation.