
Module IV

Subject: Operating System


Subject Code : PGCSA102

Department of Computer Science & Applications

Ms. Mayuree Katara


Assistant Professor
Department of Computer Science & Applications
PGCSA102 Operating System
3 credits (3-0-0)
Module 1: Introduction- OS Concepts – Evolution of OS, OS Structures- Kernel, Shell, General
Structure of MSDOS, Windows 2000, Difference between ANSI C and C++. Introduction and need of
operating system, layered architecture/logical structure of Operating system, Type of OS, operating
system as resource manager and virtual machine, OS services, BIOS, System Calls/Monitor Calls,
Firmware- BIOS, Boot Strap Loader.
Module 2: Process Management- Process & Threads - Process States - Process Control Block. Process
Scheduling - Operations on Processes, Threads, CPU Scheduler - Preemptive and Non Preemptive;
Dispatcher, Scheduling Criteria, Concurrent Processes, Co-operating Processes, Precedence Graph,
Hierarchy of Processes, Critical Section Problem, Two process solution, Synchronization Hardware,
Semaphores - Deadlock- detection, handling, prevention, avoidance, recovery, Starvation, Critical
Regions, Monitors, Inter process communication.
Module 3: Memory Management - Objectives and functions, Simple Resident Monitor Program (No
design), Overlays - Swapping; Schemes - Paging - Simple, Multi-level Paging; Internal and External
Fragmentation; Virtual Memory Concept, Demand Paging - Page Interrupt Fault, Page Replacement
Algorithms; Segmentation - Simple, Multi-level, Segmentation with Paging, Cache Memory.
Module 4: Inter Process Communication: Virtual Memory-Concept, virtual address space, paging
scheme, pure segmentation and segmentation with paging scheme hardware support and implementation
details, memory fragmentation,
Overview of IPC Methods, Pipes, popen, pclose Functions, Coprocesses, FIFOs, System V IPC,
Message Queues, Semaphores, Interprocess Communication
Shared Memory, Client-Server Properties, Stream Pipes, Passing File Descriptors, An Open Server-
Version 1, Client-Server Connection Functions.
Module 5: Information Management- Files and Directories - Directory Structure Directory
Implementation - Linear List - Hash Table. Device Management: Dedicated, Shared and Virtual Devices -
Serial Access Devices, Direct Access Devices, Direct Access Storage Devices Channels and Control
Modules -- Disk Scheduling methods.
Text Books:
1. Operating Systems Concepts – Silberschatz, Galvin, Wiley Publications (2008)
2. Modern Operating Systems - Andrew S. Tanenbaum, Pearson Education Asia / PHI (2005)
3. Operating Systems – William Stallings, Pearson Education Asia (2002)
Reference Books:
1. UNIX System Programming Using C++,by Terrence Chan: Prentice Hall India, 1999.
2. Advanced Programming in UNIX Environment, by W. Richard Stevens: 2nd Ed, Pearson Education,
2005.
Module 4: Inter Process Communication: Virtual Memory-Concept, virtual address space, paging
scheme, pure segmentation and segmentation with paging scheme hardware support and implementation
details, memory fragmentation,
Overview of IPC Methods, Pipes, popen, pclose Functions, Coprocesses, FIFOs, System V IPC,
Message Queues, Semaphores, Interprocess Communication
Shared Memory, Client-Server Properties, Stream Pipes, Passing File Descriptors, An Open Server-
Version 1, Client-Server Connection Functions.

4.1 Virtual Memory Concepts, Virtual Address Space and Paging Scheme
4.2 pure segmentation and segmentation with paging scheme hardware support and
implementation details
4.3 Memory Fragmentation, Overview of IPC Methods - pipes, popen and pclose functions
4.4 Coprocesses, FIFOs, System V IPC, Message Queues, Semaphores, Interprocess
Communication
4.5 Shared Memory, Client-Server Properties, Stream Pipes,
4.6 Passing File Descriptors, An Open Server-Version 1, Client-Server Connection
Functions.
4.1 Virtual Memory Concepts, Virtual Address Space and Paging Scheme

4.1.1 Virtual Memory Concepts- Virtual memory is a storage scheme that gives the user the illusion of
having a very large main memory. This is done by treating a part of secondary storage as if it were main
memory. With it, the user can load processes bigger than the available main memory, under the illusion
that enough memory is available to hold them.
Instead of loading one big process in the main memory, the operating system loads different parts of
more than one process in the main memory.
By doing this, the degree of multiprogramming is increased, and therefore the CPU utilization is also
increased.
How Does Virtual Memory Work?
Virtual memory has become quite common in modern systems. In this scheme, whenever some pages
need to be loaded into main memory for execution and memory is not available for that many pages,
then instead of refusing to load the pages, the OS searches for the areas of RAM that were least recently
used or are not being referenced, and copies them to secondary storage to make space for the new pages
in main memory.
Since this whole procedure happens automatically, the computer appears to have unlimited RAM.
What is Demand Paging?
Demand paging is a popular method of virtual memory management. In demand paging, the pages of a
process which are least used are kept in secondary storage.
A page is copied into main memory when it is demanded, i.e., when a page fault occurs. Various page
replacement algorithms are used to determine which pages will be replaced. We will discuss each of
them in detail later.
Snapshot of a virtual memory management system
Let us assume two processes, P1 and P2, each containing 4 pages of 1 KB. The main memory contains
8 frames of 1 KB each. The OS resides in the first two partitions. The 1st page of P1 is stored in the
third partition, and the other frames are shown as filled with the different pages of the processes.
The page tables of both processes are 1 KB each, and therefore each fits in one frame. The page tables
of both processes contain various information that is also shown in the image.
The CPU contains a register that holds the base address of the page table, which is 5 in the case of P1
and 7 in the case of P2. This page table base address is added to the page number from the logical
address when the actual corresponding entry is accessed.

Advantages of Virtual Memory-

1. The degree of multiprogramming is increased.
2. The user can run large applications with less real RAM.
3. There is no need to buy more RAM.
Disadvantages of Virtual Memory-

1. The system becomes slower, since swapping takes time.
2. Switching between applications takes more time.
3. The user has less hard disk space available for other use.
4.1.2 Virtual Address Space- In operating systems, virtual memory plays a vital role in managing the
memory allotted to different processes and in keeping their memory addresses isolated from one
another. The role of the virtual address space is to provide a ledger of all the virtual memory areas
given to each process. This lets every process view its memory independently, and makes the system
more flexible and maintainable.
What is Virtual Address Space in an Operating System?
Virtual address space refers to the set of addresses through which a process refers to the slots of virtual
memory allotted to it. Operating systems allocate this set of addresses for processes to use in order to
access their designated virtual memory. The address space is divided into many regions, each of which
serves a specific function. Each process in an operating system has its own distinct virtual memory
space, where all of its addresses are kept. Every process therefore operates under the illusion that it has
dedicated physical memory.
Key Terminologies of Virtual Address Space in Operating System-
Page: A page is a fixed-size memory block that is used to manage virtual memory.
Code segment: A part of a virtual address that contains the executable instructions of the process.
Data Segment: The part of a virtual address that contains the allocated memory and process variables.
Page Table: It is a data structure that the operating system manages to keep track of the relationship
between virtual pages and actual memory frames.
Page Fault: This is a case scenario where the page which is requested is not present in the physical
memory.
Characteristics of Virtual Address Space-
 Virtual address space enables dynamic memory allocation: memory blocks are assigned to
processes when they request them at run time.
 A page table is used to maintain the mapping of each virtual address to the corresponding physical
address; this mapping is referred to as address translation.
 The virtual address space also records access rights for specific virtual memory blocks,
specifying whether a region is read-only, read-write, or inaccessible. This keeps memory and
data safe from misuse.
 Virtual address spaces keep processes independent of one another, since each process has its
own non-interfering address space.
 The technique also enables memory sharing: specific virtual addresses in two or more processes
can be mapped to the same physical memory, so the processes can use the same memory space
and make more efficient use of it.

Virtual Address Space


How Does Virtual Address Space Work in an Operating System?
 The virtual address space's working starts with the allocation of pages of virtual address space
to a process as soon as the process is created.
 The virtual address space has two main regions, each with its own job:
o The instructions attached to the process are stored in the code segment of the address
space, to be executed when needed.
o All the process variables and data are stored in the data segment of the address space, to
make them accessible.
 The operating system uses the page table to translate a virtual address into the page frame that
is linked to a physical address in physical memory.
 A virtual address consists of the virtual page number of the individual page and the page offset;
the offset is combined with the physical page number from the page table to obtain the actual
physical address.
 When a required page is not present in physical memory, the OS fetches it from secondary
memory (the hard disk) and swaps it into an available page frame. This function is carried out
using the page replacement algorithms of the operating system.
Advantages of Using Virtual Address Space in Operating System-
 Allows efficient memory management and helps to avoid fragmentation issues.
 Provides memory protection, as it helps prevent virtual memory frames from being accessed by
processes that do not own them.
 Avoids interference between processes' memory spaces, which increases the stability of the
system.
 Helps make efficient use of memory, since the same physical memory can be shared by two or
more processes.

Disadvantages of Using Virtual Address Space in Operating System-


 Using the virtual address space adds more complexity to the memory management system, as
several steps are carried out to make it work.
 Managing the page table is extra work and also adds to the complexity of the process.
 Additional space is used by the page table, which maintains the mapping between virtual and
physical memory.

4.1.3 Paging- Paging is a storage mechanism that allows the OS to retrieve processes from secondary
storage into main memory in the form of pages. In the paging method, main memory is divided into
small fixed-size blocks of physical memory called frames. The size of a frame is kept the same as that
of a page, to achieve maximum utilization of main memory and to avoid external fragmentation. Paging
is used for faster access to data, and it is a logical concept.
Example of Paging in OS-
For example, suppose the main memory size is 16 KB and the frame size is 1 KB. The main memory
will then be divided into a collection of 16 frames of 1 KB each.
There are 4 separate processes in the system, A1, A2, A3, and A4, of 4 KB each. All the processes are
divided into pages of 1 KB each, so that the operating system can store one page in one frame.
At the beginning, all the frames are empty, so the pages of the processes are stored in a contiguous
way.
In this example you can see that A2 and A4 move to the waiting state after some time. Therefore, eight
frames become empty, and other pages can be loaded into those empty blocks. A process A5 of size 8
pages (8 KB) is waiting in the ready queue.

In this example, you can see that there are eight non-contiguous frames available in the memory, and
paging offers the flexibility of storing a process at different places. This allows us to load the pages of
process A5 in place of A2 and A4.
What is Paging Protection?
The paging process is protected by the insertion of an additional bit called the valid/invalid bit.
Memory protection in paging is achieved by associating protection bits with each page. These bits are
kept in the page table entry and specify the protection on the corresponding page.
Advantages of Paging
Here, are advantages of using Paging method:
 Easy to use memory management algorithm
 No need for external Fragmentation
 Swapping is easy between equal-sized pages and page frames.
Disadvantages of Paging
Here, are drawback/ cons of Paging:
 May cause Internal fragmentation
 Page tables consume additional memory.
 Multi-level paging may lead to memory reference overhead.
4.2 pure segmentation and segmentation with paging scheme hardware support and
implementation details-
4.2.1 Segmentation- A process is divided into segments. The chunks into which a program is divided,
which are not necessarily all of the same size, are called segments. Segmentation gives the
user's view of the process, which paging does not provide. Here the user's view is
mapped to physical memory.
Types of Segmentation in Operating Systems
 Virtual Memory Segmentation: Each process is divided into a number of segments, but the
segmentation is not done all at once. This segmentation may or may not take place at the
run time of the program.
 Simple Segmentation: Each process is divided into a number of segments, all of which are
loaded into memory at run time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical addresses in segmentation.
A table stores the information about all such segments and is called Segment Table.
What is Segment Table?
It maps a two-dimensional logical address into a one-dimensional physical address. Each of its table
entries has:
 Base Address: It contains the starting physical address where the segments reside in
memory.
 Segment Limit: Also known as segment offset. It specifies the length of the segment.
Translation of a Two-dimensional Logical Address to a One-dimensional Physical Address.

The address generated by the CPU is divided into:


 Segment number (s): Number of bits required to represent the segment.
 Segment offset (d): Number of bits required to represent the position of data within a
segment.
Advantages of Segmentation in Operating System-
 No internal fragmentation: Internal fragmentation occurs when memory is divided into fixed-size
parts and a process cannot use an entire allocated part; since segments are of variable size,
this does not arise in segmentation.
 Segment Table consumes less space in comparison to Page table in paging.
 As a complete module is loaded all at once, segmentation improves CPU utilization.
 Segmentation is close to the user's perception of physical memory. Users can divide user
programs into modules via segmentation. These modules are nothing more than separate
logical pieces of the program's code.
 The user specifies the segment size, whereas, in paging, the hardware determines the page
size.
 Segmentation is a method that can be used to segregate data from security operations.
 Flexibility: Segmentation provides a higher degree of flexibility than paging. Segments
can be of variable size, and processes can be designed to have multiple segments, allowing
for more fine-grained memory allocation.
 Sharing: Segmentation allows for sharing of memory segments between processes. This
can be useful for inter-process communication or for sharing code libraries.
 Protection: Segmentation provides a level of protection between segments, preventing
one process from accessing or modifying another process’s memory segment. This can
help increase the security and stability of the system.
Disadvantages of Segmentation in Operating System-
 External Fragmentation: As processes are loaded and removed from memory, the free
memory space is broken into little pieces, causing external fragmentation. This is a notable
difference from paging, where external fragmentation does not occur.
 Overhead is associated with keeping a segment table for each activity.
 Due to the need for two memory accesses, one for the segment table and the other for
main memory, access time to retrieve the instruction increases.
 Fragmentation: As mentioned, segmentation can lead to external fragmentation as
memory becomes divided into smaller segments. This can lead to wasted memory and
decreased performance.
 Overhead: Using a segment table can increase overhead and reduce performance. Each
segment table entry requires additional memory, and accessing the table to retrieve
memory locations can increase the time needed for memory operations.
 Complexity: Segmentation can be more complex to implement and manage than paging.
In particular, managing multiple segments per process can be challenging, and the
potential for segmentation faults can increase as a result.

4.2.2 segmentation with paging scheme hardware support and implementation details-
Paged Segmentation and Segmented Paging are two different memory management
techniques that combine the benefits of paging and segmentation.

1. Segmented Paging is a memory management technique that divides a process's address
space into segments and then divides each segment into fixed-size pages. Each segment can
have a different size and has its own page table, so memory allocation stays flexible while
physical memory is managed in uniform frames.
2. Paged Segmentation, on the other hand, additionally pages the segment table itself: because
segment tables vary in size from process to process, the table is broken into pages so that it
need not be stored contiguously in memory.

3. Both Paged Segmentation and Segmented Paging provide the benefits of paging, such as
improved memory utilization, reduced fragmentation, and increased performance. They
also provide the benefits of segmentation, such as increased flexibility in memory
allocation, improved protection and security, and reduced overhead in memory
management.

However, both techniques can also introduce additional complexity and overhead in the
memory management process. The choice between Paged Segmentation and Segmented
Paging depends on the specific requirements and constraints of a system, and often requires
trade-offs between flexibility, performance, and overhead.

Major Limitation of Single Level Paging-


A big challenge with single-level paging is that if the logical address space is large, then the
page table may take up a lot of space in main memory. For instance, with a 32-bit logical
address and 4 KB pages, the number of pages is 2^20. A page table with 2^20 entries of 20 bits
each (before any additional bits) occupies 20 x 2^20 bits, or 2.5 MB. Since each process has
its own page table, a lot of memory is consumed when single-level paging is used. For a
system with a 64-bit logical address, even the page table of a single process will not fit in main
memory. For a process with a large logical address space, many of its page table entries are
invalid, as much of the logical address space goes unused.
Page table with invalid entries
Segmented Paging
A solution to the problem is to use segmentation along with paging to reduce the size of page
table. Traditionally, a program is divided into four segments, namely code segment, data
segment, stack segment and heap segment.

The size of the page table can be reduced by creating a page table for each segment. To
accomplish this hardware support is required. The address provided by CPU will now be
partitioned into segment no., page no. and offset.

The memory management unit (MMU) will use the segment table which will contain the address
of page table(base) and limit. The page table will point to the page frames of the segments in
main memory.
Advantages of Segmented Paging

1. The page table size is reduced, as page-table entries exist only for the pages of the segments, hence
reducing the memory requirements.

2. Gives a programmer's view along with the advantages of paging.

3. Reduces external fragmentation in comparison with segmentation.

4. Since the entire segment need not be swapped out, swapping out to virtual memory becomes
easier.

Disadvantages of Segmented Paging

1. Internal fragmentation still exists in pages.

2. Extra hardware is required

3. Translation becomes more sequential, increasing the memory access time.


4. External fragmentation occurs because of varying sizes of page tables and varying sizes of
segment tables in today’s systems.

Paged Segmentation

1. In segmented paging, not every process has the same number of segments and the segment
tables can be large in size which will cause external fragmentation due to the varying
segment table sizes. To solve this problem, we use paged segmentation which requires
the segment table to be paged. The logical address generated by the CPU will now consist
of page no #1, segment no, page no #2 and offset.

2. Even with segmented paging, the page table can have a lot of invalid pages. In that case, rather
than layering segmented paging underneath, the problem of a large page table can be
solved by applying multi-level paging directly instead of segmented paging.
Advantages of Paged Segmentation

1. No external fragmentation

2. Reduced memory requirements as no. of pages limited to segment size.

3. Page table size is smaller, just as in segmented paging.

4. Similar to segmented paging, the entire segment need not be swapped out.

5. Increased flexibility in memory allocation: Paged Segmentation allows for a flexible allocation of
memory, where each segment can have a different size, and each page can have a different size
within a segment.

6. Improved protection and security: Paged Segmentation provides better protection and security by
isolating each segment and its pages, preventing a single segment from affecting the entire
process's memory.

7. Increased program structure: Paged Segmentation provides a natural program structure, with each
segment representing a different logical part of a program.

8. Improved error detection and recovery: Paged Segmentation enables the detection of memory
errors and the recovery of individual segments, rather than the entire process's memory.

9. Reduced overhead in memory management: Paged Segmentation reduces the overhead in memory
management by eliminating the need to maintain a single, large page table for the entire process's
memory.

10. Improved memory utilization: Paged Segmentation can improve memory utilization by reducing
fragmentation and allowing for the allocation of larger blocks of contiguous memory to each
segment.

Disadvantages of Paged Segmentation

1. Internal fragmentation remains a problem.

2. The hardware is more complex than for segmented paging.

3. Extra level of paging at first stage adds to the delay in memory access.

4. Increased complexity in memory management: Paged Segmentation introduces additional
complexity in the memory management process, as it requires the maintenance of multiple page
tables for each segment, rather than a single page table for the entire process's memory.

5. Increased overhead in memory access: Paged Segmentation introduces additional overhead in
memory access, as it requires multiple lookups in multiple page tables to access a single memory
location.

6. Reduced performance: Paged Segmentation can result in reduced performance, as the additional
overhead in memory management and access can slow down the overall process.

7. Increased storage overhead: Paged Segmentation requires additional storage overhead, as it
requires additional data structures to store the multiple page tables for each segment.

8. Increased code size: Paged Segmentation can result in increased code size, as the additional code
required to manage the multiple page tables can take up valuable memory space.
4.3 Memory Fragmentation, Overview of IPC Methods - pipes, popen and pclose functions
4.3.1 Memory Fragmentation- Segmentation divides processes into smaller subparts known
as modules. The divided segments need not be placed in contiguous memory. Since there is no
contiguous memory allocation, internal fragmentation does not take place. The length of the
segments of the program and memory is decided by the purpose of the segment in the user
program.

We can say that logical address space or the main memory is a collection of segments.

Types of Segmentation
Segmentation can be divided into two types:
1. Virtual Memory Segmentation: Virtual Memory Segmentation divides the processes
into n number of segments. All the segments are not divided at a time. Virtual Memory
Segmentation may or may not take place at the run time of a program.
2. Simple Segmentation: Simple Segmentation also divides the processes into n number of segments
but the segmentation is done all together at once. Simple segmentation takes place at the run time
of a program. Simple segmentation may scatter the segments into the memory such that one segment
of the process can be at a different location than the other(in a noncontinuous manner).
Why Segmentation is required?
Segmentation came into existence because of the problems in the paging technique. In the case of
the paging technique, a function or piece of code is divided into pages without considering that the relative
parts of code can also get divided. Hence, for the process in execution, the CPU must load more than one
page into the frames so that the complete related code is there for execution. Paging took more pages for
a process to be loaded into the main memory. Hence, segmentation was introduced in which the code is
divided into modules so that related code can be combined in one single block.
Other memory management techniques also have an important drawback - the actual view of physical
memory is separated from the user's view of physical memory. Segmentation helps in overcoming this
problem by dividing the user's program into segments according to the specific need.
Advantages of Segmentation in OS
 No internal fragmentation is there in segmentation.
 Segment Table is used to store the records of the segments. The segment table itself consumes less
memory compared to a page table in paging.
 Segmentation provides better CPU utilization as an entire module is loaded at once.
 Segmentation is near to the user's view of physical memory. Segmentation allows users to partition
the user programs into modules. These modules are nothing but the independent codes of the current
process.
 The Segment size is specified by the user but in Paging, the hardware decides the page size.
 Segmentation can be used to separate the security procedures and data.
Disadvantages of Segmentation in OS
 During the swapping of processes the free memory space is broken into small pieces, which is a
major problem in the segmentation technique.
 Time is required to fetch instructions or segments.
 The swapping of segments of unequal sizes is not easy.
 There is an overhead of maintaining a segment table for each process as well.
 When a process is completed, it is removed from the main memory. After the execution of the
current process, the unevenly sized segments of the process are removed from the main memory.
Since the segments are of uneven length it creates unevenly sized holes in the main memory. These
holes in the main memory may remain unused due to their very small size.
Characteristics of Segmentation in OS
Some of the characteristics of segmentation are discussed below:
 Segmentation partitions the program into variable-sized blocks or segments.
 Partition size depends upon the type and length of modules.
 Segmentation is done considering that the relative data should come in a single segment.
 Segments of the memory may or may not be stored in a continuous manner depending upon the
segmentation technique chosen.
 Operating System maintains a segment table for each process.
Example of Segmentation
Let's take the example of segmentation to understand how it works.
Let us assume we have five segments namely: Segment-0, Segment-1, Segment-2, Segment-3, and
Segment-4. Initially, before the execution of the process, all the segments of the process are stored in the
physical memory space. We have a segment table as well. The segment table contains the beginning entry
address of each segment (denoted by base). The segment table also contains the length of each of the
segments (denoted by limit).
As shown in the image below, the base address of Segment-0 is 1400 and its length is 1000, the base
address of Segment-1 is 6300 and its length is 400, the base address of Segment-2 is 4300 and its length
is 400, and so on.
The pictorial representation of the above segmentation with its segment table is shown below.
4.3.2 Overview of IPC Methods- Interprocess communication (IPC) refers to the mechanisms and
techniques used by operating systems to allow different processes to communicate with each other. It
allows for standard connections that are computer- and OS-independent.

Pipes- Pipes are a type of IPC (Inter-Process Communication) technique that allows two or more
processes to communicate with each other by creating a unidirectional or bidirectional channel
between them. A pipe is a virtual communication channel that allows data to be transferred
between processes, either one-way or two-way. Pipes can be implemented using system calls in
most modern operating systems, including Linux, macOS, and Windows.
Here are some advantages and disadvantages of using pipes as an IPC technique:
Advantages:
1. Simplicity: Pipes are a simple and straightforward way for processes to communicate with
each other.
2. Efficiency: Pipes are an efficient way for processes to communicate, as they can transfer
data quickly and with minimal overhead.
3. Reliability: Pipes are a reliable way for processes to communicate, as they can detect
errors in data transmission and ensure that data is delivered correctly.
4. Flexibility: Pipes can be used to implement various communication protocols, including
one-way and two-way communication.
Disadvantages:
1. Limited capacity: Pipes have a limited capacity, which can limit the amount of data that
can be transferred between processes at once.
2. Unidirectional: In a unidirectional pipe, only one process can send data at a time, which
can be a disadvantage in some situations.
3. Synchronization: In a bidirectional pipe, processes must be synchronized to ensure that
data is transmitted in the correct order.
4. Limited scalability: Pipes are limited to communication between a small number of
processes on the same computer, which can be a disadvantage in large-scale distributed
systems.
Overall, pipes are a useful IPC technique for simple and efficient communication between
processes on the same computer. However, they may not be suitable for large-scale
distributed systems or situations where bidirectional communication is required.
A pipe is a mechanism by which the output of one process is directed into the input of
another process; it thus provides a one-way flow of data between two related processes.
Although a pipe can be accessed like an ordinary file, the system actually manages it as a
FIFO queue. A pipe is created using the pipe system call. A pipe has an input (write) end
and an output (read) end: one writes into the pipe at the write end and reads from the read
end. A pipe descriptor holds an array of two file descriptors, one for the write end and the
other for the read end. Suppose two processes, Process A and Process B, need to
communicate. In such a case, it is important that the process which writes closes its read
end of the pipe, and the process which reads closes its write end. Essentially, for
communication from Process A to Process B the following should happen:
 Process A should keep its write end open and close the read end of the pipe.
 Process B should keep its read end open and close its write end.
When a pipe is created, it is given a fixed size in bytes.
When a process attempts to write into the pipe, the write request is executed immediately
if the pipe is not full; if the pipe is full, the writing process is blocked until the state of the
pipe changes. Similarly, a reading process is blocked if it attempts to read more bytes than
are currently in the pipe; otherwise the read is executed. Only one process can access a
pipe at a time.
Limitations :
 As a channel of communication a pipe operates in one direction only.
 Pipes cannot support broadcast i.e. sending message to multiple processes at the same
time.
 The read end of a pipe accepts data no matter which process is connected to the write end
of the pipe. Therefore, this is a relatively insecure mode of communication.
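The scheme above can be sketched in C: the parent and child share a pipe created with pipe(), the child keeps only the write end, and the parent keeps only the read end. The helper name pipe_demo() is illustrative.

```c
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* One-way pipe from a child (writer) to its parent (reader). */
ssize_t pipe_demo(char *buf, size_t n)
{
    int fd[2];                           /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {                      /* child: the writer (Process A) */
        close(fd[0]);                    /* close its unused read end */
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                        /* parent (Process B): close write end */
    ssize_t got = read(fd[0], buf, n);   /* blocks until data is available */
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return got;                          /* number of bytes read */
}
```

Note how each process closes the end it does not use, exactly as the two rules above require.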
4.3.3 Popen- The popen() function executes the command specified by the string command. It
creates a pipe between the calling program and the executed command, and returns a pointer to
a stream that can be used either to read from or write to the pipe.
The environment of the executed command will be as if a child process were created within the
popen() call using fork(), and the child invoked the sh utility using the call:
execl("/bin/sh", "sh", "-c", command, (char *)0);
The popen() function ensures that any streams from previous popen() calls that remain open in
the parent process are closed in the child process.
The mode argument to popen() is a string that specifies I/O mode:
1. If mode is r, file descriptor STDOUT_FILENO will be the writable end of the pipe when
the child process is started. The file descriptor fileno(stream) in the calling process,
where stream is the stream pointer returned by popen(), will be the readable end of the
pipe.
2. If mode is w, file descriptor STDIN_FILENO will be the readable end of the pipe when
the child process is started. The file descriptor fileno(stream) in the calling process,
where stream is the stream pointer returned by popen(), will be the writable end of the
pipe.
3. If mode is any other value, a NULL pointer is returned and errno is set to EINVAL.
After popen(), both the parent and the child process will be capable of executing independently
before either terminates.
Because open files are shared, a mode r command can be used as an input filter and a
mode w command as an output filter.
Buffered reading before opening an input filter (that is, before popen()) may leave the standard
input of that filter mispositioned. Similar problems with an output filter may be prevented by
buffer flushing with fflush().
A stream opened with popen() should be closed by pclose().
The behavior of popen() is specified for values of mode of r and w. mode values
of rb and wb are supported but are not portable.
If the shell command cannot be executed, the child termination status returned by pclose() will
be as if the shell command terminated using exit(127) or _exit(127).
If the application calls waitpid() with a pid argument greater than 0, and it still has a stream that
was created with popen() open, it must ensure that pid does not refer to the process started by
popen().
The stream returned by popen() will be designated as byte-oriented.
Special behavior for file tagging and conversion: When the FILETAG(,AUTOTAG) runtime
option is specified, the pipe opened for communication between the parent and child process by
popen() will be tagged with the writer's program CCSID upon first I/O. For example, if
popen(some_command, "r") were specified, then the stream returned by popen() would be
tagged in the child process's program CCSID.
Returned value
If successful, popen() returns a pointer to an open stream that can be used to read or write to a
pipe.
If unsuccessful, popen() returns a NULL pointer and sets errno to one of the following values:
Error Code: EINVAL - The mode argument is invalid.
popen() may also set errno values as described by spawn(), fork(), or pipe().
4.3.4 Pclose Function- The pclose() function closes a stream that was opened by popen(), waits
for the command specified as an argument in popen() to terminate, and returns the status of the
process that was running the shell command. However, if a call caused the termination status to
be unavailable to pclose(), then pclose() returns -1 with errno set to ECHILD to report this
situation; this can happen if the application calls one of the following functions:
 wait()
 waitid()
 waitpid() with a pid argument less than or equal to 0, or equal to the process ID of the shell command
 any other function that could do one of the above
In any case, pclose() will not return before the child process created by popen() has
terminated.
If the shell command cannot be executed, the child termination status returned by pclose()
will be as if the shell command terminated using exit(127) or _exit(127).
The pclose() function will not affect the termination status of any child of the calling process
other than the one created by popen() for the associated stream.
If the argument stream to pclose() is not a pointer to a stream created by popen(), the
termination status returned will be -1.
Threading Behavior: The pclose() function can be executed from any thread within the parent
process.
Returned value
If successful, pclose() returns the termination status of the shell command.
If unsuccessful, pclose() returns -1 and sets errno to one of the following values:
Error Code: ECHILD - The status of the child process could not be obtained.
4.4 Co-processes, FIFOs, System V IPC, Message Queues, Semaphores, Interprocess
Communication
4.4.1 Co Processes- In an operating system, everything revolves around the process and how it
moves through several different states. Here we discuss one type of process, called a
cooperating process. In an operating system there are two types of processes:
 Independent Process: Independent processes are those whose task does not depend on any
other process.
 Cooperating Process: Cooperating processes are those that depend on other processes.
They work together to achieve a common task in an operating system. These processes
interact with each other by sharing resources such as the CPU, memory, and I/O devices to
complete the task.
Let us now discuss the concept of cooperating processes and how they are used in operating
systems.
 Inter-Process Communication (IPC): Cooperating processes interact with each other via
Inter-Process Communication (IPC). Because they interact and share resources in a
coordinated way, the running tasks remain synchronized and the possibility of deadlock
decreases. IPC can be implemented using mechanisms such as pipes, message queues,
semaphores, and shared memory.
 Concurrent execution: Cooperating processes execute simultaneously; the operating
system scheduler selects which process moves from the ready queue to the running state.
Because several processes execute concurrently, the overall completion time decreases.
 Resource sharing: To get their work done, cooperating processes cooperate by sharing
resources including the CPU, memory, and I/O hardware. When several processes take
turns using shared resources, the synchronization requirements increase, and so can the
response time of a process.
 Deadlocks: Because cooperating processes share resources, a deadlock condition may
arise. For example, if process P1 holds resource A and waits for B, while process P2 holds
B and waits for A, a deadlock occurs between the cooperating processes. To avoid
deadlocks, operating systems typically use algorithms such as the Banker's algorithm to
manage and allocate resources to processes.
 Process scheduling: Cooperating processes run simultaneously, but after a context switch
the scheduler decides which process executes next on the CPU, using scheduling
algorithms such as Round-Robin, FCFS, SJF, and Priority scheduling.
Message Queue- A message queue is an inter-process communication (IPC) mechanism that
allows processes to exchange data in the form of messages. It allows processes to communicate
asynchronously by sending messages to each other; the messages are stored in a queue, waiting
to be processed, and are deleted after being processed.
The message queue is a buffer that is used in non-shared memory environments, where tasks
communicate by passing messages to each other rather than by accessing shared variables.
Tasks share a common buffer pool. The message queue is an unbounded FIFO queue that is
protected from concurrent access by different threads.
Events are asynchronous. When a class sends an event to another class, rather than sending it
directly to the target reactive class, it passes the event to the operating system message queue.
The target class retrieves the event from the head of the message queue when it is ready to
process it. Synchronous events can be passed using triggered operations instead.
Many tasks can write messages into the queue, but only one can read messages from the
queue at a time. The reader waits on the message queue until there is a message to process.
Messages can be of any size.
Functions of Message Queue-
There are four important functions used to achieve IPC with message queues: msgget(),
msgsnd(), msgrcv(), and msgctl() (which performs control operations such as removing the
queue). The first three are described below.
1. int msgget (key_t key, int msgflg);
We use the msgget function to create and access a message queue. It takes two parameters.
o The first parameter is a key that names a message queue in the system.
o The second parameter is used to assign permission to the message queue and is ORed with
IPC_CREAT to create the queue if it doesn't already exist. If the queue already exists, then
IPC_CREAT is ignored. On success, the msgget function returns the queue identifier (a
non-negative integer); on failure, it returns -1.
2. int msgsnd (int msqid, const void *msg_ptr, size_t msg_sz, int msgflg);
This function allows us to add a message to the message queue.
o The first parameter (msgid) is the message queue identifier returned by the msgget
function.
o The second parameter is the pointer to the message to be sent, which must start with a long
int type.
o The third parameter is the size of the message. It must not include the long int message
type.
o The fourth and final parameter controls what happens if the message queue is full or the
system limit on queued messages is reached. On success, the function returns 0 and places
a copy of the message data on the message queue; on failure, it returns -1.
There are two constraints related to the structure of the message. First, it must be smaller than
the system limit and, second, it must start with a long int. This long int is used as a message
type in the receive function. The best structure of the message is shown below.
struct my_message
{
    long int message_type;   /* must be positive; used by msgrcv() */
    /* The data you wish to transfer */
};
Since the message_type is used in message reception, you can't simply ignore it. You must
declare your data structure to include it, and it's also wise to initialize it to contain a known
value.
3. int msgrcv (int msqid, void *msg_ptr, size_t msg_sz, long int msgtype, int msgflg);
This function retrieves messages from a message queue.
o The first parameter (msgid) is the message queue identifier returned by the msgget
function.
o As explained above, the second parameter is the pointer to the message to be received,
which must start with a long int type.
o The third parameter is the size of the message (excluding the long int message type, as
with msgsnd).
o The fourth parameter allows implementing priority. If the value is 0, the first available
message in the queue is retrieved. If the value is greater than 0, the first message with that
message type is retrieved. If the value is less than 0, the first message whose type is less
than or equal to the absolute value of msgtype is retrieved. In simple words, a 0 value
means receive the messages in the order in which they were sent, and a non-zero value
means receive messages of a specific message type.
o The final parameter controls what happens if no message of the requested type is waiting
in the queue. On success, the function returns the number of bytes copied into the receive
buffer; on failure, it returns -1.
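The three calls can be combined into a small round-trip sketch using a private System V queue. The struct layout follows the my_message example above; the helper name mq_roundtrip() is illustrative.

```c
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* Message layout required by msgsnd()/msgrcv(): a positive long message
   type followed by the payload. */
struct my_message {
    long int message_type;
    char text[64];
};

/* Send one message on a private queue and receive it back. */
int mq_roundtrip(const char *in, char *out, size_t n)
{
    int qid = msgget(IPC_PRIVATE, 0600 | IPC_CREAT);  /* create the queue */
    if (qid == -1)
        return -1;

    struct my_message m = { .message_type = 1 };
    snprintf(m.text, sizeof m.text, "%s", in);
    /* the size excludes the long int message type, as noted above */
    if (msgsnd(qid, &m, sizeof m.text, 0) == -1)
        return -1;

    struct my_message r;
    if (msgrcv(qid, &r, sizeof r.text, 1, 0) == -1)   /* only type-1 messages */
        return -1;
    snprintf(out, n, "%s", r.text);

    msgctl(qid, IPC_RMID, NULL);                      /* remove the queue */
    return 0;
}
```

Passing 1 as the msgtype argument of msgrcv() selects only messages of that type, which matches the priority behavior described above.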
System V vs POSIX IPC-
 Origin: System V - AT&T introduced (1983) three new forms of IPC facilities, namely
message queues, shared memory, and semaphores. POSIX - Portable Operating System
Interface, standards specified by IEEE to define an application programming interface
(API); POSIX covers all three forms of IPC.
 Coverage: System V IPC covers all the IPC mechanisms, viz. pipes, named pipes, message
queues, signals, semaphores, and shared memory; it also covers sockets and Unix domain
sockets. POSIX - almost all the basic concepts are the same as System V; it differs only in
the interface.
 Shared memory interface calls: System V - shmget(), shmat(), shmdt(), shmctl(). POSIX -
shm_open(), mmap(), shm_unlink().
 Message queue interface calls: System V - msgget(), msgsnd(), msgrcv(), msgctl().
POSIX - mq_open(), mq_send(), mq_receive(), mq_unlink().
 Semaphore interface calls: System V - semget(), semop(), semctl(). POSIX - named
semaphores: sem_open(), sem_close(), sem_unlink(), sem_post(), sem_wait(),
sem_trywait(), sem_timedwait(), sem_getvalue(); unnamed (memory-based) semaphores:
sem_init(), sem_post(), sem_wait(), sem_getvalue(), sem_destroy().
 Naming: System V uses keys and identifiers to identify IPC objects. POSIX uses names
and file descriptors.
 Monitoring: System V - not available. POSIX message queues can be monitored using the
select(), poll(), and epoll APIs.
 Attributes: System V offers the msgctl() call. POSIX provides functions (mq_getattr() and
mq_setattr()) to access or set attributes.
 Threading: System V - not available. POSIX is multi-thread safe and covers thread
synchronization functions such as mutex locks, condition variables, and read-write locks.
 Notification: System V - not available. POSIX offers notification features for message
queues (such as mq_notify()).
 Status/control operations: System V requires system calls such as shmctl() and commands
(ipcs, ipcrm) to perform status/control operations. POSIX shared memory objects can be
examined and manipulated using system calls such as fstat() and fchmod().
 Resizing: The size of a System V shared memory segment is fixed at the time of creation
(via shmget()). With POSIX we can use ftruncate() to adjust the size of the underlying
object, and then re-create the mapping using munmap() and mmap() (or the Linux-specific
mremap()).
Semaphore- A semaphore is a synchronization solution to the critical section problem, typically
implemented with the support of atomic hardware instructions.
What is a Critical Section Problem?
The critical section is a code snippet that contains a few shared variables. These variables can be
accessed by several processes, but with one condition: only one process may be inside the critical
section at a time. The remaining processes that are interested in entering the critical section have
to wait for that process to complete its work and exit before they can enter.
Problems in the Critical Section
A problem arises when one or more processes try to enter the critical section at the same time.
If multiple processes enter the critical section, a second process may try to access a variable that
is already being accessed by the first process.
Explanation
Suppose there is a variable which is also known as shared variable. Let us define that shared variable.
Here, x is the shared variable.
1. int x = 10;
Process 1
1. // Process 1
2. int s = 10;
3. int u = 20;
4. x = s + u;
Process 2
1. // Process 2
2. int s = 10;
3. int u = 20;
4. x = s - u;
If the processes access the shared variable x one after the other, there is no problem.
If Process 1 alone is executed, the value of x becomes 30 (the shared variable x changes from 10 to 30).
If Process 2 alone is then executed, the value of x becomes -10 (the shared variable x changes from 30
to -10).
If both processes run at the same time, the final value of x depends on the order in which their
instructions interleave: it may end up as 30 or -10. This state of the variable x is called data
inconsistency (a race condition). Such problems can be addressed with hardware locks, or with a
synchronization tool called a semaphore.
Semaphores
A semaphore is a non-negative integer variable. The least value for a semaphore is zero (0), and its
maximum value can be anything. Semaphores support two operations, which together decide the value
of the semaphore.
The two Semaphore Operations are:
1. Wait ( )
2. Signal ( )
Wait Semaphore Operation
The Wait Operation is used for deciding the condition for the process to enter the critical state or wait
for execution of process. Here, the wait operation has many different names. The different names are:
1. Sleep Operation
2. Down Operation
3. Decrease Operation
4. P Function (most important alias name for wait operation)
The Wait Operation works on the basis of the semaphore (or mutex) value.
If the semaphore value is greater than zero (positive), the process can enter the critical section,
and the wait operation decrements the semaphore value by one.
If the semaphore value is equal to zero, the process has to wait for another process to exit the
critical section.
This function is only involved until the process enters the critical section; once the process is
inside, the P function or Wait operation has no further job to do.
Basic Algorithm of P Function or Wait Operation
1. P (Semaphore value)
2. {
3. If the value of the semaphore is greater than zero, decrement it and allow the process to enter.
4. If the value of the semaphore is zero, make the process wait.
5. }
Signal Semaphore Operation
The Signal Semaphore Operation is used to update the value of Semaphore. The Semaphore value is
updated when the new processes are ready to enter the Critical Section.
The Signal Operation is also known as:
1. Wake up Operation
2. Up Operation
3. Increase Operation
4. V Function (most important alias name for signal operation)
We know that the semaphore value is decreased by one by the wait operation when a process enters the
critical section. To counterbalance that decrement, the signal operation increments the semaphore value
by one when the process leaves, which allows more processes to enter the critical section.
The most important part is that this Signal Operation or V Function is executed only when the process
comes out of the critical section. The value of the semaphore cannot be incremented before the exit of
the process from the critical section.
Basic Algorithm of V Function or Signal Operation
1. V (Semaphore value)
2. {
3. If the process goes out of the critical section then add 1 to the semaphore value
4. Else keep calm until process exits
5. }
Types of Semaphores
There are two types of Semaphores.
They are:
1. Binary Semaphore
Here, there are only two values of Semaphore in Binary Semaphore Concept. The two values are 1 and
0.
If the Value of Binary Semaphore is 1, then the process has the capability to enter the critical section
area. If the value of Binary Semaphore is 0 then the process does not have the capability to enter the
critical section area.
2. Counting Semaphore
Here, the value of a counting semaphore ranges over two sets: values greater than or equal to one, and
the value zero.
If the value of the counting semaphore is greater than or equal to 1, a process has the capability to enter
the critical section area. If the value of the counting semaphore is 0, the process does not have the
capability to enter the critical section area.
Advantages of a Semaphore
o Semaphores are machine independent, since their implementation and code reside in the
machine-independent code area of the microkernel.
o They strictly enforce mutual exclusion and let processes enter the critical section one at a time
(in the case of binary semaphores).
o With blocking semaphores, no processor time is wasted on busy waiting to verify that a
condition is met before allowing a process access to the critical section.
o Semaphores allow very good management of resources.
o They forbid several processes from entering the critical section at once. Because mutual
exclusion is achieved this way, they are significantly more efficient than some other
synchronization approaches.
Disadvantages of a Semaphore
o With semaphores, high-priority processes may keep reaching the critical section before
low-priority processes, which can starve the latter.
o Because semaphores are somewhat complex, the wait and signal operations must be designed in
a way that avoids deadlocks.
o Programming with semaphores is challenging, and there is a danger that mutual exclusion won't
be achieved.
o The wait() and signal() operations must be carried out in the appropriate order to prevent
deadlocks.
Interprocess Communication- A process can be of two types:
 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes while a co-operating
process can be affected by other executing processes. Though one can think that those
processes, which are running independently, will execute very efficiently, in reality, there are
many situations when co-operative nature can be utilized for increasing computational speed,
convenience, and modularity. Inter-process communication (IPC) is a mechanism that allows
processes to communicate with each other and synchronize their actions. The communication
between these processes can be seen as a method of co-operation between them. Processes can
communicate with each other through both:
1. Shared Memory
2. Message passing
The basic structure of communication between processes via the shared memory method and via
the message passing method is described below.
An operating system can implement both methods of communication. First, we will discuss the
shared memory methods of communication and then message passing. Communication between
processes using shared memory requires processes to share some variable, and it completely
depends on how the programmer will implement it. One way of communication using shared
memory can be imagined like this: Suppose process1 and process2 are executing
simultaneously, and they share some resources or use some information from another process.
Process1 generates information about certain computations or resources being used and keeps it
as a record in shared memory. When process2 needs to use the shared information, it will check
in the record stored in shared memory and take note of the information generated by process1
and act accordingly. Processes can use shared memory for extracting information as a record
from another process as well as for delivering any specific information to other processes.
Let’s discuss an example of communication between processes using the shared memory
method.
i) Shared Memory Method
Ex: Producer-Consumer problem
There are two processes: Producer and Consumer. The producer produces some items and the
Consumer consumes that item. The two processes share a common space or memory location
known as a buffer where the item produced by the Producer is stored and from which the
Consumer consumes the item if needed. There are two versions of this problem: the first one is
known as the unbounded buffer problem in which the Producer can keep on producing items
and there is no limit on the size of the buffer, the second one is known as the bounded buffer
problem in which the Producer can produce up to a certain number of items before it starts
waiting for Consumer to consume it. We will discuss the bounded buffer problem. First, the
Producer and the Consumer will share some common memory, then the producer will start
producing items. If the total produced item is equal to the size of the buffer, the producer will
wait to get it consumed by the Consumer. Similarly, the consumer will first check for the
availability of the item. If no item is available, the Consumer will wait for the Producer to
produce it. If there are items available, Consumer will consume them.
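The bounded-buffer logic described above can be sketched as a fixed-size ring buffer. This single-threaded sketch models "waiting" by returning -1 when the producer finds the buffer full or the consumer finds it empty; a real implementation would block on semaphores instead. The names produce() and consume() are illustrative.

```c
#include <stddef.h>

#define BUF_SIZE 4

/* Bounded buffer shared by the Producer and the Consumer. */
static int buffer[BUF_SIZE];
static size_t in_pos = 0, out_pos = 0, count = 0;

int produce(int item)
{
    if (count == BUF_SIZE)
        return -1;                       /* buffer full: producer must wait */
    buffer[in_pos] = item;
    in_pos = (in_pos + 1) % BUF_SIZE;
    count++;
    return 0;
}

int consume(int *item)
{
    if (count == 0)
        return -1;                       /* buffer empty: consumer must wait */
    *item = buffer[out_pos];
    out_pos = (out_pos + 1) % BUF_SIZE;
    count--;
    return 0;
}
```

The count variable captures the two waiting conditions: the producer waits when count equals the buffer size, and the consumer waits when count is zero.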
ii) Message Passing Method
Now we will discuss communication between processes via message passing. In this method,
processes communicate with each other without using any kind of shared memory. If two
processes p1 and p2 want to communicate with each other, they proceed as follows:
 Establish a communication link (if a link already exists, no need to establish it again.)
 Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
The message size can be of fixed size or of variable size. If it is of fixed size, it is easy for an
OS designer but complicated for a programmer and if it is of variable size then it is easy for a
programmer but complicated for the OS designer. A standard message can have two
parts: header and body.
The header part is used for storing the message type, destination id, source id, message length, and
control information. The control information includes things like what to do if the receiver runs out
of buffer space, a sequence number, and a priority. Generally, messages are sent in FIFO style.
Advantages of IPC:
1. Enables processes to communicate with each other and share resources, leading to
increased efficiency and flexibility.
2. Facilitates coordination between multiple processes, leading to better overall system
performance.
3. Allows for the creation of distributed systems that can span multiple computers or
networks.
4. Can be used to implement various synchronization and communication protocols, such as
semaphores, pipes, and sockets.
Disadvantages of IPC:
1. Increases system complexity, making it harder to design, implement, and debug.
2. Can introduce security vulnerabilities, as processes may be able to access or modify data
belonging to other processes.
3. Requires careful management of system resources, such as memory and CPU time, to
ensure that IPC operations do not degrade overall system performance.
4. Can lead to data inconsistencies if multiple processes try to access or modify the same
data at the same time.
Overall, the advantages of IPC outweigh the disadvantages, as it is a necessary mechanism
for modern operating systems and enables processes to work together and share resources
in a flexible and efficient manner. However, care must be taken to design and implement
IPC systems carefully, in order to avoid potential security vulnerabilities and performance
issues.
4.5 Shared Memory, Client-Server Properties, Stream Pipes
4.5.1 Shared Memory- Every process has a dedicated address space in order to store data. If a
process wants to share some data with another process, it cannot directly do so since they have
different address spaces. In order to share some data, a process takes up some of the address
space as shared memory space. This shared memory can be accessed by the other process to
read/write the shared data.
Working of Shared Memory
Let us consider two processes P1 and P2 that want to perform Inter-process communication
using a shared memory.
P1 has an address space, let us say A1 and P2 has an address space, let us say A2. Now, P1
takes up some of the available address space as a shared memory space, let us say S1. Since P1
has taken up this space, it can decide which other processes can read and write data from the
shared memory space.
For now, we will assume that P1 has given only reading rights to other processes with respect to
the shared memory. So, the flow of Inter-process communication will be as follows:
 Process P1 takes up some of the available space as shared memory S1
 Process P1 writes the data to be shared in S1
 Process P2 reads the shared data from S1
Working of shared memory
Now, let us assume that P1 has given write rights to P2 as well. The communication then flows in
both directions: either process may write data into S1 and read what the other process has written.
Since P1 took up the space for shared memory i.e. since process P1 is the creator process, only
it has the right to destroy the shared memory as well.
Use Cases of Shared Memory
 Inter-Process Communication: Shared memory is primarily used in IPC where two
processes need a shared address space in order to exchange data.
 Parallel Processing: Multiple processes can share and modify data in the shared address
space, thereby speeding up computations.
 Databases: Shared memory is used in databases, in the form of cache, so that reading and
writing of data can be much faster
 Graphics and Multimedia Applications: CPU and GPU can access data concurrently which
is helpful in tasks such as video manipulation and processing.
 Distributed Systems: Two different machines can access data from a shared space and
work as a single system.
Advantages
 Shared memory is one of the fastest means of IPC since it avoids overheads.
 Easy access to data once set up.
 It is memory efficient as processes do not need to separately store shared data.
Disadvantages
 Since it is operating system specific, it is difficult to implement
common synchronization and authorization techniques.
 Memory leaks can take place.
 If the processes wait indefinitely for each other in order to release the shared memory
space, a deadlock can occur.
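A compact System V sketch of the flow above: the creator process allocates a private segment, and two attachments of the same segment observe each other's writes. The helper name shm_roundtrip() is illustrative; in a real program the two attachments would belong to two different processes.

```c
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Create a private System V segment, attach it twice, and show that a
   write through one mapping is visible through the other. */
int shm_roundtrip(const char *in, char *out, size_t n)
{
    int id = shmget(IPC_PRIVATE, 4096, 0600 | IPC_CREAT);
    if (id == -1)
        return -1;

    char *writer = shmat(id, NULL, 0);   /* "P1" view of segment S1 */
    char *reader = shmat(id, NULL, 0);   /* "P2" view of the same segment */
    if (writer == (void *)-1 || reader == (void *)-1)
        return -1;

    snprintf(writer, 4096, "%s", in);    /* P1 writes the shared data */
    snprintf(out, n, "%s", reader);      /* P2 reads the same bytes */

    shmdt(writer);
    shmdt(reader);
    shmctl(id, IPC_RMID, NULL);          /* creator destroys the segment */
    return 0;
}
```

The final shmctl(IPC_RMID) call mirrors the rule above: only the creator process destroys the shared memory it set up.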
Client-Server Properties-
 Services: A server operating system can be used to provide services to multiple clients. A
client operating system obtains services from a server.
 Users: A server OS can serve multiple clients at a time. A client OS serves a single user at
a time.
 Complexity: A server OS is a complex operating system. A client OS is a simple operating
system.
 Where it runs: A server OS runs on the server. A client OS runs on client devices such as
laptops and desktop computers.
 Design: A server OS is designed to be used on a server. A client OS operates within a
desktop.
 Security: A server OS provides more security. A client OS provides less security.
 Processing power: A server OS has greater processing power. A client OS has less
processing power.
 Stability: A server OS is more stable. A client OS is less stable.
 Efficiency: A server OS is highly efficient. A client OS is less efficient.
 Examples: Server OS - Red Hat Enterprise Linux. Client OS - Windows, Android.
Characteristics of Server OS
 Administrators can access the server through a CLI or GUI.
 It manages and monitors client PCs and their operating systems.
 It hosts web and business applications.
 Most administrative tasks can be carried out using OS commands.
 It provides a centralized interface for handling security, user management, and other
administrative duties.
Characteristics of Client OS
 Graphical User Interface (GUI): A client OS generally provides a graphical user
interface that lets users interact with the operating system and applications through
visual elements such as windows, icons, menus, and buttons.
 Application Support: A client OS supports a diverse range of applications and software
tools used by end users for productivity, communication, entertainment, and personal
tasks, including web browsers, email clients, office suites, multimedia players, and
games.
 Device Compatibility: A client OS is designed to work with a variety of hardware
devices and peripherals commonly used by end users, such as printers, scanners,
cameras, and input devices like keyboards, mice, and touchscreens.
 Ease of Use: A client OS emphasizes simplicity, with intuitive interfaces and
user-friendly features that allow non-technical users to perform tasks such as browsing
the web, sending emails, creating documents, and managing files.

Stream Pipes- A stream pipe is a UNIX interprocess communication (IPC) facility that allows
processes on the same computer to communicate with each other.
Stream-pipe connections have the following advantages:
 Unlike shared-memory connections, stream pipes do not pose the security risk of being
overwritten or read by other programs that explicitly access the same portion of shared
memory.
 Unlike shared-memory connections, stream-pipe connections allow distributed
transactions between database servers that are on the same computer.
Stream-pipe connections have the following disadvantages:
 Stream-pipe connections might be slower than shared-memory connections on some
computers.
 Stream pipes are not available on all platforms.
 When you use shared memory or stream pipes for client/server communications,
the hostname entry is ignored.
4.6 Passing File Descriptors, An Open Server-Version 1, Client-Server Connection
Functions.
4.6.1 Passing File Descriptors- File Descriptors are non-negative integers that act as an
abstract handle to “Files” or I/O resources (like pipes, sockets, or data streams). These
descriptors help us interact with these I/O resources and make working with them very easy.
Every process has its own set of file descriptors. Most processes (except for some daemons)
have these three file descriptors:
 stdin: Standard Input denoted by the File Descriptor 0
 stdout: Standard Output denoted by the File Descriptor 1
 stderr: Standard Error denoted by File Descriptor 2
List All File Descriptors Of A Process
Every process has its own set of File Descriptors. To list them all, we need to find its PID. For
example, if I want to check all the File Descriptors under the process ‘i3‘
First, we need to find the PID of the process by using the ps command:

$ ps aux | grep i3

Suppose this reports PID 576. Now, to list all the file descriptors under a particular PID,
the syntax would be:

$ ls -la /proc/<PID>/fd

Working with File Descriptors in C

Here is a small C program that shows how file descriptors are used directly:
1  #include <unistd.h>
2  #include <string.h>
3  int main(void)
4  {
5      char buff[20] = {0};      /* zero-filled so the input is NUL-terminated */
6      char hello[40] = "I Am ";
7      read(0, buff, sizeof(buff) - 1);
8      strcat(hello, buff);
9      write(1, hello, strlen(hello));
10     return 0;
11 }
Here we read characters from stdin using file descriptor 0 [ read() at line 7 ], concatenate
them with a message [ strcat() at line 8 ], and then write the resulting string to the I/O
stream referred to by file descriptor 1, i.e., stdout [ write() at line 9 ].
Compiling and running our program :
$ gcc fd.c -o out
$ ./out
Groot
I Am Groot
Client-Server Connection Functions- Client-server communication refers to the exchange of
data and services among multiple machines or processes. In this model, one process or machine
acts as a client, requesting a service or data, and another machine or process acts as a
server, providing those services or data to the client. This communication model is widely
used in computing environments such as distributed systems, Internet applications, and
networked applications. The communication between server and client takes place through
different protocols and mechanisms.
Different Ways of Client-Server Communication-
Client-server communication can be implemented in several ways:
1. Sockets Mechanism
2. Remote Procedure Call
3. Message Passing
4. Inter-process Communication
5. Distributed File Systems
Sockets Mechanism
Sockets are the endpoints of communication between two machines. They provide a way for
processes to communicate with each other, either on the same machine or over the Internet.
Sockets establish a communication connection between the server and the client through which
data can be transferred bidirectionally.
Client Server Communication using Sockets
Remote Procedure Call (RPC)
Remote Procedure Call is a protocol (a set of rules) that allows a client to execute a
procedure call on a remote server as if it were a local procedure call. RPC is commonly used
in client-server architectures and provides a high level of abstraction to the programmer.
The client program issues a procedure call, which is translated into a message and sent over
the network to the server; the server executes the call and sends the result back to the
client machine.

Remote Procedure Call Process


Message Passing
Message passing is a communication method in which machines communicate with one another by
sending and receiving messages. This approach is commonly used in parallel and distributed
systems to exchange data between them.

Message Passing Process


Inter process Communication
Inter-process communication, also called IPC, allows communication between processes within
the same machine. IPC enables data sharing and synchronization between different processes
running concurrently on an operating system, and includes shared memory, message queues,
semaphores, and pipes, among others.

Inter process Communication Process


Distributed File Systems-
Distributed file systems provide access to files from multiple machines in a network.
Clients can access and manipulate files stored on a remote server through a standard
interface; examples include Network File System (NFS) and Server Message Block (SMB).

Distributed File System Process
