Os 4
4.1 Virtual Memory Concepts, Virtual Address Space and Paging Scheme
4.2 Pure Segmentation and Segmentation with Paging Scheme: Hardware Support and
Implementation Details
4.3 Memory Fragmentation, Overview of IPC Methods: Pipes, popen and pclose Functions
4.4 Co-processes, FIFOs, System V IPC, Message Queues, Semaphores, Interprocess
Communication
4.5 Shared Memory, Client-Server Properties, Stream Pipes,
4.6 Passing File Descriptors, An Open Server-Version 1, Client-Server Connection
Functions.
4.1 Virtual Memory Concepts, Virtual Address Space and Paging Scheme
4.1.1 Virtual Memory Concepts- Virtual memory is a storage scheme that gives the user the illusion of
having a very big main memory. This is done by treating a part of secondary memory as main memory.
With this, the user can load processes bigger than the available main memory, under the illusion that
enough memory is available to load the process.
Instead of loading one big process into the main memory, the operating system loads different parts of
more than one process into the main memory.
By doing this, the degree of multiprogramming is increased, and therefore the CPU utilization is also
increased.
How Virtual Memory Works?
Virtual memory has become quite common in modern systems. In this scheme, whenever some pages
need to be loaded into main memory for execution and the memory is not available for that
many pages, then instead of stopping the pages from entering the main memory, the OS
searches for the RAM areas that have been least used in recent times, or that are not referenced, and copies
them into secondary memory to make space for the new pages in main memory.
Since all of this happens automatically, it makes the computer appear to have
unlimited RAM.
Demand Paging
Demand Paging is a popular method of virtual memory management. In demand paging, the pages of a
process which are least used get stored in the secondary memory.
A page is copied to the main memory when it is demanded, i.e., when a page fault occurs. There are various
page replacement algorithms which are used to determine the pages which will be replaced. We will
discuss each of them later in detail.
Snapshot of a virtual memory management system
Let us assume 2 processes, P1 and P2, each containing 4 pages. Each page size is 1 KB. The main
memory contains 8 frames of 1 KB each. The OS resides in the first two partitions. In the third partition,
the 1st page of P1 is stored, and the other frames are also shown as filled with the different pages of
the processes in the main memory.
The page tables of both processes are 1 KB each, and therefore each can fit in one frame.
The page tables of both processes contain various information that is also shown in the image.
The CPU contains a register which holds the base address of the page table: 5 in the case of P1 and
7 in the case of P2. This page table base address is added to the page number of the logical address
when it comes to accessing the actual corresponding page table entry.
4.1.3 Paging- Paging is a storage mechanism that allows the OS to retrieve processes from secondary
storage into main memory in the form of pages. In the paging method, main memory is divided
into small fixed-size blocks of physical memory called frames. The size of a frame should be
kept the same as that of a page to have maximum utilization of the main memory and to avoid external
fragmentation. Paging is used for faster access to data, and it is a logical concept.
Example of Paging in OS-
For example, if the main memory size is 16 KB and the frame size is 1 KB, then the main memory will be
divided into a collection of 16 frames of 1 KB each.
There are 4 separate processes in the system, A1, A2, A3, and A4, of 4 KB each. Here, all the
processes are divided into pages of 1 KB each so that the operating system can store one page in one frame.
At the beginning, all the frames are empty, so all the pages of the processes
get stored in a contiguous way.
In this example you can see that A2 and A4 move to the waiting state after some time. Therefore,
eight frames become empty, and other pages can be loaded into those empty blocks. The process A5, of
size 8 pages (8 KB), is waiting in the ready queue.
In this example, you can see that there are eight non-contiguous frames available in
memory, and paging offers the flexibility of storing the process at different places. This allows us to
load the pages of process A5 in place of A2 and A4.
What is Paging Protection?
The paging process is protected by inserting an additional bit called the
Valid/Invalid bit. Memory protection in paging is achieved by associating protection bits with
each page. These bits are stored in each page table entry and specify the protection on the
corresponding page.
Advantages of Paging
Here are the advantages of using the paging method:
Easy-to-use memory management algorithm
No external fragmentation
Swapping is easy between equal-sized pages and page frames.
Disadvantages of Paging
Here are the drawbacks/cons of paging:
May cause internal fragmentation
Page tables consume additional memory.
Multi-level paging may lead to memory reference overhead.
4.2 Pure Segmentation and Segmentation with Paging Scheme: Hardware Support and
Implementation Details-
4.2.1 Segmentation- A process is divided into segments. The chunks that a program is divided
into, which are not necessarily all of the same size, are called segments. Segmentation gives
the user's view of the process, which paging does not provide. Here the user's view is
mapped to physical memory.
Types of Segmentation in Operating Systems
Virtual Memory Segmentation: Each process is divided into a number of segments, but the
segmentation is not done all at once. This segmentation may or may not take place at the
run time of the program.
Simple Segmentation: Each process is divided into a number of segments, all of which are
loaded into memory at run time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical addresses in segmentation.
A table stores the information about all such segments and is called Segment Table.
What is a Segment Table?
It maps a two-dimensional logical address into a one-dimensional physical address. Each of its table
entries has:
Base Address: It contains the starting physical address where the segment resides in
memory.
Segment Limit: Also known as segment offset. It specifies the length of the segment.
Translation of a two-dimensional logical address to a one-dimensional physical address:
4.2.2 Segmentation with Paging Scheme: Hardware Support and Implementation Details-
Paged Segmentation and Segmented Paging are two different memory management
techniques that combine the benefits of paging and segmentation.
Both Paged Segmentation and Segmented Paging provide the benefits of paging, such as
improved memory utilization, reduced fragmentation, and increased performance. They
also provide the benefits of segmentation, such as increased flexibility in memory
allocation, improved protection and security, and reduced overhead in memory
management.
However, both techniques can also introduce additional complexity and overhead in the
memory management process. The choice between Paged Segmentation and Segmented
Paging depends on the specific requirements and constraints of a system, and often requires
trade-offs between flexibility, performance, and overhead.
The size of the page table can be reduced by creating a page table for each segment. To
accomplish this, hardware support is required. The address provided by the CPU will now be
partitioned into segment no., page no., and offset.
The memory management unit (MMU) will use the segment table, which will contain the address
of the page table (base) and the limit. The page table will point to the page frames of the segments in
main memory.
Advantages of Segmented Paging
1. The page table size is reduced, as pages are present only for the data of segments, hence reducing the
memory requirements.
2. Since the entire segment need not be swapped out, swapping out to virtual memory becomes
easier.
Paged Segmentation
1. In segmented paging, not every process has the same number of segments, and the segment
tables can be large in size, which will cause external fragmentation due to the varying
segment table sizes. To solve this problem, we use paged segmentation, which requires
the segment table itself to be paged. The logical address generated by the CPU will now consist
of page no. #1, segment no., page no. #2, and offset.
2. Even with segmented paging, the page table can have a lot of invalid pages. Instead of
using multi-level paging along with segmented paging, the problem of a larger page table
can be solved by directly applying multi-level paging instead of segmented paging.
Advantages of Paged Segmentation
1. No external fragmentation.
2. As with segmented paging, the entire segment need not be swapped out.
3. Increased flexibility in memory allocation: Paged Segmentation allows for a flexible allocation of
memory, where each segment can have a different size.
4. Improved protection and security: Paged Segmentation provides better protection and security by
isolating each segment and its pages, preventing a single segment from affecting the entire
process's memory.
5. Increased program structure: Paged Segmentation provides a natural program structure, with each
segment representing a different logical part of a program.
6. Improved error detection and recovery: Paged Segmentation enables the detection of memory
errors and the recovery of individual segments, rather than the entire process's memory.
7. Reduced overhead in memory management: Paged Segmentation reduces the overhead in memory
management by eliminating the need to maintain a single, large page table for the entire process's
memory.
8. Improved memory utilization: Paged Segmentation can improve memory utilization by reducing
fragmentation and allowing for the allocation of larger blocks of contiguous memory to each
segment.
Disadvantages of Paged Segmentation
1. The extra level of paging at the first stage adds to the delay in memory access.
2. Increased code size: Paged Segmentation can result in increased code size, as the additional code
required to manage the multiple page tables can take up valuable memory space.
4.3 Memory Fragmentation, Overview of IPC Methods: Pipes, popen and pclose Functions
4.3.1 Memory Fragmentation- Segmentation divides processes into smaller subparts known
as modules. The divided segments need not be placed in contiguous memory. Since there is no
contiguous memory allocation, internal fragmentation does not take place. The length of each
segment of the program is decided by the purpose of that segment in the user
program.
We can say that logical address space or the main memory is a collection of segments.
Types of Segmentation
Segmentation can be divided into two types:
1. Virtual Memory Segmentation: Virtual memory segmentation divides a process
into n segments. Not all the segments are divided at one time. Virtual memory
segmentation may or may not take place at the run time of a program.
2. Simple Segmentation: Simple segmentation also divides a process into n segments,
but the segmentation is done all together at once. Simple segmentation takes place at the run time
of a program. Simple segmentation may scatter the segments into memory such that one segment
of the process can be at a different location than the others (in a non-contiguous manner).
Why Segmentation is required?
Segmentation came into existence because of problems in the paging technique. In the
paging technique, a function or piece of code is divided into pages without considering that related
parts of the code can also get divided. Hence, for the process in execution, the CPU must load more than one
page into the frames so that the complete related code is there for execution. Paging takes more pages for
a process to be loaded into the main memory. Hence, segmentation was introduced, in which the code is
divided into modules so that related code can be kept together in one single block.
Other memory management techniques also have an important drawback: the actual view of physical
memory is separated from the user's view of physical memory. Segmentation helps in overcoming this
problem by dividing the user's program into segments according to the specific need.
Advantages of Segmentation in OS
There is no internal fragmentation in segmentation.
A segment table is used to store the records of the segments. The segment table itself consumes
less memory as compared to a page table in paging.
Segmentation provides better CPU utilization as an entire module is loaded at once.
Segmentation is near to the user's view of physical memory. Segmentation allows users to partition
the user programs into modules. These modules are nothing but the independent codes of the current
process.
The Segment size is specified by the user but in Paging, the hardware decides the page size.
Segmentation can be used to separate the security procedures and data.
Disadvantages of Segmentation in OS
During the swapping of processes, the free memory space is broken into small pieces, which is a
major problem in the segmentation technique (external fragmentation).
Time is required to fetch instructions or segments.
The swapping of segments of unequal sizes is not easy.
There is an overhead of maintaining a segment table for each process as well.
When a process is completed, it is removed from the main memory. After the execution of the
current process, the unevenly sized segments of the process are removed from the main memory.
Since the segments are of uneven length it creates unevenly sized holes in the main memory. These
holes in the main memory may remain unused due to their very small size.
Characteristics of Segmentation in OS
Some of the characteristics of segmentation are discussed below:
Segmentation partitions the program into variable-sized blocks or segments.
Partition size depends upon the type and length of modules.
Segmentation is done considering that related data should come in a single segment.
Segments of the memory may or may not be stored in a contiguous manner depending upon the
segmentation technique chosen.
Operating System maintains a segment table for each process.
Example of Segmentation
Let's take the example of segmentation to understand how it works.
Let us assume we have five segments namely: Segment-0, Segment-1, Segment-2, Segment-3, and
Segment-4. Initially, before the execution of the process, all the segments of the process are stored in the
physical memory space. We have a segment table as well. The segment table contains the beginning entry
address of each segment (denoted by base). The segment table also contains the length of each of the
segments (denoted by limit).
As shown in the image below, the base address of Segment-0 is 1400 and its length is 1000, the base
address of Segment-1 is 6300 and its length is 400, the base address of Segment-2 is 4300 and its length
is 400, and so on.
The pictorial representation of the above segmentation with its segment table is shown below.
4.3.2 Overview of IPC Methods- IPC allows for a standard connection that is computer- and OS-independent.
Interprocess communication (IPC) refers to the mechanisms and techniques used
by operating systems to allow different processes to communicate with each other.
Pipes- Pipes are a type of IPC (Inter-Process Communication) technique that allows two or more
processes to communicate with each other by creating a unidirectional or bidirectional channel
between them. A pipe is a virtual communication channel that allows data to be transferred
between processes, either one-way or two-way. Pipes can be implemented using system calls in
most modern operating systems, including Linux, macOS, and Windows.
Here are some advantages and disadvantages of using pipes as an IPC technique:
Advantages:
1. Simplicity: Pipes are a simple and straightforward way for processes to communicate with
each other.
2. Efficiency: Pipes are an efficient way for processes to communicate, as they can transfer
data quickly and with minimal overhead.
3. Reliability: Pipes are a reliable way for processes to communicate, as they can detect
errors in data transmission and ensure that data is delivered correctly.
4. Flexibility: Pipes can be used to implement various communication protocols, including
one-way and two-way communication.
Disadvantages:
1. Limited capacity: Pipes have a limited capacity, which can limit the amount of data that
can be transferred between processes at once.
2. Unidirectional: In a unidirectional pipe, only one process can send data at a time, which
can be a disadvantage in some situations.
3. Synchronization: In a bidirectional pipe, processes must be synchronized to ensure that
data is transmitted in the correct order.
4. Limited scalability: Pipes are limited to communication between a small number of
processes on the same computer, which can be a disadvantage in large-scale distributed
systems.
Overall, pipes are a useful IPC technique for simple and efficient communication between
processes on the same computer. However, they may not be suitable for large-scale
distributed systems or situations where bidirectional communication is required.
4.3.3 popen Function- The popen() function executes the command specified by the string command. It
creates a pipe between the calling program and the executed command, and returns a pointer to
a stream that can be used to either read from or write to the pipe.
The environment of the executed command will be as if a child process were created within the
popen() call using fork(), and the child invoked the sh utility using the call:
execl("/bin/sh", "sh", "-c", command, (char *)0);
The popen() function ensures that any streams from previous popen() calls that remain open in
the parent process are closed in the child process.
The mode argument to popen() is a string that specifies I/O mode:
1. If mode is r, file descriptor STDOUT_FILENO will be the writable end of the pipe when
the child process is started. The file descriptor fileno(stream) in the calling process,
where stream is the stream pointer returned by popen(), will be the readable end of the
pipe.
2. If mode is w, file descriptor STDIN_FILENO will be the readable end of the pipe when
the child process is started. The file descriptor fileno(stream) in the calling process,
where stream is the stream pointer returned by popen(), will be the writable end of the
pipe.
3. If mode is any other value, a NULL pointer is returned and errno is set to EINVAL.
After popen(), both the parent and the child process will be capable of executing independently
before either terminates.
Because open files are shared, a mode r command can be used as an input filter and a
mode w command as an output filter.
Buffered reading before opening an input filter (that is, before popen()) may leave the standard
input of that filter mispositioned. Similar problems with an output filter may be prevented by
buffer flushing with fflush().
A stream opened with popen() should be closed by pclose().
The behavior of popen() is specified for values of mode of r and w. mode values
of rb and wb are supported but are not portable.
If the shell command cannot be executed, the child termination status returned by pclose() will
be as if the shell command terminated using exit(127) or _exit(127).
If the application calls waitpid() with a pid argument greater than 0, and it still has a stream that
was created with popen() open, it must ensure that pid does not refer to the process started by
popen()
The stream returned by popen() will be designated as byte-oriented.
Special behavior for file tagging and conversion: When the FILETAG(,AUTOTAG) runtime
option is specified, the pipe opened for communication between the parent and child process by
popen() will be tagged with the writer's program CCSID upon first I/O. For example, if
popen(some_command, "r") were specified, then the stream returned by popen() would be
tagged in the child process' program CCSID.
Returned value
If successful, popen() returns a pointer to an open stream that can be used to read or write to a
pipe.
If unsuccessful, popen() returns a NULL pointer and sets errno to one of the following values:
EINVAL: The mode argument is invalid.
popen() may also set errno values as described by spawn(), fork(), or pipe().
4.3.4 pclose Function- The pclose() function closes a stream that was opened by popen(), waits
for the command specified as an argument in popen() to terminate, and returns the status of the
process that was running the shell command. However, if a call caused the termination status to
be unavailable to pclose(), then pclose() returns -1 with errno set to ECHILD to report this
situation; this can happen if the application calls one of the following functions:
wait()
waitid()
waitpid() with a pid argument less than or equal to the process ID of the shell command
any other function that could do one of the above
In any case, pclose() will not return before the child process created by popen() has
terminated.
If the shell command cannot be executed, the child termination status returned by pclose()
will be as if the shell command terminated using exit(127) or _exit(127).
The pclose() function will not affect the termination status of any child of the calling process
other than the one created by popen() for the associated stream.
If the argument stream to pclose() is not a pointer to a stream created by popen(), the
termination status returned will be -1.
Threading Behavior: The pclose() function can be executed from any thread within the parent
process.
Returned value
If successful, pclose() returns the termination status of the shell command.
If unsuccessful, pclose() returns -1 and sets errno to one of the following values:
ECHILD: The status of the child process could not be obtained.
4.4 Co-processes, FIFOs, System V IPC, Message Queues, Semaphores, Interprocess
Communication
4.4.1 Co-processes- In an operating system, everything revolves around the process and how the
process goes through several different states. So in this section, we are going to discuss one type of
process called the cooperating process. In the operating system there are two types of processes:
Independent Process: Independent processes are those whose task does not
depend on any other process.
Cooperating Process: Cooperating processes are those that depend on other
processes. They work together to achieve a common task in an operating
system. These processes interact with each other by sharing resources such as CPU,
memory, and I/O devices to complete the task.
So now let’s discuss the concept of cooperating processes and how they are used in operating
systems.
Inter-Process Communication (IPC): Cooperating processes interact with each other via
Inter-Process Communication (IPC). As they interact with each other and share
resources, the running tasks get synchronized and the possibility of
deadlock decreases. To implement IPC there are many options, such as pipes, message
queues, semaphores, and shared memory.
Concurrent execution: These cooperating processes execute simultaneously, which is
arranged by the operating system scheduler, which selects a process from the ready queue
to go to the running state. Because of the concurrent execution of several processes, the
completion time decreases.
Resource sharing: In order to do their work, cooperating processes cooperate by sharing
resources including CPU, memory, and I/O hardware. When several processes share
resources in turn, the synchronization overhead increases, and the response time
of a process may also increase.
Deadlocks: As cooperating processes share resources, a deadlock condition may arise.
For example, if process P1 holds resource A and waits for B, and process P2
holds B and waits for A, a deadlock occurs between the cooperating processes.
To avoid deadlocks, operating systems typically use algorithms such as the Banker's
algorithm to manage and allocate resources to processes.
Process scheduling: Cooperating processes run simultaneously, but after a context switch,
which process should be next on the CPU to execute is decided by the scheduler. The scheduler
does this using scheduling algorithms such as Round-Robin, FCFS, SJF, Priority,
etc.
Message Queue- A message queue is an inter-process communication (IPC) mechanism that
allows processes to exchange data in the form of messages. It allows
processes to communicate asynchronously by sending messages to each other; the
messages are stored in a queue, waiting to be processed, and are deleted after being processed.
The message queue is a buffer that is used in non-shared memory environments, where tasks
communicate by passing messages to each other rather than by accessing shared variables.
Tasks share a common buffer pool. The message queue is an unbounded FIFO queue that is
protected from concurrent access by different threads.
Events are asynchronous. When a class sends an event to another class, rather than sending it
directly to the target reactive class, it passes the event to the operating system message queue.
The target class retrieves the event from the head of the message queue when it is ready to
process it. Synchronous events can be passed using triggered operations instead.
Many tasks can write messages into the queue, but only one can read messages from the
queue at a time. The reader waits on the message queue until there is a message to process.
Messages can be of any size.
Functions of Message Queue-
There are four important functions that we will use in the programs to achieve IPC using
message queues.
1. int msgget (key_t key, int msgflg);
We use the msgget function to create and access a message queue. It takes two parameters.
o The first parameter is a key that names a message queue in the system.
o The second parameter is used to assign permission to the message queue and is ORed with
IPC_CREAT to create the queue if it doesn't already exist. If the queue already exists, then
IPC_CREAT is ignored. On success, the msgget function returns a positive number which
is the queue identifier, while on failure, it returns -1.
2. int msgsnd (int msqid, const void *msg_ptr, size_t msg_sz, int msgflg);
This function allows us to add a message to the message queue.
o The first parameter (msgid) is the message queue identifier returned by the msgget
function.
o The second parameter is the pointer to the message to be sent, which must start with a long
int type.
o The third parameter is the size of the message. It must not include the long int message
type.
o The fourth and final parameter controls what happens if the message queue is full or the
system limit on queued messages is reached. On success, the function returns 0 and places
a copy of the message data on the message queue. On failure, it returns -1.
There are two constraints related to the structure of the message. First, it must be smaller than
the system limit and, second, it must start with a long int. This long int is used as a message
type in the receive function. The best structure of the message is shown below.
struct my_message {
    long int message_type;
    /* The data you wish to transfer */
};
Since the message_type is used in message reception, you can't simply ignore it. You must
declare your data structure to include it, and it's also wise to initialize it to contain a known
value.
3. int msgrcv (int msqid, void *msg_ptr, size_t msg_sz, long int msgtype, int msgflg);
This function retrieves messages from a message queue.
o The first parameter (msgid) is the message queue identifier returned by the msgget
function.
o As explained above, the second parameter is the pointer to the message to be received,
which must start with a long int type.
o The third parameter is the size of the message.
o The fourth parameter allows implementing priority. If the value is 0, the first available
message in the queue is retrieved. If the value is greater than 0, then the first message
with that message type is retrieved. If the value is less than 0, then the first message
with the lowest type less than or equal to the absolute value of msgtype is retrieved. In
simple words, a 0 value means receive the messages in the order in which they were sent,
and non-zero means receive a message with a specific message type.
o The final parameter controls what happens if no message of the requested type is waiting.
On success, the function returns the number of bytes placed in the receive buffer. On
failure, it returns -1.
System V vs POSIX IPC-
SYSTEM V: Shared memory interface calls are shmget(), shmat(), shmdt(), and shmctl().
The size of a System V shared memory segment is fixed at the time of creation (via shmget()).
POSIX: Shared memory interface calls are shm_open(), mmap(), and shm_unlink().
We can use ftruncate() to adjust the size of the underlying object, and then re-create the
mapping using munmap() and mmap() (or the Linux-specific mremap()).
Semaphore- A semaphore is a synchronization solution, built on hardware-supported atomic
operations, that is applied to the critical section problem.
What is a Critical Section Problem?
The critical section is a code snippet that contains a few shared variables. These
variables can be accessed by several processes, with one condition:
only one process may be inside the critical section at a time. The remaining processes that
want to enter the critical section have to wait for the current process to complete its work and
leave before they can enter.
Critical Section Representation
Such problems can be prevented by the solution named semaphores.
Semaphores
The Semaphore is just a normal integer. The Semaphore cannot be negative. The least value for a
Semaphore is zero (0). The Maximum value of a Semaphore can be anything. The Semaphores usually
have two operations. The two operations have the capability to decide the values of the semaphores.
The two Semaphore Operations are:
1. Wait ( )
2. Signal ( )
Wait Semaphore Operation
The wait operation is used for deciding whether a process may enter the critical section or must
wait. The wait operation has many different names:
1. Sleep Operation
2. Down Operation
3. Decrease Operation
4. P Function (the most important alias for the wait operation)
The wait operation works on the basis of the semaphore or mutex value.
Here, if the semaphore value is greater than zero (positive), then the process can enter the critical
section, and the wait operation decrements the semaphore value on entry.
If the semaphore value is equal to zero, then the process has to wait for another process to exit the
critical section.
This function acts only until the process enters the critical section. Once the process has entered,
the P function or wait operation has no further job to do.
Basic Algorithm of P Function or Wait Operation
1. P (Semaphore value)
2. {
3. If the value of the semaphore is greater than zero, decrement it and allow the process to enter.
4. If the value of the semaphore is zero, make the process wait.
5. }
Signal Semaphore Operation
The Signal Semaphore Operation is used to update the value of Semaphore. The Semaphore value is
updated when the new processes are ready to enter the Critical Section.
The Signal Operation is also known as:
1. Wake up Operation
2. Up Operation
3. Increase Operation
4. V Function (most important alias name for signal operation)
The semaphore value is decremented by one in the wait operation when a process enters the critical
section. To counterbalance that decrement, the signal operation increments the semaphore value when
the process leaves, which allows waiting processes to enter the critical section.
The most important part is that this Signal Operation or V Function is executed only when the process
comes out of the critical section. The value of the semaphore cannot be incremented before the exit of
the process from the critical section.
Basic Algorithm of V Function or Signal Operation
1. V (Semaphore value)
2. {
3. When the process goes out of the critical section, add 1 to the semaphore value.
4. }
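As a sketch, the P (wait) and V (signal) operations above can be demonstrated with Python's threading.Semaphore, whose acquire/release methods implement exactly this wait/signal pair. The function name critical_section_demo and the thread counts are illustrative, not part of any standard API:

```python
# A minimal sketch of wait (P) and signal (V) protecting a critical section.
import threading

def critical_section_demo(num_threads=8, increments=1000):
    sem = threading.Semaphore(1)   # binary semaphore, initial value 1
    counter = {"value": 0}

    def worker():
        for _ in range(increments):
            sem.acquire()          # wait / P: decrement, block if value is 0
            counter["value"] += 1  # critical section
            sem.release()          # signal / V: increment, wake a waiter

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter["value"]

if __name__ == "__main__":
    print(critical_section_demo())  # 8000 with the defaults
```

Without the acquire/release pair the increments would race and the final count could be lost; the semaphore guarantees one thread at a time in the critical section.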
Types of Semaphores
There are two types of Semaphores.
They are:
1. Binary Semaphore
A binary semaphore takes only two values: 1 and 0.
If the value of the binary semaphore is 1, a process may enter the critical section area. If the value is 0,
the process may not enter the critical section area.
2. Counting Semaphore
A counting semaphore can hold any non-negative value: either a value greater than or equal to one, or
zero.
If the value of the counting semaphore is greater than or equal to 1, a process may enter the critical
section area (the value counts how many more processes may enter). If the value is 0, the process must
wait.
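A counting semaphore initialized to N admits at most N processes into the guarded region at once. A minimal sketch using Python threads; the name counting_semaphore_demo and all counts are illustrative:

```python
# A counting semaphore limiting how many workers run the guarded region at once.
import threading
import time

def counting_semaphore_demo(slots=3, workers=10):
    sem = threading.Semaphore(slots)   # counting semaphore initialized to `slots`
    lock = threading.Lock()            # protects the bookkeeping counters
    state = {"active": 0, "peak": 0}

    def worker():
        sem.acquire()                  # wait: blocks once `slots` workers are inside
        with lock:
            state["active"] += 1
            state["peak"] = max(state["peak"], state["active"])
        time.sleep(0.01)               # simulate work in the guarded region
        with lock:
            state["active"] -= 1
        sem.release()                  # signal: lets a waiting worker in

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["peak"]               # never exceeds `slots`
```

With slots=1 this degenerates into the binary semaphore of the previous section.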
Advantages of a Semaphore
o Semaphores are machine-independent, since their implementation lives in the machine-independent
code area of the microkernel.
o They strictly enforce mutual exclusion and let processes enter the critical section one at a time (in
the case of binary semaphores).
o Blocking semaphores avoid the waste of busy waiting: no processor time is spent repeatedly
checking whether a condition is met before a process is allowed into the critical section.
o Semaphores allow very good management of resources.
o They forbid several processes from entering the critical section at once; because mutual exclusion is
achieved this way, they are significantly more efficient than many other synchronization approaches.
Disadvantages of a Semaphore
o With semaphores, it is possible for high-priority processes to reach the critical section before
low-priority processes.
o Because semaphores are somewhat complex, the wait and signal operations must be designed in a
way that avoids deadlocks.
o Programming with semaphores is challenging and error-prone, and there is a danger that mutual
exclusion will not actually be achieved.
o The wait ( ) and signal ( ) operations must be carried out in the appropriate order to prevent
deadlocks.
Interprocess Communication- A process can be of two types:
Independent process.
Co-operating process.
An independent process is not affected by the execution of other processes while a co-operating
process can be affected by other executing processes. Though one can think that those
processes, which are running independently, will execute very efficiently, in reality, there are
many situations when co-operative nature can be utilized for increasing computational speed,
convenience, and modularity. Inter-process communication (IPC) is a mechanism that allows
processes to communicate with each other and synchronize their actions. The communication
between these processes can be seen as a method of co-operation between them. Processes can
communicate with each other through both:
1. Shared Memory
2. Message passing
Figure 1 below shows a basic structure of communication between processes via the shared
memory method and via the message passing method.
An operating system can implement both methods of communication. First, we will discuss the
shared memory methods of communication and then message passing. Communication between
processes using shared memory requires processes to share some variable, and it completely
depends on how the programmer will implement it. One way of communication using shared
memory can be imagined like this: Suppose process1 and process2 are executing
simultaneously, and they share some resources or use some information from another process.
Process1 generates information about certain computations or resources being used and keeps it
as a record in shared memory. When process2 needs to use the shared information, it will check
in the record stored in shared memory and take note of the information generated by process1
and act accordingly. Processes can use shared memory for extracting information as a record
from another process as well as for delivering any specific information to other processes.
Let’s discuss an example of communication between processes using the shared memory
method.
i) Shared Memory Method
Ex: Producer-Consumer problem
There are two processes: Producer and Consumer. The producer produces some items and the
Consumer consumes that item. The two processes share a common space or memory location
known as a buffer where the item produced by the Producer is stored and from which the
Consumer consumes the item if needed. There are two versions of this problem: the first is known as
the unbounded buffer problem, in which the Producer can keep on producing items and there is no
limit on the size of the buffer; the second is known as the bounded buffer problem, in which the
Producer can produce up to a certain number of items before it starts waiting for the Consumer to
consume them. We will discuss the bounded buffer problem. First, the
Producer and the Consumer will share some common memory, then the producer will start
producing items. If the total produced item is equal to the size of the buffer, the producer will
wait to get it consumed by the Consumer. Similarly, the consumer will first check for the
availability of the item. If no item is available, the Consumer will wait for the Producer to
produce it. If there are items available, Consumer will consume them.
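The bounded-buffer scheme above can be sketched with two counting semaphores, one tracking free slots and one tracking filled slots. Names such as bounded_buffer_demo are illustrative, and threads stand in for the two processes:

```python
# Bounded-buffer Producer-Consumer using two counting semaphores and a mutex.
import threading

def bounded_buffer_demo(n_items=20, capacity=5):
    buffer = []
    empty = threading.Semaphore(capacity)  # counts free slots, initially `capacity`
    full = threading.Semaphore(0)          # counts filled slots, initially 0
    mutex = threading.Lock()               # protects the buffer itself
    consumed = []

    def producer():
        for i in range(n_items):
            empty.acquire()                # wait for a free slot
            with mutex:
                buffer.append(i)
            full.release()                 # signal: one more item available

    def consumer():
        for _ in range(n_items):
            full.acquire()                 # wait for an item
            with mutex:
                consumed.append(buffer.pop(0))
            empty.release()                # signal: one more free slot

    p = threading.Thread(target=producer)
    c = threading.Thread(target=consumer)
    p.start(); c.start()
    p.join(); c.join()
    return consumed                        # items arrive in production order
```

The producer blocks on `empty` once the buffer holds `capacity` items, and the consumer blocks on `full` when the buffer is empty, exactly matching the waiting behavior described above.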
ii) Message Passing Method
Now we will discuss communication between processes via message passing. In this method,
processes communicate with each other without using any kind of
shared memory. If two processes p1 and p2 want to communicate with each other, they proceed
as follows:
Establish a communication link (if a link already exists, no need to establish it again.)
Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
The message size can be of fixed size or of variable size. If it is of fixed size, it is easy for an
OS designer but complicated for a programmer and if it is of variable size then it is easy for a
programmer but complicated for the OS designer. A standard message can have two
parts: header and body.
The header part is used for storing the message type, destination id, source id, message length, and
control information. The control information covers things such as what to do if the sender runs out of
buffer space, the sequence number, and the priority. Generally, messages are delivered in FIFO order.
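As a sketch, the send/receive primitives with a message header can be modeled over a UNIX pipe as the communication link. The 4-byte length header and the function names are illustrative choices, not a standard message format:

```python
# Variable-size message passing over a pipe: a fixed 4-byte length header
# followed by the message body.
import os

def send(fd, message: bytes):
    # header (length) + body written as one message
    os.write(fd, len(message).to_bytes(4, "big") + message)

def receive(fd) -> bytes:
    length = int.from_bytes(os.read(fd, 4), "big")  # read the header first
    return os.read(fd, length)                      # then the body

if __name__ == "__main__":
    read_end, write_end = os.pipe()   # establish the communication link
    send(write_end, b"hello")
    print(receive(read_end))          # b'hello'
```

The fixed-size header is what lets the receiver handle variable-size bodies: the OS-designer/programmer trade-off described above is resolved here by making only the header fixed.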
Advantages of IPC:
1. Enables processes to communicate with each other and share resources, leading to
increased efficiency and flexibility.
2. Facilitates coordination between multiple processes, leading to better overall system
performance.
3. Allows for the creation of distributed systems that can span multiple computers or
networks.
4. Can be used to implement various synchronization and communication protocols, such as
semaphores, pipes, and sockets.
Disadvantages of IPC:
1. Increases system complexity, making it harder to design, implement, and debug.
2. Can introduce security vulnerabilities, as processes may be able to access or modify data
belonging to other processes.
3. Requires careful management of system resources, such as memory and CPU time, to
ensure that IPC operations do not degrade overall system performance.
4. Can lead to data inconsistencies if multiple processes try to access or modify the same
data at the same time.
Overall, the advantages of IPC outweigh the disadvantages: it is a necessary mechanism
for modern operating systems and enables processes to work together and share resources
in a flexible and efficient manner. However, care must be taken to design and implement
IPC systems carefully, in order to avoid potential security vulnerabilities and performance
issues.
4.5 Shared Memory, Client-Server Properties, Stream Pipes
4.5.1 Shared Memory- Every process has a dedicated address space in order to store data. If a
process wants to share some data with another process, it cannot directly do so since they have
different address spaces. In order to share some data, a process takes up some of the address
space as shared memory space. This shared memory can be accessed by the other process to
read/write the shared data.
Working of Shared Memory
Let us consider two processes P1 and P2 that want to perform Inter-process communication
using a shared memory.
P1 has an address space, let us say A1 and P2 has an address space, let us say A2. Now, P1
takes up some of the available address space as a shared memory space, let us say S1. Since P1
has taken up this space, it can decide which other processes can read and write data from the
shared memory space.
For now, we will assume that P1 has given only reading rights to other processes with respect to
the shared memory. So, the flow of Inter-process communication will be as follows:
Process P1 takes up some of the available space as shared memory S1
Process P1 writes the data to be shared in S1
Process P2 reads the shared data from S1
Working of shared memory
Now, let us assume that P1 has given write rights to P2 as well. The communication will then be as
shown in the diagram below:
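The flow above (P1 creates S1 and writes; P2 attaches and reads) can be sketched with Python's multiprocessing.shared_memory module. For brevity both handles are opened in one process here, whereas a real P2 would attach from a separate process; the name shared_memory_demo is illustrative:

```python
# Two handles attached to the same shared segment: one writes, the other reads.
from multiprocessing import shared_memory

def shared_memory_demo():
    # "P1" creates a shared segment S1
    shm_p1 = shared_memory.SharedMemory(create=True, size=16)
    try:
        # "P2" attaches to the same segment by name
        shm_p2 = shared_memory.SharedMemory(name=shm_p1.name)
        shm_p1.buf[:5] = b"hello"        # P1 writes the shared data
        data = bytes(shm_p2.buf[:5])     # P2 reads the same bytes
        shm_p2.close()
        return data
    finally:
        shm_p1.close()
        shm_p1.unlink()                  # remove the segment when done
```

Note that no data is copied between the two handles: both views map the same underlying memory, which is what makes shared memory the fastest IPC mechanism.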
Characteristics of Server OS
It can be accessed through either a CLI or a GUI.
It manages and monitors client operating systems and client PCs.
Web and business applications are installed and run on it.
The majority of operations can be carried out using OS commands.
It provides a centralized interface for handling security, user management, and other
administrative duties.
Characteristics of Client OS
Graphical User Interface (GUI): A client OS generally provides a graphical user interface that
allows users to interact with the operating system and applications through visual elements
such as windows, icons, menus, and buttons.
Application Support: A client OS supports a diverse range of applications and software
tools used by end users for productivity, communication, entertainment, and personal
tasks. This includes web browsers, email clients, office suites, multimedia players, and
gaming applications.
Device Compatibility: A client OS is designed to work with a variety of hardware
devices and peripherals commonly used by end users, including printers, scanners,
cameras, and input devices such as keyboards, mice, and touchscreens.
Ease of Use: A client OS emphasizes ease of use and simplicity, with intuitive interfaces and
user-friendly features aimed at enabling non-technical users to perform tasks such as
browsing the web, sending email, creating documents, and managing files.
Stream Pipes- A stream pipe is a UNIX interprocess communication (IPC) facility that allows
processes on the same computer to communicate with each other.
Stream-pipe connections have the following advantages:
Unlike shared-memory connections, stream pipes do not pose the security risk of being
overwritten or read by other programs that explicitly access the same portion of shared
memory.
Unlike shared-memory connections, stream-pipe connections allow distributed
transactions between database servers that are on the same computer.
Stream-pipe connections have the following disadvantages:
Stream-pipe connections might be slower than shared-memory connections on some
computers.
Stream pipes are not available on all platforms.
When you use shared memory or stream pipes for client/server communications,
the hostname entry is ignored.
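On UNIX-like systems, socket.socketpair() yields a connected, full-duplex pair of endpoints that behaves like a stream pipe between two local processes. A minimal sketch; the name stream_pipe_demo is illustrative:

```python
# A stream pipe sketched with socketpair(): bidirectional, byte-stream IPC.
import socket

def stream_pipe_demo():
    a, b = socket.socketpair()  # connected full-duplex endpoints
    a.sendall(b"ping")          # data flows a -> b ...
    request = b.recv(4)
    b.sendall(b"pong")          # ... and b -> a over the same pipe
    reply = a.recv(4)
    a.close()
    b.close()
    return request, reply
```

Unlike an ordinary pipe from pipe(), which is unidirectional, both ends here can read and write, which is the defining property of a stream pipe.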
4.6 Passing File Descriptors, An Open Server-Version 1, Client-Server Connection
Functions.
4.6.1 Passing File Descriptors- File Descriptors are non-negative integers that act as an
abstract handle to “Files” or I/O resources (like pipes, sockets, or data streams). These
descriptors help us interact with these I/O resources and make working with them very easy.
Every process has its own set of file descriptors. Most processes (except for some daemons)
have these three File Descriptors:
stdin: Standard Input denoted by the File Descriptor 0
stdout: Standard Output denoted by the File Descriptor 1
stderr: Standard Error denoted by File Descriptor 2
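An open descriptor can be passed from one process to another over an AF_UNIX socket; Python 3.9+ exposes this via socket.send_fds/recv_fds, which wrap the underlying SCM_RIGHTS mechanism. A minimal single-process sketch on a UNIX system, with illustrative names:

```python
# Passing an open file descriptor over a UNIX socket pair (SCM_RIGHTS).
import os
import socket
import tempfile

def pass_fd_demo():
    # an AF_UNIX socket pair is the channel over which the descriptor travels
    sender, receiver = socket.socketpair()
    with tempfile.TemporaryFile() as f:
        f.write(b"shared")
        f.flush()
        # at least one byte of ordinary data must accompany the descriptor
        socket.send_fds(sender, [b"x"], [f.fileno()])
        msg, fds, flags, addr = socket.recv_fds(receiver, 1, 1)
        os.lseek(fds[0], 0, os.SEEK_SET)   # the duplicate shares the file offset
        data = os.read(fds[0], 6)          # read through the received descriptor
        os.close(fds[0])
    sender.close()
    receiver.close()
    return data
```

The receiver gets a new descriptor number referring to the same open file description, so state like the file offset is shared between the two processes.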
List All File Descriptors Of A Process
Every process has its own set of File Descriptors. To list them all, we need to find its PID. For
example, if I want to check all the File Descriptors under the process ‘i3‘
First, we need to find the PID of the process by using the ps command:
$ ps aux | grep i3
Suppose the PID reported in the output is 576.
Now, to list all the file descriptors under a particular PID the syntax would be:
$ ls -la /proc/<PID>/fd
Client-Server Connection Functions- Client-server communication refers to the exchange of data
and services among multiple machines or processes. In a client-server communication system,
one process or machine acts as a client, requesting a service or data, and another machine or
process acts as a server, providing those services or data to the client machine. This
communication model is widely used for exchanging data in various computing environments
such as distributed systems, Internet applications, and networked applications. Communication
between server and client takes place using different protocols and mechanisms.
Different Ways of Client-Server Communication-
Client-server communication can be carried out in several different ways:
1. Sockets Mechanism
2. Remote Procedure Call
3. Message Passing
4. Inter-process Communication
5. Distributed File Systems
Sockets Mechanism
Sockets are the endpoints of communication between two machines. They provide a way for
processes to communicate with each other, either on the same machine or over the Internet.
Sockets enable a bidirectional communication connection between the server and the client
for transferring data.
Client Server Communication using Sockets
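A minimal loopback sketch of socket-based client-server communication in Python. The echo behavior, the choice of port 0 (letting the OS pick a free port), and the name echo_server_demo are all illustrative:

```python
# A client and server communicating through TCP socket endpoints on loopback.
import socket
import threading

def echo_server_demo(message=b"hello"):
    # server endpoint: bind to port 0 so the OS picks a free port
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]

    def serve_one():
        conn, _ = server.accept()       # accept one client connection
        conn.sendall(conn.recv(1024))   # echo the request back
        conn.close()

    t = threading.Thread(target=serve_one)
    t.start()

    # client endpoint: connect, send a request, read the reply
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    client.sendall(message)
    reply = client.recv(1024)
    client.close()
    t.join()
    server.close()
    return reply
```

The same pattern scales from two processes on one machine to machines on opposite sides of the Internet; only the address changes.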
Remote Procedure Call (RPC)
Remote Procedure Call is a protocol, i.e., a set of rules, that allows a client to execute a
procedure call on a remote server as if it were a local procedure call. RPC is commonly used in
client-server communication architectures and provides a high level of abstraction to the
programmer. The client program issues a procedure call, which is translated into a message sent
over the network to the server; the server executes the call and sends the result back to the client
machine.
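As a sketch of this flow, Python's built-in xmlrpc module lets a client invoke a procedure on a server as if it were a local call. The `add` procedure and the name rpc_demo are illustrative; a single request is served for brevity:

```python
# Remote Procedure Call sketch: the client's proxy.add(2, 3) is translated
# into a network message, executed on the server, and the result returned.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def rpc_demo():
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    server.register_function(lambda a, b: a + b, "add")  # the remote procedure
    port = server.server_address[1]

    t = threading.Thread(target=server.handle_request)   # serve one call
    t.start()

    proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:%d" % port)
    result = proxy.add(2, 3)   # looks like a local call, executes on the server
    t.join()
    server.server_close()
    return result
```

The marshalling of arguments into a message, the network round trip, and the unmarshalling of the result are all hidden behind the proxy object, which is the "high level of abstraction" described above.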