OS Unit-1 Notes

Uploaded by amulya.bca

Operating System

An operating system (OS) is a program that acts as an interface between the system hardware and the user, and handles all interactions between the software and the hardware. Every part of a computer system's operation depends on the OS at the base level: it performs functions such as managing memory, managing processes, and mediating between hardware and software. Let us first look at the objectives of an operating system.

Objectives of OS
The primary goals of an operating system are as follows:
1. Convenience – An operating system makes a computer convenient to use. It lets users
start on the tasks they wish to complete quickly, without first having to cope with the
stress of configuring the system themselves.
2. Efficiency – An operating system enables the efficient use of resources, partly because
less time is spent configuring the system.
3. Ability to evolve – An operating system should be designed so that new features can
be developed, tested, and introduced effectively without interrupting service.
4. Management of system resources – It guarantees that resources are shared fairly
among the various processes and users.
Operating system services:
 User interface
 Program execution
 Job Accounting
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection

1. User interface:
The operating system acts as an interface between the user and the system hardware. It
makes it easy and convenient for users to access system resources through different
interface mechanisms. The graphical user interface (GUI) and the command-line
interface (CLI) are the common ways of interacting with the system.
2. Program execution:
Operating systems handle many kinds of activities, from user programs to system
programs. The system must be able to load a program into memory and run it, and the
program must be able to end its execution, either normally or abnormally. The OS also
provides mechanisms for process synchronization, process communication, and deadlock
handling.

3. Job Accounting – The operating system keeps track of all the functions of a computer
system, so it makes a record of all the activities taking place on the system. It keeps an
account of information about the memory, resources, errors, etc., which can be consulted
as and when required.
4. I/O Operation
An I/O subsystem comprises the I/O devices and their corresponding driver software. The
operating system manages the communication between the user and the device drivers.
5. File system manipulation:
A file represents a collection of related information, which computers store on disk.
Many operating systems support a variety of file systems, and the OS provides the
various activities needed for file management.
6. Communication
In the case of distributed systems, which are collections of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communication
between all the processes. Multiple processes communicate with one another through
communication lines in the network.
7. Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in
the memory hardware. OS constantly checks for possible errors and takes an appropriate
action to ensure correct and consistent computing.
8. Resource Management
In a multi-user or multi-tasking environment, resources such as main memory, CPU
cycles, and file storage must be allocated to each user or job. The OS manages all such
resources using schedulers.
9. Protection
It provides mechanisms to control the access of programs, processes, or users to the
resources defined by a computer system. The OS ensures that all access to system
resources is controlled.
Types of Operating System
1. Batch Operating System:
In this system, the OS does not forward jobs directly to the CPU. Instead, similar types of jobs
are grouped together under one category; each such group is called a 'batch', hence the name
batch OS.

Advantages of Batch Operating System


• Simple scheduling is used.
• The processor knows how long a job will take while it is in the queue.
• Multiple users can share a batch system.
• The idle time of a batch system is very low.
• It is easy to manage large, repetitive workloads in batch systems.

Disadvantages of Batch Operating System


• The computer operators must be familiar with batch systems.
• A job could enter an infinite loop.
• Batch systems are hard to debug.
• They are sometimes costly.
• They offer little protection.
• Other jobs must wait for an unknown time if a job fails or has slow I/O devices.

2. Multi-Programming Operating System


More than one program is present in main memory, and any one of them can be in
execution at a time. This is used mainly for better utilization of resources; both
multiprogramming and multitasking concepts are applied.

Advantages
• Multiple programs can be executed concurrently.
• High and efficient CPU utilization.
• Increases the throughput of the system.
• It helps in reducing response time.
Disadvantages
• There is no facility for user interaction with the system while jobs run.
• CPU scheduling is required.
• Memory management is also required to handle many jobs.
• Synchronization and IPC mechanisms are needed.

3. Time-Sharing Operating Systems:


Each task is given some CPU time so that all tasks make smooth progress, and each user
gets a share of CPU time. The interval each task gets to execute is called the quantum.
When this time interval is over, the OS switches to the next task.
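The quantum-based switching described above can be sketched as a small simulation. This is an illustrative Python model, not real kernel code: each process is reduced to a remaining burst time, and the "OS" cycles a ready queue, running each process for at most one quantum.

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate time-sharing: each process runs for at most one
    quantum before the OS switches to the next ready process.
    Returns the completion time of each process."""
    remaining = list(burst_times)
    finish = [0] * len(burst_times)
    ready = deque(range(len(burst_times)))
    clock = 0
    while ready:
        pid = ready.popleft()
        used = min(quantum, remaining[pid])
        clock += used
        remaining[pid] -= used
        if remaining[pid] > 0:
            ready.append(pid)    # quantum expired: back of the ready queue
        else:
            finish[pid] = clock  # process has completed
    return finish
```

For example, two processes with bursts of 5 and 3 time units and a quantum of 2 interleave until they finish at times 8 and 7 respectively: `round_robin([5, 3], 2)` returns `[8, 7]`.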

Advantages
• Each task gets an equal opportunity.
• Provides the user an interface to interact with the system.
• CPU idle time can be reduced.
• Users get a quick response.
• Resources can be shared.

Disadvantages
• More complex to implement.
• One must take care of the security and integrity of user programs and data.
• Reliability problems.
• Data communication problems.

4. Distributed Operating System


Various autonomous interconnected computers communicate with each other using a shared
communication network. Independent systems possess their own memory unit and CPU. Remote
access is enabled within the devices connected in that network.

Advantages
• Efficient resource sharing.
• Uses load balancing to share the workload.
• Improved reliability, since failure of one machine does not halt the whole system.
• Exchange of information through network communication.
• Maintains coordination among programs using IPC mechanisms.
Disadvantages
• Complexity of implementation.
• Protection of shared resources is needed.
• Requires memory and resource management across machines.

5. Real-Time Operating System:


Real-time systems are used when there are rigid time requirements on each operation of
the processor.
Ex: missile systems, air traffic control systems, robots, etc.
 Types of Real-Time Operating Systems:
A. Hard Real-Time Systems
Hard Real-Time OSs are meant for applications where time constraints are very strict and
even the shortest possible delay is not acceptable.

B. Soft Real-Time Systems:


These OSs are for applications where time-constraint is less restrictive.
Advantages
• Maximum utilization of resources and the system.
• Memory management is less demanding.
• Improved CPU scheduling.
• Focus is on the application, hence performance is high.
• The error or failure rate is very low.
Disadvantages
• Implementation cost is high.
• Only a limited number of tasks can run at a time.
• Uses complex algorithms.

6. Network Operating System:


These systems run on a server and provide the capability to manage data, users, groups,
security, applications, and other networking functions. These types of operating systems
allow shared access to files, printers, security, applications, and other networking functions
over a small private network.

Advantages
• Highly stable centralized servers.
• Security concerns are handled through servers.
• New technologies and hardware upgrades are easily integrated into the system.
• Server access is possible remotely from different locations and types of systems.
Disadvantages
• Servers are costly.
• Users depend on a central location for most operations.
• Regular maintenance and updates are required.

Functions of Operating System


1. Memory Management
It is the management of the main or primary memory. Whatever program is executed has to be
present in main memory, a fast storage area that the CPU can access directly. When a program
completes, its memory region is released and can be used by other programs. Since more than
one program can be present at a time, the memory must be managed.
The operating system:
 Allocates and deallocates the memory.
 Keeps a record of which part of primary memory is used by whom and how much.
 Distributes the memory while multiprocessing.
 In multiprogramming, the operating system selects which processes acquire memory
when and how much memory they get.
2. Processor Management/Scheduling
Every piece of software that runs on a computer, whether in the background or in the foreground,
is a process. A process is the execution unit in which a program operates. The operating system
determines the status of the processor and the processes, selects a job, allocates the processor to
a process, and de-allocates the processor after the process is completed.
When more than one process runs on the system, the OS decides how and when each process will
use the CPU; hence this is also called CPU scheduling. The OS:
 Allocates and deallocates processor to the processes.
 Keeps record of CPU status.
Certain algorithms used for CPU scheduling are as follows:
 First Come First Serve (FCFS)
 Shortest Job First (SJF)
 Round-Robin Scheduling
 Priority-based scheduling etc.
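The first two algorithms listed above can be sketched in a few lines of Python. This is a toy model, assuming every job arrives at time 0 and is characterized only by its CPU burst length:

```python
def fcfs_waiting_times(bursts):
    """First Come First Serve: jobs run in arrival order, so each
    job waits for the total burst time of the jobs before it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # this job waited while earlier jobs ran
        elapsed += burst
    return waits

def sjf_order(bursts):
    """Non-preemptive Shortest Job First: choose the run order by
    ascending burst time; returns the indices in execution order."""
    return sorted(range(len(bursts)), key=lambda i: bursts[i])
```

With bursts of 5, 3, and 2 units, FCFS gives waiting times `[0, 5, 8]`, while SJF would instead run the jobs in the order `[2, 1, 0]`, letting the shortest jobs finish first.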
Purpose of CPU scheduling
The purpose of CPU scheduling is as follows:
 Proper utilization of the CPU: the OS keeps the CPU as busy as possible.
 Fairness: every process, and every device it drives, should get a fair share of
processor time.
 Increasing the efficiency of the system.
3. Device Management
An operating system regulates device connection using drivers. The processes may require devices
for their use. This management is done by the OS. The OS:
 Allocates and deallocates devices to different processes.
 Keeps records of the devices.
 Decides which process can use which device for how much time.
4. File Management
The operating system manages file allocation and de-allocation: it specifies which process
receives a file and for how long, and keeps track of each file's information, location, usage,
status, and so on. These collections of files and their organization are referred to as file systems.
The files on a system are stored in different directories. The OS:
 Keeps records of the status and locations of files.
 Allocates and deallocates resources.
 Decides who gets the resources.
5. Storage Management
Storage management is the set of procedures that let users maximize the utilization of storage
devices while also protecting data integrity on whatever media the data lives on. Virtualization,
replication, mirroring, security, compression, deduplication, traffic analysis, process automation,
storage provisioning, and memory management are some of the features that may be included.
The operating system is in charge of storing and accessing files: creating files and directories,
reading and writing data from files and directories, and copying the contents of files and
directories from one location to another are all included in storage management.
The OS uses storage management for:
 Improving the performance of the data storage resources.
 It optimizes the use of various storage devices.
 Assists businesses in storing more data on existing hardware, speeding up the data
retrieval process, preventing data loss, meeting data retention regulations, and
lowering IT costs

 Functions of Operating System:


The various functions of operating system are as follows:
1. Process Management:
A process is the execution unit in which a program operates. The operating system determines
the status of the processor and the processes, selects a job, allocates the processor to a process,
and de-allocates the processor after the process is completed.
The OS is responsible for the following activities of process management.
• Creating & deleting both user & system processes.
• Suspending & resuming processes.
• Providing mechanism for process synchronization.
• Providing mechanism for process communication.
• Providing mechanism for deadlock handling.

2. Main Memory Management:


It is the management of the main or primary memory. Whatever program is executed, it has to be
present in the main memory. Main memory is a quick storage area that may be accessed directly
by the CPU. There can be more than one program present at a time. Hence, it is required to
manage the memory.

The OS is responsible for the following activities in connection with memory management.
• Keeping track of which parts of memory are currently being used & by whom.
• Deciding which processes are to be loaded into memory when memory space becomes
available.
• Allocating &deallocating memory space as needed.

3. File Management:
File management is one of the most important components of an OS. A computer can store
information on several different types of physical media; magnetic tape, magnetic disk, and
optical disk are the most common.
The OS is responsible for the following activities of file management.
• Creating & deleting files.
• Creating & deleting directories.
• Supporting primitives for manipulating files & directories.
• Mapping files into secondary storage.
• Backing up files on non-volatile media.

4. I/O System Management:


One of the purposes of an OS is to hide the peculiarities of specific hardware devices from
the user. For example, in UNIX the peculiarities of I/O devices are hidden from the bulk of the
OS itself by the I/O subsystem.
The I/O subsystem consists of:
• A memory-management component that includes buffering, caching & spooling.
• A general device-driver interface, plus drivers for specific hardware devices. Only the device
driver knows the peculiarities of the specific device to which it is assigned.

5. Secondary Storage Management:


The main purpose of a computer system is to execute programs. These programs, with the data
they access, must be in main memory during execution. Because main memory is too small to
accommodate all data & programs, and because the data it holds is lost when power is lost, the
computer system must provide secondary storage to back up main memory. Most modern
computer systems use disks as the storage medium for data & programs.
The operating system is responsible for the following activities of disk management.
• Free space management.
• Storage allocation.
• Disk scheduling
Because secondary storage is used frequently it must be used efficiently.
6. Networking:
A distributed system is a collection of processors that do not share memory, peripheral devices,
or a clock. Each processor has its own local memory & clock, and the processors communicate
with one another through various communication lines such as high-speed buses or networks.
The processors in the system are connected through communication networks, which can be
configured in a number of different ways. The communication network design must consider
message routing & connection strategies, as well as the problems of contention & security.
7. Protection or security:
If a computer system has multiple users & allows the concurrent execution of multiple processes,
then the various processes must be protected from one another's activities. For that purpose,
mechanisms ensure that files, memory segments, the CPU & other resources can be operated on
only by those processes that have gained proper authorization from the OS.

 System Calls:
System calls provide the interface between a process & the OS. They are usually available as
assembly-language instructions, and some systems allow system calls to be made directly from a
high-level language program such as C, BCPL, or Perl. The way system calls occur differs from
computer to computer. System calls can be roughly grouped into 5 major categories.

1. Process Control:
• End, abort: A running program needs to be able to halt its execution either normally
(end) or abnormally (abort).
• Load, execute: A process or job executing one program may want to load and execute
another program.
• Create process, terminate process: There is a system call for creating a new process
or job (create process, or submit job). We may also want to terminate a job or process
that we created (terminate process), if we find that it is incorrect or no longer needed.
• Get process attributes, set process attributes: If we create a new job or process, we
should be able to control its execution. This control requires the ability to determine &
reset the attributes of a job or process (get process attributes, set process attributes).
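On a POSIX system, the create/execute/terminate/wait cycle above maps onto the fork, exec, and wait calls, which Python exposes through its `os` module. A minimal sketch (POSIX-only, since `os.fork` is unavailable on Windows; `os.waitstatus_to_exitcode` needs Python 3.9+):

```python
import os
import sys

def run_program(argv):
    """Create a child process, replace its image with another
    program, and wait for it to end; returns the exit status."""
    pid = os.fork()                       # "create process" call
    if pid == 0:                          # child: load & execute a new program
        os.execvp(argv[0], argv)          # exec only returns on failure
        os._exit(127)                     # abnormal end if exec failed
    _, status = os.waitpid(pid, 0)        # parent blocks until the child ends
    return os.waitstatus_to_exitcode(status)
```

For instance, `run_program([sys.executable, "-c", "pass"])` forks a child that runs a trivial Python program and returns its exit status, 0.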

2. File Manipulation:
• Create file, delete file: We first need to be able to create & delete files. Both the system
calls require the name of the file & some of its attributes.
• Open file, close file: Once the file is created, we need to open it & use it. We close the
file when we are no longer using it.
• Read, write, reposition file: After opening, we may also read, write or reposition the file
(rewind or skip to the end of the file).
• Get file attributes, set file attributes: For either files or directories, we need to be able
to determine the values of various attributes & reset them if necessary. The system calls
get file attributes & set file attributes serve this purpose.
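The create/open/write/reposition/read/close sequence above can be exercised directly through the `os` module, which wraps these system calls on most platforms. A small sketch (the scratch-file name is illustrative):

```python
import os
import tempfile

def file_roundtrip(data):
    """Create a file, write data, rewind, read it back, then
    close and delete the file; returns the bytes read."""
    path = os.path.join(tempfile.gettempdir(), f"syscall_demo_{os.getpid()}")
    fd = os.open(path, os.O_CREAT | os.O_RDWR)  # create file / open file
    try:
        os.write(fd, data)                      # write
        os.lseek(fd, 0, os.SEEK_SET)            # reposition (rewind)
        return os.read(fd, len(data))           # read
    finally:
        os.close(fd)                            # close file
        os.unlink(path)                         # delete file
```

Calling `file_roundtrip(b"hello")` returns `b"hello"`, showing the written data coming back through the read call.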
3. Device Management:
• Request device, release device: If there are multiple users of the system, we first request
the device. After we have finished with the device, we must release it.
• Read, write, reposition: Once the device has been requested & allocated to us, we can
read, write & reposition the device.
4. Information maintenance:
• Get time or date, set time or date: Most systems have a system call to return the current
date & time or set the current date & time.
• Get system data, set system data: Other system calls may return information about the
system like number of current users, version number of OS, amount of free memory etc.
• Get process attributes, set process attributes: The OS keeps information about all its
processes & there are system calls to access this information.
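A few of these information-maintenance calls are directly visible from Python via the standard library, which is enough for a quick sketch:

```python
import os
import time

def system_info():
    """Sample some 'information maintenance' calls: the current
    time, this process's ID, and its parent process's ID."""
    return {
        "time": time.time(),    # get time or date
        "pid": os.getpid(),     # get process attributes: our ID
        "ppid": os.getppid(),   # ... and our parent's ID
    }
```

Each value comes from the OS at call time; for example `system_info()["pid"]` is the identifier the kernel uses to refer to the calling process.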

Communication:
There are two modes of communication such as:
• Message-passing model: Information is exchanged through an inter-process communication
facility provided by the operating system. Each computer in a network has a name by which it is
known, and each process has a process name, which is translated to an identifier by which the
OS can refer to it. The get hostid and get processid system calls do this translation. These
identifiers are then passed to the general-purpose open & close calls provided by the file system,
or to a specific open connection system call. The recipient process must give its permission for
communication to take place with an accept connection call. The source of the communication,
known as the client, and the receiver, known as the server, exchange messages via read message
& write message system calls. The close connection call terminates the connection.
• Shared-memory model: Processes use map memory system calls to gain access to regions of
memory owned by other processes. They exchange information by reading & writing data in the
shared areas, and must ensure that they are not writing to the same location simultaneously.
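The message-passing model can be sketched with an OS pipe, one of the simplest IPC facilities. For brevity this illustration has one process play both sender and receiver; in practice the two file descriptors would be held by different processes (for example across a fork):

```python
import os

def pipe_message(msg):
    """Send a message through an OS-provided pipe and read it
    back: a one-process sketch of the message-passing model."""
    read_end, write_end = os.pipe()   # OS creates the communication channel
    os.write(write_end, msg)          # "write message" call
    os.close(write_end)               # sender closes its end of the connection
    data = os.read(read_end, 1024)    # "read message" call
    os.close(read_end)                # "close connection"
    return data
```

Here `pipe_message(b"ping")` returns `b"ping"`: the kernel buffers the message between the write and the read, exactly as it would between two cooperating processes.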

OPERATING SYSTEM STRUCTURE:


Operating systems are large and complex. A common approach is to partition the task into
small components, or modules, rather than have one monolithic system. The structure of an
operating system can be described by the following designs:
• Simple structure
• Layered approach
• Microkernels
• Modules
• Hybrid systems

Simple structure:
The Simple structured operating systems do not have a well-defined structure. These systems
will be simple, small and limited systems.
Example: MS-DOS

o There are four layers that make up the MS-DOS operating system, and each has its own
set of features.
o These layers include ROM BIOS device drivers, MS-DOS device drivers, application
programs, and system programs.
o The MS-DOS operating system benefits from layering because each level can be defined
independently and, when necessary, can interact with one another.
o If the system is built in layers, it is simpler to design, manage, and update. Because
of this, simple structures can be used to build constrained systems that are less complex.
o When a user program fails, the operating system as a whole crashes.
o Because MS-DOS systems have a low level of abstraction, programs and I/O procedures
are visible to end users, giving them the potential for unwanted access.

MONOLITHIC STRUCTURE
The monolithic operating system controls all aspects of the operating system's operation,
including file management, memory management, device management, and operational
operations.

The core of an operating system is called the kernel. The kernel provides fundamental
services to all other system components and is the main interface between the operating
system and the hardware. Because the entire operating system runs as a single program
in kernel mode, the kernel can directly access all of the system's resources.
The monolithic operating system is often referred to as the monolithic kernel. Multiprogramming
techniques such as batch processing and time-sharing increase a processor's usability. Working
on top of the hardware and in complete command of it, the monolithic kernel acts as a virtual
machine for the programs above it. This is an old style of operating system that was used in
banks to carry out simple tasks like batch processing and time-sharing, which allows numerous
users at different terminals to access the operating system.

Advantages of Monolithic Structure:

o Because layering is unnecessary and the kernel alone is responsible for managing
all operations, it is easy to design and implement.
o Because functions like memory management, file management, and process
scheduling are implemented in the same address space, the monolithic kernel runs
rather quickly compared to other structures. Sharing one address space speeds up,
and reduces the time required for, address allocation for new processes.

Disadvantages of Monolithic Structure:


o The monolithic kernel's services are interconnected in address space and have an
impact on one another, so if any of them malfunctions, the entire system does as
well.
o It is not adaptable. Therefore, launching a new service is difficult.

LAYERED STRUCTURE
The OS is separated into layers or levels in this kind of arrangement. Layer 0 (the lowest
layer) is the hardware, and layer N (the highest layer) is the user interface. The layers are
organized hierarchically, with the top-level layers making use of the capabilities of the
lower-level ones.

The functionality of each layer is kept separate in this approach, which also provides
abstraction. Because layered structures are hierarchical, debugging is simpler: all
lower-level layers are debugged before the upper layer is examined, so only the present
layer has to be reviewed, since all the lower layers have already been checked.

Advantages of Layered Structure:

o Work duties are separated since each layer has its own functionality, and there is
some amount of abstraction.
o Debugging is simpler because the lower layers are examined first, followed by the
top layers.

Disadvantages of Layered Structure:

o Performance is compromised in layered structures due to layering.


o Construction of the layers requires careful design because upper layers only make
use of lower layers' capabilities.

MICRO-KERNEL STRUCTURE
The operating system is created using a micro-kernel framework that strips the kernel of
any non-essential parts, which are implemented instead as system and user-level programs.
Systems developed this way are called micro-kernels.

Each component is created separately and kept apart from the others, which makes the system
more trustworthy and secure: if one component malfunctions, the rest of the operating system
is unaffected and continues to function normally.

Advantages of Micro-Kernel Structure:


o It enables portability of the operating system across platforms.
o Due to the isolation of each Micro-Kernel, it is reliable and secure.
o The reduced size of Micro-Kernels allows for successful testing.
o The remaining operating system remains unaffected and keeps running properly
even if a component or Micro-Kernel fails.

Disadvantages of Micro-Kernel Structure:

o The performance of the system is decreased by increased inter-module communication.
o The construction of the system is complicated.

 Process Concept

A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.
When a program is loaded into memory it becomes a process, which can be divided into
four sections ─ stack, heap, text and data ─ laid out in main memory as follows:

1. Stack – Contains temporary data such as method/function parameters, return
addresses, and local variables.
2. Heap – Memory dynamically allocated to the process during its run time.
3. Text – Contains the program code; the current activity is represented by the value
of the program counter and the contents of the processor's registers.
4. Data – Contains the global and static variables.

Process State:
 Processes may be in one of 5 states.
o New - The process is in the stage of being created.

o Ready - The process has all the resources available that it needs to run, but the
CPU is not currently working on this process's instructions.

o Running - The CPU is working on this process's instructions.

o Waiting - The process cannot run at the moment, because it is waiting for some
resource to become available or for some event to occur. For example the process
may be waiting for keyboard input, disk access request, inter-process messages, a
timer to go off, or a child process to finish.

o Terminated - The process has completed.
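The five states above form a small state machine with a fixed set of legal moves (for example, a process can go from running to waiting, but a waiting process must become ready before it can run again). A minimal Python sketch of those transitions:

```python
# Legal moves in the five-state process model described above.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},  # preempted / blocked / done
    "waiting": {"ready"},                           # awaited event occurred
    "terminated": set(),                            # no way out
}

def can_transition(src, dst):
    """True if the five-state model allows moving from src to dst."""
    return dst in TRANSITIONS.get(src, set())
```

Note that `can_transition("waiting", "running")` is False: a blocked process must pass through the ready state and be picked by the scheduler before it runs again.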


Process Control Block
For each process there is a Process Control Block, PCB, which stores the following ( types of )
process-specific information, as illustrated in Figure 3.1. ( Specific details may vary from system
to system. )

 Process State - Running, waiting, etc., as discussed above.


 Process ID, and parent process ID.
 CPU registers and Program Counter - These need to be saved and restored when
swapping processes in and out of the CPU.
 CPU-Scheduling information - Such as priority information and pointers to scheduling
queues.
 Memory-Management information - E.g. page tables or segment tables.
 Accounting information - user and kernel CPU time consumed, account numbers,
limits, etc.
 I/O Status information - Devices allocated, open file tables, etc.
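The fields above can be collected into a toy PCB to make the idea concrete. This is only an illustrative sketch; real kernels keep far richer structures (and in C, not Python), and the field names here are our own:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A toy Process Control Block holding the fields listed above."""
    pid: int                    # process ID
    parent_pid: int             # parent process ID
    state: str = "new"          # running, waiting, etc.
    program_counter: int = 0    # saved PC, restored on a context switch
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0           # CPU-scheduling information
    page_table: dict = field(default_factory=dict)  # memory-management info
    cpu_time_used: float = 0.0  # accounting information
    open_files: list = field(default_factory=list)  # I/O status information
```

A new entry starts in the "new" state, and the scheduler and dispatcher then update it as the process moves through its lifetime: `PCB(pid=42, parent_pid=1)`.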

 Context switch
• Context switching is the technique the operating system uses to switch the CPU from
one process to another so that each can execute in its turn.
• It is a method of storing the state of the CPU into a PCB and restoring it from another,
so that process execution can be resumed from the same point at a later time. Context
switching is essential for a multitasking OS.
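The save-then-restore step can be sketched as plain data movement between a CPU and two PCBs. Everything here is a simplified model (a dict of register values plus a program counter standing in for real hardware state):

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Save the outgoing process's CPU context into its PCB, then
    restore the incoming process's saved context onto the CPU."""
    # save state of the outgoing process into its PCB
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["pc"] = cpu["pc"]
    old_pcb["state"] = "ready"
    # restore state of the incoming process from its PCB
    cpu["registers"] = dict(new_pcb["registers"])
    cpu["pc"] = new_pcb["pc"]
    new_pcb["state"] = "running"
```

After the call, the CPU holds the new process's registers and program counter, while the old process's context sits safely in its PCB, ready to be resumed later from exactly where it stopped.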

 Process scheduling:
• Process scheduling allows the OS to allocate intervals of CPU time to each process.
Another important reason for scheduling is that it keeps the CPU busy all the time.
• This helps achieve a minimal response time for programs.
• A thread is a lightweight process which holds fewer resources; in a multithreaded
OS a process can have one or many threads.
• The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization.
• The objective of time-sharing is to switch the CPU among processes so frequently that
users can interact with each program while it is running.

Process Scheduling Queues:


• The process scheduler manages the scheduling of all processes with the help of queues.
• Process scheduling queues maintain a distinct queue for each process state; the PCBs
of processes in the same state are linked into the same queue.
Three types of operating system queues are:
1. Job queue – Holds all the processes in the system.
2. Ready queue – Holds every process residing in main memory that is ready and
waiting to execute.
3. Device queues – Hold the processes that are blocked waiting for an I/O device.
Categories of Scheduling:
 Non-Preemptive Scheduling:
In this scheduling, once the resources (CPU cycles) are allocated to a process, the
process holds the CPU until it terminates or reaches a waiting state. Non-preemptive
scheduling does not interrupt a process running on the CPU in the middle of its
execution; instead, it waits until the process completes its CPU burst, and only then
allocates the CPU to another process.

Advantages
1. It has a minimal scheduling burden or overhead.
2. It is very easy to implement at low cost.
3. Fewer computational resources are used.
4. It has a high throughput rate.
Disadvantages
1. Short jobs are made to wait for longer jobs.
2. It is a less efficient approach.

 Preemptive Scheduling
Preemptive scheduling is used when a process switches from the running state to the
ready state or from the waiting state to the ready state. The resources (mainly CPU
cycles) are allocated to the process for a limited amount of time and then taken
away, and the process is again placed back in the ready queue if that process still has
CPU burst time remaining. That process stays in the ready queue till it gets its next
chance to execute.

Advantages
1. It is a more reliable method.
2. The average response time is improved.
3. The operating system makes sure that every process gets a fair share of CPU time.
Disadvantages
1. Suspending the running process, changing the context, and dispatching the new
incoming process all take extra time.
2. A low-priority process may have to wait a long time if multiple high-priority
processes arrive at the same time.
3. Implementation is difficult and costly.

 Process Schedulers:
A scheduler is a type of system software that allows OS to handle and monitor process
scheduling.
There are mainly three types of Process Schedulers:
 Long Term Scheduler
 Short Term Scheduler
 Medium Term Scheduler
1. Long-term scheduler (job scheduler)
• It selects which processes should be brought into the ready queue.
• The long-term scheduler is invoked infrequently (seconds, minutes), so it may be slow.
• The long-term scheduler controls the degree of multiprogramming.
• The long-term scheduler should select a good process mix to improve performance.
2. Short-term scheduler (or CPU scheduler) –
 It selects which process should be executed next and allocates CPU.
 Sometimes the only scheduler in a system.
 The short-term scheduler is invoked frequently (every few milliseconds), so it must be fast.
Processes can be described as either:
I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
CPU-bound process – spends more time doing computations; few very long CPU bursts
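The two burst patterns can be illustrated with a hedged Python sketch (the function names and durations are invented for the example):

```python
import time

def io_bound_task():
    # Mostly waiting: the sleep models an I/O request (e.g. a disk or
    # network read), giving many short CPU bursts separated by waits.
    time.sleep(0.05)

def cpu_bound_task():
    # Mostly computing: one long CPU burst with no waiting.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

start = time.perf_counter()
io_bound_task()
io_wait = time.perf_counter() - start
print(f"I/O-bound task took ~{io_wait:.3f}s, almost all of it waiting")
```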

3. Medium-term scheduler
• It can be added if the degree of multiprogramming needs to be decreased.
• It removes a process from memory, stores it on disk, and later brings it back
from disk so that execution can continue: this is known as swapping.
Comparison of the three schedulers:
• Role: the long-term scheduler is a job scheduler; the medium-term scheduler is a
process-swapping scheduler; the short-term scheduler is called the CPU scheduler.
• Speed: the long-term scheduler is slower than the short-term scheduler; the
medium-term scheduler's speed lies in between; the short-term scheduler is the
fastest of the three.
• Degree of multiprogramming: the long-term scheduler controls it; the medium-term
scheduler reduces it; the short-term scheduler provides lesser control over it.
• Time-sharing systems: the long-term scheduler is almost absent or minimal; the
medium-term scheduler is a part of the time-sharing system; the short-term
scheduler's role is also minimal.
• Function: the long-term scheduler selects processes from the pool and loads them
into memory for execution; the medium-term scheduler can reintroduce a process
into memory so that its execution can be continued; the short-term scheduler
selects from among the processes that are ready to execute.

 Interprocess Communication (IPC)


A process can be of two types:
 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes while a co-operating
process can be affected by other executing processes. The co-operative nature can be utilized for
increasing computational speed, convenience, and modularity. Inter-process communication
(IPC) is a mechanism that allows processes to communicate with each other and synchronize
their actions. The communication between these processes can be seen as a method of co-
operation between them.
Reasons for cooperating processes:
 Information sharing
 Computation speedup
 Modularity
 Convenience
Inter-process communication (IPC) can be implemented using two approaches:
 Message passing
 Shared memory

Fig: (a) Message passing. (b) Shared memory.

Message passing:
In this method, a process communicates with other processes by sending messages.
When two or more processes participate in inter-process communication, each process
sends messages to the others via the kernel, using system calls. It is useful for
exchanging smaller amounts of data, and it is easier to implement, with little extra
overhead. Because the kernel mediates every exchange, it also provides synchronization
among the processes, so there is no conflict over the data. However, it is slower than
shared memory, since every message requires system calls.
Shared Memory:
In this method, a common memory region is shared among all the cooperating processes.
Processes exchange information by reading and writing data in the shared region
directly, without needing a system call for each access (system calls are needed only
to set the region up). This gives the maximum speed and convenience of communication.
However, since all the processes share the same memory, the operating system and the
cooperating processes themselves must provide protection and synchronization for the
shared data.

 Multithreaded programming

A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register
set, and a stack. A traditional (or heavyweight) process has a single thread of control. If a process
has multiple threads of control, it can perform more than one task at a time. The difference
between a traditional single-threaded process and a multithreaded process is shown below.

Benefits of multithreaded programming:

1. Responsiveness:
Multithreading an interactive application may allow a program to continue running even if part
of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the
user.
2. Resource sharing. Processes can only share resources through techniques such as shared
memory and message passing. Threads share the memory and the resources of the process to
which they belong by default. The benefit of sharing code and data is that it allows an application
to have several different threads of activity within the same address space.
3. Economy. Allocating memory and resources for process creation is costly. Because threads
share the resources of the process to which they belong, it is more economical to create and
context-switch threads.

4. Scalability. The benefits of multithreading can be even greater in a multiprocessor


architecture, where threads may be running in parallel on different processing cores. A
single-threaded process can run on only one processor, regardless of how many are available.
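Benefit 2 (resource sharing) can be sketched in Python: all threads of a process see the same heap data by default, while each thread keeps its own stack and locals (the function and values are invented for the example):

```python
import threading

results = []                 # shared by all threads of this process ...
lock = threading.Lock()      # ... so concurrent access must be coordinated

def square(n):
    value = n * n            # 'n' and 'value' live on this thread's own stack
    with lock:
        results.append(value)  # but the list is shared memory, no IPC needed

threads = [threading.Thread(target=square, args=(n,)) for n in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))       # -> [0, 1, 4, 9, 16]
```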

 Multicore Programming
Computer systems with more than one core or CPU are called multi-core or multi-processor
systems. Multiple computing cores are designed on a single chip. Each core appears as a
separate processor to the operating system. Multithreaded programming provides a
mechanism for more efficient use of these multiple computing cores and improved
concurrency.

 Multithreading Models:
In multi-threaded systems threads are identified as user threads and kernel threads. User
threads are supported above the kernel and are managed without kernel support, whereas
kernel threads are supported and managed directly by the operating system. There are
different types of multi-threaded models used such as: many-to-one model, one-to-one
model, and many-to-many model.

1) Many-to-One Model:
The many-to-one model maps many user-level threads to one kernel thread. Thread
management is done by the thread library in user space, so it is efficient. However,
the threads cannot run in parallel on a multicore system, because only one thread may
be in the kernel at a time, and one thread making a blocking call causes all of them
to block. Few systems currently use this model.
Examples: Solaris Green Threads, GNU Portable Threads

2) One-to-One Model:
The one-to-one model maps each user-level thread to a separate kernel thread. Whenever
a user-level thread is created, the OS creates an associated kernel thread. It provides
more concurrency than the many-to-one model, but the number of threads per process is
sometimes restricted due to the overhead of creating kernel threads.
Examples: Windows, Linux, Solaris 9 and later

3) Many-to-Many Model:
This model allows many user level threads to be mapped to many kernel threads. It also
allows the operating system to create a sufficient number of kernel threads to support user
threads. The number of kernel threads may be specific to either a particular application or
a particular machine.
Example: Solaris prior to version 9, Windows with the ThreadFiber package.
