OPERATING SYSTEMS - Module 1
http://www.youtube.com/c/EDULINEFORCSE
Prepared By Mr. EBIN PM, AP, IESCE (EDULINE)

• An operating system is a program that manages the computer hardware.
• It acts as an intermediary between the user of a computer and the computer hardware.
• The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner.
• Some operating systems are designed to be convenient, others to be efficient, and others some combination of the two.
  Efficient - optimum resource utilization
  Convenient - user interaction is simple
• An operating system is an important part of almost every computer system.
• The hardware — the central processing unit (CPU), the memory, and the input/output (I/O) devices — provides the basic computing resources.
• The application programs — such as word processors, spreadsheets, compilers, and web browsers — define the ways in which these resources are used to solve the computing problems of the users.
• The operating system allocates hardware resources for running the application programs, i.e., the OS acts as a resource allocator.
• The operating system controls and coordinates the use of the hardware among the various application programs for the various users.
TYPES OF OPERATING SYSTEMS (OS)

1. BATCH SYSTEMS
• Here the user did not interact directly with the computer system. Rather, the user prepared a job and submitted it to the computer operator.
• The job was usually in the form of punch cards.
• At some later time (after minutes, hours, or days), the output appeared.
• The major task of the OS was to transfer control automatically from one job to the next.
• To speed up processing, operators batched together jobs with similar needs and ran them through the computer as a group.
• The operator would sort programs into batches with similar requirements and, as the computer became available, would run each batch.
• In this execution environment, the CPU is often idle, because the speeds of the mechanical I/O devices are slower than those of electronic devices.
• Using SPOOLing, the idle time of the CPU can be reduced.
• The common input devices are card readers and tape drives. The output devices are line printers, tape drives, and card punches.
• The introduction of disk technology allowed the operating system to keep all jobs on a disk, rather than in a serial card reader.
• With direct access to several jobs, the operating system could perform job scheduling, to use resources and perform tasks efficiently.

2. MULTIPROGRAMMED SYSTEMS
• Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to execute.
• The operating system keeps several jobs in memory simultaneously. This set of jobs is a subset of the jobs kept in the job pool.
• The operating system picks and begins to execute one of the jobs in memory.
• Eventually, the job may have to wait for some task, such as an I/O operation, to complete. In a multiprogramming system, the operating system simply switches to, and executes, another job.
• When that job needs to wait, the CPU is switched to another job, and so on. Eventually, the first job finishes waiting and gets the CPU back.
• As long as at least one job needs to execute, the CPU is never idle.

JOB POOL
• All the jobs that enter the system are kept in the job pool. This pool consists of all processes residing on disk awaiting allocation of main memory.
• If several jobs are ready to be brought into memory, and if there is not enough room for all of them, then the system must choose among them. Making this decision is job scheduling.
• When the operating system selects a job from the job pool, it loads that job into memory for execution.
• Having several programs in memory at the same time requires some form of memory management.
• If several jobs are ready to run at the same time, the system must choose among them. Making this decision is CPU scheduling.
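The switching behaviour described above can be sketched as a toy simulation, assuming invented job names, CPU-burst lengths, and a fixed I/O wait of two time units; this is an illustration of the idea, not how a real dispatcher is written:

```python
from collections import deque

IO_WAIT = 2  # assumed fixed I/O wait, in time units

def run(jobs):
    """Toy multiprogramming loop: jobs is a list of (name, cpu_bursts).
    When the running job starts I/O, the CPU switches to another job."""
    ready = deque(name for name, _ in jobs)
    bursts = {name: deque(b) for name, b in jobs}
    io = {}            # name -> time units of I/O remaining
    timeline = []      # which job held the CPU at each time unit
    while ready or io:
        # Advance I/O; jobs whose I/O finished become ready again.
        for name in list(io):
            io[name] -= 1
            if io[name] == 0:
                del io[name]
                ready.append(name)
        if ready:
            name = ready.popleft()          # pick a ready job
            timeline.append(name)           # it uses the CPU for 1 unit
            bursts[name][0] -= 1
            if bursts[name][0] == 0:        # current burst finished
                bursts[name].popleft()
                if bursts[name]:            # more bursts -> start I/O wait
                    io[name] = IO_WAIT
            else:
                ready.appendleft(name)      # non-preemptive: keep running
        else:
            timeline.append(None)           # CPU idle: every job is in I/O
    return timeline

print(run([("A", [2, 1]), ("B", [3])]))
```

With at least one runnable job at every step, no `None` (idle) entry appears in the timeline, matching the claim that the CPU is never idle while some job can execute.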
Multiprogramming requirements are:
• Protection and security
• Large memory
• Proper job mixing
• Job scheduling
• CPU scheduling
• Disk management
• Main memory management

3. TIME-SHARING SYSTEMS (MULTI-TASKING SYSTEMS)
• Time sharing (or multitasking) is a logical extension of multiprogramming.
• The CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each program while it is running.
• A time-shared operating system allows many users to share the computer simultaneously.
• Since each action or command in a time-shared system tends to be short, only a little CPU time is needed for each user.
• As the system switches rapidly from one user to the next, each user is given the impression that the entire computer system is dedicated to her use, even though it is being shared among many users.
• A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer.
• Each user has at least one separate program in memory. A program loaded into memory and executing is commonly referred to as a process.
• Time-sharing operating systems are even more complex than multiprogrammed operating systems. In both, several jobs must be kept simultaneously in memory, so the system must have memory management and protection.
• The main advantage is that the user gets a quick response.
• Multiprogramming is known as a non-preemptive system because a running job keeps the CPU until it finishes or moves to a waiting state for an I/O operation.
• A time-shared system is preemptive, so it is known as an interactive system: a preempted process does not go to a waiting state; when its time slice completes, it is moved back to the ready state through an interrupt.

4. MULTIPROCESSOR SYSTEMS
• Most systems to date are single-processor systems; that is, they have only one main CPU.
• Multiprocessor systems are also known as parallel systems or tightly coupled systems.
• Such systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices.

ADVANTAGES
1. Increased throughput: By increasing the number of processors, we hope to get more work done in less time. The speed-up ratio with N processors is not N; rather, it is less than N.
2. Economy of scale: Multiprocessor systems can save more money than multiple single-processor systems, because they can share peripherals, mass storage, and power supplies.
3. Increased reliability: If functions can be distributed properly among several processors, then the failure of one processor will not halt the system, only slow it down. If we have ten processors and one fails, then each of the remaining nine processors must pick up a share of the work of the failed processor. Thus, the entire system runs only 10 percent slower, rather than failing altogether.
• This ability to continue providing service proportional to the level of surviving hardware is called graceful degradation.
• Systems designed for graceful degradation are also called fault tolerant.
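The preemptive time slicing described above can be sketched as a simple round-robin loop, assuming invented job names, CPU demands, and a quantum of 2 units; real CPU schedulers are far more elaborate:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Round-robin sketch of time sharing: every job runs for at most one
    quantum; when the quantum expires, a (simulated) timer interrupt sends
    the job back to the ready queue and the next job gets the CPU."""
    ready = deque(jobs.items())   # (name, remaining CPU need)
    order = []                    # sequence of (job, units actually run)
    while ready:
        name, need = ready.popleft()
        run = min(quantum, need)
        order.append((name, run))
        if need > run:                        # preempted, not finished:
            ready.append((name, need - run))  # back to the ready state
    return order

print(round_robin({"A": 5, "B": 2, "C": 3}, quantum=2))
```

Each job reappears regularly, which is exactly what gives each user the impression of a dedicated machine.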
There are two types of multiprocessors.

a. Symmetric multiprocessing (SMP)
• Each processor runs an identical copy of the operating system, and these copies communicate with one another as needed.
• SMP means that all processors are peers; no master-slave relationship exists between processors.
• Each processor concurrently runs a copy of the operating system.
• The benefit of this model is that many processes can run simultaneously — N processes can run if there are N CPUs — without causing a significant deterioration of performance.
• Also, since the CPUs are separate, one may be sitting idle while another is overloaded, resulting in inefficiencies. These inefficiencies can be avoided if the processors share certain data structures.
• An example of an SMP system is Encore's version of UNIX for the Multimax computer.

b. Asymmetric multiprocessing
• Here each processor is assigned a specific task.
• A master processor controls the system; the other processors either look to the master for instruction or have predefined tasks.
• This scheme defines a master-slave relationship. The master processor schedules and allocates work to the slave processors.
• The difference between symmetric and asymmetric multiprocessing may be the result of either hardware or software.

5. CLUSTERED SYSTEMS
• Like parallel systems, clustered systems gather together multiple CPUs to accomplish computational work.
• Clustered systems differ from parallel systems, however, in that they are composed of two or more individual systems coupled together.
• Clustered computers share storage and are closely linked via LAN networking.
• Clustering is usually performed to provide high availability.
• A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the others (over the LAN).
• If the monitored machine fails, the monitoring machine can take ownership of its storage and restart the application(s) that were running on the failed machine.
• The failed machine can remain down, but the users and clients of the application would see only a brief interruption of service.

a. Symmetric clustering
• In symmetric mode, two or more hosts are running applications, and they are monitoring each other.
• This mode is obviously more efficient, as it uses all of the available hardware. It does require that more than one application be available to run.

b. Asymmetric clustering
• In asymmetric clustering, one machine is in hot-standby mode while the other is running the applications.
• The hot-standby host (machine) does nothing but monitor the active server. If that server fails, the hot-standby host becomes the active server.
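The hot-standby failover just described can be sketched as a tiny state machine, with invented node names and a boolean per monitoring interval standing in for the heartbeat; real cluster software (heartbeat daemons, storage fencing) is much more involved:

```python
# Minimal asymmetric-clustering sketch: the standby node watches the
# active node's heartbeat and promotes itself when the heartbeat stops.
def failover(active, standby, heartbeats):
    """heartbeats: one boolean per monitoring interval (True = alive)."""
    role = {active: "active", standby: "standby"}
    for beat in heartbeats:
        if not beat:                 # monitored machine failed
            role[active] = "down"
            role[standby] = "active"  # hot standby takes over
            break
    return role

print(failover("node1", "node2", [True, True, False]))
```

Clients of the application see only the brief gap between the missed heartbeat and the standby's promotion.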
• Since a cluster consists of several computer systems connected via a network, clusters can also be used to provide high-performance computing environments.
• Other forms of clusters include parallel clusters and clustering over a WAN.
• Parallel clusters allow multiple hosts to access the same data on the shared storage.

6. REAL-TIME SYSTEMS
• It is a special-purpose operating system.
• A real-time system is used when rigid time requirements have been placed on the operation of a processor.
• It is often used as a control device in a dedicated application.
• Sensors bring data to the computer. The computer must analyze the data and possibly adjust controls to modify the sensor inputs.
• A real-time system has well-defined, fixed time constraints. Processing must be done within the defined constraints, or the system will fail.
• A real-time system functions correctly only if it returns the correct result within its time constraints.
• Real-time systems come in two flavors: hard and soft.
• A hard real-time system guarantees that critical tasks are completed on time. Virtual memory is almost never found on real-time systems. Deadlines are supported, but it does not support advanced OS features or priority-based working.
• A less restrictive type of real-time system is a soft real-time system, where a critical real-time task gets priority over other tasks, and retains that priority until it completes. It does not offer deadline support, but it suits multimedia applications and supports advanced OS features.

KERNEL DATA STRUCTURES
• The kernel is the central component of an operating system that manages the operations of the computer and its hardware. It basically manages the operations of memory and CPU time.
• It is the core component of an operating system.
• The kernel loads first into memory when an operating system is loaded and remains in memory until the operating system is shut down.
• It is responsible for various tasks such as disk management, task management, and memory management.
• It decides which process should be allocated to the processor to execute and which process should be kept in main memory.
• Several fundamental data structures are used extensively in operating systems.
• An array is a simple data structure in which each element can be accessed directly.
• A list represents a collection of data values as a sequence. The most common method for implementing this structure is a linked list, in which items are linked to one another.
• Linked lists are of several types:
• In a singly linked list, each item points to its successor.
• In a doubly linked list, a given item can refer either to its predecessor or to its successor.
• In a circularly linked list, the last element in the list refers to the first element, rather than to null.
• Linked lists accommodate items of varying sizes and allow easy insertion and deletion of items.
• A stack is a sequentially ordered data structure that uses the last in, first out (LIFO) principle for adding and removing items, meaning that the last item placed onto a stack is the first item removed.
• The operations for inserting and removing items from a stack are known as push and pop, respectively.
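A singly linked list of the kind described above can be sketched as follows; the class and method names are my own, chosen for illustration, not taken from any particular kernel:

```python
# Minimal singly linked list: each node holds a value and a pointer
# to its successor; the last node points to None (null).
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        """O(1) insertion: the new node becomes the head."""
        self.head = Node(value, self.head)

    def remove(self, value):
        """Unlink the first node holding `value`; return True if found."""
        prev, cur = None, self.head
        while cur:
            if cur.value == value:
                if prev:
                    prev.next = cur.next
                else:
                    self.head = cur.next
                return True
            prev, cur = cur, cur.next
        return False

    def to_list(self):
        out, cur = [], self.head
        while cur:
            out.append(cur.value)
            cur = cur.next
        return out

lst = SinglyLinkedList()
for v in (3, 2, 1):
    lst.push_front(v)
print(lst.to_list())   # head-to-tail order
lst.remove(2)
print(lst.to_list())
```

Note how removal only rewires one `next` pointer, which is why linked lists allow easy insertion and deletion compared with shifting array elements.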
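The push and pop operations can be sketched in a few lines, here backed by a Python list; the call-frame strings are invented to mirror the function-call use of a stack:

```python
# LIFO stack: push adds to the top, pop removes the most recent item.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)   # top of stack = end of list

    def pop(self):
        return self._items.pop()   # raises IndexError if empty

# Mimic a function call: push parameters, locals, and the return address...
s = Stack()
for frame_item in ("param=42", "local=x", "return_addr"):
    s.push(frame_item)
# ...then pop them off when the call returns, in reverse order.
print(s.pop())   # the last item pushed comes off first
```

The reverse-order pops are exactly why a stack matches function-call nesting: the most recently entered call is the first one to return.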
• An operating system often uses a stack when invoking function calls. Parameters, local variables, and the return address are pushed onto the stack when a function is called; returning from the function call pops those items off the stack.
• A queue, in contrast, is a sequentially ordered data structure that uses the first in, first out (FIFO) principle: items are removed from a queue in the order in which they were inserted.
• Queues are also quite common in operating systems — jobs that are sent to a printer are typically printed in the order in which they were submitted, for example.
• A tree is a data structure that can be used to represent data hierarchically. Data values in a tree structure are linked through parent-child relationships.
• In a general tree, a parent may have an unlimited number of children.
• In a binary tree, a parent may have at most two children, which we term the left child and the right child. A binary search tree additionally requires an ordering between the parent's two children in which left child <= right child.
• A hash function takes data as its input, performs a numeric operation on this data, and returns a numeric value. This numeric value can then be used as an index into a table (typically an array) to quickly retrieve the data.
• Whereas searching for a data item through a list of size n can require up to O(n) comparisons in the worst case, using a hash function for retrieving data from a table can be as good as O(1) in the worst case, depending on implementation details. Because of this performance, hash functions are used extensively in operating systems.
• A bitmap is a string of n binary digits that can be used to represent the status of n items.
• For example, suppose we have several resources and the availability of each resource is indicated by the value of a binary digit: 0 means that the resource is available, while 1 indicates that it is unavailable (or vice versa).
• The value of the ith position in the bitmap is associated with the ith resource.
• As an example, consider the bitmap shown below:
  001011101
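The FIFO printer-queue behaviour described earlier can be sketched with `collections.deque`; the job names are invented for illustration:

```python
from collections import deque

# FIFO print queue: jobs are printed in the order they were submitted.
print_queue = deque()
for job in ("report.pdf", "slides.ppt", "notes.txt"):
    print_queue.append(job)        # enqueue at the tail

printed = []
while print_queue:
    printed.append(print_queue.popleft())  # dequeue from the head

print(printed)   # same order as submission
```

`deque` gives O(1) appends and pops at both ends, which is why it is the idiomatic queue in Python rather than popping from the front of a plain list.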
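The bitmap 001011101 from the example can be queried and updated as a sketch like this, treating digit i (counting from the left) as resource i, with 1 = unavailable as in the text; a kernel would use integer bit operations rather than strings, which are used here only for readability:

```python
# Resource-availability bitmap: 0 = available, 1 = unavailable.
bitmap = "001011101"

def is_available(bm, i):
    """True if the ith resource (0-based, leftmost digit first) is free."""
    return bm[i] == "0"

def acquire(bm, i):
    """Mark resource i unavailable; return the updated bitmap string."""
    return bm[:i] + "1" + bm[i + 1:]

free = [i for i in range(len(bitmap)) if is_available(bitmap, i)]
print(free)                 # indices of the available resources
print(acquire(bitmap, 0))   # bitmap after claiming resource 0
```

One bit per resource is what makes bitmaps so compact: the status of n items fits in n bits, which is why they are widely used for tracking things like free disk blocks.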