Computer Operating System Lecture Notes
Every computer must have an operating system to run other programs. The operating
system coordinates the use of the hardware among the various system programs
and application programs for the various users. It simply provides an environment within
which other programs can do useful work.
The operating system is a set of special programs that run on a computer system and
allow it to work properly. It performs basic tasks such as recognizing input from the
keyboard, keeping track of files and directories on the disk, sending output to the
display screen and controlling peripheral devices.
An OS is designed to serve two basic purposes:
1. It controls the allocation and use of the computing system's resources among the
various users and tasks.
2. It provides an interface between the computer hardware and the programmer that
simplifies and makes feasible the coding, creation and debugging of application programs.
The operating system must support the following tasks:
1. Provide the facilities to create and modify program and data files using an editor.
2. Provide access to the compiler for translating the user program from a high-level
language to machine language.
3. Provide a loader program to move the compiled program code into the computer's
memory for execution.
4. Provide routines that handle the details of I/O programming.
1.8 Compiler
High-level languages – examples are FORTRAN, COBOL, ALGOL and PL/I – are
processed by compilers and interpreters. A compiler is a program that accepts a source
program in a high-level language and produces a corresponding object program. An
interpreter is a program that appears to execute a source program as if it were machine
language. The same name (FORTRAN, COBOL, etc.) is often used to designate both a
compiler and its associated language.
1.9 Loader
A loader is a routine that loads an object program and prepares it for execution. There
are various loading schemes: absolute, relocating and direct-linking. In general, the
loader must load, relocate, and link the object program. Loader is a program that
places programs into memory and prepares them for execution. In a simple loading
scheme, the assembler outputs the machine language translation of a program on a
secondary device and a loader is placed in core. The loader places into memory the
machine language version of the user‘s program and transfers control to it. Since the
loader program is much smaller than the assembler, this makes more core services
available to user‘s program.
1.10 Kernel
CPUs typically have (at least) two execution modes: user mode and kernel mode. User
applications run in user mode. The heart of the operating system is called the kernel.
This is the collection of functions that perform the basic services such as scheduling
applications. The kernel runs in kernel mode. Kernel mode is also called supervisor
mode or privileged mode. A kernel can be contrasted with a shell, the outermost part
of an operating system that interacts with user commands. Kernel and shell are terms
used more frequently in Unix operating systems than in IBM mainframe or Microsoft
Windows systems.
The kernel typically makes these facilities available to application processes through
inter-process communication mechanisms and system calls. Operating system tasks are done
differently by different kernels, depending on their design and implementation.
While monolithic kernels execute all the operating system code in the same address
space to increase the performance of the system, microkernels run most of the
operating system services in user space as servers, aiming to improve maintainability
and modularity of the operating system. A range of possibilities exists between these
two extremes.
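As a small illustration of the user/kernel split (a sketch assuming a POSIX system; it is not drawn from these notes), the C program below runs entirely in user mode and obtains kernel services only through system calls such as write() and getpid():

/* Minimal illustration of user-mode code requesting kernel services.
 * write() and getpid() are thin library wrappers around system calls,
 * so each call crosses from user mode into kernel mode and back. */
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    const char msg[] = "hello from user mode\n";

    /* write() traps into the kernel, which performs the privileged I/O. */
    write(STDOUT_FILENO, msg, strlen(msg));

    /* getpid() is another system call: only the kernel keeps process IDs. */
    printf("my process id is %ld\n", (long)getpid());
    return 0;
}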
1.11 Activities
Point out the differences between the kernel of early operating systems and the kernel
of current operating systems.
Explain how the kernel of current operating systems makes it easier to add new
components or modify existing ones, which is useful for distributed systems.
1.12 Self Assessment
1. Define Operating System, Assembler, Loader, Compiler
2. Explain various functions of an operating system
3. Explain I/O system management
4. Outline the history of operating systems
1.13 Summary
Every general purpose computer consists of the hardware, the operating system, system
programs and application programs.
Assembler: Input to an assembler is an assembly language program. Output is an object
program plus information that enables the loader to prepare the object program for
execution.
A loader is a routine that loads an object program and prepares it for execution. There
are various loading schemes: absolute, relocating and direct-linking. In general, the
loader must load, relocate and link the object program.
A compiler is a program that accepts a source program in a high-level language and
produces a corresponding object program.
The kernel is the central component of most computer operating systems; it is a bridge
between applications and the actual data processing done at the hardware level.
An Operating system is concerned with the allocation of resources and services, such as
memory, processors, devices and information. The Operating System correspondingly
includes programs to manage these resources, such as a traffic controller, a scheduler,
memory management module, I/O programs, and a file system.
HBT 2102 Computer Operating System Task One
Attempt the following tasks in your respective groups
Presentation Date:
1. Define Operating System, Assembler, Loader, Compiler
2. Explain EIGHT functions of an operating system
3. With the aid of a diagram, explain the conceptual view of a Computer
system
4. Outline the history of operating systems
5. Differentiate between Multitasking, Multiprocessing & Multiprogramming
6. Write brief notes in relation to the following types of Operating Systems
a. Windows
b. Unix
c. Linux
d. Macintosh
e. Ms-Dos
Lecture Two
2.1 Introduction
Welcome to the second lecture on Operating Systems. In this lecture we shall examine the
principles of operating systems and operating system architecture.
2.4 Introduction
An operating system provides the environment within which programs are executed.
Internally, operating systems vary greatly in their makeup, since they are organized along
many different lines. The design of a new operating system is a major task. It is important
that the goals of the system be well defined before the design begins.
We can view an operating system from several viewpoints. One view focuses on the
services that the system provides; another, on the interface that it makes available to users
and programmers; a third, on its components and their interconnections.
2.5 Operating System Services
1. Program execution: The operating system must be able to load a program into memory
and run it; the program must be able to end its execution, either normally or abnormally.
2. I/O operation: I/O involves a file or a specific I/O device. A program may require an
I/O device while running, so the operating system must provide the required I/O.
3. File system manipulation: A program may need to read or write a file. The operating
system gives the program permission to operate on the file.
4. Communication: Data transfer between two processes is sometimes required. The two
processes may be on the same computer or on different computers connected through a
computer network. Communication may be implemented by two methods (see the
shared-memory sketch after this list of services):
a. Shared memory
b. Message passing.
5. Error detection: An error may occur in the CPU (central processing unit), in I/O devices
or in the memory hardware. The operating system constantly needs to be aware of possible
errors and should take the appropriate action to ensure correct and consistent computing.
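As an illustration of the shared-memory method mentioned under Communication above, here is a minimal C sketch (assuming a POSIX system with MAP_ANONYMOUS support, e.g. Linux; the example is added for clarity and is not part of the original list): a parent and its child share one anonymous memory region, the child writes a value into it and the parent reads it back.

/* Shared-memory communication between two related processes (POSIX). */
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* One shared integer, visible to both processes after fork(). */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED)
        return 1;
    *shared = 0;

    if (fork() == 0) {        /* child: writes into the shared region */
        *shared = 42;
        _exit(0);
    }

    wait(NULL);               /* parent: wait for the child, then read */
    printf("value received through shared memory: %d\n", *shared);
    munmap(shared, sizeof(int));
    return 0;
}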
1. Resource allocation: If there is more than one user or job running at the same time,
then resources must be allocated to each of them. The operating system manages different
types of resources; some, such as main memory, CPU cycles and file storage, require
special allocation code, while others require only general request and release code. For
allocating the CPU, CPU scheduling algorithms are used for better utilization; these
routines consider the speed of the CPU, the number of available registers and other
required factors.
2. Accounting: Logs of each user must be kept. It is also necessary to keep a record of how
much and what kinds of computer resources each user consumes. This log is used for
accounting purposes; the accounting data may be used for statistics or for billing, and also
to improve system efficiency.
3. File Management
6. Networking
7. Protection System
1. Monolithic architecture
2. Layered Architecture
The layered architecture of operating systems was developed in the 1960s. In this approach,
the operating system is broken up into a number of layers. The bottom layer (layer 0) is the
hardware layer and the highest layer (layer n) is the user interface layer, as shown in the
figure.
4. Client/Server Architecture
This is a trend in modern operating systems whereby as much code as possible is moved into
the higher levels so as to remove as much as possible from the kernel, minimizing its work.
The basic approach is to implement most of the operating system functions in user
processes. To request a service, for instance to read a particular file, the user (client)
process sends a request to the server process; the server checks the parameters and finds
whether the request is valid or not, then does the work and sends back the answer. The
client/server model thus works on a request-response technique: the client sends a request
to the server side in order to perform a task, and the server acts on that request and sends
back a response. The figure below shows the client/server architecture.
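To make the request-response idea concrete, the following C sketch (an illustration only, assuming a POSIX system; the request and reply structures are hypothetical message formats, and a real microkernel would use kernel-level message passing rather than pipes) shows a client sending a request, and a server checking it, doing the work and replying:

/* Toy client/server request-response exchange over two pipes (POSIX). */
#include <unistd.h>
#include <sys/wait.h>
#include <stdio.h>

struct request { int op; int value; };   /* hypothetical message formats */
struct reply   { int ok; int result; };

int main(void)
{
    int to_server[2], to_client[2];
    pipe(to_server);
    pipe(to_client);

    if (fork() == 0) {                        /* server process */
        struct request rq;
        read(to_server[0], &rq, sizeof rq);   /* receive the request */
        struct reply rp = { rq.op == 1, rq.value * 2 };  /* check, then do the work */
        write(to_client[1], &rp, sizeof rp);  /* send the response */
        _exit(0);
    }

    struct request rq = { 1, 21 };            /* client: op 1 means "double" */
    write(to_server[1], &rq, sizeof rq);

    struct reply rp;
    read(to_client[0], &rp, sizeof rp);
    printf("server replied: ok=%d result=%d\n", rp.ok, rp.result);
    wait(NULL);
    return 0;
}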
Due to the lack of a protection scheme, one batch job can affect pending jobs.
2.9 Multiprogramming
When two or more programs are in memory at the same time, sharing the processor, this is
referred to as multiprogramming. Multiprogramming assumes a single processor that is
being shared. It increases CPU utilization by organizing jobs so that the CPU always has
one to execute.
Fig. 2.2 shows the memory layout for a multiprogramming system.
The operating system keeps several jobs in memory at a time. This set of jobs is a subset of
the jobs kept in the job pool. The operating system picks and begins to execute one of the
jobs in memory.
Multiprogrammed systems provide an environment in which the various system resources
are utilized effectively, but they do not provide for user interaction with the computer
system.
Jobs entering the system are kept in memory. The operating system picks a job and begins
to execute one of the jobs in memory. Having several programs in memory at the same
time requires some form of memory management.
A multiprogramming operating system monitors the state of all active programs and system
resources. This ensures that the CPU is never idle unless there are no jobs.
2.10 Spooling
Spooling is an acronym for simultaneous peripheral operations on-line. Spooling refers to
putting jobs in a buffer, a special area in memory or on a disk, where a device can access
them when it is ready.
Spooling is useful because devices access data at different rates. The buffer provides a
waiting station where data can rest while the slower device catches up.
Because the computer can perform I/O in parallel with computation, it becomes possible to
have the computer read a deck of cards to a tape, drum or disk and to write out to a tape or
printer while it is computing. This process is called spooling.
The most common spooling application is print spooling. In print spooling, documents are
loaded into a buffer and then the printer pulls them off the buffer at its own rate.
Spooling is also used for processing data at remote sites. The CPU sends the data via
communications path to a remote printer. Spooling overlaps the I/O of one job with the
computation of other jobs.
One difficulty with simple batch systems is that the computer still needs to read the decks
of cards before it can begin to execute the job. This means that the CPU is idle during
these relatively slow operations.
Spooling batch systems were the first and are the simplest of the multiprogramming
systems.
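As a toy illustration of the buffering idea only (the names spool_put and printer_drain are invented for this sketch; a real spooler is an operating system service that buffers on disk and drives the device asynchronously), the C fragment below queues print jobs in a buffer and lets the "printer" drain them at its own rate:

/* A tiny in-memory "spool": jobs are queued quickly, drained slowly. */
#include <stdio.h>

#define SPOOL_SIZE 8

static const char *spool[SPOOL_SIZE];   /* the buffer (waiting station) */
static int head = 0, tail = 0;

static void spool_put(const char *doc)  /* fast side: enqueue a print job */
{
    spool[tail++ % SPOOL_SIZE] = doc;
}

static void printer_drain(void)         /* slow side: print at its own rate */
{
    while (head < tail)
        printf("printing: %s\n", spool[head++ % SPOOL_SIZE]);
}

int main(void)
{
    spool_put("report.txt");            /* jobs are accepted immediately */
    spool_put("letter.txt");
    printer_drain();                    /* the device catches up later */
    return 0;
}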
2.11 Multitasking
Multitasking refers to a system in which multiple jobs are executed by the CPU
simultaneously by switching between them. Switches occur so frequently that the users may
interact with each program while it is running. The operating system does the following
activities related to multitasking:
The user gives instructions to the operating system or to a program directly, and
receives an immediate response.
The operating system handles multitasking in such a way that it can handle multiple
operations and execute multiple programs at a time.
Multitasking Operating Systems are also known as Time-sharing systems.
These Operating Systems were developed to provide interactive use of a
computer system at a reasonable cost.
A time-shared operating system uses the concepts of CPU scheduling and
multiprogramming to provide each user with a small portion of a time-shared
CPU.
Each user has at least one separate program in memory.
2.12 Summary
An operating system provides services to programs and to the users of those
programs. It provides an environment for the execution of programs. The services
provided differ from one operating system to another. The operating system makes the
programming task easier.
Operating systems can be classified based on their structuring mechanism. Some of the
main structures used in operating systems are: monolithic systems, layered systems,
virtual machines, and client/server (a.k.a. microkernel) systems.
A batch operating system is one where programs and data are collected together in a
batch before processing starts. In a batch operating system, memory is usually divided
into two areas: the operating system area and the user program area.
Time sharing, or multitasking, is a logical extension of multiprogramming. Multiple
jobs are executed by the CPU switching between them, but the switches occur so
frequently that the users may interact with each program while it is running.
When two or more programs are in memory at the same time, sharing the processor, this
is referred to as multiprogramming.
Spooling is useful because devices access data at different rates. The buffer provides
a waiting station where data can rest while the slower device catches up.
LECTURE FOUR
4.0 Introduction
Welcome to the fourth lecture on operating systems. In this lecture we shall cover the basics of
process synchronisation and inter-process communication. We will introduce the concepts of mutual
exclusion, semaphores and monitors, which will be covered in more detail in the lecture on
concurrency control.
When multiple processes access shared data without access control, the final result depends on
the execution order, creating what we call race conditions. This is a serious problem for any
concurrent system using shared variables. We need access control using code sections that are
executed atomically. An atomic operation is one that completes in its entirety without interruption.
Multiple cooperating threads introduce the potential for new synchronization problems in software
implementations: deadlock, critical sections and non-determinacy.
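The following minimal C sketch (assuming POSIX threads; compile with cc -pthread; it is added here for illustration) demonstrates such a race condition: two threads increment a shared counter without any access control, so the final value depends on how the threads interleave and is usually less than expected.

/* Race condition: an unprotected shared counter updated by two threads. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static long counter = 0;                 /* shared data, no access control */

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < N; i++)
        counter++;                       /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("expected %d, got %ld\n", 2 * N, counter);  /* result varies per run */
    return 0;
}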
A mutex comes into the picture when two threads work on the same data at the same time. It acts as
a lock and is the most basic synchronization tool. When a thread tries to acquire a mutex, it gains the
mutex if it is available; otherwise the thread is put to sleep. Mutual exclusion reduces
latency and busy-waiting by using queuing and context switches. A mutex can be enforced at both
the hardware and software levels.
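A minimal software-level sketch of a mutex, again assuming POSIX threads (an illustration rather than a prescribed implementation): protecting the same shared counter with pthread_mutex_lock/unlock turns the increment into a critical section and removes the race shown above.

/* The same counter, now protected by a mutex: the final value is always 2*N. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < N; i++) {
        pthread_mutex_lock(&lock);       /* sleeps if another thread holds it */
        counter++;                       /* critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected %d)\n", counter, 2 * N);
    return 0;
}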
4.2.3 Semaphores
In software, a semaphore is an OS abstract data type that performs operations similar to the traffic
light. The semaphore will allow one process to control the shared resource while the other process
waits for the resource to be released. Before discussing the principles of operation of semaphores,
let’s consider the assumptions under which semaphores operate. An acceptable solution to the critical
section problem has these requirements:
Only one process at a time should be allowed to be executing in its critical section, i.e. mutual
exclusion.
If a critical section is free and a set of processes all want to enter the critical section, then the
decision about which process should be chosen to enter the critical section should be made by
the collection of processes instead of by an external agent such as a scheduler.
If a process attempts to enter its critical section and the critical section becomes available,
then the waiting process cannot be blocked from entering the critical section for an indefinite
period of time.
Once a process attempts to enter its critical section, then it cannot be forced to wait for more
than a bounded number of other processes to enter the critical section before it is allowed to
do so.
A semaphore is a simpler construct than a monitor because it’s just a lock that protects a shared
resource – and not a set of routines like a monitor. The application must acquire the lock before using
that shared resource protected by a semaphore. A mutex is the most basic type of semaphore. In a
mutex, only one thread can use the shared resource at a time. If another thread wants to use the shared
resource, it must wait for the owning thread to release the lock.
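The sketch below shows a binary semaphore guarding a shared resource, assuming POSIX semaphores (<semaphore.h>) and threads, e.g. on Linux; it is an illustration of the idea rather than the only way to use semaphores. The semaphore is initialised to 1, so the first sem_wait() succeeds and any other thread blocks until sem_post() releases the resource.

/* Binary semaphore protecting a shared resource (POSIX). */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t sem;
static int shared_resource = 0;

static void *worker(void *arg)
{
    (void)arg;
    sem_wait(&sem);              /* acquire: blocks while the resource is busy */
    shared_resource++;           /* critical section */
    sem_post(&sem);              /* release: lets one waiting thread proceed */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    sem_init(&sem, 0, 1);        /* 0 = shared between threads, initial count 1 */
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("shared_resource = %d\n", shared_resource);
    sem_destroy(&sem);
    return 0;
}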
4.2.4 Monitors
A monitor is a set of multiple routines which are protected by a mutual exclusion lock. None of the
routines in the monitor can be executed by a thread until that thread acquires the lock. This means
that only ONE thread can execute within the monitor at a time. Any other threads must wait for the
thread that’s currently executing to give up control of the lock.
However, a thread can actually suspend itself inside a monitor and then wait for an event to occur. If
this happens, then another thread is given the opportunity to enter the monitor. The thread that was
suspended will eventually be notified that the event it was waiting for has now occurred, which
means it can wake up and reacquire the lock.
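A monitor can be approximated by hand with a mutex and condition variables. The sketch below (assuming POSIX threads; monitor_put and monitor_get are names invented for this example) builds a one-slot buffer monitor: only one thread runs inside the monitor at a time, and a thread can suspend itself inside the monitor with pthread_cond_wait() until the event it needs has occurred.

/* Hand-built monitor for a one-slot buffer (POSIX threads). */
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static int slot, full = 0;

void monitor_put(int value)               /* monitor routine: producer side */
{
    pthread_mutex_lock(&m);               /* enter the monitor */
    while (full)
        pthread_cond_wait(&not_full, &m); /* suspend, releasing the lock */
    slot = value;
    full = 1;
    pthread_cond_signal(&not_empty);      /* wake a waiting consumer */
    pthread_mutex_unlock(&m);             /* leave the monitor */
}

int monitor_get(void)                     /* monitor routine: consumer side */
{
    pthread_mutex_lock(&m);
    while (!full)
        pthread_cond_wait(&not_empty, &m);
    full = 0;
    pthread_cond_signal(&not_full);
    int value = slot;
    pthread_mutex_unlock(&m);
    return value;
}

static void *producer(void *arg) { (void)arg; monitor_put(7); return NULL; }

int main(void)                            /* main thread acts as the consumer */
{
    pthread_t p;
    pthread_create(&p, NULL, producer, NULL);
    int v = monitor_get();
    pthread_join(p, NULL);
    return v == 7 ? 0 : 1;
}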
Inter-process communication (IPC) functions in the OS provide the solution: a process
(scheduler, task or ISR) generates some information, by a signal (to start another process) or a
value (for example, of a semaphore), or generates an output, so that it lets another process take
note of it or use it through the kernel functions for IPC.
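As a small illustration of such notification between processes (a sketch assuming POSIX signals, added for clarity; signals are only one of several IPC mechanisms), the C program below has a child process signal its parent with SIGUSR1 once its work is done, while the parent waits in pause() for the notification:

/* One process notifying another of an event via a signal (POSIX). */
#include <signal.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

static volatile sig_atomic_t notified = 0;

static void on_usr1(int sig)
{
    (void)sig;
    notified = 1;                        /* record that the event occurred */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_usr1;
    sigaction(SIGUSR1, &sa, NULL);       /* install the handler before forking */

    pid_t parent = getpid();
    if (fork() == 0) {                   /* child: does some work, then notifies */
        sleep(1);                        /* pretend to compute something */
        kill(parent, SIGUSR1);           /* the notification goes via the kernel */
        _exit(0);
    }

    while (!notified)
        pause();                         /* parent sleeps until a signal arrives */
    printf("parent: received notification from child\n");
    return 0;
}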
4.4 Summary
Process synchronization or coordination seeks to make sure that concurrent processes or
threads do not interfere with each other when accessing or changing shared resources.
Segments of code that touch shared resources are called critical sections; no two
processes should be in their critical sections at the same time.
The existence of critical sections creates an environment in which a new, subtle problem
can occur: deadlock.
A mutual exclusion (mutex) is a program object that prevents simultaneous access to a
shared resource.
A semaphore is a simpler construct than a monitor because it’s just a lock that protects a
shared resource – and not a set of routines like a monitor
A monitor is a set of multiple routines which are protected by a mutual exclusion lock.
Inter-processor communication in a multiprocessor system is used to generate information
about certain sets of computations finishing on one processor and to let the other
processors that are waiting for those computations take note of the information.
4.5 References
1. Nutt, G. (2003). Operating Systems, Third Edition. Addison-Wesley. Chapter Eight:
Basic Synchronization Principles.