Unit 3 Introduction To Operating System Concepts
INTRODUCTION
Operating System:
An Operating System is a program that manages the computer hardware. It also provides a basis for
application programs and acts as an intermediary between the computer user and the computer hardware.
An amazing aspect of operating systems is how they vary in accomplishing these tasks. Mainframe
operating systems are designed primarily to optimize utilization of hardware. Personal computer (PC)
operating systems support complex games, business applications, and everything in between. Operating
systems for mobile computers provide an environment in which a user can easily interface with the
computer to execute programs. Thus, some operating systems are designed to be convenient, others to be
efficient, and others to be some combination of the two.
In other cases, a user sits at a terminal connected to a mainframe or a minicomputer. Other users are
accessing the same computer through other terminals. These users share resources and may exchange
information. The operating system in such cases is designed to maximize resource utilization to assure
that all available CPU time, memory, and I/O are used efficiently.
In still other cases, users sit at workstations connected to networks of other workstations and servers.
These users have dedicated resources at their disposal, but they also share resources such as networking
and servers, including file, compute, and print servers. Therefore, their operating system is designed to
compromise between individual usability and resource utilization.
Most mobile computers are standalone units for individual users. They are connected to networks
through cellular or other wireless technologies. Increasingly, these mobile devices are replacing desktop
and laptop computers for people who are primarily interested in using computers for e-mail and web
browsing.
Systems View:
From the computer's point of view, the operating system can be defined as:
Resource Allocator
Control Program
1. Resource Allocator: A computer system has many resources that may be required to solve a
problem: CPU time, memory space, file-storage space, I/O devices, and so on. The operating
system acts as the manager of these resources. Facing numerous and possibly conflicting requests
for resources, the operating system must decide how to allocate them to specific programs and
users so that it can operate the computer system efficiently and fairly.
2. Control Program: It manages the execution of the user programs to prevent errors and improper
use of the computer. It is especially concerned with the operation and control of I/O devices.
TYPES OF OPERATING SYSTEM:
Several different types of operating systems exist. They are:
1. Mainframe systems
2. Multiprocessor systems
3. Clustered systems
4. Distributed systems
5. Real time systems
1. Mainframe Systems: These are the first computers used to tackle many commercial and scientific
applications.
Batch Systems: Early computers were enormous machines run from a console. The
common input devices were card readers and tape drives; the common output devices
were line printers, tape drives, and card punches. The user prepared a job, which
consisted of the program, the data, and some control information, and submitted it to the
system; some time later the output appeared. The output consisted of the result of the
program as well as a dump of the final memory and register contents for debugging.
To speed up processing, operators batched together jobs with similar needs and ran them
through the computer as a group. The programmer left the program with the operator; the
operator grouped the programs into batches with similar requirements and, as the
computer became available, ran each batch.
(Figure: memory layout of a simple batch system, showing the operating system and the user program area.)
In time sharing systems, the operating system must ensure reasonable response time. This
is sometimes accomplished through swapping, where processes are swapped in and out of
main memory to the disk. A more common method is virtual memory, a technique that
allows the execution of a process that is not completely in memory, so users can run
programs that are larger than the actual physical memory.
Time sharing systems must also provide a file system. The file system resides on a
collection of disks, so disk management must be provided. Such systems also provide a
mechanism for concurrent execution, which requires sophisticated CPU scheduling schemes.
2. Multiprocessor Systems: These are also known as parallel systems or multi-core systems. Such
systems have two or more processors in close communication, sharing the computer bus and
sometimes the clock, memory and peripheral devices. They generally have three main advantages:
Increased Throughput: By increasing the number of processors, more work is done in less
time. When multiple processors cooperate on a task, however, a certain amount of
overhead is incurred in keeping all the parts working correctly. This overhead lowers the
expected gain from additional processors.
Economy of Scale: Multiprocessor systems can cost less than equivalent multiple single-
processor systems, because they can share peripherals, mass storage, and power supplies.
If several programs operate on the same set of data, it is cheaper to store those data on one
disk and share them.
Increased Reliability: If functions can be distributed properly among several processors, then
the failure of one processor will not halt the system, only slow it down. If we have ten
processors and one fails, then each of the remaining nine processors can pick up a share of the
work of the failed processor.
The multiprocessor systems in use are of two types: asymmetric and symmetric.
In an asymmetric multiprocessing (ASMP) system each processor is assigned a specific task. A master
processor controls the system; the other processors either look to the master for instructions or have
predefined tasks. This scheme defines a master-slave relationship: the master processor schedules and
allocates work to the slave processors.
In a symmetric multiprocessing (SMP) system each processor runs an identical copy of the operating
system, and these copies communicate with one another as needed. Fig 1.2 represents the SMP
architecture. Each processor has its own set of registers, as well as a private or local cache.
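As an illustration (not part of the original text), the following minimal C sketch, assuming a Linux/glibc system, asks the operating system how many processors are currently online on such a multiprocessor machine:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Number of processors currently online; -1 on failure. */
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    if (n < 0) {
        perror("sysconf");
        return 1;
    }
    printf("This system has %ld online processor(s)\n", n);
    return 0;
}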
5. Real Time Systems: A real time system is used when rigid time requirements have been placed on
the operation of the processor or the flow of data. Thus it is often used as a control device in
dedicated applications. Sensors bring data to the computer. The computer must analyze the data
and possibly adjust controls to modify the sensor input. Systems that control scientific
experiments, industrial control systems and certain display systems are real time systems.
A real time system has well defined fixed time constraints. Processing must be done within the
defined constraints, or the system will fail. A real time system functions correctly only if it returns
the correct result within its time constraints.
Real time systems are basically classified into two types. They are:
a. Hard Real Time Systems: These guarantee that critical tasks are completed on time. In such
systems secondary storage is limited or missing, and the data is stored in ROM. Virtual
memory is not found on hard real time systems. Hard real time therefore conflicts with the
operation of time sharing systems, and the two cannot be mixed.
b. Soft Real Time Systems: These are less restrictive. A critical real time task gets priority
over other tasks and retains that priority until it completes. They have more limited utility
than hard real time systems and are risky to use in industrial applications where fixed time
constraints are imposed. They are useful in several areas such as multimedia, virtual
reality, etc.
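On Linux-like systems, soft real time priority can be requested through the POSIX scheduling interface. The following minimal C sketch is illustrative only: the priority value 50 is an arbitrary example, and superuser privileges are normally required. It asks for the fixed-priority SCHED_FIFO class, so the task takes priority over ordinary time-shared tasks:

#include <sched.h>
#include <stdio.h>

int main(void)
{
    /* Priority 50 is an arbitrary example value (Linux allows 1-99). */
    struct sched_param param = { .sched_priority = 50 };

    /* pid 0 means "the calling process"; usually requires root privileges. */
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("running with fixed (soft real time) priority\n");
    return 0;
}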
OPERATING SYSTEM OPERATIONS:
Modern operating systems are interrupt driven. Events are signaled by the occurrence of an interrupt or
a trap. A trap (or an exception) is a software-generated interrupt caused either by an error (for example,
division by zero or invalid memory access) or by a specific request from a user program that an
operating-system service be performed. For each type of interrupt, separate segments of code in the
operating system determine what action should be taken. An interrupt service routine is provided to deal
with the interrupt.
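On UNIX-like systems, a trap caused by an error in a user program is reported back to that program as a signal. The following minimal C sketch is illustrative; it simulates the trap with raise() rather than performing an actual division by zero, and simply installs a handler for the arithmetic trap SIGFPE:

#include <signal.h>
#include <stdio.h>

/* Handler invoked when the arithmetic trap is delivered as SIGFPE. */
static void on_trap(int sig)
{
    printf("caught signal %d (arithmetic trap)\n", sig);
}

int main(void)
{
    signal(SIGFPE, on_trap);  /* register a handler for the trap */
    raise(SIGFPE);            /* simulate the trap; a real division by zero
                                 would be raised by the hardware instead */
    printf("execution continues after the handler returns\n");
    return 0;
}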
Dual Mode and Multimode Operation:
In order to ensure the proper execution of the operating system, we must be able to distinguish between
the execution of operating-system code and user-defined code. The approach taken by most computer
systems is to provide hardware support that allows us to differentiate among various modes of
execution. In general there are two different modes:
User Mode
Kernel Mode (also called supervisor mode, system mode or privileged mode)
A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or
user (1). When the computer system is executing on behalf of a user application, the system is in user
mode. However, when a user application requests a service from the operating system, the system must
transition from user to kernel mode to fulfill the request. This is shown in figure 1.3.
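As a minimal illustration, assuming a UNIX-like system, the following C program runs in user mode; each call to write() is a system call that causes the transition to kernel mode, after which control returns to the program in user mode:

#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "hello from user mode\n";

    /* write() traps into the kernel: the mode bit switches to kernel (0),
       the operating system performs the I/O, and the mode bit switches
       back to user (1) when the call returns. */
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}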
OPERATING SYSTEM SERVICES:
Figure 1.4 shows one view of the various operating system services:
5. Communications: There are many circumstances in which one process needs to exchange
information with another process. Such communication may occur between processes that are
executing on the same computer or between processes that are executing on different computer
systems tied together by a computer network. Communications may be implemented via shared
memory, in which two or more processes read and write to a shared section of memory, or
message passing, in which packets of information in predefined formats are moved between
processes by the operating system.
6. Error Detection: The operating system needs to detect and correct errors constantly.
Errors may occur in the CPU and memory hardware (such as a memory error or a power failure),
in I/O devices (such as a parity error on disk, a connection failure on a network, or lack of paper in
the printer), and in the user program (such as an arithmetic overflow, an attempt to access an
illegal memory location, or a too-great use of CPU time). For each type of error, the operating
system should take the appropriate action to ensure correct and consistent computing.
Another set of operating system functions exists not for helping the user but rather for ensuring the
efficient operation of the system itself. Systems with multiple users can gain efficiency by sharing
the computer resources among the users.
7. Resource Allocation: When there are multiple users or multiple jobs running at the same time,
resources must be allocated to each of them. The operating system manages many different types
of resources. Some (such as CPU cycles, main memory, and file storage) may have special
allocation code, whereas others (such as I/O devices) may have much more general request and
release code. For instance, in determining how best to use the CPU, operating systems have CPU-
scheduling routines that take into account the speed of the CPU, the jobs that must be executed, the
number of registers available, and other factors. There may also be routines to allocate printers,
USB storage drives, and other peripheral devices.
8. Accounting: At times the system needs to keep track of which users use how much and what kinds
of computer resources. This record keeping may be used for accounting (so that users can be
billed) or simply for accumulating usage statistics.
9. Protection and Security: The owners of information stored in a multiuser or networked computer
system may want to control use of that information. When several separate processes execute
concurrently, it should not be possible for one process to interfere with the others or with the
operating system itself. Protection involves ensuring that all access to system resources is
controlled. Security of the system from outsiders is also important. Such security starts with
requiring each user to authenticate himself or herself to the system, usually by means of a
password, to gain access to system resources. If a system is to be protected and secure, precautions
must be instituted throughout it.
INTRODUCTION TO SYSTEM CALLS:
System Calls: System calls provide an interface to the services made available by an operating system.
These calls are generally available as routines written in C and C++, although certain low-level tasks
may have to be written using assembly-language instructions.
Example on How System calls are used:
Writing a simple program to read data from one file and copy them to another file.
1. The first input that the program will need is the names of the two files: the input file and the output
file. These names can be specified in many ways, depending on the operating-system design. One
approach is for the program to ask the user for the names. In an interactive system, this approach
will require a sequence of system calls, first to write a prompting message on the screen and then
to read from the keyboard the characters that define the two files.
2. Once the two file names have been obtained, the program must open the input file and create the
output file. Each of these operations requires another system call. Possible error conditions for
each operation can require additional system calls.
3. When both files are set up, we enter a loop that reads from the input file (a system call) and writes
to the output file (another system call). Each read and write must return status information
regarding various possible error conditions. On input, the program may find that the end of the file
has been reached or that there was a hardware failure in the read (such as a parity error). The write
operation may encounter various errors; depending on the output device (for example, no more
disk space).
4. Finally, after the entire file is copied, the program may close both files (another system call), write
a message to the console or window (more system calls), and finally terminate normally (the final
system call). This system-call sequence is shown in Figure 1.5.
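The following C sketch, assuming POSIX system calls (open(), read(), write(), close()), shows the same copy sequence; for brevity the file names input.txt and output.txt are hard-coded instead of being prompted for, and the error handling is minimal:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    ssize_t n;

    int in  = open("input.txt", O_RDONLY);                            /* open the input file    */
    int out = open("output.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644); /* create the output file */
    if (in == -1 || out == -1) {
        perror("open");
        return 1;
    }

    while ((n = read(in, buf, sizeof buf)) > 0)      /* read: one system call per block */
        if (write(out, buf, (size_t)n) != n) {       /* write: another system call      */
            perror("write");
            return 1;
        }
    if (n == -1)
        perror("read");                              /* e.g. a hardware failure         */

    close(in);                                       /* close both files                */
    close(out);
    return 0;
}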
Process Control:
A running program needs to be able to halt its execution either normally (end()) or abnormally (abort()).
If a system call is made to terminate the currently running program abnormally, or if the program runs
into a problem and causes an error trap, a dump of memory is sometimes taken and an error message
generated. The dump is written to disk and may be examined by a debugger: a system program
designed to aid the programmer in finding and correcting errors or bugs to determine the cause of the
problem. Under either normal or abnormal circumstances, the operating system must transfer control to
the invoking command interpreter.
The command interpreter then reads the next command. In an interactive system, the command
interpreter simply continues with the next command assuming that the user will issue an appropriate
command to respond to any error.
If a new job or process (or a set of processes) is created, the user should be able to control its
execution. This requires the ability to determine and reset the attributes of a job or process, so the
get_process_attributes() and set_process_attributes() system calls are used. A created process may
also be terminated using the terminate_process() system call.
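On UNIX-like systems the generic process-control calls above correspond to fork(), exec(), wait(), and exit(). The following minimal C sketch is illustrative (the command ls -l is an arbitrary example): it creates a child process, loads a new program into it, and waits for it to terminate.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                    /* create a new process                */
    if (pid == -1) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {                        /* child: load and run another program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                  /* reached only if exec fails          */
        _exit(127);
    }
    waitpid(pid, NULL, 0);                 /* parent waits for the child to end   */
    printf("child %d terminated\n", (int)pid);
    return 0;
}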
File Management:
Files can be created and deleted using create() and delete() system calls. Either system call requires the
name of the file and perhaps some of the file's attributes. Once the file is created, we need to open() it
before we can use it. We may also read(), write(), or reposition() it (rewind or skip to the end of the file, for
example). Finally, we need to close() the file, indicating that we are no longer using it.
The same set of operations can be used for directories if there is a directory structure for organizing files
in the file system. For either files or directories, various attributes need to be determined. File attributes
include the file name, file type, protection codes, accounting information, and so on. At least two
system calls, get_file_attributes() and set_file_attributes(), are required for this function. Some operating
systems provide many more calls, such as calls for file move() and copy().
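As an illustration, on POSIX systems the stat() call plays the role of get_file_attributes(), returning the size, protection bits, and other attributes of a file. A minimal sketch (the file name input.txt is only an example):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;

    /* "input.txt" is only an example file name. */
    if (stat("input.txt", &st) == -1) {
        perror("stat");
        return 1;
    }
    printf("size: %lld bytes\n", (long long)st.st_size);
    printf("protection bits: %o\n", (unsigned)(st.st_mode & 0777));
    return 0;
}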
Device Management:
A process may need several resources to execute: main memory, disk drives, access to files, and
so on. If the resources are available, they can be granted, and control can be returned to the user
process. Otherwise the process has to wait until sufficient resources are available.
The various resources controlled by the operating system can be thought of as devices. Some of these
devices are physical devices (for example, disk drives), while others can be thought of as abstract or
virtual devices (for example, files). A system with multiple users may require us to first request() a
device, to ensure exclusive use of it. After we are finished with the device, we release() it. These
functions are similar to the open() and close() system calls for files. Other operating systems allow
unmanaged access to devices.
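On UNIX-like systems, devices appear as special files, so the request()/release() idea above maps onto open() and close() on a device node. A minimal sketch using /dev/null as a harmless example device:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/null", O_WRONLY);  /* "request" the device              */
    if (fd == -1) {
        perror("open /dev/null");
        return 1;
    }
    write(fd, "discarded\n", 10);          /* use it through the file interface */
    close(fd);                             /* "release" the device              */
    return 0;
}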
Information Maintenance:
Many system calls exist for the purpose of transferring information between the user program and the
operating system. For example, most systems have a system call to return the current time() and date().
Other system calls may return information about the system, such as the number of current users, the
version number of the operating system, the amount of free memory or disk space, and so on.
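For example, on POSIX systems the time() and localtime() calls return and format the current time and date. A minimal C sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);           /* seconds since the epoch           */
    struct tm *tm = localtime(&now);   /* convert to calendar date and time */
    if (tm == NULL) {
        fprintf(stderr, "localtime failed\n");
        return 1;
    }
    printf("current date and time: %s", asctime(tm));  /* asctime adds '\n' */
    return 0;
}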
Another set of system calls is helpful in debugging a program. Many systems provide system calls to
dump() memory. This provision is useful for debugging. A program trace lists each system call as it is
executed. Many microprocessors provide a CPU mode known as single step, in which a trap is executed
by the CPU after every instruction. The trap is usually caught by a debugger.
Communication:
There are two common models of inter-process communication. They are:
The message passing model and
The Shared memory model
1. The Message Passing Model: In the message-passing model, the communicating processes
exchange messages with one another to transfer information. Messages can be exchanged between
the processes either directly or indirectly through a common mailbox. Before communication can
take place, a connection must be opened. The name of the other communicator must be known.
Each computer in a network has a host name. A host also has a network identifier, such as an
IP address. Similarly, each process has a process name, and this name is translated into an
identifier by which the operating system can refer to the process. The get_hostid() and
get_processid() system calls do this translation. The identifiers are then passed to the general purpose
open() and close() calls provided by the file system or to specific open_connection() and close_
connection() system calls, depending on the system's model of communication. The recipient
process usually must give its permission for communication to take place with an
accept_connection() call. Most processes that will be receiving connections are special-purpose
daemons, which are system programs provided for that purpose. They execute a
wait_for_connection() call and are awakened when a connection is made. The source of the
communication, known as the client, and the receiving daemon, known as a server, then exchange
messages by using read_message() and write_message() system calls. The close_connection() call
terminates the communication.
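As an illustration, on UNIX-like systems the generic calls above map onto the BSD socket interface: socket() and connect() open the connection, send() and recv() exchange messages, and close() ends the connection. The following minimal client-side C sketch assumes an example server at 127.0.0.1, port 9000:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);           /* create an endpoint     */
    if (fd == -1) { perror("socket"); return 1; }

    struct sockaddr_in srv = { 0 };
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(9000);                        /* example port           */
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);      /* example server address */

    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) == -1) {  /* open the connection */
        perror("connect");
        return 1;
    }
    send(fd, "hello\n", 6, 0);                           /* write a message        */

    char buf[128];
    ssize_t n = recv(fd, buf, sizeof buf - 1, 0);        /* read a message         */
    if (n > 0) {
        buf[n] = '\0';
        printf("server replied: %s", buf);
    }
    close(fd);                                           /* close the connection   */
    return 0;
}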
2. The Shared Memory Model: In the shared-memory model, processes use shared_memory_create()
and shared_memory_attach() system calls to create and gain access to regions of memory owned by
other processes. In general, the operating system tries to prevent one process from accessing
another process's memory. Shared memory requires that two or more processes agree to remove
this restriction. They can then exchange information by reading and writing data in the shared
areas. The processes are also responsible for ensuring that they are not writing to the same location
simultaneously.
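As an illustration, on POSIX systems the shared_memory_create() and shared_memory_attach() operations correspond to shm_open() and mmap(). A minimal sketch follows; the region name /demo_shm is only an example, and some systems require linking with -lrt:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t SIZE = 4096;

    /* Create (or open) a named shared-memory region; the name is arbitrary. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }
    ftruncate(fd, SIZE);                                  /* set the region's size */

    /* Attach the region into this process's address space. */
    char *region = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Any other process that attaches the same name sees this data. */
    strcpy(region, "written through shared memory");

    munmap(region, SIZE);
    close(fd);
    shm_unlink("/demo_shm");                              /* remove the region when done */
    return 0;
}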
Note: Message passing is useful for exchanging smaller amounts of data, because no conflicts need be
avoided. It is also easier to implement than is shared memory for inter-computer communication. Shared
memory allows maximum speed and convenience of communication, since it can be done at memory
transfer speeds when it takes place within a computer.
OPERATING SYSTEM STRUCTURE:
A system as large and complex as a modern operating system must be engineered carefully if it is to
function properly. A common approach is to partition the task into small components, or modules, rather
than have one monolithic system. Each of these modules should be a well-defined portion of the system,
with carefully defined inputs, outputs, and functions.
Simple Structure:
Many operating systems do not have well-defined structures. Frequently, such systems started as small,
simple, and limited systems and then grew beyond their original scope. MS-DOS is an example of such
a system. It was originally designed and implemented by a few people who had no idea that it would
become so popular. It was written to provide the most functionality in the least space, so it was not
carefully divided into modules. Figure 1.6 shows its structure.
Limitations of the Layered Approach:
1. The major difficulty with the layered approach involves appropriately defining the various layers.
Because a layer can use only lower-level layers, careful planning is necessary.
2. Another problem with layered implementations is that they tend to be less efficient than other
types. At each layer, the parameters may be modified, data may need to be passed, and so on. Each
layer adds overhead to the system call; the net result is a system call that takes longer than it
would on a non-layered system.
Micro-kernels:
The Mach operating system modularizes the kernel using the microkernel approach. This method structures
the operating system by removing all nonessential components from the kernel and implementing them
as system-level or user-level programs. The result is a smaller kernel. Microkernels provide minimal process
and memory management, in addition to a communication facility. Fig 1.9 represents the architecture
of a typical microkernel.
The main function of the microkernel is to provide communication between the client program and the
various services that are also running in user space. Communication is provided through message
passing.
Android:
Android is a layered stack of software that provides a rich set of frameworks for developing mobile
applications. At the bottom of this software stack is the Linux kernel, although it has been modified by
Google and is currently outside the normal distribution of Linux releases.
The set of libraries available for Android applications includes frameworks for developing web browsers
(webkit), database support (SQLite), and multimedia. The libc library is similar to the standard C library
but is much smaller and has been designed for the slower CPUs that characterize mobile devices.