
Operating Systems M23BCS303

COURSE NAME: OPERATING SYSTEMS


COURSE CODE: BCS303

SEMESTER: 3

MODULE: 1
Introduction to operating systems:
 What operating systems do?
 Computer System organization
 Computer System architecture
 Operating System operations
 Computing environments
 Operating-System Structures:
 Operating System Services
 User - Operating System interface
 System calls
 Types of system calls
 Operating System structure

MODULE 1

INTRODUCTION TO OPERATING SYSTEM

What is an Operating System?

An operating system is system software that acts as an intermediary between a user of a computer and the computer hardware. It is software that manages the computer hardware and allows the user to execute programs in a convenient and efficient manner.

Operating system goals:


 Make the computer system convenient to use. It hides the difficulty of managing the hardware.
 Use the computer hardware in an efficient manner.
 Provide an environment in which users can easily interact with the computer.
 Act as a resource allocator.

Computer System Structure (Components of Computer System)

A computer system mainly consists of four components:

 Hardware – provides the basic computing resources: CPU, memory, I/O devices.
 Operating system – controls and coordinates the use of the hardware among the various application programs and users.
 Application programs – define the ways in which the system resources are used to solve the computing problems of the users; examples include word processors, compilers, web browsers, database systems, and video games.
 Users – people, machines, other computers.


List out the User Views and System views of OS

Operating System can be viewed from two viewpoints– User views & System views

User Views: -The user’s view of the operating system depends on the type of user.

 If the user is using a standalone system, the OS is designed for ease of use and high performance. Resource utilization is not given much importance here.

 If the users are at different terminals connected to a mainframe or a minicomputer, sharing information and resources, then the OS is designed to maximize resource utilization. The OS is designed so that CPU time, memory, and I/O are used efficiently and no single user takes more than the share allotted to them.

 If the users are at workstations connected to networks and servers, then each user has a system unit of their own and shares resources and files with other systems. Here the OS is designed for both ease of use and resource availability (files).

 Other systems, like embedded systems used in home devices (such as washing machines) and automobiles, have little or no user interaction. There may be some LEDs to show the status of the work.

 Users of hand-held systems expect the OS to be designed for ease of use and for performance per unit of battery life.

System Views: - Operating system can be viewed as a resource allocator and control program.

 Resource allocator – The OS acts as a manager of hardware and software resources.


CPU time, memory space, file-storage space, I/O devices, shared files etc. are the
different resources required during execution of a program. There can be conflicting
request for these resources by different programs running in same system. The OS
assigns the resources to the requesting program depending on the priority.

 Control Program – The OS is a control program and manage the execution of user
program to prevent errors and improper use of the computer.

Computer System Organization

Computer - system operation


One or more CPUs and device controllers connect through a common bus that provides access to shared memory. Each device controller is in charge of a specific type of device. To ensure orderly access to the shared memory, a memory controller is provided whose function is to synchronize access to the memory. The CPU and the device controllers execute concurrently, competing for memory cycles.


 When the system is switched on, the 'bootstrap' program is executed. It is the initial program to run in the system. This program is stored in read-only memory (ROM) or in electrically erasable programmable read-only memory (EEPROM).
 It initializes the CPU registers, memory, device controllers and other initial setups. The program also locates and loads the OS kernel into memory. The OS then starts the first process to be executed (i.e., the 'init' process) and waits for interrupts from the user.

Switch on → 'Bootstrap' program:

 Initializes the registers, memory and I/O devices
 Locates & loads the kernel into memory
 Starts the 'init' process
 Waits for interrupts from the user.

Interrupt handling –

 The occurrence of an event is usually signaled by an interrupt. The interrupt can either be from
the hardware or the software. Hardware may trigger an interrupt at any time by sending a
signal to the CPU. Software triggers an interrupt by executing a special operation called a
system call (also called a monitor call).

 When the CPU is interrupted, it stops what it is doing and immediately transfers execution to a
fixed location. The fixed location (Interrupt Vector Table) contains the starting address where
the service routine for the interrupt is located. After the execution of interrupt service routine,
the CPU resumes the interrupted computation.

 Interrupts are an important part of computer architecture. Each computer design has its own interrupt mechanism, but several functions are common. The interrupt must transfer control to the appropriate interrupt service routine.


Storage Structure

 Computer programs must be in main memory (RAM) to be executed. Main memory is the large memory that the processor can access directly. It commonly is implemented in a semiconductor technology called dynamic random-access memory (DRAM). Computers also provide Read-Only Memory (ROM), whose data cannot be changed.
 All forms of memory provide an array of memory words. Each word has its own address.
Interaction is achieved through a sequence of load or store instructions to specific
memory addresses.
 A typical instruction-execution cycle, as executed on a system with a Von Neumann
architecture, first fetches an instruction from memory and stores that instruction in the
instruction register. The instruction is then decoded and may cause operands to be
fetched from memory and stored in some internal register. After the instruction on the
operands has been executed, the result may be stored back in memory.
 Ideally, we want the programs and data to reside in main memory permanently. This arrangement usually is not possible for the following two reasons:

1. Main memory is usually too small to store all needed programs and data permanently.
2. Main memory is a volatile storage device that loses its contents when power is turned off.

 Thus, most computer systems provide secondary storage as an extension of main memory. The main requirement for secondary storage is that it be able to hold large quantities of data permanently.

 The most common secondary-storage device is a magnetic disk, which provides storage
for both programs and data. Most programs are stored on a disk until they are loaded into
memory. Many programs then use the disk as both a source and a destination of the
information for their processing.

 The wide variety of storage systems in a computer system can be organized in a hierarchy as shown in the figure, according to speed, cost and capacity. The higher levels are expensive, but they are fast. As we move down the hierarchy, the cost per bit generally decreases, whereas the access time and the capacity of storage generally increase.
 In addition to differing in speed and cost, the various storage systems are either volatile or nonvolatile. Volatile storage loses its contents when the power to the device is removed. In the absence of expensive battery and generator backup systems, data must be written to nonvolatile storage for safekeeping. In the hierarchy shown in the figure, the storage systems above the electronic disk are volatile, whereas those below are nonvolatile.
 An electronic disk can be designed to be either volatile or nonvolatile. During normal
operation, the electronic disk stores data in a large DRAM array, which is volatile. But
many electronic-disk devices contain a hidden magnetic hard disk and a battery for
backup power. If external power is interrupted, the electronic-disk controller copies the
data from RAM to the magnetic disk. Another form of electronic disk is flash memory.

I/O Structure
 A large portion of operating system code is dedicated to managing I/O, both because of its
importance to the reliability and performance of a system and because of the varying nature of
the devices.
 Every device has a device controller, which maintains some local buffer storage and a set of special-purpose registers. The device controller is responsible for moving data between the peripheral device it controls and its local buffer storage. The operating system has a device driver for each device controller.
 Interrupt-driven I/O is well suited for moving small amounts of data but can produce high
overhead when used for bulk data movement such as disk I/O. To solve this problem, direct
memory access (DMA) is used.
 After setting up buffers, pointers, and counters for the I/O device, the device controller
transfers an entire block of data directly to or from its own buffer storage to memory, with no
intervention by the CPU. Only one interrupt is generated per block, to tell the device driver that
the operation has completed.


Computer System Architecture

Categorized roughly according to the number of general-purpose processors used.

Single-Processor Systems –
 The variety of single-processor systems ranges from PDAs through mainframes. On a single-processor system, there is one main CPU capable of executing instructions from user processes. Such systems may also contain special-purpose processors, in the form of device-specific processors, for devices such as disk, keyboard, and graphics controllers.
 All special-purpose processors run limited instructions and do not run user processes. These are
managed by the operating system; the operating system sends them information about their
next task and monitors their status.
 For example, a disk-controller processor implements its own disk queue and scheduling algorithm, thus relieving the main CPU of that task. The special processor in the keyboard converts the keystrokes into codes to be sent to the CPU.
 The use of special-purpose microprocessors is common and does not turn a single- processor
system into a multiprocessor. If there is only one general-purpose CPU, then the system is a
single-processor system.

Multi -Processor Systems (parallel systems or tightly coupled systems)

Systems that have two or more processors in close communication, sharing the computer bus, the
clock, memory, and peripheral devices are the multiprocessor systems.

Multiprocessor systems have three main advantages:

1. Increased throughput - In a multiprocessor system, as there are multiple processors, the execution of different programs takes place simultaneously, so more work is done in less time. However, increasing the number of processors does not increase performance proportionally. This is due to the overhead incurred in keeping all the parts working correctly and also due to the contention for shared resources. The speed-up ratio with N processors is not N; rather, it is less than N. Thus the speed of the system is not as expected.

2. Economy of scale - Multiprocessor systems can cost less than an equivalent number of single-processor systems. As multiprocessor systems share peripherals, mass storage, and power supplies, the cost of implementing such a system is economical. If several processes are working on the same data, the data can also be shared among them.

3. Increased reliability - In multiprocessor systems, functions are distributed among several processors. If one processor fails, the system does not halt; it only slows down. The job of the failed processor is taken up by the other processors.

Two techniques to provide increased reliability - graceful degradation & fault tolerance:
1. Graceful degradation – As there are multiple processors, when one processor fails the other processors take up its work and the system degrades gradually rather than failing.
2. Fault tolerance – When one processor fails, its operations are stopped; the system failure is detected, diagnosed, and, if possible, corrected.


Different types of multiprocessor systems

1. Asymmetric multiprocessing
2. Symmetric multiprocessing

1) Asymmetric multiprocessing – (master/slave architecture) Here each processor is assigned a specific task by the master processor. The master processor controls the other processors in the system; it schedules and allocates work to the slave processors.

2) Symmetric multiprocessing (SMP) – All the processors are considered peers; there is no master-slave relationship. Each processor has its own set of registers and a private (local) cache, while physical memory is shared among all the processors.

The benefit of this model is that many processes can run simultaneously. N processes can run if there are N CPUs—without causing a significant deterioration of performance. Operating systems such as Windows (including Windows XP), Mac OS X, and Linux now provide support for SMP. A recent trend in CPU design is to include multiple compute cores on a single chip. Communication between processors on the same chip is faster than communication between separate chips.

Clustered Systems
Clustered systems are two or more individual systems connected together via a network and sharing
software resources. Clustering provides high availability of resources and services. The service will
continue even if one or more systems in the cluster fail. High availability is generally obtained by
storing a copy of files (s/w resources) in the system.

There are two types of Clustered systems – asymmetric and symmetric


1. Asymmetric clustering – one system is in hot-standby mode while the others are running
the applications. The hot-standby host machine does nothing but monitor the active server.
If that server fails, the hot-standby host becomes the active server.


2. Symmetric clustering – two or more systems are running applications, and are monitoring
each other. This mode is more efficient, as it uses all of the available hardware. If any
system fails, its job is taken up by the monitoring system.

Other forms of clusters include parallel clusters and clustering over a wide-area network (WAN).
Parallel clusters allow multiple hosts to access the same data on the shared storage. Cluster technology is changing rapidly with the help of SANs (storage-area networks). Using a SAN, resources can be shared by dozens of systems in a cluster, even systems that are separated by miles.

Operating-System Operations

Modern operating systems are interrupt driven. If there are no processes to execute, no I/O devices to
service, and no users to whom to respond, an operating system will sit quietly, waiting for something
to happen. Events are signaled by the occurrence of an interrupt or a trap. A trap (or an exception) is
a software-generated interrupt. For each type of interrupt, separate segments of code in the operating
system determine what action should be taken. An interrupt service routine is provided that is
responsible for dealing with the interrupt.

Q) Explain dual-mode operation in an operating system with a neat block diagram.

Dual-Mode Operation
Since the operating system and the user programs share the hardware and software resources of the computer system, it must be ensured that an error in a user program cannot cause problems for other programs or for the operating system running in the system.
The approach taken is to use a hardware support that allows us to differentiate among various
modes of execution.

The system can be assumed to work in two separate modes of operation:


1. User mode
2. Kernel mode (supervisor mode, system mode, or privileged mode).

A hardware bit of the computer, called the mode bit, is used to indicate the current mode:
kernel (0) or user (1). With the mode bit, we are able to distinguish between a task that is
executed by the operating system and one that is executed by the user.
When the computer system is executing a user application, the system is in user mode. When a
user application requests a service from the operating system (via a system call), the transition
from user to kernel mode takes place.


At system boot time, the hardware starts in kernel mode. The operating system is then loaded and
starts user applications in user mode. Whenever a trap or interrupt occurs, the hardware switches from
user mode to kernel mode (that is, changes the mode bit from 1 to 0). Thus, whenever the operating
system gains control of the computer, it is in kernel mode.

The dual mode of operation provides us with the means for protecting the operating system from
errant users—and errant users from one another.

The hardware allows privileged instructions to be executed only in kernel mode. If an attempt
is made to execute a privileged instruction in user mode, the hardware does not execute the
instruction but rather treats it as illegal and traps it to the operating system. The instruction to
switch to user mode is an example of a privileged instruction.

Initial control is within the operating system, where instructions are executed in kernel mode.
When control is given to a user application, the mode is set to user mode. Eventually, control is
switched back to the operating system via an interrupt, a trap, or a system call.

The concept of modes can be extended beyond two modes (in which case the CPU uses more
than one bit to set and test the mode). CPUs that support virtualization frequently have a
separate mode to indicate when the virtual machine manager (VMM)—and the virtualization
management software—is in control of the system. In this mode, the VMM has more privileges
than user processes but fewer than the kernel. It needs that level of privilege so it can create
and manage virtual machines, changing the CPU state to do so. Sometimes, too, different
modes are used by various kernel components.

Computing Environments

The different computing environments are -

Traditional Computing
The current trend is toward providing more ways to access these computing environments.
Web technologies are stretching the boundaries of traditional computing. Companies establish
portals, which provide web accessibility to their internal servers. Network computers are
essentially terminals that understand web-based computing. Handheld computers can
synchronize with PCs to allow very portable use of company information. Handheld PDAs can
also connect to wireless networks to use the company's web portal. The fast data connections
are allowing home computers to serve up web pages and to use networks. Some homes even
have firewalls to protect their networks.

In the latter half of the previous century, computing resources were scarce. Years before,
systems were either batch or interactive. Batch systems processed jobs in bulk, with
predetermined input (from files or other sources of data). Interactive systems waited for input
from users. To optimize the use of the computing resources, multiple users shared time on
these systems. Time-sharing systems used a timer and scheduling algorithms to rapidly cycle
processes through the CPU, giving each user a share of the resources.

Today, traditional time-sharing systems are used everywhere. The same scheduling technique
is still in use on workstations and servers, but frequently the processes are all owned by the
same user (or a single user and the operating system). User processes, and system processes
that provide services to the user, are managed so that each frequently gets a slice of computer
time.


Distributed Systems
A distributed system is a collection of physically separate, possibly heterogeneous computer systems that are networked to provide users with access to the various resources of the system. Access to shared resources increases computation speed, functionality, data availability, and reliability.
A network, in the simplest terms, is a communication path between two or more systems. Distributed
systems depend on networking for their functionality. Networks vary by the protocols used, the distances
between nodes, and the transport media. TCP/IP is the most common network protocol, and it provides
the fundamental architecture of the Internet. Most operating systems support TCP/IP, including all
general-purpose ones.
Networks are characterized based on the distances between their nodes.
A local-area network (LAN) connects computers within a room, a building, or a campus.
A wide-area network (WAN) usually links buildings, cities, or countries. A global company may have a
WAN to connect its offices worldwide, for example. These networks may run one protocol or several
protocols. The continuing advent of new technologies brings about new forms of networks.
For example, a metropolitan-area network (MAN) could link buildings within a city. Bluetooth and 802.11 devices use wireless technology to communicate over a distance of several feet, in essence creating a personal-area network (PAN) between a phone and a headset or a smartphone and a desktop computer.

Client-Server Computing
Designers have shifted away from centralized system architecture, in which terminals are connected to centralized systems. As a result, many of today's systems act as server systems to satisfy requests generated by client systems. This form of specialized distributed system is called a client-server system.

Figure: General structure of a client-server system

Server systems can be broadly categorized as compute servers and file servers:
 The compute-server system provides an interface to which a client can send a request to
perform an action (for example, read data); in response, the server executes the action and
sends back results to the client. A server running a database that responds to client requests
for data is an example of such a system.
 The file-server system provides a file-system interface where clients can create, update,
read, and delete files. An example of such a system is a web server that delivers files to
clients running the web browsers.

Peer-to-Peer Computing
In this model, clients and servers are not distinguished from one another; here, all nodes within
the system are considered peers, and each may act as either a client or a server, depending on
whether it is requesting or providing a service.
In a client-server system, the server is a bottleneck, because all the services must be served by
the server. But in a peer-to-peer system, services can be provided by several nodes distributed
throughout the network.

To participate in a peer-to-peer system, a node must first join the network of peers. Once a
node has joined the network, it can begin providing services to—and requesting services
from—other nodes in the network.

Determining what services are available is accomplished in one of two general ways:
 When a node joins a network, it registers its service with a centralized lookup service on the
network. Any node desiring a specific service first contacts this centralized lookup service
to determine which node provides the service. The remainder of the communication takes
place between the client and the service provider.
 Alternatively, a peer acting as a client must discover which node provides a desired service by broadcasting a request for the service to all other nodes in the network. The node (or nodes) providing that service responds to the peer making the request. To support this approach, a discovery protocol must be provided that allows peers to discover services provided by other peers in the network.

Virtualization

Virtualization is a technology that allows operating systems to run as applications within other operating
systems. Broadly speaking, virtualization is one member of a class of software that also includes
emulation. Emulation is used when the source CPU type is different from the target CPU type. For
example, when Apple switched from the IBM Power CPU to the Intel x86 CPU for its desktop and laptop
computers, it included an emulation facility called “Rosetta,” which allowed applications
compiled for the IBM CPU to run on the Intel CPU.

In an early example of virtualization, Windows was the host operating system, and the VMware application was the virtual machine manager (VMM). The VMM runs the guest operating systems, manages their resource use, and protects each guest from the others.


Mobile Computing

Mobile computing refers to computing on handheld smart-phones and tablet computers. These devices
share the distinguishing physical features of being portable and lightweight. Historically, compared with
desktop and laptop computers, mobile systems gave up screen size, memory capacity, and overall
functionality in return for handheld mobile access to services such as e-mail and web browsing.

Today, mobile systems are used not only for e-mail and web browsing but also for playing music and
video, reading digital books, taking photos, and recording high-definition video. Accordingly, tremendous
growth continues in the wide range of applications that run on such devices.

Cloud Computing

Cloud computing is a type of computing that delivers computing, storage, and even applications as a service across a network. In some ways, it is a logical extension of virtualization, because it uses virtualization as a base for its functionality.
There are actually many types of cloud computing, including the following:

• Public cloud—a cloud available via the Internet to anyone willing to pay for the services
• Private cloud—a cloud run by a company for that company's own use
• Hybrid cloud—a cloud that includes both public and private cloud components
• Software as a service (SaaS)—one or more applications (such as word processors or spreadsheets) available via the Internet
• Platform as a service (PaaS)—a software stack ready for application use via the Internet (for example, a database server)
• Infrastructure as a service (IaaS)—servers or storage available over the Internet (for example, storage available for making backup copies of production data)


Real-Time Embedded Systems


Embedded computers are the most prevalent form of computers in existence. These devices are found
everywhere, from car engines and manufacturing robots to DVDs and microwave ovens. They tend to
have very specific tasks.
The systems they run on are usually primitive, and so the operating systems provide limited features.
Usually, they have little or no user interface, preferring to spend their time monitoring and managing
hardware devices, such as automobile engines and robotic arms
Embedded systems almost always run real-time operating systems. A real-time system is used when
rigid time requirements have been placed on the operation of a processor or the flow of data; thus, it is
often used as a control device in a dedicated application. Sensors bring data to the computer.
The computer must analyze the data and possibly adjust controls to modify the sensor inputs. Systems that control scientific experiments, medical imaging systems, industrial control systems, and certain display systems are real-time systems.

OPERATING SYSTEM SERVICES

Q) List and explain the services provided by the OS for the user and for the efficient operation of the system.

An operating system provides an environment for the execution of programs. It provides certain
services to programs and to the users of those programs.

OS provide services for the users of the system, including:

 User Interfaces - Means by which users can issue commands to the system. Depending on the operating system, these may be a command-line interface (e.g. sh, csh, ksh, tcsh, etc.), a graphical user interface (e.g. Windows, X-Windows, KDE, Gnome, etc.), or a batch command system.
In a Command Line Interface (CLI) - commands are typed and given directly to the system.
In a Batch interface – commands and directives to control those commands are put in a file, and then the file is executed.

In GUI systems - windows with a pointing device are used to provide input and a keyboard to enter text.
 Program Execution - The OS must be able to load a program into RAM, run the program, and
terminate the program, either normally or abnormally.
 I/O Operations - The OS is responsible for transferring data to and from I/O devices,
including keyboards, terminals, printers, and files. For specific devices, special functions are
provided (device drivers) by OS.
 File-System Manipulation – Programs need to read and write files or directories. The services
required to create or delete files, search for a file, list the contents of a file and change the file
permissions are provided by OS.
 Communications - Inter-process communications, IPC, either between processes running on
the same processor, or between processes running on separate processors or separate machines.
May be implemented by using the service of OS- like shared memory or message passing.
 Error Detection - Both hardware and software errors must be detected and handled
appropriately by the OS. Errors may occur in the CPU and memory hardware (such as power
failure and memory error), in I/O devices (such as a parity error on tape, a connection failure
on a network, or lack of paper in the printer), and in the user program (such as an arithmetic
overflow, an attempt to access an illegal memory location).

OS provide services for the efficient operation of the system, including:

 Resource Allocation – Resources like CPU cycles, main memory, storage space, and I/O
devices must be allocated to multiple users and multiple jobs at the same time.
 Accounting – There are services in OS to keep track of system activity and resource usage,
either for billing purposes or for statistical record keeping that can be used to optimize future
performance.
 Protection and Security – The owners of information (file) in multiuser or networked
computer system may want to control the use of that information. When several separate
processes execute concurrently, one process should not interfere with other or with OS.
Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders must also be ensured, for example by means of passwords.

User Operating-System Interface


There are several ways for users to interface with the operating system.

i) Command-line interface, or command interpreter, allows users to directly enter commands


to be performed by the operating system.
ii) Graphical user interface (GUI), allows users to interface with the operating system using a pointing device and a menu system.
Command Interpreter
 Command Interpreters are used to give commands to the OS. There are multiple command
interpreters known as shells. In UNIX and Linux systems, there are several different shells, like
the Bourne shell, C shell, Bourne-Again shell, Korn shell, and others.
 The main function of the command interpreter is to get and execute the user-specified
command. Many of the commands manipulate files: create, delete, list, print, copy, execute,
and so on.


The commands can be implemented in two general ways-

i) The command interpreter itself contains the code to execute the command. For example, a
command to delete a file may cause the command interpreter to jump to a particular section
of its code that sets up the parameters and makes the appropriate system call.
ii) The code to implement the command is in a function in a separate file (a separate program). The interpreter searches for the file, loads it into memory, and executes it, passing the parameters. Thus, new commands can be added easily without modifying the interpreter itself; a minimal shell sketch illustrating this approach is given below.
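
The following is a minimal, hypothetical command-interpreter sketch (C; POSIX is assumed) illustrating approach (ii): each command is an ordinary program on disk, and the interpreter simply forks a child and exec's that program. Real shells add argument parsing, PATH handling, built-in commands, job control, and much more.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char line[256];

    while (printf("mysh> "), fgets(line, sizeof line, stdin) != NULL) {
        line[strcspn(line, "\n")] = '\0';        /* strip the trailing newline */
        if (strcmp(line, "exit") == 0) break;
        if (line[0] == '\0') continue;

        if (fork() == 0) {                       /* child runs the command */
            execlp(line, line, (char *)NULL);    /* locate the program file and execute it */
            perror(line);                        /* reached only if exec fails */
            _exit(127);
        }
        wait(NULL);                              /* interpreter waits for the command to finish */
    }
    return 0;
}

(This sketch handles only single-word commands; it is meant to show the fork/exec structure, not to be a complete shell.)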

Graphical User Interfaces


 A second strategy for interfacing with the operating system is through a user-friendly graphical user interface, or GUI. Rather than having users directly enter commands via a command-line interface, a GUI provides a mouse-based window-and-menu system as an interface.
 A GUI provides a desktop metaphor where the mouse is moved to position its pointer on
images, or icons, on the screen (the desktop) that represent programs, files, directories, and
system functions.
 Depending on the mouse pointer's location, clicking a button on the mouse can invoke a
program, select a file or directory—known as a folder— or pull down a menu that contains
commands.

System Calls

Q) What are system calls? Briefly point out its types.

 System calls provide an interface to the services of the operating system. These are generally written in C or C++, although some are written in assembly language for optimal performance.
 The figure below illustrates the sequence of system calls required to copy the contents of one file (the input file) to another file (the output file).


An example to illustrate how system calls are used: writing a simple program to read data from one
file and copy them to another file

 There are a number of system calls used to finish this task. The first system call is to write a message on the screen (monitor), then to accept the input file name. Another system call writes a message on the screen, and then the output file name is accepted.
 When the program tries to open the input file, it may find that there is no file of that name or that the file is protected against access. In these cases, the program should print a message on the console (another system call) and then terminate abnormally (another system call). When the program tries to create the output file, it may find that a file of that name already exists; then the program may abort (a system call), or it may delete the existing file (another system call) and create a new one (another system call).
 Now that both the files are opened, we enter a loop that reads from the input file (another
system call) and writes to output file (another system call).
 Finally, after the entire file is copied, the program may close both files (another system call), write a message to the console or window (system call), and finally terminate normally (final system call). A small code sketch of this sequence is shown below.
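
The sketch below (C, POSIX assumed; the file names are hypothetical) shows the same copy task written with the open/read/write/close system calls. Error handling is reduced to the essentials.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    ssize_t n;

    int in = open("input.txt", O_RDONLY);                  /* open the input file (system call) */
    if (in < 0) { perror("open input"); exit(1); }         /* message + abnormal termination */

    int out = open("output.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);  /* create the output file */
    if (out < 0) { perror("open output"); exit(1); }

    while ((n = read(in, buf, sizeof buf)) > 0)             /* read from the input file (system call) */
        write(out, buf, (size_t)n);                         /* write to the output file (system call) */

    close(in);                                              /* close both files (system calls) */
    close(out);
    return 0;                                               /* normal termination */
}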

 Most programmers do not use the low-level system calls directly, but instead use an
"Application Programming Interface", API.
 Using the API instead of direct system calls provides greater program portability between different systems. The API then makes the appropriate system calls through the system-call interface, using a system-call table to access specific numbered system calls.
 Each system call has a specific number. The system-call table (consisting of the system-call number and the address of the particular service routine) invokes a particular service routine for a specific system call.
 The caller need know nothing about how the system call is implemented or what it does
during execution.

Figure: The handling of a user application invoking the open() system call.


Figure: Passing of parameters as a table.

Three general methods are used to pass parameters to the OS:

i) Pass the parameters in registers.
ii) If the parameters are large blocks, the address of the block (where the parameters are stored in memory) is passed to the OS in a register (the approach of Linux & Solaris).
iii) Parameters can be pushed onto the stack by the program and popped off the stack by the OS.
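
The fragment below is a hedged, Linux-specific illustration of the "numbered system call" idea: the syscall() wrapper places the system-call number and its arguments in registers according to the platform's calling convention. Ordinary programs would simply call write(); this is shown only to make the mechanism visible.

#define _GNU_SOURCE
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "hello via a raw system call\n";
    /* SYS_write is the number of the write() system call; the arguments are
       the file descriptor (1 = stdout), the buffer, and its length. */
    syscall(SYS_write, 1, msg, strlen(msg));
    return 0;
}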

Types of System Calls

The system calls can be categorized into six major categories:

1. Process Control
2. File management
3. Device management
4. Information maintenance
5. Communications
6. Protection


Figure: Types of system calls


1. Process Control

 Process control system calls include end, abort, load, execute, create process, terminate
process, get/set process attributes, wait for time or event, signal event, and allocate and free
memory.
 Processes must be created, launched, monitored, paused, resumed, and eventually stopped.
 When one process pauses or stops, then another must be launched or resumed
 Process attributes like process priority, max. allowable execution time etc. are set and
retrieved by OS.
 After creating a new process, the parent process may have to wait for some time (wait time), or wait for an event to occur (wait event). The child process sends back a signal when the event has occurred (signal event). A small sketch of these calls follows this list.
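
A minimal process-control sketch (C, POSIX assumed): create a process with fork(), load and execute a program with exec, wait for the child to terminate, and exit.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                     /* create process */
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                         /* child: load and execute a program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                   /* reached only if exec fails */
        _exit(127);                         /* abnormal termination */
    }

    int status;
    waitpid(pid, &status, 0);               /* parent waits for the child (wait event) */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;                               /* normal termination */
}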

2. File Management

The file management functions of OS are –


 File management system calls include create file, delete file, open, close, read, write,
reposition, get file attributes, and set file attributes.
 After creating a file, the file is opened, and data is read from or written to the file.
 The file pointer may need to be repositioned within the file (reposition).
 The file attributes, like file name, file type, permissions, etc., are set and retrieved using system calls.
 These operations may also be supported for directories as well as ordinary files. A short sketch of the attribute calls is given below.
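
A small sketch (C, POSIX assumed; the file name is hypothetical) of the "get/set file attributes" calls: stat() retrieves the size and permission bits, and chmod() changes the permission bits.

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("notes.txt", &st) == 0)                    /* get file attributes */
        printf("size: %lld bytes, mode: %o\n",
               (long long)st.st_size, st.st_mode & 0777);

    chmod("notes.txt", 0644);                           /* set file attributes */
    return 0;
}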

3. Device Management

 Device management system calls include request device, release device, read, write,
reposition, get/set device attributes, and logically attach or detach devices.
 When a process needs a resource, a request for resource is done. Then the control is
granted to the process. If requested resource is already attached to some other process,
the requesting process has to wait.
 In multiprogramming systems, after a process uses the device, it has to be returned to
OS, so that another process can use the device.
 Devices may be physical (e.g. disk drives) or virtual/abstract (e.g. files, partitions, and RAM disks); a small device-access sketch follows.
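
In UNIX-like systems, devices are requested and released with the same open/close/read/write calls used for files. The sketch below (C, Linux assumed) uses /dev/urandom as a convenient example device.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char b[4];
    int fd = open("/dev/urandom", O_RDONLY);   /* request the device */
    if (fd < 0) { perror("open"); return 1; }

    read(fd, b, sizeof b);                     /* read from the device */
    printf("%u %u %u %u\n", b[0], b[1], b[2], b[3]);

    close(fd);                                 /* release the device */
    return 0;
}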

4. Information Maintenance
 Information maintenance system calls include calls to get/set the time, date, system data,
and process, file, or device attributes.
 These system calls are used to transfer information between the user program and the OS. Information such as the current time & date, the number of current users, the version number of the OS, the amount of free memory, disk space, etc. is passed from the OS to the user, as in the small sketch below.
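
A short information-maintenance sketch (C; POSIX assumed, and _SC_NPROCESSORS_ONLN is a Linux/glibc extension): get the time and date, the process id, and a piece of system data.

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    time_t now = time(NULL);                               /* get time & date */
    printf("time: %s", ctime(&now));
    printf("pid : %ld\n", (long)getpid());                 /* process attribute */
    printf("cpus: %ld\n", sysconf(_SC_NPROCESSORS_ONLN));  /* system data */
    return 0;
}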


5. Communication
 Communication system calls create/delete communication connection, send/receive
messages, transfer status information, and attach/detach remote devices.
 The message passing model must support calls to:
o Identify a remote process and/or host with which to communicate.
o Establish a connection between the two processes.
o Open and close the connection as needed.
o Transmit messages along the connection.
o Wait for incoming messages, in either a blocking or non-blocking state.
o Delete the connection when no longer needed.
 The shared memory model must support calls to:
o Create and access memory that is shared amongst processes (and threads).
o Free up shared memory and/or dynamically allocate it as needed.
 Message passing is simpler and easier to implement (particularly for inter-computer communication) and is generally appropriate for small amounts of data, but a system call is required for each read and write.
 Shared memory is faster and is generally the better approach where large amounts of data are to be shared. This model is more difficult to implement, but it requires only a few system calls once the shared region is set up. A pipe-based message-passing sketch follows.
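
A minimal message-passing sketch (C, POSIX assumed) using a pipe between a parent and a child process: creating the pipe establishes the connection, and write()/read() act as send/receive.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[64];

    pipe(fd);                                     /* establish the connection */
    if (fork() == 0) {                            /* child acts as the receiver */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);   /* receive the message */
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        return 0;
    }

    close(fd[0]);
    write(fd[1], "hello", 5);                     /* send the message */
    close(fd[1]);                                 /* close the connection */
    return 0;
}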

6. Protection
 Protection provides mechanisms for controlling which users / processes have access to
which system resources.
 System calls allow the access mechanisms to be adjusted as needed, and for non- privileged
users to be granted elevated access permissions under carefully controlled temporary
circumstances.

Operating-System Structure

Simple Structure
 Many operating systems do not have well-defined structures. They started as small, simple, and
limited systems and then grew beyond their original scope. Eg: MS-DOS.
 In MS-DOS, the interfaces and levels of functionality are not well separated. Application
programs can access basic I/O routines to write directly to the display and disk drives. Such
freedom leaves MS-DOS vulnerable to errant programs, and the entire system can crash when user programs fail.
 UNIX OS consists of two separable parts: the kernel and the system programs. The kernel is
further separated into a series of interfaces and device drivers. The kernel provides the file
system, CPU scheduling, memory management, and other operating-system functions through
system calls.


Figure: MS-DOS layer structure.


Layered Approach

 The OS is broken into number of layers (levels). Each layer rests on the layer below it, and
relies on the services provided by the next lower layer.
 Bottom layer (layer 0) is the hardware and the topmost layer is the user interface.
 A typical layer consists of data structures and routines that can be invoked by higher-level layers.
 The main advantage of the layered approach is simplicity of construction and debugging.
 The layers are selected so that each uses the functions and services of only lower-level layers. This simplifies debugging and system verification: the layers are debugged one by one from the lowest, and if any layer does not work, the error must be in that layer, as the lower layers are already debugged. Thus, the design and implementation are simplified.
 A layer need not know how its lower-level layers are implemented; each layer thus hides its internal operations from the higher layers.

Figure: A layered Operating System

Disadvantages of layered approach:


 The various layers must be appropriately defined, as a layer can use only lower-level layers.
 It is less efficient than other approaches, because any request from the top layer must pass through all the intermediate layers before reaching layer 0 (the hardware). This adds overhead.


Microkernels
 This method structures the operating system by removing all nonessential components from the
kernel and implementing them as system and user-level programs thus making the kernel as
small and efficient as possible.
 The removed services are implemented as system applications.
 Most microkernels provide basic process and memory management, and message passing
between other services.
 The main function of the microkernel is to provide a communication facility between the client
program and the various services that are also running in user space.

Figure: Architecture of a typical microkernel. User-mode components (application programs, the file system, device drivers) exchange messages through the microkernel, which runs in kernel mode on the hardware and provides interprocess communication, memory management, and CPU scheduling.


Benefit of microkernel –
 System expansion can also be easier, because it only involves adding more system
applications, not rebuilding a new kernel.
 Mach was the first and most widely known microkernel, and it now forms a major component of Mac OS X.
Disadvantage of Microkernel -
 Performance overhead of user space to kernel space communication


Modules

 Modern OS development is object-oriented, with a relatively small core kernel and a set of
modules which can be linked in dynamically.
 Modules are similar to layers in that each subsystem has clearly defined tasks and interfaces,
but any module is free to contact any other module, eliminating the problems of going through
multiple intermediary layers.
 The kernel is relatively small in this architecture, similar to microkernels, but the kernel does not have to implement message passing, since modules are free to contact each other directly. E.g.: Solaris, Linux, and Mac OS X.

Figure: Solaris loadable modules

 The Mac OS X architecture relies on the Mach microkernel for basic system management services and on the BSD kernel for additional services. Application services and dynamically loadable modules (kernel extensions) provide the rest of the OS functionality.
 The module approach resembles a layered system, but any module can call any other module.
 It also resembles the microkernel approach, in that the primary module has only core functions and the knowledge of how to load and communicate with other modules.
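
As a hedged, Linux-specific illustration of a dynamically loadable module, the classic "hello world" kernel module is sketched below (built against the kernel headers and loaded/unloaded with insmod/rmmod). It is shown only to make the loadable-module idea concrete.

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    printk(KERN_INFO "hello: module loaded\n");
    return 0;                           /* 0 means the module loaded successfully */
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);                /* called when the module is inserted */
module_exit(hello_exit);                /* called when the module is removed */
MODULE_LICENSE("GPL");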

Hybrid Systems
In practice, very few operating systems adopt a single, strictly defined structure. Instead, they combine
different structures, resulting in hybrid systems that address performance, security, and usability issues.
For example, both Linux and Solaris are monolithic, because having the operating system in a single
address space provides very efficient performance. However, they are also modular, so that new
functionality can be dynamically added to the kernel. Windows is largely monolithic as well (again
primarily for performance reasons), but it retains some behavior typical of microkernel systems, including
providing support for separate subsystems (known as operating-system personalities) that run as user-
mode processes. Windows systems also provide support for dynamically loadable kernel modules. Case studies of Linux and Windows 7 appear elsewhere in the text. In the remainder of this section, we explore
the structure of three hybrid systems: the Apple Mac OS X operating system and the two most
prominent mobile operating systems—iOS and Android.

Mac OS X
The Apple Mac OS X operating system uses a hybrid structure. As shown in Figure 2.16, it is a layered
system. The top layers include the Aqua user interface (Figure 2.4) and a set of application environments
and services. Notably, the Cocoa environment specifies an API for the Objective-C programming
language, which is used for writing Mac OS X applications. Below these layers is the kernel environment,
which consists primarily of the Mach microkernel and the BSD UNIX kernel. Mach provides memory
management; support for remote procedure calls (RPCs) and interprocess communication (IPC) facilities,
including message passing; and thread scheduling. The BSD component provides a BSD command-line
interface, support for networking and file systems, and an implementation of POSIX APIs, including
Pthreads.
In addition to Mach and BSD, the kernel environment provides an I/O kit for development of device
drivers and dynamically loadable modules (which Mac OS X refers to as kernel extensions). As shown in
Figure 2.16, the BSD application environment can make use of BSD facilities directly.

iOS
iOS is a mobile operating system designed by Apple to run its smartphone, the iPhone, as well as its tablet
computer, the iPad. iOS is structured on the Mac OS X operating system, with added functionality
pertinent to mobile devices, but does not directly run Mac OS X applications. The structure of iOS
appears in Figure 2.17.
Cocoa Touch is an API for Objective-C that provides several frameworks for developing applications that
run on iOS devices. The fundamental difference between Cocoa, mentioned earlier, and Cocoa Touch is
that the latter provides support for hardware features unique to mobile devices, such as touch screens.
The media services layer provides services for graphics, audio, and video.

Android
The Android operating system was designed by the Open Handset Alliance (led primarily by Google) and
was developed for Android smartphones and tablet computers. Whereas iOS is designed to run on Apple
mobile devices and is closed source, Android runs on a variety of mobile platforms and is open source,
partly explaining its rapid rise in popularity. The structure of Android appears in Figure 2.18.
Android is similar to iOS in that it is a layered stack of software that provides a rich set of frameworks for
developing mobile applications. At the bottom of this software stack is the Linux kernel, although it has
been modified by Google and is currently outside the normal distribution of Linux releases.

Linux is used primarily for process, memory, and device-driver support for hardware and has been
expanded to include power management. The Android runtime environment includes a core set of libraries
as well as the Dalvik virtual machine. Software designers for Android devices develop applications in the
Java language. However, rather than using the standard Java API, Google has
designed a separate Android API for Java development. The Java class files are first compiled to Java
bytecode and then translated into an executable file that runs on the Dalvik virtual machine. The Dalvik
virtual machine was designed for Android and is optimized for mobile devices with limited memory and
CPU processing capabilities.
The set of libraries available for Android applications includes frameworks
for developing web browsers (webkit), database support (SQLite), and multimedia. The libc library is
similar to the standard C library but is much smaller and has been designed for the slower CPUs that
characterize mobile devices.
