OPERATING SYSTEM in Kenya
1. MEMORY MANAGEMENT
Keeps track of primary memory, i.e. which parts of it are in use and by whom, and which parts are not in use.
In multiprogramming, the OS decides which process will get memory, when, and how much.
Allocates memory when a process requests it.
De-allocates memory when a process no longer needs it or has been terminated.
2. PROCESS MANAGEMENT
In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much
time. This function is called process scheduling. An Operating System does the following activities for
processor management:
Keeps track of the processor and the status of processes. The program responsible for this task is known as the
traffic controller.
Allocates the processor (CPU) to a process.
De-allocates the processor when a process no longer requires it.
3. DEVICE MANAGEMENT
An Operating System manages device communication via their respective drivers. It does the following
activities for device management:
Keeps track of all devices. The program responsible for this task is known as the
I/O controller.
Decides which process gets the device when and for how much time.
Allocates the device in the most efficient way.
De-allocates devices.
4. File Management
A file system is normally organized into directories/folders for easy navigation and usage. These directories
may contain files and other directories.
An Operating System does the following activities for file management:
Keeps track of information: its location, uses, status etc. These collective facilities are often known as the file
system.
Decides who gets the resources.
Allocates the resources.
De-allocates the resources.
Other Important Activities
Following are some of the important activities that an Operating System performs:
5. SECURITY -- By means of passwords and similar other techniques, it prevents unauthorized access to
programs and data.
6. CONTROL OVER SYSTEM PERFORMANCE -- Recording delays between a request for a service
and the response from the system.
7. JOB ACCOUNTING -- Keeping track of time and resources used by various jobs and users.
8. ERROR DETECTING AIDS -- Production of dumps, traces, error messages, and other debugging and
error detecting aids.
9. COORDINATION BETWEEN OTHER SOFTWARE AND USERS -- Coordination and assignment
of compilers, interpreters, assemblers and other software to the various users of the computer systems.
8. Semaphore - A semaphore is simply a variable used to solve the critical-section problem and to
achieve process synchronization in the multiprocessing environment. The two most common kinds of
semaphores are counting semaphores and binary semaphores.
9. The main function of the dispatcher (the portion of the process scheduler) is assigning a ready process to the
CPU. The key difference between scheduler and dispatcher is that the scheduler selects a process out of
several processes to be executed while the dispatcher allocates the CPU for the selected process by
the scheduler.
10. Firmware - data stored in a computer's or other hardware device's ROM (read-only memory) that provides
instructions on how that device should operate.
11. Wild card - A special symbol that stands for one or more characters. Many operating
systems and applications support wildcards for identifying files and directories. This enables you to
select multiple files with a single specification. For example, in DOS and Windows, the asterisk (*) is a
wild card that stands for any combination of characters. The file specification m* therefore refers to all
files that begin with m. Similarly, the specification m*.doc refers to all files that start with m and end
with .doc.
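A sketch of this matching using Python's standard fnmatch module (an illustrative choice; DOS and Windows perform the equivalent matching internally, and the file names below are invented for the example):

```python
from fnmatch import fnmatch

files = ["memo.doc", "main.doc", "main.txt", "notes.doc"]

# "m*" matches every file beginning with "m".
starts_with_m = [f for f in files if fnmatch(f, "m*")]

# "m*.doc" matches files that begin with "m" and end with ".doc".
m_docs = [f for f in files if fnmatch(f, "m*.doc")]

print(starts_with_m)  # ['memo.doc', 'main.doc', 'main.txt']
print(m_docs)         # ['memo.doc', 'main.doc']
```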
Question:What is the difference between a keyword search using the ‘*‘ (asterisk) versus a
keyword search using the ‘%‘ (percent)? Both work in the catalog, but return different sets.
Why?
Answer: A wildcard is a character (*,?,%,.) that can be used to represent one or more characters
in a word. Two of the wildcard characters that can be used in Koha searches are the asterisk (‘*‘)
and the percent sign (‘%‘). However, these two characters act differently when used in searching.
The ‘*‘ is going to force a more exact search of the first few characters you enter prior to the ‘*‘.
The asterisk will allow for an infinite number of characters in the search as long as the first few
characters designated by your search remain the same. For example, searching for authors using
the term, Smi*, will return a list that may include Smith, Smithers, Smithfield, Smiley, etc
depending on the authors in your database.
The ‘%‘ will treat the words you enter in the terms of “is like“. So a search of Smi% will search
for words like Smi. This results in a much more varied results list. For example, a search on
Smi% will return a list containing Smothers, Smith, Smelley, Smithfield and many others,
depending on what is in your database.
The bottom line in searching with wildcards: ‘*‘ is more exact while ‘%‘ searches for like terms.
Advantages of a layered architecture:
1. It is decomposable, and therefore enforces separation of concerns and different abstraction levels.
2. It allows good maintainability: you can make changes without affecting layer interfaces.
Advantages of Client-Server Architecture
1) Centralization: Unlike P2P, where there is no central administration, in this architecture there is
centralized control. Servers help in administering the whole set-up; access rights and resource allocation are
done by the servers.
2) Proper Management: All the files are stored at the same place. In this way, management of files becomes
easy. Also it becomes easier to find files.
3) Back-up and Recovery possible: As all the data is stored on the server, it is easy to make a back-up of it.
Also, if data is lost in some break-down, it can be recovered easily and efficiently, whereas in peer-to-peer
computing we have to take a back-up at every workstation.
4) Upgrading and Scalability in Client-server set-up: Changes can be made easily by just upgrading the
server. Also new resources and systems can be added by making necessary changes in server.
5) Accessibility: From various platforms in the network, server can be accessed remotely.
6) As new information is uploaded in database, each workstation need not have its own storage capacities
increased (as may be the case in peer-to-peer systems). All the changes are made only in central computer on
which server database exists.
7) Security: Rules defining security and access rights can be defined at the time of set-up of server.
Disadvantages of Client-Server Architecture
1) Congestion in Network: Too many requests from the clients may lead to congestion, which rarely takes
place in a P2P network. Overload can lead to break-down of servers, whereas in peer-to-peer the total
bandwidth of the network increases as the number of peers increases.
2) Client-Server architecture is not as robust as P2P: if the server fails, the whole network goes down.
Also, if a file download from the server is abandoned due to some error, the download stops altogether,
whereas with peers the remaining peers could have provided the broken parts of the file.
3) Cost: It is very expensive to install and manage this type of computing.
4) You need professional IT people to maintain the servers and other technical details of network.
To speed up processing, jobs with similar needs are batched together and run as a group. The programmers
leave their programs with the operator and the operator then sorts the programs with similar requirements
into batches.
The processors communicate with one another through various communication lines (such as high-speed
buses or telephone lines). These are referred to as loosely coupled systems or distributed systems.
Examples of network operating systems include Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
JOB CONTROL
a) Command languages
b) Job control languages
c) System messages
1. COMMAND LANGUAGES
A computer programming language composed chiefly of a set of commands or operators, used especially for
communicating with the operating system of a computer.
Examples include; shell and batch programming languages
2. JOB CONTROL LANGUAGE
This is a scripting language used on IBM mainframe operating systems to instruct the system on how to run a
batch job or a subsystem.
The purpose of job control language is to say which programs run, using which files or devices for input or
output.
3. SYSTEM MESSAGES
System messages are a form of communication between objects, processes or other resources, used in
object-oriented programming, inter-process communication and parallel computing.
Operating System Capabilities (“Types of o/s”)
A particular o/s may incorporate one or more of these capabilities:
a) Single-user processing – allows only one user at a time to access a computer, e.g. DOS.
b) Multi-user processing – allows 2 or more users to access a computer at the same time. The actual
number of users depends on the hardware and o/s design, e.g. UNIX.
c) Single tasking – allows one program to execute at a time; that program must finish executing
before the next program can begin, e.g. DOS.
d) Context switching – allows several programs to reside in memory but only one to be active at a
time. The active program is in the foreground and the others are in the background.
e) Multitasking / multiprogramming – allows a single CPU to execute what appears to be more than one
program at a time. The CPU switches its attention between 2 or more programs in main memory as it
receives requests for processing from one program and then another. This happens so quickly that the
programs appear to execute simultaneously (concurrently).
f) Multiprocessing / parallel processing – allows the simultaneous or parallel execution of programs by
a computer that has 2 or more CPUs.
g) Multithreading – supports several simultaneous functions within the same application.
h) Inter-processing / dynamic linking – allows any change made in one application to be automatically
reflected in any related, linked application, e.g. a link between a word-processing and a financial
application.
i) Time sharing – allows multiple users to access a single computer; found on large computer o/s where
many users need access at the same time.
j) Virtual storage – an o/s with the capability of virtual storage (virtual memory) allows you to use a
secondary storage device as an extension of main memory.
k) Real-time processing – allows a computer to control or monitor the task performance of other
machines and people by responding to input data in a specified amount of time. To control processes,
an immediate response is usually necessary.
l) Virtual machine (VM) processing – creates the illusion that there is more than one physical machine
when in fact there is only one. Such programming allows several users of a computer to operate as if each
had the only terminal attached to the computer. Thus, users feel as if each is on a dedicated computer and
has sole use of the CPU and I/O devices. When a VM operating system is loaded, each user chooses the
o/s that is compatible with his or her intended application program. Thus the VM o/s gives users
flexibility and allows them to choose the o/s that best suits their needs.
PROCESS MANAGEMENT
A process can be thought of as a program in execution. It needs certain resources, including CPU time,
memory, files and I/O devices, to accomplish its task.
The o/s is responsible for the following activities in connection with process management:
Creation and deletion of both user and system processes
Suspension and resumption of processes
Provision of mechanism for process synchronization
Provision of mechanism for process communication
Provision of mechanisms for deadlock handling.
Process Model
A process is more than a program. It includes the current activity (program counter and register contents),
a stack holding temporary data, and a data section containing global variables. A program is a passive
entity, e.g. the contents of a file stored on disk, whereas a process is an active entity.
Process state
As a process executes, it changes state. The state of a process is defined in part by the current activity of that
process. Its state may be:
i) New – process is being created
ii) Running – Instructions are being executed
iii) Waiting – process waiting for some event to occur.
iv) Ready – waiting to be assigned to a processor
v) Terminated – finished execution.
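The state list above can be sketched as a transition table; the allowed transitions below are the usual ones in the five-state model (an assumption, since the text lists the states but not the transitions):

```python
# Allowed moves of the five-state process model (assumed, standard).
ALLOWED = {
    "new":        {"ready"},
    "ready":      {"running"},
    "running":    {"waiting", "ready", "terminated"},
    "waiting":    {"ready"},
    "terminated": set(),
}

def transition(state, new_state):
    """Return the new state, or raise if the move is illegal."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A process is created, runs, blocks on I/O, resumes, and finishes.
s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = transition(s, nxt)
print(s)  # terminated
```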
INTER-PROCESS COMMUNICATION
The o/s must provide an inter-process communication (IPC) facility: a mechanism that allows processes
to communicate and to synchronize their actions in order to prevent race conditions (a condition where
several processes access and manipulate the same data concurrently, and the outcome of the execution
depends on the particular order in which the accesses take place).
To guard against this, mechanisms are necessary to ensure that only one process at a time can be
manipulating the data. Some of these mechanisms / techniques include:
Critical Section Problem.
It is a segment of code in which a process may be changing common variables, updating a table, writing a
file etc. Each process has its critical section, while the remaining portion is referred to as the remainder section.
The important feature of the system is that when one process is executing in its critical section, no other
process is allowed to execute in its critical section. Each process must request permission to enter its
critical section.
A solution to critical section problem must satisfy the following:
i) Mutual Exclusion – If one process is executing in its critical section, then no other process can be
executing in its critical section.
ii) Progress – If no process is executing in its critical section and some processes wish to enter their
critical sections, then only those processes that are not executing in their remainder sections can
participate in deciding which enters next.
iii) Bounded Waiting – There must exist a bound on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical section
and before that request is granted (prevents busy waiting).
Busy waiting occurs when one process is executing in its critical section and a second process repeatedly
tests whether the first is through. It is an acceptable technique only when the anticipated waits are brief;
otherwise it wastes CPU cycles.
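A minimal sketch of mutual exclusion on a critical section, using Python threads and a lock for illustration (the counts are invented for the example):

```python
import threading

counter = 0
lock = threading.Lock()                  # guards the critical section

def worker(n):
    global counter
    for _ in range(n):
        with lock:                       # request permission to enter
            counter += 1                 # critical section: shared update
        # lock released here: remainder section

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 40000: the updates never interleave
```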
1. Semaphores
To achieve the desired effect we can view the semaphore as a variable that has an integer value upon which 3
operations are defined:
a) A semaphore may be initialized to a non-negative value
b) The wait operation decrements the semaphore value. If the value becomes negative, the
process executing the wait is blocked.
c) The signal operation increments the semaphore value. If the resulting value is not positive, then a
process blocked by a wait operation is unblocked.
A binary semaphore accepts only 2 values, 0 or 1. For both counting and binary semaphores, a queue is used
to hold the processes waiting on the semaphore.
A strong semaphore has a mechanism of removing processes from a queue e.g. FIFO policy.
A weak semaphore doesn’t specify the order in which processes are removed from the queue.
Example of implementation:
Wait (S);
Critical Section
Signal (S);
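The wait/signal pattern above can be sketched with Python's threading.Semaphore (an illustrative binary semaphore; the process names and count are invented for the example):

```python
import threading

S = threading.Semaphore(1)     # binary semaphore initialized to 1
shared = []

def process(name):
    S.acquire()                # wait(S): decrement, block if unavailable
    shared.append(name)        # critical section
    S.release()                # signal(S): increment, wake a blocked waiter

threads = [threading.Thread(target=process, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(shared))          # [0, 1, 2, 3, 4]: each entered exactly once
```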
2. Message passing
Provides a means for co-operating processes to communicate when they are not using a shared-memory
environment. Processes generally send and receive messages by using system calls such as send
(receiverprocess, message) or receive (senderprocess, message).
A blocking send must wait for the receiver to receive the message. A non-blocking send enables the sender to
continue with other processing even if the receiver has not yet received the message. This requires a buffering
mechanism to hold messages until the receiver receives them.
Message passing can be flawless on a single computer, but in a distributed system messages can be garbled or
even lost; thus acknowledgement protocols are used.
One complication in distributed systems with send / receive message passing is naming processes
unambiguously so that send and receive calls reference the proper process.
Process creation and destruction can be coordinated through some centralized naming mechanisms but this
can introduce considerable transmission overhead as individual machines request permission to use new
names.
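A sketch of blocking and non-blocking sends using Python's queue module as the buffering mechanism (an illustrative single-machine stand-in, not a distributed implementation):

```python
import queue
import threading

mailbox = queue.Queue(maxsize=1)      # the buffering mechanism

def sender():
    mailbox.put("hello")              # blocking send: waits if buffer is full

def receiver(out):
    out.append(mailbox.get())         # blocking receive: waits for a message

result = []
t_recv = threading.Thread(target=receiver, args=(result,))
t_send = threading.Thread(target=sender)
t_recv.start(); t_send.start()
t_send.join(); t_recv.join()
print(result)                         # ['hello']

# A non-blocking send raises instead of waiting when no buffer space is free:
mailbox.put("a")
try:
    mailbox.put_nowait("b")
except queue.Full:
    print("buffer full; sender would continue with other work")
```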
3. Monitors
It is a high-level synchronization construct characterized by a set of programmer-defined operations. It is
made up of declarations of variables and the bodies of procedures or functions that implement operations on
a given type. Its variables cannot be used directly by the various processes; the monitor ensures that only one
process at a time can be active within it.
Schematic view of a monitor
Processes desiring to enter the monitor when it is already in use must wait. This waiting is automatically
managed by the monitor. Data inside the monitor is accessible only to the process inside it. There is no way
for processes outside the monitor to access monitor data. This is called information hiding.
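A minimal monitor sketch in Python: a class whose private lock serializes entry and whose condition variable manages the waiting described above (the class and method names are invented for the example):

```python
import threading

class CounterMonitor:
    """Monitor sketch: all data is private, every operation acquires
    the monitor lock, and a condition variable manages waiting."""
    def __init__(self):
        self._lock = threading.Lock()
        self._nonzero = threading.Condition(self._lock)
        self._value = 0                # monitor data: hidden from callers

    def increment(self):
        with self._lock:               # only one thread active inside
            self._value += 1
            self._nonzero.notify()

    def wait_nonzero(self):
        with self._lock:
            while self._value == 0:
                self._nonzero.wait()   # releases the lock while waiting
            return self._value

m = CounterMonitor()
t = threading.Thread(target=m.increment)
t.start()
print(m.wait_nonzero())  # 1: the waiter slept until increment() signalled
t.join()
```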
Event Counters
Introduced to enable process synchronization without the use of mutual exclusion. An event count keeps
track of the number of occurrences of events of a particular class of related events. It is an integer counter
that does not decrease. Operations allow processes to reference event counts: Advance (event count),
Read (event count) and Await (event count, value).
Advance (E) –signals the occurrence of an event of the class of events represented by E by incrementing
event count E by 1.
Read (E) – obtains value of E; because Advance (E) operations may be occurring during Read (E) it is only
guaranteed that the value will be at least as great as E was before the read started.
Await (E, V) – blocks the process until the value of E becomes at least V; this avoids the need for busy waiting.
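The three operations can be sketched as follows (a Python illustration built on a condition variable; the method names mirror Advance/Read/Await):

```python
import threading

class EventCount:
    """Sketch of an event counter: a non-decreasing integer with
    advance/read/await operations."""
    def __init__(self):
        self._value = 0
        self._cond = threading.Condition()

    def advance(self):               # Advance(E): one more event occurred
        with self._cond:
            self._value += 1
            self._cond.notify_all()

    def read(self):                  # Read(E): at least as great as before
        with self._cond:
            return self._value

    def await_value(self, v):        # Await(E, V): block until E >= V
        with self._cond:
            while self._value < v:
                self._cond.wait()

E = EventCount()
t = threading.Thread(target=lambda: [E.advance() for _ in range(3)])
t.start()
E.await_value(3)   # no busy waiting: this thread sleeps until E >= 3
t.join()
print(E.read())    # 3
```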
PROCESS SCHEDULING
It is the process of switching the CPU among processes; it makes the computer more productive and is the
basis of multiprogrammed o/s. Normally several processes are kept in memory and when one process has to
wait, the o/s takes the CPU away from that process and gives the CPU to another process.
Modules of the o/s involved include
CPU Scheduler
CPU dispatcher
CPU Scheduler – selects a process from the ready queue that is to be executed. Scheduling decisions may
take place under the following four conditions:
a) When process switches from running state to waiting state (e.g. I/O request or invocation of wait
for the termination of one of the child processes).
b) When a process switches from running state to the ready state (e.g. when an interrupt occurs)
c) When a process switches from the waiting state to the ready state (e.g. completion of I/O)
d) When a process terminates.
Scheduling levels
Three important levels of scheduling are considered.
a) High level scheduling (Job Scheduling) – determines which jobs should be allowed to compete
actively for the resources of the system. Also called Admission Scheduling. Once admitted jobs
become processes or groups of processes.
b) Intermediate level scheduling – determines which processes shall be allowed to compete for the
CPU.
c) Low level scheduling – determines which ready process will be assigned the CPU when it next
becomes available, and actually assigns the CPU to this process.
Different CPU scheduling algorithms are available; to determine which is best for a particular situation,
many comparison criteria have been suggested. They include:
(i) CPU utilization – keeps the CPU as busy as possible.
(ii) Throughput – measure of work i.e. no of processes that are completed per time unit.
(iii) Turnaround time – The interval from the time of submission to the time of completion. It is the sum
of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU,
and doing I/O.
(iv) Waiting Time – Sum of periods spent waiting in ready queue.
(v) Response time – A measure of the time from the submission of a request by a user until the first
response is produced. It is the amount of time it takes to start responding, not the time it takes to
output that response.
(vi) I/O-boundedness of a process – when a process gets the CPU, does it use the CPU only briefly
before generating an I/O request?
(vii) CPU-boundedness of a process – when a process gets the CPU, does it tend to use the CPU until
its time quantum expires?
Scheduling Algorithms
1. First Come First Served (FCFS) / First In First Out (FIFO) Scheduling
The process that requests the CPU first is allocated the CPU first. Code for FCFS is simple to write and
understand. The average waiting time under FCFS is often quite long. FCFS is a non-preemptive discipline
(once a process has the CPU, it runs to completion).
Example
Processes arrive at time 0, with the length of the CPU-burst time given in milliseconds.
Process   Burst Time (ms)
P1        24
P2        3
P3        3

Gantt chart:
| P1 | P2 | P3 |
0    24   27   30

Waiting times: P1 = 0, P2 = 24, P3 = 27; average waiting time = (0 + 24 + 27) / 3 = 17 ms.
Thus the average waiting time under the FCFS policy is generally not minimal, and it may vary substantially
if the processes' CPU-burst times vary greatly. The FCFS algorithm is particularly troublesome for
time-sharing systems.
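The waiting-time arithmetic above can be checked with a short sketch (all processes assumed to arrive at time 0, as in the example):

```python
def fcfs_waiting_times(bursts):
    """Waiting times when all processes arrive at time 0 and run
    in submission order (non-preemptive FCFS)."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)    # a process waits while earlier bursts run
        clock += b
    return waits

waits = fcfs_waiting_times([24, 3, 3])   # P1, P2, P3 from the example
print(waits)                             # [0, 24, 27]
print(sum(waits) / len(waits))           # 17.0 ms average
```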
2. Shortest Job First (SJF) Scheduling
When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If 2 processes
have the same length of next CPU burst, FCFS scheduling is used to break the tie. SJF reduces the average
waiting time compared to FCFS.
Example
Process   Burst Time (ms)
P1        6
P2        8
P3        7
P4        3

Gantt chart:
| P4 | P1 | P3 | P2 |
0    3    9    16   24

Waiting times: P1 = 3, P2 = 16, P3 = 9, P4 = 0; average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms.
The real difficulty of SJF is knowing the length of the next CPU request, and this information is not usually
available. NB: SJF is non-preemptive in nature.
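A similar sketch for non-preemptive SJF with all processes arriving at time 0:

```python
def sjf_waiting_times(bursts):
    """Waiting times under non-preemptive SJF, all processes arriving
    at time 0: run in order of increasing next CPU burst."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock          # everything scheduled earlier has run
        clock += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])  # P1..P4 from the example
print(waits)                             # [3, 16, 9, 0]
print(sum(waits) / len(waits))           # 7.0 ms average
```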
3. Shortest Remaining Time (SRT) Scheduling
The preemptive variant of SJF: a newly arriving process whose next CPU burst is shorter than the remaining
time of the currently running process preempts it. Example: P1 arrives at time 0 with an 8 ms burst, P2 at
time 1 with 4 ms, P3 at time 2 with 9 ms, and P4 at time 3 with 5 ms.

Gantt chart:
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
P1 is started at time 0, since it is the only process in the queue. When P2 arrives at time 1, the remaining time
for P1 (7 ms) is larger than the time required by P2 (4 ms), so P1 is preempted and P2 is scheduled. The
average waiting time is less than with non-preemptive SJF.
SRT has higher overhead than SJF since it must keep track of elapsed service time of the running job and
must handle occasional preemptions.
4. Round Robin (RR) Scheduling
Each process gets the CPU for a small time quantum; when the quantum expires, the process is preempted
and added to the tail of the ready queue. For the same four processes (bursts of 8, 4, 9 and 5 ms, arriving at
times 0 to 3) with a quantum of 4 ms:

Gantt chart:
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0    4    8    12   16   20   24   25   26
RR is effective in time-sharing environments. The preemption overheads are kept low by efficient
context-switching mechanisms and by providing adequate memory for the processes to reside in main storage.
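A round-robin sketch for processes that all arrive at time 0 (the burst values below reuse the FCFS example for illustration):

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Round robin for processes that all arrive at time 0.
    Waiting time = completion time - burst time."""
    remaining = list(bursts)
    ready = deque(range(len(bursts)))
    clock, completion = 0, [0] * len(bursts)
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i]:
            ready.append(i)            # quantum expired: back of the queue
        else:
            completion[i] = clock      # finished within this quantum
    return [completion[i] - bursts[i] for i in range(len(bursts))]

# Bursts of 24, 3 and 3 ms with a 4 ms quantum:
print(rr_waiting_times([24, 3, 3], quantum=4))   # [6, 4, 7]
```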
Selfish Round Robin is a variant introduced by Kleinrock: as processes enter the system they first
reside in a holding queue until their priorities reach the level of the processes in an active queue.
5. Priority Scheduling
SJF is a special case of the general priority scheduling algorithm. A priority is associated with each process
and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in
FCFS order.
Priorities are generally some fixed range of numbers, e.g. 0–7 or 0–4095. Some systems use low numbers to
represent low priority; others use low numbers for high priority. Priorities can be defined either
internally or externally.
a) For internally defined priorities, the following measurable quantities are used – time limits,
memory requirements, number of open files, ratio of average I/O burst to CPU burst.
b) Externally defined priorities are set by criteria that are external to the o/s, e.g. the importance of the
process, the type and amount of funds being paid for computer use, the department sponsoring the
work, and other political factors.
Priority scheduling can be either preemptive or non-preemptive.
A major problem is indefinite blocking, or starvation: low-priority jobs may wait indefinitely for the CPU.
Solution: aging – a technique of gradually increasing the priority of a process that waits in the system for a
long time.
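Aging can be sketched as a toy policy in which, on every scheduling pass, each process that was passed over has its priority number lowered by one (the low-number-means-high-priority convention is an assumption, as the text notes conventions differ, and the process names are invented):

```python
def pick_with_aging(priorities, age_boost=1):
    """Toy priority scheduler with aging. `priorities` maps process
    name -> priority number (low number = high priority, an assumed
    convention). Picks the highest-priority process, then ages every
    process that was passed over by lowering its number."""
    chosen = min(priorities, key=lambda p: (priorities[p], p))
    for p in priorities:
        if p != chosen:
            priorities[p] = max(0, priorities[p] - age_boost)
    return chosen

procs = {"low": 5, "high": 1}
picks = [pick_with_aging(procs) for _ in range(6)]
print(picks)   # 'high' runs first, but 'low' ages up and eventually runs
```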
Exercises
1. Consider the following set of processes, with the length of CPU burst time
given in milliseconds.
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            3
P4        1            4
P5        5            2
The processes are assumed to have arrived in the order P1, P2, P3, P4, and P5 all at time 0.
Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, Priority (1 is lowest
priority) and RR (quantum = 1ms) scheduling. Calculate average waiting time for each algorithm.
2. Assume that we have the workload shown. All 5 processes arrive at time 0, in the
order given, with the length of the burst time given in ms.
Process   Burst Time
P1        10
P2        29
P3        3
P4        7
P5        12
Considering the FCFS, SJF and RR (quantum = 10 ms) scheduling algorithms for this set of processes,
which algorithm would give the minimum average waiting time?
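A sketch for checking answers to exercise 2 (all processes arriving at time 0):

```python
from collections import deque

bursts = [10, 29, 3, 7, 12]            # P1..P5, all arriving at time 0

def avg_wait_fcfs(b):
    clock = total = 0
    for x in b:                        # run in submission order
        total += clock
        clock += x
    return total / len(b)

def avg_wait_sjf(b):
    clock = total = 0
    for x in sorted(b):                # run shortest burst first
        total += clock
        clock += x
    return total / len(b)

def avg_wait_rr(b, q):
    remaining, ready = list(b), deque(range(len(b)))
    clock, comp = 0, [0] * len(b)
    while ready:
        i = ready.popleft()
        run = min(q, remaining[i])     # one quantum (or less, to finish)
        clock += run
        remaining[i] -= run
        if remaining[i]:
            ready.append(i)            # preempted: tail of the ready queue
        else:
            comp[i] = clock
    return sum(c - x for c, x in zip(comp, b)) / len(b)

print(avg_wait_fcfs(bursts))           # 28.0
print(avg_wait_sjf(bursts))            # 13.0
print(avg_wait_rr(bursts, 10))         # 23.0 -> SJF gives the minimum
```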
DEADLOCKS
A deadlock occurs when several processes compete for a finite number of resources, i.e. a process is waiting
for a particular event that will not occur. The event here may be resource acquisition and release.
Example of Deadlocks
1. A traffic deadlock – vehicles in a busy section of the city.
2. A resource deadlock:
[Figure: each of two processes holds one resource (e.g. P1 holds R1 and P2 holds R2) and requests the
resource held by the other.]
This system is deadlocked because each process holds a resource being requested by the other process and
neither process is willing to release the resource it holds (this leads to a "deadly embrace").
DEADLOCK DETECTION
This is the process of actually determining that a deadlock exists and of identifying the processes involved in
the deadlock (i.e. determining whether a circular wait exists). To facilitate the detection of deadlocks,
resource allocation graphs are used, which indicate resource allocations and requests. These graphs change
as processes request resources, acquire them, and eventually release them to the o/s.
Reduction of resource allocation graphs is a technique useful for detecting deadlocks: the processes that
may complete their execution and the processes that will remain deadlocked are determined. If a process's
resource requests may be granted, then we say that the graph may be reduced by that process (the arrows
connecting the process and its resources are removed).
If a graph can be reduced by all its processes, there is no deadlock; otherwise, the irreducible processes
constitute the set of deadlocked processes in the graph.
NB: - the order in which the graph reductions are performed does not matter; the final result will always be
the same.
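Graph reduction can be sketched directly (the process and resource names here are invented; a process is reducible once no other unfinished process holds anything it requests):

```python
def deadlocked(held, requests):
    """Graph-reduction sketch. `held`: process -> set of resources it
    holds; `requests`: process -> set of resources it is waiting for.
    Returns the irreducible (deadlocked) processes."""
    remaining = set(held)
    changed = True
    while changed:
        changed = False
        for p in list(remaining):
            blocked = any(r in held[q]
                          for r in requests[p]
                          for q in remaining if q != p)
            if not blocked:              # p could run to completion
                remaining.discard(p)     # reduce the graph by p
                changed = True
    return remaining

# Circular wait: P1 holds R1 and wants R2; P2 holds R2 and wants R1.
print(deadlocked({"P1": {"R1"}, "P2": {"R2"}},
                 {"P1": {"R2"}, "P2": {"R1"}}))   # {'P1', 'P2'}

# No cycle: P2 requests nothing, so the graph reduces completely.
print(deadlocked({"P1": {"R1"}, "P2": {"R2"}},
                 {"P1": {"R2"}, "P2": set()}))    # set()
```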
Notation of resource allocation and request graphs:
An arrow from a resource to a process (e.g. R2 to P2) indicates that a resource of type R2 has been
allocated to process P2.
An arrow from a process to a resource (e.g. P3 to R3) indicates that process P3 is requesting resource R3,
which has been allocated to process P4.
A cycle in the graph indicates a circular wait (deadlock): e.g. process P5 has been allocated R5, which is
requested by P6; P6 has been allocated R4, which is being requested by process P5.
Graph Reduction
Given a resource allocation graph involving processes P7, P8 and P9 and resources R6 and R7, can you
determine the possibility of a deadlock?
[Figure: the graph is reduced in three steps: I. reducing by P9, II. reducing by P7, III. reducing by P8.
Since the graph can be reduced by all of its processes, there is no deadlock.]
NB: Deadlock detection algorithm should be invoked at less frequent intervals to reduce overhead in
computation time. If it is invoked at arbitrary points there may be many cycles in resource graph and it would
be difficult to tell which of the many deadlocked processes “caused” the deadlock.
DEADLOCK RECOVERY
Once a deadlock has been detected, several alternatives exist:
a) Ostrich algorithm (bury your head in the sand and assume things will just work out).
b) Inform the operator, who will deal with it manually.
c) Let the system recover automatically.
DEADLOCK PREVENTION
Aims at getting rid of the conditions that cause deadlock. Methods used include:
a) Denying Mutual exclusion – mutual exclusion should only be allowed for non-shareable resources.
For shareable resources, e.g. read-only files, processes attempting to open them should be granted
simultaneous access. Thus for shareable resources we do not allow mutual exclusion.
b) Denying Hold and wait (wait for condition) – To deny this we must guarantee that whenever a
process requests a resource it does not hold any other resources. Protocols used include:
i) Require each process to request and be allocated all its resources before it begins
execution. While waiting for resources to become available it should not hold any
resource – this may lead to serious waste of resources.
ii) A process may only request resources when it holds none, i.e. it must release all the
resources it holds before requesting more.
Disadvantages of these protocols
Resource utilization is low since many of the resources may be allocated but unused
for a long period.
Starvation is possible – A process that needs several popular resources may have to
wait indefinitely because at least one of the resources that it needs is always
allocated to some other process.
c) Denying “No preemption”
If a process that is holding some resource requests another resource that cannot be immediately allocated
to it (i.e. it must wait) then all resources currently being held are preempted. The process will be started
only when it can regain its old resources as well as the new ones that it is requesting.
Dijkstra’s Banker's Algorithm says that allocation of a resource is only done when it results in a safe state
rather than an unsafe state. A safe state is one in which the total resource situation is such that all users
would eventually be able to finish. An unsafe state is one that might eventually lead to a deadlock.
Example of a safe state
Assume a system with 12 equivalent tape drives and 3 users sharing the drives:
State I
User      Current Loan   Maximum Need
User(1)   1              4
User(2)   4              6
User(3)   5              8
Available: 2
The state is ‘safe’ because it is still possible for all 3 users to finish: the remaining 2 drives may be given to
user(2), who may then run to completion, after which the six drives user(2) holds would be released for
user(1) and user(3). Thus the key to a state being safe is that there is at least one way for all users to finish.
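The safety test can be sketched for a single resource type (the drive counts below are taken from States I and II):

```python
def is_safe(loans, max_needs, available):
    """Banker's-algorithm safety check for one resource type. A state
    is safe if there is some order in which every user's remaining
    need can be met as earlier users finish and release their loans."""
    users = list(range(len(loans)))
    pool = available
    while users:
        for u in users:
            if max_needs[u] - loans[u] <= pool:
                pool += loans[u]        # u finishes; its loan returns
                users.remove(u)
                break
        else:
            return False                # nobody can finish: unsafe
    return True

# State I: loans (1, 4, 5), maximum needs (4, 6, 8), 2 drives free
print(is_safe([1, 4, 5], [4, 6, 8], available=2))    # True  (safe)
# State II: loans (8, 2, 1), maximum needs (10, 5, 3), 1 drive free
print(is_safe([8, 2, 1], [10, 5, 3], available=1))   # False (unsafe)
```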
Example of an unsafe state
State II
User      Current Loan   Maximum Need
User(1)   8              10
User(2)   2              5
User(3)   1              3
Available: 1
A 3-way deadlock could occur if indeed each process needs to request at least one more drive before
releasing any drives to the pool.
NB: An unsafe state does not imply the existence of a deadlock. What an unsafe state does
imply is simply that some unfortunate sequence of events might lead to a deadlock.
State IV (Unsafe)
Therefore State IV is not necessarily deadlocked but the state has gone from a safe one to an unsafe one.
Thus under Dijkstra’s Banker's Algorithm, the mutual exclusion, wait-for and no-preemption conditions are
allowed, but processes do not claim exclusive use of the resources they require.
EXERCISE
In the context of Dijkstra’s Bankers Algorithm discuss whether each of the following states is safe or unsafe.
If a state is safe show how it is possible for all processes to complete. If a state is unsafe show how it is
possible for a deadlock to occur.
State A
State B
Advantages
Minimize fragmentation
Disadvantages
Page-mapping hardware may increase the cost of the computer
The memory used to store the various tables may be large
Some memory may remain unused if it is not sufficient to hold a full frame
2. Segmentation
Processes are divided into variable-sized segments determined by the sizes of the process's program,
sub-routines, data structures, etc. A segment is a logical grouping of information, such as a routine, an array
or a data area, determined by the programmer. Each segment has a name and a length.
Example: User’s view of a program
Sgmt
1400 0Sgmt
Subroutine Stack 2400 3Sgmt
Segment 3 2Sgmt
Segment 0 3200 4Sgmt 1
Symbol Table
LimitBase100014 4300 Main
Memory
004006300110032 4700
Sqrt 0010004700Segm
Segment 4 ent Table 5700
6300
6700
Segment 1 Main Program
Segment 2
Logical Address space
(Program in backing store)
Advantages
Eliminates internal fragmentation
Allows dynamic growth of segments
Facilitates the loading of only one copy of shared routines.
Disadvantages
Considerable compaction overheads incurred in order to support dynamic growth and eliminate
fragmentation.
Maximum segment size is limited to the size of main memory
Can cause external fragmentation when all blocks of free memory are too small to accommodate a
segment.
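Address translation under segmentation can be sketched as follows (the base/limit values below are illustrative, chosen to match the example's segment table):

```python
def translate(segment, offset, table):
    """Segmented address translation: a logical address (segment,
    offset) maps to base + offset, valid only while offset < limit."""
    base, limit = table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

# Illustrative segment table: segment -> (base, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400),
                 3: (3200, 1100), 4: (4700, 1000)}

print(translate(2, 53, segment_table))    # 4353: 53 bytes into segment 2
print(translate(0, 999, segment_table))   # 2399: last valid byte of segment 0
```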
3. Partitioned Allocation
Ensures the main memory accommodates both the o/s and the various user processes, i.e. memory is divided
into 2 partitions: one for the resident o/s and one for user processes. It protects o/s code and data from
changes (accidental or malicious) by user processes.
The o/s takes into account the memory requirements of each process and amount of available memory space
in determining which processes are allocated memory.
Example
Total memory is 2560 K; the resident o/s occupies the low 400 K, leaving 2160 K for user processes. The job queue is:

Process   Memory   Time
P1         600 K    10
P2        1000 K     5
P3         300 K    20
P4         700 K     8
P5         500 K    15

Allocating in FCFS order, P1 is placed at 400 K–1000 K, P2 at 1000 K–2000 K and P3 at 2000 K–2300 K, leaving a 260 K hole at 2300 K–2560 K that is too small for the remaining jobs. When P2 terminates, its hole is allocated to P4 (1000 K–1700 K), leaving a 300 K hole at 1700 K–2000 K. When P1 terminates, P5 is allocated part of its space (400 K–900 K), leaving a 100 K hole at 900 K–1000 K. The successive memory snapshots show how holes of various sizes are created and only partially reused as processes terminate.
Advantages
Reduces fragmentation and makes it possible to allocate more partitions, which allows a higher degree of multiprogramming
Disadvantages
Relocation hardware increases the cost of the computer
Compaction time may be substantial
A job’s partition size is limited to the size of main memory
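The allocation decisions in the example above can be sketched as a first-fit allocator over a list of free holes. First-fit is one common placement policy (best-fit and worst-fit are alternatives); the code and sizes are illustrative:

```python
# First-fit allocation over a list of free holes (start, size), as a sketch
# of variable-partition allocation. Sizes are in kilobytes.

def first_fit(holes, request):
    """Allocate `request` K from the first hole big enough; return start address."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            if size == request:
                del holes[i]                       # hole consumed exactly
            else:
                holes[i] = (start + request, size - request)
            return start
    return None                                    # no hole fits: external fragmentation

# 2160 K of user memory above a 400 K resident o/s, as in the example.
holes = [(400, 2160)]
print(first_fit(holes, 600))    # P1 -> 400
print(first_fit(holes, 1000))   # P2 -> 1000
print(first_fit(holes, 300))    # P3 -> 2000
```

The returned start addresses match the FCFS placements in the example; a request for P4’s 700 K at this point would return None, since only the 260 K hole remains.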
4. Overlays
Used when the process size is larger than the amount of memory allocated (keeps in memory only those instructions and data that are needed at any given time).
Example: A 2-pass assembler
During Pass 1 it constructs a symbol table, and in Pass 2 it generates machine-language code. Thus the assembler can be partitioned into Pass 1 code, Pass 2 code, the symbol table and common support routines used by both Pass 1 and Pass 2.
Assuming sizes:
Pass 1 70K
Pass 2 80 K
Symbol Table 20 K
Common Routines 30 K
Loading everything requires 200 K of memory. If only 150 K is available, note that Pass 1 and Pass 2 do not need to be in memory at the same time. Thus 2 overlays can be defined i.e.
Overlay A (symbol table, common routines and Pass 1)
Overlay B (symbol table, common routines and Pass 2)
An overlay driver (10 K) is needed to manage the switching between overlays.
Memory layout with overlays (150 K):
Symbol table        20 K
Common routines     30 K
Overlay driver      10 K
Overlay area        Pass 1 (70 K) or Pass 2 (80 K), whichever is currently loaded
Advantages
Does not require any special support from the o/s, i.e. overlays can be implemented by users with simple file structures
Disadvantages
Programmers must design and program the overlay structure properly
5. Swapping
User programs do not remain in main memory until completion. In some systems one job occupies the main storage at a time. That job runs until it can no longer continue, then it relinquishes both storage and the CPU to the next job.
Thus the entire storage is dedicated to one job for a brief period; the job is then removed (i.e. swapped out or rolled out) and the next job is brought in (swapped in or rolled in). A job will normally be swapped in and out many times before it is completed. Swapping guarantees reasonable response times.
VIRTUAL MEMORY
It is a technique whereby a part of secondary storage is addressed as main memory. It allows execution of processes that may not be completely in memory, i.e. programs can be larger than physical memory. Virtual memory techniques:
1. Paging
Processes reside on secondary storage (usually a disk) and when they are to be executed they are swapped into memory. The swap does not bring in the entire process but only the pages that are needed (a lazy swapper). With this scheme some form of hardware support is needed to distinguish the pages that are in memory from those that are on the disk.
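The valid-bit mechanism can be sketched in a few lines: a reference to a page that is not in memory traps (a page fault) and the page is loaded on demand. The reference string and frame-assignment policy below are illustrative, and eviction is deliberately left out:

```python
# Minimal demand-paging sketch: a page table holding frame numbers for
# resident pages, and a lazy swapper that loads a page only on first use.

page_table = {}          # page number -> frame number, present only if in memory
page_faults = 0

def access(page):
    """Return the frame holding `page`, swapping it in on a page fault."""
    global page_faults
    if page not in page_table:               # valid bit clear: trap to the o/s
        page_faults += 1
        page_table[page] = len(page_table)   # pretend a free frame is found
    return page_table[page]

for p in [0, 1, 0, 2, 1]:
    access(p)
print(page_faults)   # 3 faults: pages 0 and 1 are already resident on reuse
```

A real implementation also needs a replacement policy for when no free frame exists, which is exactly what the accessed bit discussed below supports.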
2. Segmentation
It eliminates paging hardware overheads. The o/s allocates memory in segments rather than pages and keeps track of these segments through segment descriptors, which include information about a segment’s size, protection and location.
A program does not need to have all its segments in memory to execute; instead the segment descriptor contains a valid bit for each segment to indicate whether the segment is currently in memory. A trap to the o/s (segment fault) occurs when a segment not in memory is referenced. The o/s will swap out a segment to secondary storage and bring in the entire requested segment.
To determine which segment to replace in case of a segment fault the o/s uses another bit in the segment descriptor, called the accessed bit.
FILE MANAGEMENT
File system – concerned with managing secondary storage space particularly disk storage. It consists of two
distinct parts:
A collection of files each storing related data
A directory structure which organizes and provides information about all the files in the system.
A file – is a named collection of related information that is recorded on secondary storage.
File Naming
A file is named for convenience of its human users and a name is usually a string of characters e.g. “good.c”.
When a file is named, it becomes independent of the process, the user and even the system that created it, i.e. another user may edit the same file or give it a different name.
File Types
An o/s should recognize and support different file types so that it can operate on files in reasonable ways. A file’s type can be implemented by including it as part of the file name. The name is split into 2 parts – a name and an extension – usually separated by a period character. The extension indicates the type of the file and the type of operations that can be performed on that file.
File Attributes
Apart from the name, other attributes of files include:
Size – amount of information stored in the file
Location – Is a pointer to a device and to the location of file on the device
Protection – Access control to information in file (who can read, write, execute and so on)
Time, date and user identification – this information is kept for creation, last modification and last use. These data can be useful for protection, security and usage monitoring.
Volatility – frequency with which additions and deletions are made to a file
Activity – Refers to the percentage of file’s records accessed during a given period of time.
File Operations
File can be manipulated by operations such as:
Open – prepare a file to be referenced
Close – prevent further reference to a file until it is reopened
Create – Build a new file
Destroy – Remove a file
Copy – create another version of file with a new name.
Rename – change name of a file.
List – Print or display contents.
Move – change location of file
Individual data items within the file may be manipulated by operations like:
Read – Input a data item to a process from a file
Write – Output a data item from a process to a file
Update – modify an existing data item in a file
Insert – Add a new data item to a file
Delete – Remove a data item from a file
Truncating – delete some data items but file retains all other attributes
File Structure
Refers to the internal organization of the file. File types may indicate structure. Certain files must conform to a required structure that is understood by the o/s. Some operating systems have file systems that support multiple structures, while others impose (and support) a minimal number of file structures, e.g. MS-DOS and UNIX.
UNIX considers each file to be a sequence of 8-bit bytes. The Macintosh o/s supports a minimal number of file structures and expects executables to contain 2 parts – a resource fork and a data fork. The resource fork contains information of importance to the user, e.g. labels of any buttons displayed by the program.
File Organization
Refers to the manner in which records of a file are arranged on secondary storage.
The most popular schemes are:-
a) Sequential
Records placed in physical order. The next record is the one that physically follows the previous record. It is
used for records in magnetic tape.
b) Direct
Records are directly (randomly) accessed by their physical addresses on a direct access storage device (DASD). The application programmer places the records on the DASD in any order appropriate for the particular application.
c) Indexed Sequential
Records are arranged in logical sequence according to a key contained in each record. The system maintains an index containing the physical addresses of certain principal records. Indexed sequential records may be accessed sequentially in key order, or they may be accessed directly by a search through the system-created index. It is usually used on disk.
d) Partitioned
It is a file of sequential sub files. Each sequential sub file is called a member. The starting address of each
member is stored in the file directory. Partitioned files are often used to store program libraries.
File Sharing
In a multi-user system there is almost always a requirement for allowing files to be shared among a number of users.
Two issues arise:-
i) Access rights – the system should provide a number of options so that the way in which a particular file is accessed can be controlled. Access rights assigned to a particular file include: read, execute, append, update, delete.
ii) Simultaneous access – a discipline is required when access to append or update a file is granted to more than one user. One approach is to allow a user to lock the entire file while it is being updated.
DEVICE MANAGEMENT (INPUT/OUTPUT MANAGEMENT)
Device management encompasses the management of I/O devices such as printers, keyboards, mice, disk drives, tape drives, modems etc. The devices normally transfer and receive alphanumeric characters in ASCII, which uses 7 bits to code 128 characters, i.e. A-Z, a-z, 0-9 and 32 special printable characters such as % and *. Devices differ from the processor and from one another in speed and data format; the major objective of I/O management is to resolve these differences by supervising and synchronizing all input and output transfers.
Diagram: the processor is connected to the device interfaces by three buses – the address bus, the data bus and the control bus.
Address bus – carry device address that identifies a device being communicated
Data Bus – carry data to or from a device
Control Bus – transmit I/O Commands
There are four types of commands that are transmitted:
(a) Control command – activates device and informs it what to do e.g. rewind
(b) Status command – used to test conditions in the interface and peripherals e.g. checking for
errors.
(c) Data output command – causes the interface to transfer data into the device.
(d) Data input command – causes the interface to receive an item of data from the device.
Most computers use the same buses for both memory and input/output transfers. The processor will use
different methods to distinguish the two i.e.
a) Isolated I/O (I/O-mapped I/O) – isolates memory from I/O addresses; a signal is sent that distinguishes a memory access from an I/O access, enabling either a device or a memory location.
b) Memory-mapped I/O – devices are treated exactly the same as memory locations, and the same instructions are used to access both.
Example of an I/O interface: the I/O Processor (IOP)
An IOP takes care of input/output tasks, relieving the main processor of the housekeeping chores involved in I/O transfers. In addition, the IOP can perform other processing tasks such as arithmetic, logic and code translation.
Block diagram of a computer with an I/O processor: the CPU and the IOP share the memory unit, and the IOP is connected to the peripheral devices (PD) through the I/O bus.
In interrupt-controlled transfers the I/O software must issue commands to the peripheral to interrupt when ready, and must service the interrupt when it occurs. Software control of input/output equipment is a complex undertaking. For this reason I/O routines for standard peripherals are provided by the manufacturer as part of the computer system (usually within the o/s), i.e. device drivers, which communicate directly with peripheral devices or their controllers. A device driver is responsible for starting I/O operations on a device and processing the completion of an I/O request.
DISKS
They include floppy disks, hard disks, disk caches, RAM disks and laser-optical disks (DVDs, CD-ROMs). On DVDs and CD-ROMs, sectors form a long spiral that winds out from the center of the disk. On floppy disks and hard disks, sectors are organized into a number of concentric circles, or tracks. As one moves out from the center of the disk, the tracks get larger.
Disk Hardware
This refers to the disk drive that accesses information in the disk. The actual details of disk I/O operation
depend on the computer system, the operating system and nature of I/O channel and disk controller hardware.
Data is recorded on a series of magnetic disks or platters which are connected by a common spindle that
spins at very high speed (some up to 3600 revolutions per minute).
Diagram: a platter showing tracks and sectors, with the read/write heads mounted on a moveable boom.
Track selection involves moving the head in a moveable head system or electronically selecting one head on
a fixed head system.
Hard Disk performance
Three factors determine the access time A – the time it takes to access data on a disk:
Seek time, S – in a moveable-head system, the time it takes to position the head over the track.
Latency time, L (rotational delay/rotational latency) – the time it takes for the data to rotate from its current position to a position adjacent to the read/write head.
Transfer time, T – determined by the amount of information to be read, the number of bytes per track and the rotational speed.
Therefore the access time is the sum of the seek time, latency time and transfer time:
A = S + L + T
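A worked instance of the formula, using made-up drive figures (a 7200 rpm drive with an assumed 9 ms average seek and 1 MB per track; none of these numbers come from the text):

```python
# Access time A = S + L + T for a single 4 KB request, illustrative figures.
rpm = 7200
rotation_ms = 60_000 / rpm                   # one full rotation: ~8.33 ms
S = 9.0                                      # average seek time (assumed)
L = rotation_ms / 2                          # average latency: half a rotation
bytes_per_track = 1_000_000                  # assumed track capacity
T = (4096 / bytes_per_track) * rotation_ms   # fraction of a rotation to read 4 KB
A = S + L + T
print(round(A, 2))                           # ~13.2 ms
```

Note that seek and latency dominate: the transfer itself takes only a few hundredths of a millisecond, which is why disk scheduling concentrates on minimizing seeks.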
Disk Scheduling
Involves careful examination of pending requests to determine the most efficient way to service them. A disk scheduler examines the positional relationships among waiting requests; the request queue is then reordered so that the requests are serviced with minimum mechanical motion.
Common types of scheduling:
1. Seek Optimization
It aims to reduce average time spent on seeks. Algorithms used:
a) FCFS (First Come First Served) – we process items from the queue of requests in a sequential
order. There is no re-ordering of queue.
b) SSTF (Shortest Seek Time First) – the disk arm is positioned next at the request (inward or outward) that minimizes arm movement.
c) SCAN – Disk arm sweeps back and forth across the disk surface servicing all requests in its path.
It changes direction only when there are no more requests to service in the current direction.
d) C-SCAN (Circular SCAN) – the disk arm moves unidirectionally across the disk surface toward the inner track. When there are no more requests for service ahead of the arm, it jumps back to service the request nearest the outer track and proceeds inward again.
e) N-Step SCAN – the disk arm sweeps back and forth as in SCAN, but all requests that arrive during a sweep in one direction are batched and reordered for optimal service during the return sweep, i.e. the disk requests are segmented into sub-queues of length N. Sub-queues are processed one at a time by using SCAN. While a queue is being processed, new requests must be added to some other queue.
Explain these other algorithms: LOOK, C-LOOK, F-SCAN
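The first three policies can be compared by totalling head movement over one request queue. The queue and start track below are illustrative (not from the text), and the SCAN sketch assumes the arm first sweeps toward higher track numbers:

```python
# Total head movement (in tracks) for FCFS, SSTF and SCAN.

def fcfs(start, queue):
    moves, pos = 0, start
    for t in queue:                 # service strictly in arrival order
        moves += abs(t - pos)
        pos = t
    return moves

def sstf(start, queue):
    moves, pos, pending = 0, start, list(queue)
    while pending:
        t = min(pending, key=lambda x: abs(x - pos))   # nearest request first
        moves += abs(t - pos)
        pos = t
        pending.remove(t)
    return moves

def scan(start, queue):
    # Sweep toward higher tracks, then reverse for the remaining requests.
    up = sorted(t for t in queue if t >= start)
    down = sorted((t for t in queue if t < start), reverse=True)
    if not up:
        return start - down[-1] if down else 0
    moves = up[-1] - start
    if down:
        moves += up[-1] - down[-1]
    return moves

requests = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs(53, requests), sstf(53, requests), scan(53, requests))
# 640 236 299 - FCFS wastes motion; SSTF and SCAN cut it sharply
```

The gap between FCFS and the other two is the payoff of seek optimization; SSTF can starve far-away requests, which SCAN and its variants avoid.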
2. Rotational Optimization
It is used in drums. Once the disk arm arrives at a particular cylinder there may be many requests pending on the various tracks of that cylinder.
The Shortest Latency Time First (SLTF) strategy examines all these requests and services the one with the shortest rotational delay first.
RAM DISKS
This is a disk device simulated in memory chips. It completely eliminates the delays suffered by conventional disks because of the mechanical motion inherent in seeks and in spinning a disk. RAM disks are much more expensive than regular disks, and most forms are volatile – they lose their contents when power is turned off or when the power supply is interrupted.
CLOCKING SYSTEMS
Clocks and Timers
An interval timer is useful in multi-user systems for preventing one user from monopolizing the processor. After a designated interval the timer generates an interrupt to gain the attention of the processor, after which the processor may be assigned to another user.
A clock is a regular timing signal that governs transitions in a system. It is the heartbeat of the computer, necessary for the timing and sequencing of operations.
VIRTUAL DEVICES
It is a technique whereby one physical device is simulated on another physical device. Virtual devices include:
a) Buffers / Buffering
A buffer is an area of primary storage for holding data during I/O transfers e.g. printer buffer. Techniques for
implementing buffering include:
i) Single buffering – the channel deposits data in a buffer, the processor processes that data, the channel deposits the next data, and so on. While the channel is depositing data, no processing of that data may occur; while the data is being processed, no additional data may be deposited.
ii) Double buffering – allows overlap of input/output with processing: while the channel deposits data into one buffer, the processor may be processing the data in the other buffer.
There are various approaches to buffering for different types of I/O device, i.e. those that are block oriented (store information in blocks that are usually of fixed size and transfer a block at a time, e.g. tapes) and those that are stream oriented (transfer data in and out as a stream of bytes, e.g. terminals, mice, communication ports).
Buffering aims at avoiding overheads and inefficiencies during I/O operations: input transfers are performed in advance of requests being made, and outputs are performed some time after the request is made; in the meantime the data is kept in a buffer.
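The buffer-swapping discipline of double buffering can be sketched as follows. In a real system the fill and the processing overlap in time; here they are interleaved sequentially just to show how the two buffers alternate roles (the data and the `produce` stand-in are made up):

```python
# Double-buffering sketch: the "channel" fills one buffer while the
# "processor" consumes the other, then the two swap roles.

def produce(chunk):                 # stands in for a device transfer
    return list(chunk)

data = [[1, 2], [3, 4], [5, 6]]
buffers = [None, None]
consumed = []

fill = 0
buffers[fill] = produce(data[0])            # prime the first buffer
for nxt in data[1:] + [None]:
    current, fill = fill, 1 - fill
    if nxt is not None:
        buffers[fill] = produce(nxt)        # channel fills the spare buffer...
    consumed.extend(buffers[current])       # ...while this one is processed
print(consumed)   # [1, 2, 3, 4, 5, 6]
```

With a single buffer the two steps inside the loop would have to run strictly one after the other; the second buffer is what creates the opportunity for overlap.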
b) Spooling
A technique whereby a high-speed device is interposed between a running program and a low-speed device involved with the program in input/output. Instead of writing to a printer, output is written to a disk. Programs can run to completion faster, and other programs can be initiated sooner. Spooling improves system throughput by disassociating a program from the slow operating speed of devices such as printers.
c) Caching
Cache memory is a memory that is smaller and faster than main memory and that is interposed between main memory and the processor. It reduces average memory access times by exploiting the principle of locality.
A disk cache is a buffer in main memory for disk sectors. The cache contains a copy of some of the sectors on the disk, e.g. when using the DISKCOPY command, i.e. DISKCOPY A: A:
Assignment
Explain:
a) Graphic device (be sure to mention pixels and resolution)
b) RAID (Redundant array of Inexpensive disks) –main purpose and types.
SECURITY
Built into operating systems are mechanisms for protecting computing activities. Files, devices and memory must be protected from improper access, and processes must be protected from improper interference by other processes.
Protection mechanisms implement protection policies. In some cases the policy is built into the o/s; in others it is determined by the system administrator. Strict protection policies conflict with the needs for information sharing and convenient access, so a reasonable balance must be struck between these competing goals.
A system’s security level is the extent to which all its resources are always protected as dictated by the system’s protection policies. Security mechanisms attempt to raise a system’s level of security. However, the mechanisms that achieve higher levels of security tend to make the system less efficient to use; again a balance must be found between safeguarding the system and providing convenient access. The major components of security are:
- Authentication
- Prevention
- Detection
- Identification
- Correction
1. AUTHENTICATION
Many authorization policies are based on the identity of the user associated with a process, so operating systems need some mechanism for authenticating a user interacting with the computer system. The most commonly used authentication technique is requiring the user to supply one or more pieces of information known only to the user.
Examples include: Passwords, Smartcard systems (use both password and a function), Physical
authentication (fingerprints, eye’s retina).
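Password authentication is normally implemented so that the system never stores the password itself, only a salted hash of it. A minimal sketch using Python’s standard `hashlib` (the password, salt size and iteration count are illustrative choices):

```python
# Password authentication by salted hashing: the system stores only
# (salt, digest); the password cannot be recovered from what is stored.

import hashlib
import hmac
import os

def enroll(password):
    """Create the (salt, digest) pair kept in the password file."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    """Re-derive the hash from the supplied password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)   # constant-time compare

salt, digest = enroll("s3cret")
print(verify("s3cret", salt, digest))   # True
print(verify("wrong", salt, digest))    # False
```

The per-user random salt prevents precomputed-dictionary attacks, and the many PBKDF2 iterations deliberately slow down brute-force guessing.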
2. PREVENTION
The most desirable outcome of a security system is for it to prevent intruders from successfully penetrating the system’s security. Preventive measures include:
Requiring new passwords to meet given criteria (e.g. a minimum number of characters).
Requiring passwords to be changed at periodic intervals.
Encrypting data when transmitting or storing it.
Turning off unused or duplicate services (reduces the number of system entry points).
Implementing internal firewalls.
3. DETECTION
Should a break-in occur, its negative effects may be reduced by prompt detection. Effective detection measures may also discourage intrusion attempts. Constant monitoring provides the best hope for fast discovery.
Detection can be achieved through:
Auditing systems (which record the time and the user involved in each login).
Virus checkers.
Noting that a long-running process in a listing of currently executing processes may indicate suspicious activity.
Checking the current state of the system against a previous state.
4. CORRECTION
After a system has been penetrated it is frequently necessary to take corrective action:
Periodic backups allow rolling back to a known good state.
Re-load the entire system when a backup is not available.
Change all resident security information, e.g. passwords.
Fix the vulnerability that led to the penetration.
5. IDENTIFICATION
To discourage intruders, it is desirable to identify the source of an attack. Identification is frequently the most difficult security task.
Audit trails may provide useful information.
Systems accessed through modems can keep track of the source of incoming calls using caller ID.
Networks can record the address of the connecting computer.
All services can be configured to require user authentication, e.g. mail servers.
Exercise
Explain the following program threats:
i. Trapdoor
ii. Logic Bomb
iii. Trojan Horse
iv. Virus
v. Worm
vi. Zombies