
A/L – Information & Communication Technology

5.1. Introduction to Computer OS


The most fundamental of all system programs is the Operating System, which controls all the computer's resources and provides the base upon which application programs can be written.

Booting is the startup sequence that loads the operating system when a computer is turned on. The boot sequence is the initial set of operations that the computer performs when it is switched on; every computer has a boot sequence.

The first computers did not have an operating system. Programs were either hardwired (could not be changed) or were loaded manually. There was no automatic control of peripherals or filing systems. During that period, it became clear that some way had to be found to shield programmers from the complexity of the hardware. The solution that evolved was to put a layer of software on top of the bare hardware, to manage all parts of the system and present the user with an interface or virtual machine that is easier to understand and program. This layer of software is the Operating System.

The role of an operating system is to free the computer user from all worries and concerns about the
control of a computer and all its resources. The operating system converts a computer with real
hardware and software into a virtual machine that carries out the operations required by the user –
without the user having to know anything of the underlying hardware.

One traditional role of the operating system is memory management. This covers all storage devices in the memory hierarchy, from DRAM main store to disk drives, optical storage and even plug-in USB memory sticks.

The modern graphical operating system can accept input from the user via keyboards, mouse movements, and touch screens. Mouse movements and clicks are analyzed by the operating system and the appropriate actions are implemented. For example, if you wish to copy a file from a hard disk to a DVD, you simply select the file on the hard disk and drag it to the icon for the DVD. The operating system detects the intent (copy the file) and then performs all the operations required. This may require the reformatting of data between differing media, error handling, and data routing. All this is transparent to the user.

Modern operating systems have integrated web browser technologies into the operating system in an attempt to make the computer itself look like an extension of the internet, so that there are no fundamental differences between local files and applications and those stored remotely on the Internet. The operating system also handles all communications protocols and I/O devices such as printers and scanners.

The main functions of an operating system are:

1. Providing interfaces
2. Process management
3. Resource management
4. Security and protection

Compiled By: Ms. Akila Farwin (Exc MSc(Marketing)-AeU,PGD(IT)-BCS,FIT-UCSC) 1



Classifications of Operating Systems

Classification 1

Single User - Single Tasking: This type of OS only has to deal with one person at a time, running one user application at a time. e.g. Mobile Phone OS

Single User - Multi Tasking: This OS is designed mainly with a single user in mind, but it can deal with many applications running at the same time. For example, you might be writing an essay while searching the internet, downloading a video file and also listening to a piece of music. e.g. Windows

Multi User - Multi Tasking: This OS is designed to allow multiple users to perform several tasks, which are handled by the computer system at the same time. This kind of OS can be found on mainframes and supercomputers. e.g. UNIX

Multi Threading: This type of OS has the ability to manage its use by more than one user at a time, and even to manage multiple requests by the same user, without having to have multiple copies of the program running in the computer. e.g. Solaris

Real Time: This type of OS is intended to serve real-time application requests. It must be able to process data as it comes in, typically without buffering delays. Processing time requirements are measured in tenths of seconds or shorter. e.g. ATM Machine OS

Classification 2

Command Line Interface (CLI): a text-based (black & white) interface. e.g. MS DOS, UNIX

Graphical User Interface (GUI): an attractive, graphical interface. e.g. MS Windows, MAC

Classification 3

Open Source Operating System: free online. e.g. Linux

Proprietary Operating System: a license should be bought. e.g. MS Windows


5.2. File Management in OS


1. What is File Management?

File management is one of the basic and important features of an operating system. The operating system manages all the files in a computer system, whatever their extensions.

A file is a collection of specific information stored in the memory of a computer system. File management is defined as the process of manipulating files in a computer system, and it includes creating, modifying, and deleting files.

The following are some of the tasks performed by the file management component of the operating system:

 It helps to create new files in the computer system and place them at specific locations.
 It helps in easily and quickly locating files in the computer system.
 It makes the process of sharing files among different users easy and user-friendly.
 It helps to store files in separate folders known as directories.
 It helps the user to modify the data in a file or to rename a file in a directory.
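These tasks can be sketched with Python's standard library; the directory layout and file names below are purely illustrative:

```python
# A minimal sketch of common file-management tasks: creating files and
# directories, renaming, locating, and deleting. Names are illustrative.
import os
import shutil
import tempfile

base = tempfile.mkdtemp()                      # scratch area for the example

os.makedirs(os.path.join(base, "docs"))        # store files in a directory
path = os.path.join(base, "docs", "note.txt")

with open(path, "w") as f:                     # create a new file
    f.write("hello")

os.rename(path, os.path.join(base, "docs", "report.txt"))   # rename a file

names = sorted(os.listdir(os.path.join(base, "docs")))      # locate files
print(names)

shutil.rmtree(base)                            # delete files and directory
```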

2. Objectives of File Management System

 It provides I/O support for a variety of storage device types.
 It minimizes the chances of lost or destroyed data.
 It provides standardized I/O interface routines for user processes.
 It provides I/O support for multiple users in a multi-user environment.
3. Properties of a File System

 Files are stored on disk or other storage and do not disappear when a user logs off.
 Files have names and are associated with access permissions that permit controlled sharing.
 Files can be arranged to reflect the relationships between them.
4. File Structure

A file structure needs to be predefined in a form the operating system can understand. Each file has an exclusively defined structure, which is based on its type.

Text file: a series of characters organized in lines.

Object file: a series of bytes organized into blocks.

Source file: a series of functions and processes.


5. File Attributes

 Name: It is the only information stored in a human-readable form.
 Identifier: Every file is identified by a unique tag number known as an identifier.
 Location: Points to the file's location on the device.
 Type: This attribute is required for systems that support various types of files.
 Size: Used to display the current file size.
 Protection: This attribute assigns and controls the access rights to the file.
 Time, date and security: Used for protection, security, and monitoring.

6. Functions of File

 Create file - find space on disk and make an entry in the directory
 Write to file - requires positioning within the file
 Read from file - involves positioning within the file
 Delete file - regain disk space
 Reposition - move read/write position
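The operations above map directly onto ordinary file calls; a minimal Python sketch (the file name and contents are illustrative):

```python
# Each bullet above mapped to a standard-library call.
import os

with open("demo.txt", "w") as f:   # create file: directory entry + disk space
    f.write("operating systems")   # write to file: advances the file position

with open("demo.txt", "r") as f:
    first = f.read(9)              # read from file: from the current position
    f.seek(10)                     # reposition: move the read/write pointer
    rest = f.read()

os.remove("demo.txt")              # delete file: regain the disk space
print(first, "|", rest)
```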

7. File Access Methods

File access is a process that determines the way that files are accessed and read into memory. Generally, operating systems support a single access method, though some operating systems support multiple access methods.

Sequential Access: In this method, records are accessed in a certain predefined sequence, and the information stored in the file is processed one by one. Most compilers access files using this access method.

Direct/Random Access: Also called direct access. This method allows a record to be accessed directly: each record has its own address, which can be used directly for reading and writing.

Index Sequential Access: This method is built on sequential access. An index is built for every file, with direct pointers to different memory blocks. The index is searched sequentially, and its pointer is used to access the file directly. Multiple levels of indexing can be used to offer greater efficiency in access and to reduce the time needed to access a single record.
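The difference between sequential and direct access is easiest to see with fixed-length records, where record i starts at byte i * record_size. A small Python sketch (the record size and data are illustrative):

```python
# Fixed-length records make direct access simple: seek straight to the
# byte offset of the wanted record instead of reading from the start.
import os

RECORD_SIZE = 8
records = [b"rec-0000", b"rec-0001", b"rec-0002", b"rec-0003"]

with open("records.bin", "wb") as f:
    for r in records:
        f.write(r)

with open("records.bin", "rb") as f:
    seq = [f.read(RECORD_SIZE) for _ in records]   # sequential: one by one
    f.seek(2 * RECORD_SIZE)                        # direct: jump to record 2
    direct = f.read(RECORD_SIZE)

os.remove("records.bin")
print(seq[1], direct)
```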


8. Space Allocation

In the operating system, files are always allocated disk space.

Contiguous Allocation:
- Every file uses a contiguous address space on the disk.
- The OS assigns disk addresses in a linear order.
- External fragmentation is the biggest issue with this method.

Linked Allocation:
- Every file includes a list of links.
- The directory contains a link to the first block of a file.
- This allocation method is used for sequential-access files.

Indexed Allocation:
- The directory comprises the addresses of the index blocks of the specific files.
- An index block is created, holding all the pointers for a specific file.
- Every file has an individual index block to store the addresses of its disk space.
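Linked allocation can be modelled as a toy in-memory "disk" where each block stores its data plus the index of the next block, and the directory keeps only the first block number. This is a simplified sketch; the block numbers and data are illustrative:

```python
# Toy model of linked allocation: blocks form a chain of links.
disk = {}                       # block number -> (data, next block number)

def allocate_linked(blocks, data_chunks):
    """Chain the given free blocks together and fill them with data."""
    for i, (blk, chunk) in enumerate(zip(blocks, data_chunks)):
        nxt = blocks[i + 1] if i + 1 < len(blocks) else None
        disk[blk] = (chunk, nxt)
    return blocks[0]            # directory entry: link to the first block

def read_linked(first):
    """Follow the chain of links to recover the whole file."""
    out, blk = [], first
    while blk is not None:
        data, blk = disk[blk]
        out.append(data)
    return "".join(out)

start = allocate_linked([7, 2, 9], ["ope", "rat", "ing"])  # non-contiguous blocks
print(read_linked(start))
```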

9. File Directories

Directories (also known as folders) maintain information about files. A single directory may contain multiple files, and it can also have sub-directories inside the main directory.


10. File Types

File Type       Usual Extension           Function

Executable      exe, com, bin or none     ready-to-run machine-language program

Object          obj, o                    compiled, machine language, not linked

Source code     c, p, pas, f77, asm, a    source code in various languages

Batch           bat, sh                   series of commands to be executed

Text            txt, doc                  textual data, documents

Word processor  doc, docx, tex, rtf       various word-processor formats

Library         lib, h                    libraries of routines

Archive         arc, zip, tar             related files grouped into one file, sometimes compressed


5.3. Process Management in OS


1. Definition of process

A process is basically a program in execution. The execution of a process must progress in a sequential fashion. A process is defined as an entity which represents the basic unit of work to be implemented in the system.

To put it in simple terms, we write our computer programs in a text file, and when we execute a program it becomes a process which performs all the tasks mentioned in the program.

When a program is loaded into the memory and becomes a process, it can be divided into four sections – stack, heap, text and data:

Stack: contains temporary data such as function parameters, return addresses and local variables.

Heap: memory that is dynamically allocated to a process during its run time.

Text: includes the current activity, represented by the value of the Program Counter and the contents of the processor's registers.

Data: contains the global and static variables.


2. Process States

A process state is a condition of the process at a specific instant of time. It also defines the current
position of the process.

There are mainly seven states of a process:

1. New : The new process is created when a program is called from secondary
memory (hard disk) into primary memory (RAM).

2. Ready : In a ready state, the process should be loaded into the primary memory, which
is ready for execution.

3. Waiting : The process is waiting for the allocation of CPU time and other resources for
execution.

4. Executing : The process is in execution state.

5. Blocked : It is a time interval when a process is waiting for an event like I/O operations
to complete.

6. Suspended : Suspended state defines the time when a process is ready for execution but has
not been placed in the ready queue by OS.

7. Terminated : Terminated state specifies the time when a process is terminated.
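The transitions between these states can be sketched as a small state machine. The transition table below is a simplification for illustration; the exact set of legal transitions varies between operating systems:

```python
# A toy state machine over the seven states listed above. The allowed
# transitions are an illustrative simplification, not a real OS's rules.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"executing"},
    "executing": {"waiting", "blocked", "ready", "terminated"},
    "waiting": {"executing"},
    "blocked": {"ready", "suspended"},
    "suspended": {"ready"},
    "terminated": set(),
}

def step(state, nxt):
    """Move to the next state, rejecting illegal transitions."""
    if nxt not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {nxt}")
    return nxt

s = "new"
for nxt in ["ready", "executing", "blocked", "ready", "executing", "terminated"]:
    s = step(s, nxt)
print(s)
```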


3. Process Control Block (PCB)

A Process Control Block is a data structure maintained by the Operating System for every process.
The PCB is identified by an integer process ID (PID).

A PCB keeps all the information needed to keep track of a process as listed below:

1. Process state - the current state of the process, i.e., whether it is new, ready, running, waiting, etc.
2. Process privileges - required to allow/disallow access to system resources.
3. Process ID - unique identification for each process in the operating system.
4. Pointer - a pointer to the parent process.
5. Program counter - a pointer to the address of the next instruction to be executed for this process.
6. CPU registers - the various CPU registers whose contents must be saved when the process leaves the running state.
7. CPU scheduling information - process priority and other scheduling information required to schedule the process.
8. Memory management information - page table information, memory limits, and segment tables, depending on the memory system used by the operating system.
9. Accounting information - the amount of CPU time used for process execution, time limits, execution ID, etc.
10. IO status information - a list of the I/O devices allocated to the process.
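As a rough sketch, a PCB can be modelled as a record holding these fields. The field names and values below are illustrative, not any real operating system's layout:

```python
# A toy PCB as a Python dataclass mirroring the fields listed above.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                          # process ID
    state: str = "new"                                # process state
    program_counter: int = 0                          # next instruction address
    registers: dict = field(default_factory=dict)     # saved CPU registers
    priority: int = 0                                 # scheduling information
    memory_limits: tuple = (0, 0)                     # memory management info
    cpu_time_used: float = 0.0                        # accounting information
    open_devices: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)        # created when the process enters the system
pcb.state = "ready"      # updated as the process changes state
print(pcb.pid, pcb.state)
```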

The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems. The PCB is maintained for a process throughout its lifetime and is deleted once the process terminates.


4. Process Scheduling

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types:

1. Long Term Scheduler


It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound. It also controls the degree of multiprogramming: if the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.

On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler acts when a process changes state from new to ready.

2. Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the change of a process from the ready state to the running state: the CPU scheduler selects a process from among those that are ready to execute and allocates the CPU to it. Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.

3. Medium Term Scheduler


Medium-term scheduling is a part of swapping. It removes processes from memory and so reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.

A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
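One common short-term scheduling strategy (mentioned here only as an illustration, not taken from the text above) is round-robin, in which each ready process runs for a fixed time quantum before being pre-empted and sent to the back of the ready queue. A toy sketch with illustrative burst times:

```python
# Toy round-robin short-term scheduler: each process runs for one
# quantum, then is pre-empted if it still has work left.
from collections import deque

QUANTUM = 2
ready = deque([("P1", 5), ("P2", 3), ("P3", 1)])   # (name, remaining burst)
order = []                                         # dispatch order

while ready:
    name, remaining = ready.popleft()   # dispatcher picks the next process
    order.append(name)
    remaining -= QUANTUM                # the process runs for one quantum
    if remaining > 0:
        ready.append((name, remaining)) # pre-empted: back of the ready queue

print(order)
```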


Comparison among Schedulers

Long-Term Scheduler: It is a job scheduler. Its speed is lower than that of the short-term scheduler. It controls the degree of multiprogramming. It is almost absent or minimal in time-sharing systems. It selects processes from the pool and loads them into memory for execution.

Short-Term Scheduler: It is a CPU scheduler. Its speed is the fastest among the three. It provides lesser control over the degree of multiprogramming. It is also minimal in time-sharing systems. It selects those processes which are ready to execute.

Medium-Term Scheduler: It is a process-swapping scheduler. Its speed lies in between the other two. It reduces the degree of multiprogramming. It is a part of time-sharing systems. It can re-introduce a process into memory so that its execution can be continued.

5. Context Switching

A context switch is the mechanism for storing and restoring the state (context) of a CPU in the Process Control Block so that a process's execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from one process to another, the state of the currently running process is stored into its process control block. After this, the state of the process to run next is loaded from its own PCB and used to set the program counter, registers, etc. At that point, the second process can start executing.

Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce context-switching time, some hardware systems employ two or more sets of processor registers. When a process is switched out, the following information is stored for later use:

 Program counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed state
 I/O state information
 Accounting information
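The save-and-restore step can be sketched with dictionaries standing in for the CPU state and the PCBs. This is a conceptual model only; the register names and values are illustrative:

```python
# Toy context switch: save the outgoing process's CPU state into its
# PCB, then restore the incoming process's state from its PCB.
cpu = {"pc": 120, "registers": {"r0": 7, "r1": 3}}   # process A is running

pcbs = {
    "A": {"pc": 0, "registers": {}},
    "B": {"pc": 500, "registers": {"r0": 99, "r1": 1}},
}

def context_switch(outgoing, incoming):
    pcbs[outgoing]["pc"] = cpu["pc"]                     # save state to PCB
    pcbs[outgoing]["registers"] = dict(cpu["registers"])
    cpu["pc"] = pcbs[incoming]["pc"]                     # restore next process
    cpu["registers"] = dict(pcbs[incoming]["registers"])

context_switch("A", "B")
print(cpu["pc"], pcbs["A"]["pc"])
```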


5.4. Memory Management in OS


Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It determines how much memory is to be allocated to each process, decides which process will get memory at what time, and tracks whenever some memory is freed or unallocated, updating the status accordingly.

A computer can address more memory than the amount physically installed on the system. This extra
memory is called virtual memory and it is a section of a hard disk that is set up to emulate the
computer's RAM. The main visible advantage of this scheme is that programs can be larger than
physical memory. Virtual memory serves two purposes. First, it allows us to extend the use of
physical memory by using disk. Second, it allows us to have memory protection, because each
virtual address is translated to a physical address. The paging technique plays an important role in
implementing virtual memory.

Paging is a memory management technique in which process address space is broken into blocks of
the same size called pages. The size of the process is measured in the number of pages. Similarly,
main memory is divided into small fixed-sized blocks of (physical) memory called frames and the
size of a frame is kept the same as that of a page to have optimum utilization of the main memory
and to avoid external fragmentation.

The mapping from virtual to physical address is done by the memory management unit (MMU),
which is built into the hardware. MMU is a computer hardware component that handles all memory
and caching operations associated with the processor. In other words, the MMU is responsible for all
aspects of memory management.
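The MMU's virtual-to-physical translation under paging can be illustrated with a toy page table. The 4 KiB page size is a typical choice, and the page-to-frame mapping below is made up for the example:

```python
# Toy paging translation: a virtual address splits into a page number
# and an offset; the page table maps page numbers to frame numbers.
PAGE_SIZE = 4096                    # 4 KiB pages and frames

page_table = {0: 5, 1: 2, 2: 7}     # page number -> frame number (illustrative)

def translate(virtual_address):
    page = virtual_address // PAGE_SIZE    # which page the address falls in
    offset = virtual_address % PAGE_SIZE   # position within that page
    frame = page_table[page]               # MMU lookup (page fault if absent)
    return frame * PAGE_SIZE + offset      # same offset, different frame

print(hex(translate(0x1ABC)))       # page 1 -> frame 2, offset 0xABC kept
```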


5.5. I/O Device Management in OS


One of the important jobs of an Operating System is to manage various I/O devices including mouse,
keyboards, touch pad, disk drives, display adapters, USB devices, Analog-to-Digital converter,
on/off switch, network connections, audio I/O, printers etc.

An I/O system is required to take an application I/O request and send it to the physical device, then
take whatever response comes back from the device and send it to the application.

Device drivers are software modules that can be plugged into an OS to handle a particular device; the OS takes help from device drivers to handle all I/O devices.

The device controller works as an interface between a device and a device driver. I/O units (keyboard, mouse, printer, etc.) typically consist of a mechanical component and an electronic component, where the electronic component is called the device controller.

There is always a device controller for each device to communicate with the operating system, and a device controller may be able to handle multiple devices. As an interface, its main task is to convert a serial bit stream to a block of bytes and perform error correction as necessary.

Any device connected to the computer is connected by a plug and socket, and the socket is connected to a device controller. In a typical model, the CPU, memory, and device controllers all use a common bus for communication.


FAT32 vs. NTFS


FAT32 is the most common version of the FAT (File Allocation Table) file system, which Microsoft created in 1977. It is the older of the two file systems, so it is not as efficient or advanced as NTFS. It uses the file allocation table to record the allocation status of the clusters in a file system and the link relationships between them. The table works as a table of contents for the OS, indicating where the directories and files are stored on the disk.

NTFS (New Technology File System) is a proprietary journaling file system developed by Microsoft in 1993. Starting with Windows NT 3.1, it has been the default file system of the Windows NT family. It was introduced as a replacement for the FAT file system; it is more robust and effective because it makes use of advanced data structures to improve reliability, disk space utilization, and overall performance.

File size limit:
- FAT32: Supports files of up to 4 GB and volumes of up to 2 TB. You cannot store a single file larger than 4 GB on a FAT32 partition, and you can format a hard drive of at most 2 TB as FAT32.
- NTFS: Can support volumes as large as 256 TB, and its file size support tops out at 16 EiB (exbibytes).

File name length:
- FAT32: Maximum 8.3 characters.
- NTFS: Maximum 255 characters.

Security:
- FAT32: Must depend on share permissions for security, which means files are protected on the network but vulnerable locally.
- NTFS: Allows permissions to be set on local files and folders as well.

File compression:
- FAT32: Offers no file compression feature whatsoever.
- NTFS: Can compress files and folders individually.

Fault tolerance:
- FAT32: Maintains two different copies of the file allocation table and uses the backup if some damage occurs.
- NTFS: Maintains a log of disk changes; in case of power failure or abrupt errors, it repairs files and folders automatically without notifying the user.

Compiled By: Ms. Akila Farwin (Exc MSc(Marketing)-AeU,PGD(IT)-BCS,FIT-UCSC)
