
Suyash Os Notes

1) Operating system structures include simple, layered, microkernel, and monolithic structures. Microkernels provide essential services like communication and coordination between components. 2) The main components of an operating system are process management, file management, and device management. A bootstrap loader loads programs into memory on startup. 3) A process is a running program that exists in various states like ready, running, waiting, and terminated. Process scheduling algorithms like preemptive and non-preemptive determine which process runs on the CPU.

|| CHAPTER 1 ||

STRUCTURES OF OS 1) simple Structure -: In simple


structure the structure of operating system was not well-defined.
Such operating systems started small, simple and limited
systems and then grew beyond their original scope. 2) Layered
Structure -: in the layered structure the operating system is
organized as a hierarchy of layers or levels with layer built on
the top of the layer below it. 3) Micro-kernels -: The kernel is
the core system software that behaves as a bridge between
application software and hardware of the system. Kernel
manages system resources. A microkernel is a piece of software
or even code that contains the near-minimum amount of
function and features required to implement an operating
system. Function -: The main function of a microkernel is to
provide a minimal set of essential services to facilitate
communication and coordination between various components.
Advantage 1) Easy to extend 2) it is portable 3) the
microkernel structure provides high security and reliability. 4)
Monolithic structure -: monolithic structure is the oldest
structure of the operating system. A monolithic kernel is an
operating system structure where the entire operating system is
working in kernel space. COMPONENTS /function OF OS
1) Process Management -: A process is a program, or a fraction of a program, that is loaded in main memory. A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task. The process management component manages the multiple processes running simultaneously on the operating system. 2) File Management -: File management is one of the most visible services of an operating system. Computers can store information in several different physical forms; magnetic tape, disk, and drum are the most common forms.
BOOTING / BOOTSTRAP LOADER  A Bootstrap Loader (BSL) is a small program which can be activated immediately after a microcontroller has been powered up, in order to load and execute another program in a well-defined manner. FUNCTION -: It is a separate program in program memory that executes when a new application needs to be loaded into the rest of program memory.
SYSTEM CALLS A system call is a mechanism used by
programs to request services from the operating system
(OS). Types -: 1) Process Control -: Process control is the
system call that is used to direct the processes. Some
process control examples include creating, load, abort, end,
execute, process, terminate the process, etc. 2) File
Management File management is a system call that is used
to handle the files. Some file management examples include
creating files, delete files, open, close, read, write, etc. 3)
Device Management -: Device management is a system call
that is used to deal with devices. Some examples of device
management include read, device, write, get device
attributes, release device, etc. 4) Information Maintenance-:
Information maintenance is a system call that is used to
maintain information. There are some examples of
information maintenance, including getting system data, set
time or date, get time or date, set system data, etc.
|| CHAPTER 2 ||
PROCESS A process is a running program that serves as the
foundation for all computation. The procedure is not the same as
computer code, although it is very similar. In contrast to the program,
which is often regarded as some ‘passive’ entity, a process is an
‘active’ entity. PROCESS STATE 1) NEW -: The creation of the process.
2) READY -: The waiting for the process that is to be assigned to any
processor. 3) RUNNING -: Execution of the instructions. 4) WAITING -:
The waiting of the process for some event that is about to occur (like
an I/O completion, a signal reception, etc.). 5) TERMINATED -: A
process has completed execution. PROCESS TYPES 1) I/O bound
process -: the process that spends more its time on I/O than it spends
doing some computation is I/O bound process. 2) CPU bound process -
: the process that generates an I/O request less frequently and uses
much of its time doing computations. PROCESS CONTROL BLOCK (PCB)
 A Process Control Block (PCB) is a data structure that keeps track of information about a specific process. The CPU requires this information to do its job. 1) Process ID (PID): A unique identifier assigned to each process in the system. 2) Program Counter (PC): A pointer to the address of the next instruction to be executed for that process. 3) Registers: CPU registers that store the current context of the process, including data and address registers. 4) Processor Status Information: Information about the current state of the process, such as whether it is ready, waiting, or executing. 5) Memory Management Information: Base and limit registers, which define the range of memory accessible to the process. 6) I/O Status Information: Information about I/O operations the process is currently involved in, including the status of open files and devices.
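The PCB fields listed above can be pictured as a small record; a hypothetical Python sketch (the field values are made up for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy Process Control Block mirroring the fields listed above."""
    pid: int                                        # 1) unique process ID
    program_counter: int = 0                        # 2) next instruction address
    registers: dict = field(default_factory=dict)   # 3) saved CPU registers
    state: str = "new"                              # 4) new/ready/running/waiting/terminated
    base: int = 0                                   # 5) memory base register
    limit: int = 0                                  # 5) memory limit register
    open_files: list = field(default_factory=list)  # 6) I/O status information

# Example: a freshly admitted process occupying 4 KB starting at 0x4000.
p = PCB(pid=42, state="ready", base=0x4000, limit=0x1000)
```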
PROCESS SCHEDULING  Process scheduling is an essential part of multiprogramming operating systems, which allow more than one process to be loaded into executable memory at a time; the loaded processes share the CPU using time multiplexing. READY QUEUE  The ready queue keeps the set of all processes residing in main memory that are ready and waiting to execute. A new process is always put in this queue. JOB QUEUE  The job queue keeps all the processes in the system. As a process enters the system, it is put into the job queue. TYPES OF SCHEDULER 1) Long-term -: – it is the job scheduler. – it controls the degree of multiprogramming. – it is almost absent or minimal in time-sharing systems. – it deals with main memory for loading processes. 2) Short-term -: – it is the CPU scheduler. – it provides lesser control over the degree of multiprogramming. – it is also minimal in time-sharing systems. – it deals with the CPU. 3) Medium-term -: – it is the process-swapping scheduler, or swapper. – it reduces the degree of multiprogramming. – it is a part of time-sharing systems. – it deals with main memory. CONTEXT SWITCH  The context switch is an essential feature of a multitasking operating system. Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process; this task is known as a context switch. – when a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.
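The save-then-load step of a context switch can be sketched with plain dictionaries standing in for the CPU and two PCBs (a simulation, not real kernel code):

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Save the running process's context into its PCB, then load the
    context of the newly scheduled process (all simulated with dicts)."""
    # Save: copy the CPU registers and program counter into the old PCB.
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["pc"] = cpu["pc"]
    old_pcb["state"] = "ready"
    # Load: restore the new process's saved context onto the CPU.
    cpu["registers"] = dict(new_pcb["registers"])
    cpu["pc"] = new_pcb["pc"]
    new_pcb["state"] = "running"

cpu = {"registers": {"r0": 7}, "pc": 120}
p1 = {"registers": {}, "pc": 0, "state": "running"}
p2 = {"registers": {"r0": 99}, "pc": 340, "state": "ready"}
context_switch(cpu, p1, p2)   # p1 is preempted, p2 starts running
```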
MVT - in this petitioning region or partition size is not fixed and
can vary dynamically. – can grow or shrink at run time. –operating
system maintains a table indicating the table indicating which
parts of the memory are free and witch are allocated
THREAD SCHEDULING  A thread is a basic unit of CPU utilization. It has its own program counter, register set, and stack space. It shares with its peer threads its code section, data section, and OS resources such as open files and signals, collectively called a task. MULTITHREADING Advantages 1) Responsiveness -: making an interactive system multithreaded increases its responsiveness to the user. 2) Resource sharing -: sharing data and code allows an application to have several different threads of activity within the same address space. 3) Economy -: threads share the resources of the process to which they belong, so it is more economical to create and context-switch threads. TYPES  USER-LEVEL THREADS 1) user-level threads are faster to create and manage. 2) implemented by a thread library at the user level. 3) user-level threads can run on any operating system. 4) a multithreaded application cannot take advantage of multiprocessing. KERNEL-LEVEL THREADS 1) kernel-level threads are slower to create and manage. 2) the operating system supports kernel threads directly. 3) kernel-level threads are specific to the operating system. 4) kernel routines themselves can be multithreaded. ONE-TO-ONE MODEL -: in the one-to-one thread model there is a one-to-one relationship between a user-level thread and a kernel-level thread; the model maps each user thread to a kernel thread. E.g. Windows 95, 98, NT, and 2000 implement the one-to-one model. MANY-TO-MANY MODEL -: in this model many user-level threads are multiplexed onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine. E.g. Solaris 2, IRIX, HP-UX, and Tru64 UNIX implement the many-to-many model.
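The key point that threads share the data section of their process, while a lock keeps updates safe, can be demonstrated with Python's threading module (the counter and thread counts are arbitrary):

```python
import threading

total = 0                      # shared data: visible to every thread
lock = threading.Lock()

def worker(n):
    """Each thread adds to the same global variable, because threads of
    one process share the data section (unlike separate processes)."""
    global total
    for _ in range(n):
        with lock:             # protect the shared counter
            total += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 4000: all four threads updated the one shared variable
```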
|| CHAPTER 3 ||
CPU-I/O BURST CYCLE -: CPU scheduling is greatly affected by how a process behaves during its execution. Almost all processes continue to switch between the CPU and I/O devices during their execution. The success of CPU scheduling depends upon an observed property of processes: process execution is a cycle of CPU execution and I/O waits, and processes alternate back and forth between these two states. Process execution begins with a CPU burst. It is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually the last CPU burst ends with a system request to terminate execution, rather than with another I/O burst.
TYPES OF SCHEDULING 1) Preemptive Scheduling i) in preemptive scheduling, the processes with higher priorities are executed first. ii) CPU utilization is high in this type of scheduling. iii) waiting and response times of preemptive scheduling are lower. iv) in this scheme the CPU is allocated to a process for a specific time period. v) only the processes having higher priority are scheduled. vi) it does not treat all processes as equal. 2) Non-preemptive Scheduling i) in non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state. ii) CPU utilization is less efficient. iii) waiting and response times of non-preemptive scheduling are higher. iv) in this scheme the CPU is allocated to a process until it terminates or switches to the waiting state. v) a process having any priority can get scheduled. vi) it treats all processes as equal.
DISPATCHERthe module of the operating system that performs
the function of setting up the execution of the selected process on
the CPU is known as dispatcher. Dispatcher is a component witch
involves in the CPU scheduling. When the scheduler completes its
job of selecting a process it is the dispatcher witch takes that
process to the desired state. – the time taken by dispatcher to
stop one process and start another process to run is called
dispatch latency. Function  1. loading the register of the user
mode. 2. Switching operating system to the user mode. 3.restart
the program by jumping to the proper location in the user
program. SCHEDULING ALGORITHMS
1) SJF (Shortest Job First)  the SJF scheduling algorithm, also known as the shortest-job-next scheduling algorithm, schedules processes according to the length of the CPU burst they require. Advantages  1. overall performance is significantly improved in terms of response time. 2. SJF reduces the variance in waiting and turnaround times. Disadvantages  1. there is a risk of starvation of longer processes. 2. it is very difficult to know the length of the next CPU burst. 2) Priority Scheduling  a priority is associated with each process and the CPU is allocated to the process with the highest priority; hence it is called priority scheduling. Advantages  1. it is simple to use. 2. suitable for applications with varying time and resource requirements. Disadvantages  1. priority scheduling can leave some low-priority processes waiting indefinitely for the CPU.
3) ROUND ROBIN (RR)  the Round Robin (RR) scheduling algorithm is designed especially for time-sharing systems. RR is a preemptive process scheduling algorithm. Advantages  1. the algorithm's logic is simple. 2. the system supports multiprogramming effectively. Disadvantages  1. the average waiting time under RR is often long. 2. performance depends heavily on the chosen time quantum; a very large quantum makes RR behave like non-preemptive scheduling.
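The RR policy can be sketched in a few lines of Python; the process names and burst lengths below are invented for illustration:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR: each process runs for at most `quantum` time units,
    then is preempted and sent to the back of the ready queue.
    Returns the completion time of each process."""
    queue = deque(bursts)          # (name, remaining burst) pairs, FIFO
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run                # the process uses the CPU for `run` units
        remaining -= run
        if remaining:
            queue.append((name, remaining))  # preempted: re-queue it
        else:
            completion[name] = time          # finished at this instant
    return completion

# Hypothetical processes P1..P3 with CPU bursts 5, 3, 1 and quantum 2:
print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
```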
4) Multilevel Queue  – multilevel queue scheduling is based on response-time requirements: some processes require a quick response from the processor, while other processes can wait. – a multilevel queue scheduling algorithm partitions the ready queue into separate queues. – in multilevel queue scheduling, processes are permanently assigned to one queue depending upon their properties, such as memory size, process type, or process priority, and each queue follows a separate scheduling algorithm. Advantages  1. in MLQ one can apply different scheduling algorithms to different processes. 2. since processes are permanently assigned to their respective queues, scheduling overhead is low. Disadvantages  1. in MLQ a process cannot move from one queue to another.
Turnaround time  turnaround time is the interval from the time of submission of a process to the time of its completion. Waiting time  waiting time is the sum of the periods a process spends waiting in the ready queue; the CPU scheduling algorithm does not affect the amount of time a process spends executing or doing I/O, only the time it spends waiting in the ready queue.
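These two metrics can be computed for a simple first-come-first-served run; the arrival and burst times below are made-up example values:

```python
def fcfs_metrics(arrivals, bursts):
    """Per-process (turnaround, waiting) times under FCFS scheduling.
    turnaround = completion - arrival; waiting = turnaround - burst."""
    time, results = 0, []
    for arrival, burst in zip(arrivals, bursts):
        time = max(time, arrival) + burst   # this process completes here
        turnaround = time - arrival
        waiting = turnaround - burst        # time spent in the ready queue
        results.append((turnaround, waiting))
    return results

# Three processes arriving at time 0 with CPU bursts 4, 3 and 2:
print(fcfs_metrics([0, 0, 0], [4, 3, 2]))
```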
|| CHAPTER 4 ||
CRITICAL SECTION PROBLEM  consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code called a critical section. A critical section is a code segment that can be accessed by only one process at a time; it contains shared variables which must be synchronized to maintain the consistency of data. There are three sections -: 1) Entry section -: each process must request permission to enter its critical section; processes enter the critical section through the entry section. 2) Exit section -: each critical section is terminated by an exit section. 3) Remainder section -: the code remaining after the exit section is the remainder section. Peterson's Solution  in Peterson's solution, when one process is executing in its critical section, the other process can only execute the rest of its code, and vice versa. Peterson's solution preserves all three conditions -: 1) Mutual exclusion -: it is ensured, as at any time only one process can access the critical section. 2) Progress -: it is also ensured, as a process that is outside the critical section is unable to block other processes from entering the critical section. 3) Bounded waiting -: it is assured, as every process gets a fair chance to enter the critical section.
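Peterson's entry and exit sections for two processes can be sketched as below. This is a teaching sketch: it relies on CPython's interpreter serializing operations; on real multiprocessor hardware the algorithm additionally needs memory barriers, and busy-waiting under the interpreter lock makes it slow, so the iteration count is kept small:

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to yield
counter = 0             # shared variable guarded by the critical section

def enter(i):
    """Entry section of Peterson's algorithm for process i."""
    global turn
    j = 1 - i
    flag[i] = True      # announce intent to enter
    turn = j            # give priority to the other process
    while flag[j] and turn == j:
        pass            # busy-wait until it is safe to enter

def leave(i):
    """Exit section: withdraw the intent flag."""
    flag[i] = False

def worker(i, n):
    global counter
    for _ in range(n):
        enter(i)
        counter += 1    # critical section: one process at a time
        leave(i)        # remainder section would follow here

t0 = threading.Thread(target=worker, args=(0, 200))
t1 = threading.Thread(target=worker, args=(1, 200))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)
```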
SEMAPHORES A semaphores S is n integer variable, which can
be accessed only through two operation Wait() and Signal().
USE we can use semaphore to deal with the n processes.
TYPES 1) Binary Semaphore-: binary semaphore is a semaphore
with an integer value that can range only between 0 and 1.
2) counting semaphore-: the value of counting semaphore can
range over an unrestricted domain.
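Python's threading.Semaphore implements exactly these wait()/signal() operations (named acquire()/release()). A small sketch with a counting semaphore initialized to 2, so at most two of five threads hold the resource at once; the thread count and sleep are arbitrary:

```python
import threading
import time

sem = threading.Semaphore(2)   # counting semaphore: at most 2 holders
lock = threading.Lock()
inside = 0                     # how many threads currently hold the resource
max_inside = 0                 # the largest value `inside` ever reached

def worker():
    global inside, max_inside
    sem.acquire()              # wait(S): decrement, block if S would go below 0
    with lock:
        inside += 1
        max_inside = max(max_inside, inside)
    time.sleep(0.01)           # hold the resource briefly
    with lock:
        inside -= 1
    sem.release()              # signal(S): increment, wake one blocked waiter

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_inside)              # never exceeds the semaphore's initial value, 2
```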
BOUNDED-BUFFER PROBLEM  the bounded-buffer problem is also called the producer-consumer problem. The solution is to create two counting semaphores, "full" and "empty", to keep track of the current number of full and empty buffers respectively. The bounded-buffer problem can be handled using semaphores: a mutex semaphore provides mutual exclusion, while the empty and full semaphores count the number of empty and full buffers respectively.
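The mutex/empty/full scheme above can be sketched directly with Python threads; the buffer size and item count are arbitrary:

```python
import threading
from collections import deque

BUF_SIZE = 3
buffer = deque()
mutex = threading.Lock()               # mutual exclusion on the buffer
empty = threading.Semaphore(BUF_SIZE)  # counts empty slots, starts full
full = threading.Semaphore(0)          # counts full slots, starts at zero
consumed = []

def producer(items):
    for item in items:
        empty.acquire()        # wait for an empty slot
        with mutex:
            buffer.append(item)
        full.release()         # one more full slot

def consumer(n):
    for _ in range(n):
        full.acquire()         # wait for a full slot
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()        # one more empty slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)                # all 10 items arrive, in order
```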
DINING PHILOSOPHERS PROBLEM -: – the dining philosophers problem is a popular classic synchronization problem for concurrency control. – five philosophers sit around a circular table. Each philosopher spends his life alternately thinking and eating. In the center of the table is a large plate of food. A philosopher needs two chopsticks to eat. Since the philosophers are sharing chopsticks, it is not possible for all of them to eat at the same time. Solution  a solution to the dining philosophers problem is to use a semaphore to represent each chopstick. A chopstick is picked up by executing a wait() operation on its semaphore and released by executing a signal() operation on the semaphore. READERS-WRITERS PROBLEM  the readers-writers problem is a classical problem of process synchronization. It relates to a data set, such as a file, that is shared between more than one process at a time. – the readers-writers problem is used for managing synchronization among various reader and writer processes so that there are no problems with the data set.
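The chopstick-per-semaphore solution to the dining philosophers problem can be sketched as below. Note one extra ingredient beyond the text: if every philosopher grabbed the left chopstick first, all five could deadlock, so this sketch has the last philosopher reverse the order, which breaks the circular wait:

```python
import threading

N = 5
chopsticks = [threading.Semaphore(1) for _ in range(N)]  # one per chopstick
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # Asymmetry avoids deadlock: the last philosopher picks up the
    # right chopstick first, so a circular wait cannot form.
    first, second = (left, right) if i < N - 1 else (right, left)
    for _ in range(rounds):
        chopsticks[first].acquire()    # wait() on one chopstick
        chopsticks[second].acquire()   # wait() on the other
        meals[i] += 1                  # eating
        chopsticks[second].release()   # signal(): put the chopsticks back
        chopsticks[first].release()
        # thinking happens here

threads = [threading.Thread(target=philosopher, args=(i, 50)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)   # every philosopher eats all 50 times, with no deadlock
```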
|| CHAPTER 5 ||
FRAGMENTATION As processes are loaded and removed from
memory, the free memory space is broken into little pieces. It
happens after sometimes that processes cannot be allocated to
memory blocks considering their small size and memory blocks
remains unused. This problem is known as fragmentation.
TYPES i) Internal Fragmentation -: in internal fragmentation the
process is allocated a memory blocks of size more than the size of
that process. Due to this some part of the memory is left unused
and this cause internal fragmentation. – when internal
fragmentation occurs, a process that need 57 bytes of memory,
for example, may be allocated a block that contains 60 bytes over
even 64. – the extra bytes that the client does not need to go
waste and over time these tiny chunks of unused memory can
build up and create large quantities of memory that cannot be put
to use by the allocator. ii) External fragmentation-: in the external
fragmentation the total memory space is enough to satisfy a
request or to reside a process in it, but it is not contiguous so it
cannot be used. For example, there is hole of 20k and 10k is
available in multiple partition allocation schemes. The next
process request for 30k of memory. Actually 30k of memory is
free which satisfy the request but hole is not contiguous.
Compaction is a method used to overcome the external
fragmentation problem. All free blocks are together as one large
block of free space.
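Both kinds of waste are easy to quantify; this sketch reuses the 57-byte and 20K/10K examples from the text:

```python
def internal_fragmentation(request, block_size):
    """Bytes wasted when an allocator rounds a request up to whole
    fixed-size blocks (the 57-byte example from the text)."""
    blocks = -(-request // block_size)       # ceiling division
    return blocks * block_size - request

print(internal_fragmentation(57, 64))        # 7 bytes lost inside the block

# External fragmentation: total free memory satisfies the request,
# but no single contiguous hole does (the 20K/10K example above).
holes = [20, 10]                             # free holes, in KB
request = 30
total_free = sum(holes)                      # 30K free in total
fits = any(h >= request for h in holes)      # yet no hole is big enough
```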
SEGMENTATION HARDWARE  segmentation is a non-contiguous memory allocation technique which supports the user's view of memory. Segmentation divides the user program and the data associated with the program into a number of segments. A segment is defined as a logical grouping of instructions; each segment is actually a different logical address space of the program. Advantages -: 1) segmentation eliminates internal fragmentation. 2) segmentation allows dynamically growing segments. Disadvantages -: 1) increased complexity in the operating system.
DEMAND PAGING demand paging is a method of virtual
memory management. With demand-paged virtual memory pages
are only loaded when they are demanded during program
execution; pages that are never accessed are thus never loaded
into physical memory. The paging hardware in translating the
address through the page table, will notice that the invalid bit is
set causing a trap to the operating system. This trap is the result
of the operating system’s failure to bring the desired page into
memory. Page Fault it may happen that a process tries to
access a page that was not brought into memory. That mean the
page is marked invalid in the page table entry. Access to such a
page cause a page-fault trap. Page table this table has the
ability to mark an entry invalid through a valid invalid bit or
special value of protection bits. Cache memory the solution is
to add fast memory between CPU and main memory called as
Cache memory. Logical address An addresses generated by a
program is a logical address space.
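Demand paging and the page-fault trap can be simulated with a toy page table: a page absent from the table plays the role of an entry with its invalid bit set, and touching it "traps" and brings the page in (frame numbers here are simply assigned in order, an assumption for illustration):

```python
class PageTable:
    """Toy demand-paged page table: pages are loaded into frames only on
    first access; an invalid entry triggers a simulated page fault."""
    def __init__(self, num_pages):
        self.num_pages = num_pages
        self.frames = {}                 # page -> frame (the valid entries)
        self.faults = 0

    def access(self, page):
        if page not in self.frames:      # invalid bit set: trap to the OS
            self.faults += 1
            self.frames[page] = len(self.frames)  # bring the page in
        return self.frames[page]         # translation now succeeds

pt = PageTable(num_pages=8)
for page in [0, 1, 0, 2, 1, 0]:          # a short reference string
    pt.access(page)
print(pt.faults)   # only the first touch of pages 0, 1 and 2 faults: 3
```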
