Os - Swe - CH 1-4
OPERATING SYSTEMS
By: Dessalew G.(MSc)
CHAPTER ONE
Continued…
● Application programs, such as word processors, spreadsheets, compilers, and Web browsers, define the ways in which these resources are used to solve users’ computing problems.
Purpose and Role of Operating Systems
● Computing started as an experiment to determine
what could be done.
Continued…
● Since bare hardware alone is not particularly easy to
use, application programs are developed.
Continued…
● In addition, we have no universally accepted definition of
what is part of the operating system.
Scheduling and Dispatch in OS
● In a multiprogrammed system there are multiple processes or threads competing for the CPU at the same time.
● This situation occurs whenever two or more of them
are simultaneously in the ready state.
Important Concepts
● Memory management
○ The main memory is central to the operation of a
modern computer system.
○ Main memory is a large array of bytes, ranging in
size from hundreds of thousands to billions.
○ Each byte has its own address.
○ Main memory is a repository of quickly accessible
data shared by the CPU and I/O devices.
○ For a program to be executed, it must be mapped
to absolute addresses and loaded into memory.
Continued…
○ As the program executes, it accesses program
instructions and data from memory by generating
these absolute addresses.
○ Eventually, the program terminates, its memory
space is declared available, and the next program
can be loaded and executed.
○ To improve both the utilization of the CPU and the
speed of the computer’s response to its users,
memory management is needed.
○ Many different memory management schemes are used.
Continued…
● Device Management
○ A process may need several resources to execute:
■ main memory, disk drives, access to files, and so on.
○ If the resources are available, they can be granted.
○ Otherwise, the process will have to wait until
sufficient resources are available.
○ The various resources controlled by the operating
system can be thought of as devices.
○ For efficient allocation of those devices, device
management is needed.
Continued…
● File System
○ Files are logical units of information created by
processes.
○ A disk will usually contain thousands or even
millions of them, each one independent of the
others.
○ Processes can read existing files and create new
ones if need be.
○ Information stored in files must be persistent, that is, not be affected by process creation and termination.
Continued…
○ Files are managed by the operating system.
○ How they are structured, named, accessed, used,
protected, implemented, and managed are major
topics in operating system design.
○ As a whole, that part of the operating system
dealing with files is known as the file system.
Continued…
● Interrupt
○ The interrupt is a signal emitted by hardware or
software when a process or an event needs
immediate attention.
○ It alerts the processor to a high-priority process
requiring interruption of the current working
process.
○ In I/O devices, one of the bus control lines is dedicated to this purpose; it is called the interrupt request line, and the routine that services the interrupt is the Interrupt Service Routine (ISR).
History of Operating System
● Operating systems have been evolving through the years.
○ They have historically been closely tied to the architecture of the computers on which they run.
Continued…
● The Third Generation (1965–1980): ICs and Multiprogramming
○ The IBM 360 was the first major computer line to use (small-scale) ICs (Integrated Circuits).
○ This gave it a price/performance advantage over the second-generation machines, which were built up from individual transistors.
○ The idea of a family of compatible computers was soon
adopted by all the other major manufacturers.
○ The greatest strength of the ‘‘single-family’’ idea was
simultaneously its greatest weakness.
○ The original intention was that all software, including the
operating system, OS/360, had to work on all models.
Continued…
○ They popularized several key techniques absent in second-
generation operating systems.
○ Probably the most important of these was multiprogramming.
○ Another major feature was the ability to read jobs from cards
onto the disk as soon as they were brought to the computer
room.
○ Then, whenever a running job finished, the operating system
could load a new job from the disk into the now-empty
partition and run it.
○ This technique is called spooling.
○ Although they were well suited for big scientific calculations,
they were still basically batch systems.
Continued…
○ This desire for quick response time paved the way for
timesharing, a variant of multiprogramming, in which each
user has an online terminal.
○ The first general-purpose timesharing system, CTSS
(Compatible Time Sharing System), was developed at M.I.T.
Reading assignment
○ MULTICS (MULTiplexed Information and Computing Service).
○ Cloud computing
○ UNIX
○ BSD (Berkeley Software Distribution)
○ POSIX
○ MINIX
Continued…
● The Fourth Generation (1980–Present): Personal Computers
○ With the development of LSI (Large Scale Integration) circuit chips containing thousands of transistors on a square centimeter of silicon, the age of the personal computer dawned.
○ Kildall wrote a disk-based operating system called CP/M (Control Program for Microcomputers) for Intel’s 8080.
■ The 8080 was the first general-purpose 8-bit CPU.
○ In the early 1980s, IBM designed the IBM PC and looked
around for software to run on it.
○ People from IBM contacted Bill Gates to license his BASIC
interpreter.
Continued…
○ They also asked him if he knew of an operating system to run
on the PC.
○ Gates realized that Seattle Computer Products had a suitable operating system, DOS (Disk Operating System).
○ He approached them and asked to buy it (allegedly for
$75,000), which they readily accepted.
○ Gates then offered IBM a DOS/BASIC package, which IBM
accepted.
○ After this period, a number of operating systems were developed:
■ MS-DOS (MicroSoft Disk Operating System)
■ Mac OS X
■ Windows NT
Continued…
■ Windows Me (Millennium Edition).
■ FreeBSD
■ X Window System
■ Network Operating Systems
■ Distributed Operating Systems
○ Note: Network operating systems are not fundamentally
different from single-processor operating systems.
○ They obviously need a network interface controller and some low-level software to drive it.
○ A distributed operating system, in contrast, is one that appears
to its users as a traditional uniprocessor system, even though it
is actually composed of multiple processors.
Continued…
● The Fifth Generation (1990–Present): Mobile Computers
○ The idea of combining telephony and computing in a phone
like device has been around since the 1970s.
○ The first real smartphone did not appear until the mid-1990s
when Nokia released the N9000.
○ N9000 literally combined two, mostly separate devices:
■ a phone and a PDA (Personal Digital Assistant).
○ In 1997, Ericsson coined the term smartphone for its GS88
‘‘Penelope.’’
○ Now that smartphones have become ubiquitous, the
competition between the various operating systems is fierce.
Continued…
○ Google’s Android is the dominant operating system.
■ Apple’s iOS is a clear second, but this was not always the
case and all may be different again in just a few years.
○ After all, most smartphones in the first decade after their
inception were running Symbian OS.
○ It was the operating system of choice for popular brands like
Samsung, Sony Ericsson, Motorola, and especially Nokia.
○ For phone manufacturers, Android had the advantage that it
was open source and available under a permissive license.
CHAPTER TWO
Operating System Structure
● Monolithic Systems
○ By far the most common organization.
○ In the monolithic approach the entire operating system runs
as a single program in kernel mode.
○ The OS is written as a collection of procedures, linked together
into a single large executable binary program.
○ Each procedure in the system is free to call any other one.
○ Being able to call any procedure you want is very efficient,
■ but having thousands of procedures that can call each
other without restriction may also lead to a system that is
unwieldy and difficult to understand.
Continued…
○ Also, a crash in any of these procedures will take down the
entire operating system.
○ One first compiles all the individual procedures and then binds
them all together into a single executable file using the
system linker.
○ For each system call there is one service procedure that takes
care of it and executes it.
○ The utility procedures do things that are needed by several
service procedures,
■ such as fetching data from user programs.
Continued…
● Layered Systems
○ Organize the operating system as a hierarchy of layers, each
one constructed upon the one below it.
○ The first system constructed in this way was the THE system.
■ It was built at the Technische Hogeschool Eindhoven in
the Netherlands.
○ The system had six layers:
■ Layer 0 dealt with allocation of the processor, switching
between processes when interrupts occurred or timers
expired.
■ Layer 1 did the memory management.
Continued…
○ Layer 2 handled communication between each process and
the operator console (that is, the user).
○ Layer 3 managed input/output.
○ Layer 4 was where the user programs were found.
○ The system operator process was located in layer 5.
Continued…
● Virtual Machines
○ Unlike all other operating systems, these virtual machines are not extended machines, with files and other nice features.
○ Instead, they are exact copies of the bare hardware, including
kernel/user mode, I/O, interrupts, and everything else the real
machine has.
○ Because each virtual machine is identical to the true
hardware, each one can run any operating system that will run
directly on the bare hardware.
Design Issues
● Making the multiplicity of processors and storage
devices transparent to the users has been a key
challenge to many designers.
Continued…
○ A thread takes less time to terminate than a process, but unlike processes, threads are not isolated from one another.
○ Note: in cases where a thread is processing a bigger workload than a process’s workload, the thread may take more time to terminate.
○ But this is an extremely rare situation and has little chance of occurring.
Process State
● As a process executes, it changes state.
● The state of a process is defined in part by the current
activity of that process.
● A process may be in one of the following states:
○ New: The process is being created.
○ Running: Instructions are being executed.
○ Waiting: The process is waiting for some event to occur (such
as an I/O completion or reception of a signal).
○ Ready: The process is waiting to be assigned to a processor.
○ Terminated: The process has finished execution.
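These states and the legal transitions between them can be pictured as a small table; a minimal Python sketch (the state names follow the slide, the transition set is the standard one):

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Legal transitions: admit, dispatch, preempt, block on I/O, I/O completes, exit.
TRANSITIONS = {
    State.NEW: {State.READY},                    # admitted by the OS
    State.READY: {State.RUNNING},                # dispatched onto a CPU
    State.RUNNING: {State.READY,                 # preempted (e.g., timer interrupt)
                    State.WAITING,               # blocked on I/O or an event
                    State.TERMINATED},           # finished execution
    State.WAITING: {State.READY},                # the awaited event occurred
    State.TERMINATED: set(),
}

def move(current: State, target: State) -> State:
    """Validate and perform a process state change."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```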
Continued…
● It is important to realize that only one process can be
running on any processor at any instant.
Process Control Block
● Each process is represented in the operating system
by a process control block (PCB) also called a task
control block.
● It contains many pieces of information associated with a specific process, including:
○ Process state:
■ The state may be new, ready, running, waiting,
halted, and so on.
○ Program counter:
■ The counter indicates the address of the next instruction to be executed for this process.
Continued…
○ CPU registers:
■ The registers vary in number and type,
depending on the computer architecture.
● They include accumulators, index registers,
stack pointers, and general-purpose registers,
plus any condition-code information.
○ CPU-scheduling information:
■ This information includes a process priority,
pointers to scheduling queues, and any other
scheduling parameters.
Continued…
○ Memory-management information:
■ This information may include such items as the
value of the base and limit registers and the
page tables, or the segment tables, depending
on the memory system used by the operating
system.
○ Accounting information:
■ This information includes the amount of CPU
and real time used, time limits, account
numbers, job or process numbers, and so on.
Continued…
○ I/O status information:
■ This information includes the list of I/O devices
allocated to the process, a list of open files, and
so on.
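Taken together, these fields amount to one record per process; a hypothetical sketch (field names are illustrative, real kernels use C structures with many more fields):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"                               # new, ready, running, waiting, terminated
    program_counter: int = 0                         # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0                                # CPU-scheduling information
    base: int = 0                                    # memory-management information
    limit: int = 0
    cpu_time_used: float = 0.0                       # accounting information
    open_files: list = field(default_factory=list)   # I/O status information
```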
Fig A: Process control block (PCB). Fig B: Diagram showing CPU switch from process to process.
The Role of Interrupts
● An interrupt is a signal emitted by a device attached to
a computer or from a program within the computer.
● It requires the operating system (OS) to stop and figure
out what to do next.
● An interrupt temporarily stops or terminates a service
or a current process.
● Most I/O devices have a bus control line, called the interrupt request line, that signals interrupts; the code that handles them is the Interrupt Service Routine (ISR).
● An interrupt signal might be planned (requested by a program) or unplanned (caused by an event that may not be related to a program).
Continued…
● Today, almost all computing systems are interrupt-
driven.
● What this means is that they follow the list of
computer instructions in a program and run the
instructions until they get to the end or until they
sense an interrupt signal.
● If the latter event happens, the computer either
resumes running the current program or begins
running another program.
● In either case, it must stop operations while deciding
on the next action.
Continued…
● Why do we require interrupts?
○ External devices are comparatively slower than the CPU.
○ Without interrupts, the CPU would waste a lot of time waiting for external devices to catch up with its speed.
○ This decreases the efficiency of the CPU.
○ Hence, interrupts are required to eliminate these limitations.
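Hardware interrupts cannot be demonstrated directly from user-level code, but POSIX signals give a rough software analogy: a handler is registered in advance and runs asynchronously when the signal arrives, much like an ISR. A minimal Unix-only sketch:

```python
import signal
import time

def handler(signum, frame):
    # Runs asynchronously, interrupting whatever the main loop was doing,
    # roughly as an interrupt service routine would.
    print(f"got signal {signum}; handled it, resuming normal execution")

signal.signal(signal.SIGALRM, handler)   # register the handler (the "ISR")
signal.alarm(1)                          # request an interrupt one second from now

for i in range(3):
    time.sleep(1)                        # the handler fires during this sleep
    print(f"main loop iteration {i}")
```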
Concurrent execution
● Concurrency is the execution of multiple instruction
sequences at the same time.
● It happens in the operating system when there are
several process threads running in parallel.
● The running process threads always communicate
with each other through shared memory or message
passing.
● Concurrency involves the sharing of resources, which can lead to problems such as deadlocks and resource starvation.
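The classic demonstration of why unsynchronized sharing is dangerous: two threads increment a shared counter, and because the read-modify-write is not atomic, updates can be lost (losses vary from run to run):

```python
import threading

counter = 0

def work(n):
    global counter
    for _ in range(n):
        counter += 1   # read, add, write back: steps from two threads can interleave

threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # often less than 200000 when no lock protects the counter
```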
Deadlock
● In a multiprogramming environment,
○ several processes may compete for a finite number
of resources.
● A process requests resources;
○ if the resources are not available at that time, the
process enters a waiting state.
○ Sometimes, a waiting process is never again able to
change state,
○ this is because the resources it has requested are
held by other waiting processes.
● This situation is called a deadlock.
Continued…
● Conditions for deadlock
○ Coffman et al. (1971) showed that four conditions must hold for there to be a (resource) deadlock:
■ Mutual exclusion: each resource is either assigned to exactly one process or is available.
■ Hold and wait: processes holding resources can request additional ones.
■ No preemption: resources cannot be forcibly taken away from a process.
■ Circular wait: there is a circular chain of processes, each waiting for a resource held by the next.
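The circular-wait condition is the easiest to check mechanically: build a wait-for graph, with an edge from P to Q whenever P is waiting for a resource held by Q, and look for a cycle. This detection technique is standard but not spelled out on the slide; a minimal sketch:

```python
def has_cycle(wait_for):
    """wait_for maps each process to the set of processes it is waiting on."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited, on the current path, done
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q) == GRAY:      # back edge: a cycle, hence a deadlock
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

# P1 waits on P2 and P2 waits on P1: a circular wait.
print(has_cycle({"P1": {"P2"}, "P2": {"P1"}}))   # True
```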
Deadlock Prevention
● Do not allow one of the four conditions to occur.
● Try to invalidate any one of the necessary conditions!
● By ensuring that at least one of these conditions
cannot hold, we can prevent the occurrence of a
deadlock.
● These methods prevent deadlocks by constraining
how requests for resources can be made.
Continued…
● No Mutual Exclusion (make resources sharable)
○ The mutual-exclusion condition must hold for non-sharable
resources.
○ For example, a printer cannot be simultaneously shared by
several processes.
○ Sharable resources, in contrast, do not require mutually
exclusive access and thus cannot be involved in a deadlock.
○ Read-only files are a good example of a sharable resource, and
aren’t susceptible to deadlock.
○ In general, however, we cannot prevent deadlocks by denying
the mutual-exclusion condition, because some resources are
intrinsically non-sharable.
Continued…
● No Hold and Wait (request all resources initially)
○ Whenever a process requests a resource, it does not hold any
other resources.
○ One protocol that can be used requires each process to
request and be allocated all its resources before it begins
execution.
○ An alternative protocol allows a process to request resources
only when it has none.
○ A process may request some resources and use them:
■ Before it can request any additional resources, however, it
must release all the resources that it is currently allocated.
(it takes money to make money)
Continued…
● Allow Preemption (take resources away)
○ Release any resource already being held if the process can't
get an additional resource.
○ If a needed resource is held by another process, which is also
waiting on some resource, preempt it.
○ E.g. p2 waiting for something held by p1, then take resource
away from p1 and give to p2
○ This protocol is often applied to resources whose state can be
easily saved and restored later, such as CPU registers and
memory space.
○ Resources fall into two types:
■ Preemptable resources, which can be taken away from the process holding them with no ill effects.
■ Non-preemptable resources, which cannot be taken away without causing the computation to fail.
Continued…
● No Circular Wait (order resources numerically)
○ One way to ensure that this condition never holds is to
impose a total ordering of all resource types and to require
that each process requests resources in an increasing order
of enumeration.
○ Number resources and only request in ascending order.
○ Circular wait can be eliminated by having a rule saying that
a process is entitled only to a single resource at any
moment.
○ If it needs a second one then it must release the first one.
○ Each resource is assigned a unique number.
Continued…
○ A process can request resources only in increasing order of numbering.
○ For example:
■ If process P1 currently holds resource R5 and later asks for R4, the request will not be granted, because R4 is numbered lower than R5; only requests for resources numbered higher than R5 will be granted.
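A hypothetical two-lock sketch of this ordering rule: because every thread acquires the locks in ascending numeric order, no cycle of waiting threads can ever form:

```python
import threading

# Resources numbered once, globally: R1 < R2.
R1 = threading.Lock()
R2 = threading.Lock()

def worker(name):
    # Every thread requests in ascending order: R1 strictly before R2.
    with R1:
        with R2:
            print(f"{name} holds R1 and R2")

# If one thread took R2 first while another took R1 first, each could block
# forever waiting for the other; the shared ordering rules that schedule out.
a = threading.Thread(target=worker, args=("A",))
b = threading.Thread(target=worker, args=("B",))
a.start(); b.start()
a.join(); b.join()
```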
Deadlock Avoidance
● Deadlock prevention ensures that at least one of the necessary conditions for deadlock cannot occur and, hence, that deadlocks cannot arise.
● With the complete sequence of requests and releases
for each process, the system can decide for each
request whether or not the process should wait in
order to avoid a possible future deadlock.
● Each request requires that in making this decision the
system consider the resources currently available, the
resources currently allocated to each process, and the
future requests and releases of each process.
Continued…
● A deadlock-avoidance algorithm dynamically
examines the resource-allocation state to ensure that a
circular wait condition can never exist.
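The best-known algorithm of this kind is the banker's algorithm (not named on the slide). Its core is a safety check: can the processes all finish in some order, given what is free and what each still needs? A compact sketch for a single resource type with made-up numbers:

```python
def is_safe(available, allocated, maximum):
    """True if every process can run to completion in some order (one resource type)."""
    need = [m - a for m, a in zip(maximum, allocated)]
    finished = [False] * len(allocated)
    free = available
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and need[i] <= free:   # process i can acquire all it needs,
                free += allocated[i]           # run to completion, and release its holdings
                finished[i] = True
                progress = True
    return all(finished)

# 3 units free; current allocations 1, 2, 2 with maximum claims 4, 3, 7.
print(is_safe(3, [1, 2, 2], [4, 3, 7]))   # True: a safe ordering exists
```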
CHAPTER THREE
SCHEDULING AND DISPATCH
List of Contents
1. Introduction
2. Preemptive and Non-preemptive Scheduling
3. Scheduling Criteria
4. Types of Scheduling Algorithms
Introduction
● When a computer is multiprogrammed, it frequently
has multiple processes or threads competing for the
CPU at the same time.
● This situation occurs whenever two or more of them
are simultaneously in the ready state.
● If only one CPU is available, a choice has to be made
which process to run next.
● The part of the operating system that makes the
choice is called the scheduler,
○ and the algorithm it uses is called the scheduling
algorithm.
Continued…
● Whenever the CPU becomes idle, the operating
system must select one of the processes in the ready
queue to be executed.
● The selection process is carried out by the short-term
scheduler, or CPU scheduler.
● The scheduler selects a process from the processes in
memory that are ready to execute and allocates the
CPU to that process.
Continued…
● Dispatcher
○ Another component involved in the CPU-
scheduling function is the dispatcher.
○ The dispatcher is the module that gives control of
the CPU to the process selected by the short-term
scheduler.
○ This function involves the following:
■ Switching context
■ Switching to user mode
■ Jumping to the proper location in the user program to restart that program
Continued…
○ The dispatcher should be as fast as possible, since it
is invoked during every process switch.
○ The time it takes for the dispatcher to stop one
process and start another running is known as the
dispatch latency.
Preemptive and Non-preemptive Scheduling
● Preemptive Scheduling:
○ Preemptive scheduling is used when a process
switches from running state to ready state or from
waiting state to ready state.
○ The resources (mainly CPU cycles) are allocated to the process for a limited amount of time and are then taken away; the process is placed back in the ready queue if it still has CPU burst time remaining.
○ That process stays in the ready queue till it gets its next chance to execute.
Continued…
● Non-Preemptive Scheduling:
○ Non-preemptive Scheduling is used when a process
terminates, or a process switches from running to
waiting state.
○ In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it terminates or reaches a waiting state.
○ Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution.
○ Instead, it waits till the process completes its CPU burst time, and then it can allocate the CPU to another process.
Scheduling Criteria
● There are several different criteria to consider when
trying to select the "best" scheduling algorithm for a
particular situation and environment, including:
○ CPU utilization: Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU cycles. On a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
○ Throughput: Number of processes completed per
unit time. May range from 10 / second to 1 / hour
depending on the specific processes.
Continued…
○ Turnaround time: Time required for a particular
process to complete, from submission time to
completion.
○ Waiting time: How much time processes spend in
the ready queue waiting their turn to get on the
CPU.
○ Response time: The time taken in an interactive program from the issuance of a command to the commencement of a response to that command.
Types of Scheduling Algorithms
● First Come First Served (FCFS)
○ The process that arrives first in the ready queue is assigned the CPU first.
○ In case of a tie, process with smaller process id is
executed first.
○ It is always non-preemptive in nature.
○ Jobs are executed on first come, first serve basis.
○ Easy to understand and implement.
○ Its implementation is based on FIFO queue.
○ Poor in performance as average wait time is high.
Continued…
● If the processes arrive in the order P1, P2, P3, and are
served in FCFS order, we get the result shown in the
following Gantt chart, which is a bar chart that
illustrates a particular schedule, including the start and
finish times of each of the participating processes:
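The chart itself is not reproduced in this copy of the slides. Assuming the classic textbook burst times of 24, 3, and 3 milliseconds for P1, P2, and P3, a short sketch computes the same schedule:

```python
def fcfs(bursts):
    """Waiting time of each process under FCFS; all processes arrive at time 0."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each process waits until all earlier arrivals finish
        clock += burst
    return waits

# Hypothetical burst times (ms) for P1, P2, P3 in arrival order:
waits = fcfs([24, 3, 3])
print(waits, sum(waits) / len(waits))   # [0, 24, 27] -> average 17.0 ms
```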
Continued…
Using SJF scheduling, we would schedule these processes
according to the following Gantt chart:
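The process set and chart are likewise not reproduced here. With hypothetical burst times of 6, 8, 7, and 3 milliseconds for P1 through P4, a sketch of non-preemptive SJF:

```python
def sjf(bursts):
    """Non-preemptive SJF: run the shortest burst first; all arrive at time 0."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock      # a process waits until all shorter jobs finish
        clock += bursts[i]
    return waits

# Hypothetical bursts (ms) for P1..P4:
waits = sjf([6, 8, 7, 3])
print(waits, sum(waits) / len(waits))   # [3, 16, 9, 0] -> average 7.0 ms
```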
Continued…
● Round Robin Scheduling
○ CPU is assigned to the process on the basis of FCFS
for a fixed amount of time.
○ This fixed amount of time is called as time quantum
or time slice.
○ After the time quantum expires, the running
process is preempted and sent to the ready queue.
○ Then, the processor is assigned to the next arrived
process.
○ It is always preemptive in nature.
Continued…
● The average waiting time
under the RR policy is
often long.
● Consider the following set
of processes that arrive at
time 0, with the length of
the CPU burst given in
milliseconds:
Process   Burst Time (ms)   Turnaround Time   Waiting Time
P1        24                30 – 0 = 30       30 – 24 = 6
P2        3                 7 – 0 = 7         7 – 3 = 4
P3        3                 10 – 0 = 10       10 – 3 = 7
(time quantum = 4 milliseconds)
Continued…
● Let’s calculate the average waiting time for this
schedule.
○ P1 waits for 6 milliseconds,
○ P2 waits for 4 milliseconds, and
○ P3 waits for 7 milliseconds.
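A small simulation reproduces these numbers from the slide's example (bursts of 24, 3, and 3 ms, time quantum 4 ms, all arriving at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Waiting time per process (turnaround minus burst); all arrive at time 0."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue, clock = deque(range(len(bursts))), 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)        # quantum expired: preempted, back to the ready queue
        else:
            finish[i] = clock      # process completed its burst
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = round_robin([24, 3, 3], quantum=4)
print(waits, sum(waits) / len(waits))   # [6, 4, 7] -> average ~5.67 ms
```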
CHAPTER FOUR
MEMORY MANAGEMENT
List of Contents
1. Introduction
2. Address binding
3. Logical Versus Physical Address Space
4. Memory management tasks
5. Swapping
6. Contiguous Memory Allocation
7. Fragmentation
8. Segmentation
9. Caching
Introduction
● Memory consists of a large array of bytes, each with its
own address.
● The CPU fetches instructions from memory according
to the value of the program counter.
● These instructions may cause additional loading from
and storing to specific memory addresses.
● A typical instruction-execution cycle first fetches an instruction from memory.
● The instruction is then decoded and may cause operands to be fetched from memory.
● After the instruction has been executed on the operands, results may be stored back in memory.
Continued…
● The memory unit sees only a stream of memory
addresses;
○ it does not know how they are generated (by the
instruction counter, indexing, indirection, literal
addresses, and so on) or what they are for
(instructions or data).
● Accordingly, we can ignore how a program generates a
memory address.
● We are interested only in the sequence of memory
addresses generated by the running program.
● Main memory (RAM) is an important resource that must be very carefully managed.
Continued…
● The part of the operating system that manages (part
of) the memory hierarchy is called the memory
manager.
● Its job is to efficiently manage memory:
○ keep track of which parts of memory are in use,
○ allocate memory to processes when they need it,
and;
○ deallocate it when they are done.
Continued…
● Main memory and the registers built into the processor itself are the only general-purpose storage that the CPU can access directly.
● Any data being used by the instructions, must be in
one of these direct-access storage devices.
● If the data are not in memory, they must be moved
there before the CPU can operate on them.
o Cache memory is an extremely fast memory type that
acts as a buffer between RAM and the CPU.
o It holds frequently requested data and instructions so that they are immediately available to the CPU when needed.
Address binding
● A program resides on a disk as a binary executable file.
● To be executed, the program must be brought into
memory and placed within a process.
● Addresses are represented in different ways at these different stages:
1. Symbolic addresses: the addresses used in a source program, such as variable names.
2. Relative addresses: at the time of compilation, a compiler converts symbolic addresses into relative addresses.
3. Physical addresses: the loader generates these addresses at the time when a program is loaded into main memory.
Continued…
● The binding of instructions and data to memory
addresses can be done at any step along the way:
○ Compile time: If you know at compile time where
the process will reside in memory, then absolute
code can be generated.
○ Load time: If it is not known at compile time where
the process will reside in memory, then the compiler
must generate relocatable code.
■ In this case, final binding is delayed until load
time.
Continued…
○ Execution time: If the
process can be moved
during its execution from
one memory segment to
another, then binding
must be delayed until run
time.
■ Special hardware must
be available for this
scheme to work.
Fig: Multistep processing of a user program
Logical Versus Physical Address Space
● An address generated by the CPU is commonly
referred to as a logical address,
● Whereas an address seen by the memory unit (that is, the one loaded into the memory-address register of the memory) is commonly referred to as a physical address.
● The compile-time and load-time address-binding methods generate identical logical and physical addresses.
● However, the execution-time address binding scheme
results in differing logical and physical addresses.
Continued…
● In this case, we usually refer to the
logical address as a virtual address.
● The set of all logical addresses
generated by a program is a logical
address space.
● The set of all physical addresses
corresponding to these logical
addresses is a physical address space.
● The run-time mapping from virtual to physical addresses is done by a hardware device called the memory management unit (MMU).
Fig: Dynamic relocation using a relocation register
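In its simplest form the MMU is just the relocation (base) register plus a limit register; a sketch of the check-and-add it performs on every memory reference (the numbers echo a common textbook example):

```python
def translate(logical, base, limit):
    """Map a logical (virtual) address to a physical one, relocation-register style."""
    if logical >= limit:          # protection check against the limit register
        raise MemoryError(f"trap: logical address {logical} out of range")
    return base + logical         # relocation: add the base register

# A process loaded at physical address 14000 with a 3000-byte logical space:
print(translate(346, base=14000, limit=3000))   # -> 14346
```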
Memory management tasks
● Protection: Make sure process accesses only the
memory it is allowed to access.
● Flexible sharing: Dynamic relocation to allow use of
whatever memory is currently available.
● Caching and virtualization: Allow process to require
more memory than is currently available.
● Memory management task #1: Allocate memory space among user programs (keep track of which parts of memory are currently being used and by whom).
● Memory management task #2: Address translation and protection.
Swapping
● A process must be in memory to be executed.
● A process, however, can be swapped temporarily out of
memory to a backing store and then brought back into
memory for continued execution.
● Swapping is the process of bringing each process into main memory, running it for a while, and then putting it back on the disk.
● Swapping makes it possible for the total physical
address space of all processes to exceed the real
physical memory of the system, thus increasing the
degree of multiprogramming in a system.
Continued…
● In the variable-partition scheme, the operating system
keeps a table indicating which parts of memory are
available and which are occupied.
● Initially, all memory is available for user processes and
is considered one large block of available memory, a
hole.
● Memory contains a set of holes of various sizes.
● At any given time, then, we have a list of available block
sizes and an input queue.
● The operating system can order the input queue
according to a scheduling algorithm.
Continued…
● The big concern with dynamic storage allocation is how to satisfy a request of size n from a list of free holes.
● There are many solutions to this problem.
● However, the three most common solutions are:
○ First-fit: Allocate the first hole that is big enough.
○ Best-fit: Allocate the smallest hole that is big
enough.
○ Worst-fit: Allocate the largest hole.
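A sketch of the three placement strategies over a free-hole list (the hole sizes are illustrative):

```python
def pick_hole(holes, request, strategy):
    """Return the index of the free hole to allocate from, or None if none fits."""
    candidates = [i for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0]                               # first hole big enough
    if strategy == "best":
        return min(candidates, key=lambda i: holes[i])     # smallest adequate hole
    if strategy == "worst":
        return max(candidates, key=lambda i: holes[i])     # largest hole overall
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]    # free-hole sizes in KB
for s in ("first", "best", "worst"):
    print(s, holes[pick_hole(holes, 212, s)])   # first: 500, best: 300, worst: 600
```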
Fragmentation
● Fragmentation is a problem where the memory blocks
cannot be allocated to the processes due to their small
size and the blocks remain unused.
● It arises because, as processes are loaded into and removed from memory, they leave behind small holes of free space.
● Basically, there are two types of fragmentation:
○ External fragmentation
■ Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
Continued…
○ Internal fragmentation
■ The memory block assigned to a process is bigger than the process needs.
■ Some portion of memory is left unused, as it cannot be used by another process.
Solutions to fragmentation
○ Compaction: Shuffle memory contents to place all
free memory together in one large block.
○ Compaction is possible only if relocation is dynamic,
and is done at execution time.
Continued…
Fig: Compaction — before compaction, memory holds the OS (0–400K), P5 (400K–900K), a 100K hole, P4 (1000K–1700K), a 300K hole, P3 (2000K–2300K), and a 260K hole; after compaction, P5, P4, and P3 sit contiguously and the three holes are combined into a single 660K hole (1900K–2560K).
Continued…
○ Paging: a memory management mechanism that allows the physical address space of a process to be non-contiguous.
○ Here, logical memory is divided into blocks of equal size called pages.
○ The pages belonging to a certain process are loaded into available physical memory frames.
○ Page Table: the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses.
Continued…
o The operating system retrieves data from secondary
storage in same-size blocks called pages.
o Divide physical memory into fixed-sized blocks
called frames (size is power of 2, between 512 bytes
and 8,192 bytes)
o Divide logical memory into blocks of same size
called pages.
o To run a program of size n pages, need to find n free
frames and load program.
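The translation itself is simple arithmetic on the logical address; a sketch with a tiny hypothetical page table:

```python
PAGE_SIZE = 4096                     # a power of two, here 4 KB

page_table = {0: 5, 1: 9, 2: 2}      # page number -> frame number (made-up mapping)

def paged_translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)   # split address into page and offset
    frame = page_table[page]                    # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(paged_translate(4100))   # page 1, offset 4 -> frame 9 -> 36868
```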
Segmentation
● Segmentation is a technique to break memory into
logical pieces where each piece represents a group of
related information.
● For example, data segments or code segment for each
process, data segment for operating system and so on.
Continued…
o A segment is a logical unit such as
o main program, procedure, function
o local variables, global variables, common block
o stack, symbol table, arrays and so on.
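Address translation under segmentation uses a per-segment base and limit; a sketch (the segment-table values echo a common textbook example):

```python
# segment number -> (base, limit), illustrative values
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def seg_translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:          # an offset beyond the segment traps to the OS
        raise MemoryError("trap: segment offset out of range")
    return base + offset

print(seg_translate(2, 53))   # byte 53 of segment 2 -> 4300 + 53 = 4353
```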