Computer System Overview
Contact: [email protected]
[email protected]
[Figure: basic components of a computer system — input units, output units, storage units (CD/DVD, HD), RAM, and the processor]
Introduction
• A computer is made up of hardware and software
• The hardware represents the body of the computer, that is, the tangible
devices such as a screen, keyboard, processor, memory, disks and printers.
• However, it is not easy to use the hardware components of the computer without
software.
Course Description
10) Identify the basic concepts related to concurrency, such as (race condition,
mutual exclusion, and critical section).
11) Classify/categorize the solutions to the critical section problem (Hardware,
Software and Operating system).
12) Compare and contrast the different ways of memory management and the
advantages and disadvantages of each.
13) Identify the difference between physical and logical addresses
14) Discuss Memory Partitioning (Fixed and Dynamic partitioning)
15) Explain the concept of paging and segmentation
16) Assess the relative advantages of paging and segmentation
17) Identify hardware and software support for virtual memory management
18) Identify the objective of processes scheduling
19) Differentiate among types of scheduler such as short-term, medium-term,
and long-term.
20) Compare and contrast CPU scheduling algorithms used for both preemptive
and non-preemptive scheduling of processes such as FCFS, SJF, priority and
Round Robin.
21) Compare and contrast the file organizations techniques (sequential file,
indexed sequential file, indexed file, direct, or hashed file).
22) Identify the file access rights (Execution, Reading, Appending, Updating,
Changing protection, and Deletion)
23) Compare and contrast the disk scheduling algorithms to handle I/O requests
• Selection according to requestor (Random-FIFO- PRI- LIFO)
• Selection according to requested item (SSTF-SCAN-C-SCAN)
COURSE CHAPTERS
Chapter 1: Computer System Overview
Chapter 2: Operating System Overview
Chapter 3: Process Description and Control (Processes Management)
Chapter 4: Threads
Chapter 5: Concurrency: Mutual Exclusion and Synchronization
Chapter 6: Memory Management
Chapter 7: Virtual Memory
Chapter 8: Uniprocessor Scheduling
Chapter 9: File Management & Disk Scheduling
Contact: [email protected]
[email protected]
Outline
➢ Basic Elements of a computer system
➢ Processor registers
➢ Instruction Execution
➢ Interrupts
• Classes of Interrupts
• Interrupt Handler
• Instruction Cycle with Interrupts
• Interrupt Processing
• Multiple Interrupts and approaches to dealing with multiple interrupts
➢ Memory Hierarchy
➢ Cache Memory
➢ I/O Communication Techniques
• Programmed I/O
• Interrupt-Driven I/O
• Direct Memory Access (DMA)
Operating System
➢ OS exploits the hardware resources of one or
more processors to provide a set of services to
users.
➢ OS also manages main and secondary memory
and I/O devices on behalf of its users.
➢ For example, it moves data between storage (e.g., a hard drive), computer and
communications equipment, and external terminals, and detects when an
unrecoverable error occurs.
Basic Instruction Cycle
[Figure 1.2: the basic instruction cycle — fetch, decode, execute]
Instruction Fetch and Execute
◼ The processor fetches the instruction from memory.
◼ The program counter (PC) holds the address of the instruction to be fetched next.
▪ The PC is incremented after each fetch, so the next instruction in sequence
will be fetched.
Instruction Register (IR)
◼ The fetched instruction is loaded into the instruction register (IR); the
processor then interprets the instruction and performs the required action.
◼ Types of instructions:
◼ Processor-memory
◼ Processor-I/O
◼ Data processing
◼ Control
• Processor-memory: transfer data between processor and memory.
• Control: an instruction may alter the sequence of execution.
• For example, the processor may fetch an instruction from location 149 which specifies
that the next instruction be taken from location 182.
• The processor sets the program counter to 182.
• Thus, on the next fetch stage, the instruction will be fetched from location 182 rather
than from 150.
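The fetch-execute behavior above can be sketched in a few lines (a hypothetical mini-machine, not a real instruction set; the opcode names and memory layout are made up for illustration):

```python
# Minimal sketch of the fetch-execute cycle: the PC is incremented after
# each fetch, and a control-transfer instruction overwrites the PC.

def run(memory, pc, steps):
    """Execute `steps` instructions; each instruction is an (op, arg) tuple."""
    trace = []                      # addresses actually fetched, in order
    for _ in range(steps):
        op, arg = memory[pc]        # fetch stage: read instruction at the PC
        trace.append(pc)
        pc += 1                     # PC now points at the next sequential instruction
        if op == "JUMP":            # control instruction: set the PC to the target
            pc = arg
        # other ops (e.g. "NOP") fall through, keeping the incremented PC
    return trace

# Location 149 holds a jump to 182, so location 150 is skipped, as in the example.
memory = {149: ("JUMP", 182), 150: ("NOP", None), 182: ("NOP", None), 183: ("NOP", None)}
print(run(memory, 149, 3))          # [149, 182, 183]
```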
Interrupts
➢ An interruption of the normal sequence of execution
▪ This interruption is temporary, and, after the interrupt handler finishes, the
processor resumes normal activities.
[Figure: transfer of control via interrupts — an interrupt occurs in the user program at instruction i; control passes to the interrupt handler (generally part of the operating system); when the handler completes, the user program resumes at instruction i + 1]
For example, pressing a key on a computer keyboard or moving the mouse triggers an
interrupt that causes the processor to call an interrupt handler, which reads the
keystroke or the mouse's position and copies the associated information into the
computer's memory.
Hardware Interrupt
• For the user program, an interrupt suspends the normal sequence of execution.
• When the interrupt processing is completed, execution resumes.
• Thus, the user program does not have to contain any special code to accommodate
interrupts.
Interrupt Handler
• S/W interrupts: handled by the OS
• H/W interrupts: handled by the BIOS
BIOS (Basic Input Output System)
• The BIOS is a computer program, typically stored in EPROM, that the CPU uses to
perform start-up procedures when the computer is turned on.
• Why is the BIOS the first program loaded into memory?
Because it is the database of H/W (the database of drivers), so it knows where the
device driver of each H/W device resides in memory.
➢ Its two major procedures are:
1) Determining what peripheral devices (keyboard, mouse, disk drives, printers, video
cards, etc.) are available.
2) Loading OS into main memory.
Boot sequence in memory: Load BIOS → Load Drivers → Load OS
When users turn on their computer, the microprocessor passes control to the BIOS
program, which is always located at the same place on EPROM.
➢ So, neither the OS nor the application programs need to know the details of the
peripherals (such as hardware addresses).
The functions of BIOS
BIOS identifies, configures, tests and connects computer hardware to the OS
immediately after a computer is turned on. The combination of these steps is called the
boot process.
1) Power-on self-test (POST): tests the hardware of the computer before loading the OS.
2) Software/drivers: locates the software and drivers that interface with the OS once
running.
3) Bootstrap loader: locates the OS.
Processing of H/W Interrupt
❖ Every H/W device has a device driver loaded at a certain address in memory.
❖ The device driver contains a function (an Interrupt Service Routine) to deal with a
certain interrupt.
[Figure: memory layout — each device driver is loaded at a certain address; a data structure acts as a buffer for pending interrupts, ordered by arrival time or priority]
Figure 1.7 Instruction Cycle with Interrupts (the cycle ends on a HALT instruction)
• The processor checks for interrupts.
• If no interrupts, fetch the next instruction for the current program
• If an interrupt is pending, suspend execution of the current program, and
execute the interrupt handler.
• When the interrupt-handler routine is completed, the processor can resume
execution of the user program at the point of interruption.
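The interrupt stage described above can be sketched as a loop (a simplified model; the instruction names and the idea of an interrupt "arriving after instruction i" are assumptions for illustration):

```python
# Sketch of the instruction cycle with interrupts: after each execute stage
# the processor checks for a pending interrupt, suspends the program to run
# the handler, and then resumes at the point of interruption.

def instruction_cycle(program, arrivals):
    """arrivals maps an instruction index -> name of an interrupt that
    arrives while that instruction executes (checked afterwards)."""
    log = []
    for i, instr in enumerate(program):
        log.append(f"exec {instr}")   # fetch + execute stage
        if i in arrivals:             # interrupt stage: check for pending interrupts
            # suspend the user program, run the handler, then resume
            log.append(f"save state; handle {arrivals[i]}; restore state")
    return log

print(instruction_cycle(["i0", "i1", "i2"], {1: "keyboard"}))
```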
Interrupt Processing
The control is transferred to the interrupt-handler program. The execution of this program
results in the following operations:
6) The contents of the processor registers need to be saved, because these registers may
be used by the interrupt handler. Typically, the interrupt handler will begin by saving
the contents of all registers on the stack.
7) The handler performs the interrupt processing.
8) When interrupt processing is complete, the saved register values are retrieved from the
stack and restored to the registers.
9) Finally, the old PSW and program counter values are restored from the stack.
As a result, the next instruction to be executed will be from the previously interrupted
program.
Program status word (PSW)
➢ All processor designs include a register or set of registers, often known
as the program status word (PSW), that contains condition codes plus
other status information.
• Condition codes: result of the most recent arithmetic or logical operation (e.g.,
positive result, negative result, zero, overflow).
• Status information: includes the interrupt enable/disable bit and the execution mode
(kernel/user-mode bit).
Multiple Interrupts
An interrupt occurs while another interrupt is being processed
• e.g., a program may be receiving data from a communications line and printing
results at the same time
• In this approach, the processor ignores (disables) any new interrupt while processing
one interrupt.
• If an interrupt occurs during this time, it remains pending and will be checked
by the processor after the processor has reenabled interrupts.
• After the interrupt-handler routine completes, interrupts are reenabled before resuming
the user program, and the processor checks to see if additional interrupts have occurred.
The drawback of this approach is that it does not take relative priority or
time-critical needs into account. For example, when input arrives from the
communications line, it needs to be absorbed quickly to make room for more input.
Multiprogramming
• The processor has more than one program to execute.
• The sequence in which the programs are executed depends on their relative
priority and whether they are waiting for I/O.
• After an interrupt handler completes, control
may not return to the program that was executing
at the time of the interrupt.
The use of the stack in interrupt handling
• The stack is used to save the state of the CPU including the program counter and
status register, allowing the interrupt service routine to execute.
• After handling the interrupt, the original state is restored from the stack to continue
program execution.
Note:
1) In case of more than one interrupt → an interrupt buffer is used.
2) Interrupts can be organized by arrival time or by priority.
3) Multiple cores can handle independent interrupts.
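The note on ordering pending interrupts can be illustrated with a small sketch (the interrupt names, priorities, and arrival times are made up; lower priority value is assumed to mean more urgent):

```python
# Sketch of servicing a buffer of pending interrupts either in arrival
# order (FIFO) or by priority.

def service_order(interrupts, by="arrival"):
    """interrupts: list of (arrival_time, priority, name) tuples.
    Lower priority value is assumed to mean more urgent."""
    if by == "arrival":
        key = lambda irq: irq[0]            # FIFO: service in arrival order
    else:
        key = lambda irq: (irq[1], irq[0])  # priority first, arrival as tie-break
    return [name for _, _, name in sorted(interrupts, key=key)]

irqs = [(0, 2, "printer"), (1, 0, "power-fail"), (2, 1, "disk")]
print(service_order(irqs, by="arrival"))    # ['printer', 'power-fail', 'disk']
print(service_order(irqs, by="priority"))   # ['power-fail', 'disk', 'printer']
```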
Memory Hierarchy
◼ Major constraints in memory
• amount
• speed
• expense
➢ The solution is to provide a small, fast memory between the processor and main
memory, namely the cache (invisible to the OS).
Types of Cache Memory
1) L1 cache
• Sometimes called Primary Cache, is the smallest and fastest memory level.
• It is commonly 64KB in size and up to 100 times faster than RAM.
• If a CPU has four cores (quad core CPU), then each core will have its own level 1 cache.
• So, a quad-core CPU would have a total of 256 KB
• If the processor fails to find the required data in L1, it looks for it in the L2 and L3
cache.
2) L2 cache. This level 2 cache may be :
• Outside the CPU (on a separate chip) and is placed between the primary cache and
the rest of the memory.
• Or Inside the CPU
All the cores of a CPU can have their own separate level 2 cache or they can share
one L2 cache among themselves.
➢ The memory size of this cache is typically in the range of 256 KB to 512 KB, sometimes
as high as 1 MB (designs range from 256 KB to 32 MB).
➢ It is slower than the L1 cache. (about 25 times as fast as RAM).
[Figure: Core1 and Core2, each with its own cache, sharing one L2 cache]
3) L3 cache
• The largest and slowest cache compared to L1 and L2.
• shared by multiple processor cores.
• With multicore processors, each core can have dedicated L1 and L2 cache, but they
can share L3 cache.
• The memory size of this cache is typically up to 32 MB, while AMD's Ryzen 7
5800X3D CPU comes with 96 MB of L3 cache. Some server CPU L3 caches exceed
this, featuring up to 128 MB.
• (twice as fast as the RAM)
[Figure: multiple cores sharing an L3 cache]
Locality
• Programs tend to reuse data and instructions near those they have used recently.
• The ability of cache memory to improve a computer's performance relies on the
concept of locality of reference.
• Locality describes various situations that make a system more predictable.
• Cache memory takes advantage of these situations to create a pattern of memory
access that it can rely upon.
Locality of Reference
The L2 cache is slower and typically larger than the L1 cache, and the L3 cache is
slower and typically larger than the L2 cache.
The design of cache memory
[Figure: a single cache of M blocks (Block 0 … Block M − 1) between the processor and main memory]
• Why transfer a block and not a single byte or word? Because of locality: once a
block is brought in, nearby words are likely to be referenced soon.
• Search time in the cache tags is almost 0.
Average access time
Let T1 be the cache access time and T2 the main-memory access time, with T1 << T2.
• Simple average (assumes both cases are equiprobable):
Avg = (T1 + T2) / 2 = 0.5 × T1 + 0.5 × T2
• In general, a weighted average is used:
Avg = weight1 × value1 + weight2 × value2, where the sum of the weights is 1.
Two cases (found in cache, or not found in cache):
Average access time = Hit ratio × T1 + (1 − Hit ratio) × (T2 + T1)
Hit ratio (H) = # of hits / (# of hits + # of misses) = # of times found in cache / total # of times
Miss ratio = 1 − Hit ratio = # of misses / total accesses
[Figure: average access time as a function of the hit ratio h, falling from T1 + T2 at h = 0 toward T1 at h = 1]
Possible values of h:
• h = 0 means the code has no locality (impossible). If H = 0, the average access
time is T1 + T2, which is worse than having no cache at all (time T2 only).
• h = 1 means the cache is as large as main memory, so data is always found in it
(impossible). If H = 1, the average access time is T1.
• Neither 0 nor 1 is realistic.
So, for the cache to improve performance, H must be close to 1. How?
1- A large cache size
2- Writing code with a high level of locality
Why does cache improve performance?
Because all programs, whether we like it or not, contain locality, so h stays away
from 0; how close it gets to 1 depends on the cache size.
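A quick numeric check of the average-access-time formula, using assumed example times T1 = 10 ns (cache) and T2 = 100 ns (main memory):

```python
# Average access time for a two-level memory:
#   T_avg = H*T1 + (1 - H)*(T1 + T2)

def avg_access_time(h, t1, t2):
    """Two-level average access time: hit ratio h, cache time t1, memory time t2."""
    return h * t1 + (1 - h) * (t1 + t2)

t1, t2 = 10, 100                                # assumed example times in ns
print(avg_access_time(0.0, t1, t2))             # 110.0 -> worse than no cache (T2 = 100)
print(round(avg_access_time(0.95, t1, t2), 2))  # 15.0  -> close to T1 when H is near 1
print(avg_access_time(1.0, t1, t2))             # 10.0  -> just the cache time
```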
Virtual Memory
With virtual memory there are three levels (cache, main memory, disk) with access
times T1 << T2 << T3. With probability h the data is found in the cache (time T1);
otherwise (probability 1 − h), with probability alpha it is found in main memory
(time T1 + T2), and with probability 1 − alpha it must come from disk (time T1 + T2 + T3):
Average access time = h × T1 + (1 − h) × [alpha × (T1 + T2) + (1 − alpha) × (T1 + T2 + T3)]
Cache Design Issues
The main categories are:
• cache size
• block size
• mapping function
• replacement algorithm
• write policy
▪ Cache size
• Even a relatively small cache can have a significant impact on performance
▪ Block size
• The unit of data exchanged between cache and main memory
• Hit means the information was found in the cache
• As the block size increases, the hit ratio will at first increase because of the
principle of locality.
• As the block size becomes bigger still, the hit ratio will begin to decrease: the
probability of using the newly fetched data becomes less than the probability of
reusing the data that must be moved out of the cache to make room for the new block.
▪ Mapping function
• Determines which cache location the block in main
memory will occupy
▪ Replacement algorithm
• Chooses which block to replace when a new block is to be loaded into the
cache and all lines are occupied.
1) Least Recently Used (LRU) Algorithm
– Replace the block that has been in the cache longest with no references to it.
– Implementation: keep a USE bit for each line.
2) Least Frequently Used (LFU) Algorithm
– Replace the block with the fewest references (hits).
– Implementation: associate a counter with each line and increment it on each use.
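A minimal sketch of LRU replacement for a small, fully associative cache (the block numbers and capacity are illustrative; real hardware uses USE bits rather than an ordered dictionary):

```python
from collections import OrderedDict

# Sketch of LRU replacement: on a miss with a full cache, evict the block
# that has gone the longest without being referenced.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()          # insertion order tracks recency

    def access(self, block):
        """Return 'hit' or 'miss'; on a miss with a full cache, evict the LRU block."""
        if block in self.lines:
            self.lines.move_to_end(block)   # mark as most recently used
            return "hit"
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict the least recently used block
        self.lines[block] = True
        return "miss"

cache = LRUCache(2)
print([cache.access(b) for b in [1, 2, 1, 3, 2]])   # ['miss', 'miss', 'hit', 'miss', 'miss']
```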
▪ Write policy (handles the consistency problem, as more than one copy of the
data exists)
• Defines when altered data is written to main memory: if the contents of a block
in the cache are altered, it is necessary to write them back to main memory at
some point.
▪ Can occur every time the block is updated (write-through)
• Writes data to the cache and the main memory at the same time.
This policy is easy to implement in the architecture but is not as efficient
since every write to cache is a write to the slower main memory.
▪ Can occur only when the block is replaced (write-back)
• Writes data to the cache but only writes the data to the main memory when
data is about to be replaced in the cache.
– Minimizes memory operations
– Leaves memory in an obsolete state (as the cache and main memory may hold
different data)
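The trade-off between the two write policies can be illustrated by counting main-memory writes for a hypothetical workload of repeated updates to one cached block:

```python
# Sketch contrasting write-through and write-back by the number of
# main-memory writes they cause for n updates to a single cached block.

def memory_writes(policy, n_updates):
    """Main-memory writes caused by n_updates to one cached block."""
    if policy == "write-through":
        return n_updates               # every cache update also goes to memory
    elif policy == "write-back":
        return 1 if n_updates else 0   # one write, when the dirty block is evicted
    raise ValueError(policy)

print(memory_writes("write-through", 100))  # 100
print(memory_writes("write-back", 100))     # 1
```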
I/O Communication Techniques
➢ When the processor encounters an instruction relating to I/O, it executes that
instruction by issuing a Read command to the appropriate I/O module.
➢ I/O data transfer techniques are used to transfer data from I/O devices to
memory and vice versa.
[Figure: the processor and memory are connected to the I/O devices through an I/O controller/module]
Programmed I/O
➢ The I/O module performs the requested action and then sets the appropriate bits
in the I/O status register; it does not interrupt the processor.
◼ With programmed I/O, the performance level of the entire system is severely
degraded (a waste of processor time). Why?
1) The processor has to wait a long time for the I/O module of concern to be
ready for reception or transmission of more data.
2) The processor, while waiting, must repeatedly interrogate the
status of the I/O module.
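A sketch of the busy-wait loop that makes programmed I/O wasteful (the device model, which becomes ready after a fixed number of status polls, is an assumption for illustration):

```python
# Sketch of programmed I/O: the processor busy-waits, repeatedly reading the
# I/O module's status register until the device reports ready.

class IOModule:
    def __init__(self, ready_after):
        self.polls = 0
        self.ready_after = ready_after      # hypothetical: ready after N polls

    def status(self):
        self.polls += 1
        return "ready" if self.polls >= self.ready_after else "busy"

def programmed_read(dev):
    """Busy-wait until ready; returns how many status polls were spent waiting."""
    while dev.status() != "ready":          # processor time wasted interrogating status
        pass
    return dev.polls

print(programmed_read(IOModule(ready_after=5)))   # 5
```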
Interrupt-Driven I/O
▪ The CPU sends the ‘Read‘ command to the I/O module and then goes on to do
some useful work.
▪ When the I/O module is ready, it sends an interrupt signal to the CPU.
▪ When the CPU receives the interrupt signal, it checks the status.
▪ If the status is ready, the CPU reads the word from the I/O module and writes
it into main memory.
▪ If the operation was completed successfully, the processor goes on to the next
instruction.
1) Transfer rate is limited by the speed with which the processor can
test and service a device
2) The processor is tied up in managing an I/O transfer
▪ a number of instructions must be executed for each I/O transfer
Direct Memory Access (DMA)
➢ I/O Controller → DMA Controller
➢ The DMA function can be performed by a separate module on the system bus
or it can be incorporated into an I/O module.
[Figure: CPU, memory, and a DMA controller (with its own buffer) attached to the system bus; the I/O devices connect through the DMA controller]
➢ When the processor wishes to read or write data, it issues a command to the DMA
module containing:
• Whether a read or a write is requested
• The address of the I/O device involved
• The starting location in memory to read from or write to
• The number of words to be read or written
➢ The processor executes more slowly during a DMA transfer when processor
access to the bus is required.
• The DMA module needs to take control of the bus to transfer data to and
from memory.
• Because of this competition for bus usage, there may be times when the
processor needs the bus and must wait for the DMA module.
✓ For a multiple-word I/O transfer, DMA is far more efficient than
interrupt-driven or programmed I/O.
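A sketch of why DMA is efficient for multi-word transfers: the processor issues one command and receives a single completion interrupt, while the DMA module moves every word itself (the function and data below are illustrative):

```python
# Sketch of a DMA transfer: the DMA module copies a whole block of words
# into memory (stealing bus cycles), then interrupts the processor once.

def dma_transfer(memory, device_data, start, count):
    """Copy `count` words from the device buffer into memory at `start`."""
    for i in range(count):                  # DMA module moves each word over the bus
        memory[start + i] = device_data[i]  # (processor may have to wait for the bus)
    return "interrupt: transfer complete"   # one interrupt, not one per word

memory = [0] * 8
print(dma_transfer(memory, [7, 8, 9], start=2, count=3))
print(memory)                               # [0, 0, 7, 8, 9, 0, 0, 0]
```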
Any Questions?