Haramaya University College of Computing and Informatics: Department of Software Engineering
GROUP MEMBERS
NAME ID
1) ABENEZER ENDALEW ………………………………………………………..
2) BIKILA KENENI ………………………………………………………………..
3) FIRAOL TSEGAYE …………………………………………………………….
4) GULUMA TAFA ………………………………………………………………..
5) HAYU YONATHAN ……………………………………………………………
6) LATERA TUJO ………………………………………………………………..
7) SISAY TIBEBU ………………………………………………………………..
BATE, HARAMAYA, OROMIA
1) In an operating system that implements paging, different page replacement algorithms (PRAs)
are needed to decide which memory page needs to be evicted (replaced) when a page fault
occurs and a new page needs to be brought in. Below are three of these page replacement
algorithms. Explain how each of these page replacement algorithms works.
When the system runs out of memory, a page replacement algorithm (PRA) decides which page to remove to make
room for a new one.
FIFO (First-In, First-Out):
• Evicts the page that has been in memory the longest, i.e., the first-in page is the first-out.
• Example: If pages are loaded in the order A, B, C, and page D needs space, A will be evicted.
• Advantage: Simple and easy to implement.
• Disadvantage: May remove pages still in use, leading to inefficiency.
LRU (Least Recently Used):
• Evicts the page that hasn’t been used for the longest time.
• Assumes that less recently used pages are less likely to be needed.
• Example: If A, B, C are in memory, and A was used longest ago, A is evicted.
• Advantage: Generally effective and widely used.
• Disadvantage: Requires tracking page usage, which can be resource-intensive.
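Both algorithms can be sketched in a few lines of Python that count page faults for a reference string; the page names and the frame count below are illustrative, not from the question:

```python
from collections import OrderedDict

def fifo_faults(pages, frames):
    """Count page faults under FIFO: evict the page loaded longest ago."""
    memory = []  # front of the list = first-in page
    faults = 0
    for p in pages:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)  # evict the oldest-loaded page
            memory.append(p)
    return faults

def lru_faults(pages, frames):
    """Count page faults under LRU: evict the least recently used page."""
    memory = OrderedDict()  # insertion order tracks recency of use
    faults = 0
    for p in pages:
        if p in memory:
            memory.move_to_end(p)  # hit: mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict least recently used
            memory[p] = True
    return faults

ref = ["A", "B", "C", "A", "D", "A", "B"]
print(fifo_faults(ref, 3))  # 6 faults
print(lru_faults(ref, 3))   # 5 faults
```

With this reference string, LRU causes fewer faults than FIFO because it keeps the recently reused page A in memory while FIFO evicts it.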
2. A. Explain what a device controller and device driver is, including their role in Input/Output (I/O)
operation.
B. Explain the purpose of device registers (Data Out register, Data In register, Status register and
Control register)
Device controller: A device controller is a hardware component that manages the interaction between the computer
and a specific I/O device, such as a keyboard, mouse, printer, or hard drive.
It serves as the intermediary between the I/O device and the computer’s main system.
Device Driver: A device driver is a software component that acts as a translator between the operating system (OS)
and the device controller.
• It contains the instructions the operating system needs to control the device.
Together, the device controller and device driver coordinate to perform Input/Output (I/O) operations. Here's how
the process works:
1. I/O Request: The operating system (via the user or an application) issues an I/O request, such as reading a
file from the disk.
2. Driver Interaction: The device driver translates this request into device-specific commands and forwards
them to the device controller.
3. Device Controller Operation: The device controller executes the commands by directly interacting with the
hardware device.
o For example, the disk controller moves the read/write head to the appropriate location on the hard drive and
retrieves the requested data.
4. Data Transfer: The device controller transfers the data either directly to the memory (via Direct Memory
Access (DMA)) or through the CPU.
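The four steps above can be illustrated with a minimal Python sketch. `DiskController`, `DiskDriver`, and the `READ_SECTOR` command are invented names for illustration, and the DMA step is simplified to a direct return:

```python
class DiskController:
    """Stands in for the hardware controller: executes device commands."""
    def __init__(self, sectors):
        self.sectors = sectors  # pretend platter contents

    def execute(self, command):
        if command["op"] == "READ_SECTOR":
            # Step 3: the controller interacts with the hardware directly.
            return self.sectors[command["sector"]]
        raise ValueError("unknown command")

class DiskDriver:
    """Translates generic OS read requests into controller commands."""
    def __init__(self, controller):
        self.controller = controller

    def read(self, sector):
        # Step 2: translate the OS request into a device-specific command.
        cmd = {"op": "READ_SECTOR", "sector": sector}
        # Steps 3-4: controller executes and the data is handed back.
        return self.controller.execute(cmd)

controller = DiskController({0: b"boot", 7: b"hello"})
driver = DiskDriver(controller)
print(driver.read(7))  # Step 1: the OS issues the I/O request
```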
3. Explain the following categories of I/O devices and how they work with example.
A. Character Stream devices and Block devices B. Sequential and Random access devices.
Character Stream Devices:
Definition: These devices transfer data as a continuous stream of individual characters (bytes).
Operation: Data is processed one character at a time, without concern for structure or blocks.
Examples: Keyboard: Sends characters one by one to the computer as you type.
How It Works: When you press a key on the keyboard, the character is immediately sent to the CPU for
processing without waiting to form a block of data.
Block Devices:
Definition: These devices transfer data in fixed-size chunks or blocks.
Operation: Data is read or written in large, structured blocks, often buffered in memory.
Characteristics: -Used for devices where data storage and retrieval require structure.
Examples: Hard Drive: Stores files in sectors and retrieves them in blocks.
How It Works: When a file is accessed, the operating system retrieves the relevant blocks of data from the storage
device and loads them into memory for processing.
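Block transfer can be shown in a few lines of Python; here `io.BytesIO` stands in for the storage device, and the 4-byte block size is deliberately tiny for illustration (real devices use sizes like 512 bytes or 4 KiB):

```python
import io

BLOCK_SIZE = 4  # toy block size; real block devices use 512 B, 4 KiB, etc.

# io.BytesIO stands in for a block device; read() pulls one block at a time.
device = io.BytesIO(b"HELLO, BLOCK DEVICE")
blocks = []
while True:
    block = device.read(BLOCK_SIZE)  # fetch one fixed-size block
    if not block:
        break
    blocks.append(block)

print(blocks)             # data arrives in 4-byte chunks
print(b"".join(blocks))   # reassembled contents
```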
Buffering: Temporary storage used during data transfers to accommodate speed differences between devices.
• Example of Buffering: Keyboard Input: When typing, the characters are stored in a buffer before being sent
to the application for processing. This ensures no keystrokes are lost even if the CPU is temporarily busy.
Advantages of Buffering:
• Handles speed differences between devices (e.g., CPU and hard drive).
• Prevents data loss in high-speed data transfers.
• Improves system performance by allowing devices to work independently.
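The keyboard-buffer example above can be sketched as a simple FIFO queue in Python; the keystrokes accumulate while the "application" is busy and are drained in order, so none are lost:

```python
from collections import deque

buffer = deque()  # the keyboard buffer

def keypress(ch):
    buffer.append(ch)  # device side: store the character immediately

def read_input():
    chars = []
    while buffer:
        chars.append(buffer.popleft())  # application drains in FIFO order
    return "".join(chars)

for ch in "hello":    # keystrokes arrive while the application is busy
    keypress(ch)
text = read_input()   # later, the application reads the buffered input
print(text)
```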
Spooling: Data is stored in a queue while waiting for a device to become available.
Example of Spooling:
Printing Jobs: -Print jobs from multiple applications are stored in a spool until the printer is ready to process each
job one by one.
-While one job is printing, new print jobs can still be submitted and queued.
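The print-queue behaviour described above can be sketched in Python; the file names are made up, and the printer is simulated by draining the queue:

```python
from collections import deque

spool = deque()  # the print spool: jobs wait here until the printer is free

def submit(job):
    spool.append(job)  # the application returns immediately after queuing

def printer_run():
    printed = []
    while spool:
        printed.append(spool.popleft())  # one job at a time, in order
    return printed

submit("report.pdf")
submit("photo.png")    # queued while the first job would still be printing
submit("invoice.txt")
order = printer_run()
print(order)
```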
Advantages of Spooling:
-Multiple applications can submit jobs without waiting for the device to become free.
-Keeps a slow device (e.g., a printer) continuously busy while the rest of the system works independently.
A. Polling
-Definition: Polling is a technique where the CPU repeatedly checks (polls) the status of a device to see if it is ready for input
or output operations.
-How It Works: The CPU continuously queries the device’s status register in a loop until it receives confirmation that the
device is ready.
Example: A program might check a printer’s status repeatedly to see if it is ready to accept a new print job.
Disadvantages: -Inefficient because the CPU wastes time in a busy loop instead of performing other tasks.
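The busy-wait loop can be sketched as follows; `FakePrinter` is a made-up stand-in whose status register becomes ready only after a few checks:

```python
class FakePrinter:
    """Simulated device: reports ready only after a few status checks."""
    def __init__(self, ready_after):
        self._countdown = ready_after

    def status(self):
        self._countdown -= 1
        return self._countdown <= 0  # True once the device is "ready"

printer = FakePrinter(ready_after=5)
polls = 0
while not printer.status():  # busy-wait: the CPU does no useful work here
    polls += 1
print(f"device ready after {polls + 1} polls")
```

Every iteration of the loop is a wasted CPU cycle, which is exactly the inefficiency noted above.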
B. Interrupts
Interrupts allow devices to signal the CPU when they need attention, avoiding the need for polling. When an
interrupt occurs, the CPU pauses its current task, handles the interrupt, and then resumes.
Non-Maskable Interrupts (NMI):
-Definition: High-priority interrupts that cannot be disabled. They are used for critical events that must be addressed
immediately.
-Use Case: Hardware failures like power supply issues or system crashes.
-Disadvantage: Handling interrupts involves a context switch, which adds a small performance overhead.
C. Programmed I/O (PIO)
• Definition: In Programmed I/O, the CPU directly controls data transfer between the device and memory. The CPU
issues commands and waits for the device to complete each operation.
How It Works:
1. The CPU issues a command to the device and then repeatedly checks the device’s status register.
2. When the device reports ready, the CPU moves the data one word (or byte) at a time between the device’s data register and memory.
3. The CPU repeats this until the whole transfer is complete.
Disadvantages: -The CPU is heavily involved and wastes time during the data transfer.
D. Interrupt-Driven I/O
• Definition: In interrupt-driven I/O, the device notifies the CPU via an interrupt when it is ready for data transfer,
eliminating the need for polling.
How It Works:
1. CPU initiates the I/O operation and continues executing other tasks.
2. The device sends an interrupt to the CPU when it is ready.
3. The CPU pauses its current task, processes the interrupt, and resumes the task.
Example: A network card generates an interrupt when a packet arrives, signaling the CPU to process the packet.
Advantages: -More efficient than PIO, as the CPU isn’t tied up waiting.
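The three steps above can be roughly sketched with a `threading.Event` standing in for the interrupt line; the network card's behaviour is simulated with a background thread:

```python
import threading
import time

packet_arrived = threading.Event()  # stands in for the interrupt line
received = []

def network_card():
    """Simulated device: signals the CPU when a packet is ready."""
    time.sleep(0.05)              # the transfer takes a while
    received.append("packet-1")   # data is ready in the device buffer
    packet_arrived.set()          # 2. raise the interrupt

threading.Thread(target=network_card).start()  # 1. CPU initiates the I/O
other_work = sum(range(1000))                  # 1. ...and keeps computing
packet_arrived.wait()                          # 3. interrupt handled here
print(received[0], other_work)
```

Unlike the polling loop, the CPU's "other work" here runs to completion regardless of when the device finishes.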
E. Direct Memory Access (DMA)
-Definition: Direct Memory Access allows a device to transfer data directly to or from memory without involving the
CPU for every byte of data.
-How It Works:
1. The CPU initiates the DMA transfer by providing the necessary parameters (source, destination, and size).
2. The DMA controller handles the transfer, freeing the CPU for other tasks.
3. Once the transfer completes, the DMA controller notifies the CPU using an interrupt.
Example: Transferring a large file from a hard disk to memory without CPU intervention.
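The three DMA steps can be sketched as follows; `DMAController` is a toy stand-in for the hardware, with byte arrays playing the roles of the disk buffer and main memory:

```python
class DMAController:
    """Toy DMA controller: copies data itself and flags completion."""
    def __init__(self):
        self.done = False

    def transfer(self, source, dest, dest_offset, size):
        # 2. The controller moves the bytes without CPU involvement.
        dest[dest_offset:dest_offset + size] = source[:size]
        self.done = True  # 3. completion "interrupt" to the CPU

disk = bytearray(b"FILEDATA")  # source device buffer
memory = bytearray(16)         # destination main memory

dma = DMAController()
# 1. The CPU only provides source, destination, and size, then moves on.
dma.transfer(disk, memory, dest_offset=0, size=8)
print(bytes(memory[:8]), dma.done)
```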
A. Synchronous vs. Asynchronous I/O:
• Synchronous I/O: The CPU waits for the I/O operation to complete before continuing.
• Asynchronous I/O: The CPU can perform other tasks while waiting for the I/O operation to finish.
B. Port-Mapped vs. Memory-Mapped I/O:
• Port-Mapped I/O: Devices are assigned a separate address space, distinct from the system's memory address space.
• Memory-Mapped I/O: Devices share the system’s memory space, making it accessible like regular memory.
Key Differences:
Feature            | Port-Mapped I/O                  | Memory-Mapped I/O
Address Space      | Separate from memory space       | Shared with memory space
Address Space Size | Limited (e.g., 256 or 64K ports) | Larger (based on memory size)
• Caching: Caching is a technique used to store frequently accessed data in a faster storage area (the cache), so it can be
retrieved quickly without accessing slower storage like disks.
o Cache Hit: Data is found in the cache, speeding up access.
o Cache Miss: Data is not in the cache, requiring slower retrieval from disk.
• Benefit: Significantly improves performance by reducing access times.
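The hit/miss logic can be shown with a toy Python sketch; the file names and the `disk` dictionary are invented stand-ins for slower storage:

```python
cache = {}  # fast storage (e.g., RAM)
disk = {"config.txt": "verbose=1", "data.csv": "1,2,3"}  # slow storage
hits = misses = 0

def read(name):
    global hits, misses
    if name in cache:        # cache hit: fast path, no disk access
        hits += 1
        return cache[name]
    misses += 1              # cache miss: fetch from disk, then remember it
    cache[name] = disk[name]
    return cache[name]

read("config.txt")  # miss: first access must go to disk
read("config.txt")  # hit: now served from the cache
read("data.csv")    # miss
print(hits, misses)
```

Only the first access to each file pays the disk cost; every repeat access is served from the cache.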