Haramaya University College of Computing and Informatics: Department of Software Engineering

The document outlines various concepts related to operating systems, including page replacement algorithms (FIFO, MRU, LRU), device controllers and drivers, types of I/O devices, buffering and spooling, and I/O techniques (polling, interrupts, programmed I/O, DMA). It explains the differences between synchronous and asynchronous I/O, port-mapped and memory-mapped I/O, and highlights the role of clocks and timers in task scheduling and process coordination. Additionally, it discusses caching mechanisms to enhance file access performance.


HARAMAYA UNIVERSITY COLLEGE OF

COMPUTING AND INFORMATICS


DEPARTMENT OF SOFTWARE ENGINEERING
COURSE: OPERATING SYSTEM SECTION B

INSTRUCTOR: Ms. METI DEJENE

GROUP MEMBERS

NAME ID

1) ABENEZER ENDALEW………………………………………………………..

2) BIKILA KENENI………………………………………………………………..

3) FIRAOL TSEGAYE…………………………………………………………….

4) GULUMA TAFA………………………………………………………………..

5) HAYU YONATHAN……………………………………………………………

6) LATERA TUJO………………………………………………………………..

7) SISAY TIBEBU………………………………………………………………..

SUBMISSION DATE: DECEMBER 3, 2024 G.C.

BATE, HARAMAYA, OROMIA

1) In an operating system that implements paging, page replacement algorithms (PRAs) are
needed to decide which memory page should be evicted (replaced) when a page fault occurs
and a new page needs to be brought in. Below are three of these page replacement
algorithms. Explain how each of these page replacement algorithms works.

When the system runs out of memory, a page replacement algorithm (PRA) decides which page to remove to make
room for a new one.

A. First-In, First-Out (FIFO):

• Evicts the page that has been in memory the longest, i.e., the first-in page is the first-out.
• Example: If pages are loaded in the order A, B, C, and page D needs space, A will be evicted.
• Advantage: Simple and easy to implement.
• Disadvantage: May remove pages still in use, leading to inefficiency.

B. Most Recently Used (MRU):

• Evicts the page that was accessed most recently.


• Assumes that the pages used most recently are less likely to be needed again soon.
• Example: If A, B, C are in memory and C was accessed last, then C is evicted when D needs space.
• Advantage: Effective for certain workloads, such as cyclic access patterns, where older pages are re-referenced before recent ones.
• Disadvantage: Can mispredict in general-use scenarios, where recently used pages are usually needed again.

C. Least Recently Used (LRU):

• Evicts the page that hasn’t been used for the longest time.
• Assumes that less recently used pages are less likely to be needed.
• Example: If A, B, C are in memory, and A was used longest ago, A is evicted.
• Advantage: Generally effective and widely used.
• Disadvantage: Requires tracking page usage, which can be resource-intensive.
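The three eviction rules above can be contrasted in a short simulation. The sketch below is illustrative (the `simulate` helper and the sample reference string are invented for this handout); it counts page faults for each policy over the same reference string:

```python
from collections import OrderedDict

def simulate(refs, frames, policy):
    """Count page faults for a reference string under FIFO, LRU, or MRU."""
    memory = OrderedDict()   # resident pages, in insertion/recency order
    faults = 0
    for page in refs:
        if page in memory:
            if policy in ("LRU", "MRU"):
                memory.move_to_end(page)   # recency policies record the access
            continue                       # FIFO ignores hits entirely
        faults += 1
        if len(memory) == frames:
            if policy == "MRU":
                memory.popitem(last=True)    # evict the most recently used page
            else:
                memory.popitem(last=False)   # FIFO: oldest load; LRU: least recent use
        memory[page] = True
    return faults
```

For the reference string A B C A B D A D B C B with 3 frames, `simulate` gives 7 faults for FIFO but 5 each for LRU and MRU, showing how the choice of eviction rule changes behavior on the same workload.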
2. A. Explain what a device controller and a device driver are, including their roles in Input/Output (I/O)
operations.
B. Explain the purpose of device registers (Data-Out register, Data-In register, Status register, and
Control register)

A. Device Controller and Device Driver:

Device controller: A device controller is a hardware component that manages the interaction between the computer
and a specific I/O device, such as a keyboard, mouse, printer, or hard drive.
It serves as the intermediary between the I/O device and the computer’s main system.

Device Driver: A device driver is a software component that acts as a translator between the operating system (OS)
and the device controller.

• It contains the instructions the operating system needs to control the device.

Role in I/O Operations:

Together, the device controller and device driver coordinate to perform Input/Output (I/O) operations. Here's how
the process works:

1. I/O Request: The operating system (via the user or an application) issues an I/O request, such as reading a
file from the disk.
2. Driver Interaction: The device driver translates this request into device-specific commands and forwards
them to the device controller.
3. Device Controller Operation: The device controller executes the commands by directly interacting with the
hardware device.
o For example, the disk controller moves the read/write head to the appropriate location on the hard drive and
retrieves the requested data.
4. Data Transfer: The device controller transfers the data either directly to the memory (via Direct Memory
Access (DMA)) or through the CPU.

B. Purpose of Device Registers:

1. Data Out Register: Transfers data to the device.


o Example: Sending data to a printer.
2. Data In Register: Receives data from the device.
o Example: Reading data from a keyboard.
3. Status Register: Displays the current status of the device (e.g., busy, ready, error).
4. Control Register: Sends control commands to the device, like start or stop operations.
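The way a driver uses these four registers can be sketched with a toy model. `ToyPrinterController`, its register names, and the `READY`/`BUSY`/`CMD_START` values are invented for illustration and do not correspond to any real controller:

```python
# Toy model of the four device registers; all names and values are
# invented for illustration, not a real hardware interface.
READY, BUSY = 0, 1
CMD_START = 0x1

class ToyPrinterController:
    """Controller exposing Data-Out, Data-In, Status and Control registers."""
    def __init__(self):
        self.status = READY     # Status register: READY or BUSY
        self.control = 0        # Control register: last command written
        self.data_out = None    # Data-Out register: byte sent to the device
        self.data_in = None     # Data-In register: byte read back (unused here)
        self.printed = []       # what the "hardware" actually printed

    def write_byte(self, byte):
        self.data_out = byte        # driver loads the Data-Out register
        self.control = CMD_START    # then issues a start command
        self.status = BUSY          # device reports busy while working
        self.printed.append(byte)   # the "hardware" consumes the byte
        self.status = READY         # and reports ready again

def driver_print(controller, text):
    """Driver side: check the Status register, then use Data-Out + Control."""
    for ch in text.encode():
        while controller.status != READY:   # wait until the device is ready
            pass
        controller.write_byte(ch)

ctl = ToyPrinterController()
driver_print(ctl, "ok")
print(bytes(ctl.printed))   # → b'ok'
```

The driver never touches the paper feed or print head; it only reads Status and writes Data-Out and Control, which is exactly the division of labor described above.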

3. Explain the following categories of I/O devices and how they work, with examples.
A. Character Stream devices and Block devices B. Sequential and Random access devices.

A. Character Stream Devices vs. Block Devices:

Character Stream Devices:

Definition: These devices transfer data as a continuous stream of individual characters (bytes).

Operation: Data is processed one character at a time, without concern for structure or blocks.

Characteristics: a) Little or no buffering is typically used; data flows as it is produced or consumed.

b) Suitable for devices where data is inherently sequential.

Examples: Keyboard: Sends characters one by one to the computer as you type.

Mouse: Streams cursor position updates or button clicks.

How It Works: When you press a key on the keyboard, the character is immediately sent to the CPU for
processing without waiting to form a block of data.

Block Devices:
Definition: These devices transfer data in fixed-size chunks or blocks.

Operation: Data is read or written in large, structured blocks, often buffered in memory.

Characteristics: -Used for devices where data storage and retrieval require structure.

-Supports random access to any block of data.

Examples: Hard Drive: Stores files in sectors and retrieves them in blocks.

USB Drive: Transfers chunks of data when reading or writing files.

How It Works: When a file is accessed, the operating system retrieves the relevant blocks of data from the storage
device and loads them into memory for processing.

B. Sequential vs. Random Access Devices:

• Sequential Access Devices: Data is accessed in a fixed order.

Example: Magnetic tape.

o Slower but suitable for linear processes like backups.


• Random Access Devices: Data can be accessed in any order.
o Example: Hard disk.
o Faster and better for interactive applications.
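The two access patterns can be contrasted in a short sketch. `BlockDevice`, `BLOCK_SIZE`, and `char_stream` are hypothetical names invented here; a real block device sits behind a driver rather than a Python class:

```python
BLOCK_SIZE = 4   # illustrative block size; real devices use e.g. 512 or 4096 bytes

class BlockDevice:
    """Toy block device: data only moves in fixed-size blocks, by block number."""
    def __init__(self, data):
        pad = (-len(data)) % BLOCK_SIZE          # pad to a whole number of blocks
        self.data = data + b"\x00" * pad

    def read_block(self, n):
        return self.data[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE]

def char_stream(data):
    """Toy character device: yields one byte at a time, strictly in order."""
    for i in range(len(data)):
        yield data[i:i + 1]

dev = BlockDevice(b"HELLOWORLD")
print(dev.read_block(1))                 # random access straight to block 1 → b'OWOR'
print(b"".join(char_stream(b"hi")))      # sequential, byte-at-a-time → b'hi'
```

Note that the block device can jump straight to block 1 without reading block 0 first, while the character stream has no notion of position at all.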

4. What is Buffering and Spooling? Explain.

Buffering: Temporary storage used during data transfers to accommodate speed differences between devices.

• Example of Buffering: Keyboard Input: When typing, the characters are stored in a buffer before being sent
to the application for processing. This ensures no keystrokes are lost even if the CPU is temporarily busy.

Advantages of Buffering:
• Handles speed differences between devices (e.g., CPU and hard drive).
• Prevents data loss in high-speed data transfers.
• Improves system performance by allowing devices to work independently.

Spooling: Data is stored in a queue while waiting for a device to become available.

Example of Spooling:

Printing Jobs: -Print jobs from multiple applications are stored in a spool until the printer is ready to process each
job one by one.

-While one job is printing, new print jobs can still be submitted and queued.

Advantages of Spooling:

• Allows multitasking by decoupling the device from the application.


• Increases device utilization by keeping devices busy instead of leaving them idle between requests.
• Enables efficient management of single-task devices like printers.

5. Properly explain each of the following terms.


A. Polling (Busy waiting)
B. Interrupts (Maskable and Non-maskable Interrupts)
C. Programmed I/O
D. Interrupt driven I/O
E. Direct memory access (DMA)

A. Polling (Busy Waiting)

-Definition: Polling is a technique where the CPU repeatedly checks (polls) the status of a device to see if it is ready for input
or output operations.
-How It Works: The CPU continuously queries the device’s status register in a loop until it receives confirmation that the
device is ready.

Example: A program might check a printer’s status repeatedly to see if it is ready to accept a new print job.

Advantages: -Simple to implement.

-Provides immediate response once the device is ready.

Disadvantages: -Inefficient because the CPU wastes time in a busy loop instead of performing other tasks.
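The busy-wait loop can be made concrete with a toy device whose status register only reports ready after several reads. `SlowDevice` and `poll` are illustrative names; the counter stands in for CPU time wasted while waiting:

```python
class SlowDevice:
    """Toy device whose status register reports ready only on the Nth read."""
    def __init__(self, ready_after):
        self.ready_after = ready_after
        self.reads = 0

    def status_ready(self):
        self.reads += 1
        return self.reads >= self.ready_after

def poll(device):
    """Busy-wait: the CPU does nothing but re-check the status register."""
    wasted_checks = 0
    while not device.status_ready():
        wasted_checks += 1       # each iteration is CPU time spent waiting
    return wasted_checks

dev = SlowDevice(ready_after=5)
print(poll(dev))   # → 4 wasted checks before the fifth read reports ready
```

Every iteration of the `while` loop is a status read that accomplishes nothing, which is exactly the inefficiency noted above.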

B. Interrupts

Interrupts allow devices to signal the CPU when they need attention, avoiding the need for polling. When an
interrupt occurs, the CPU pauses its current task, handles the interrupt, and then resumes.

1. Maskable Interrupts (MI):


-Definition: Interrupts that can be enabled or disabled (masked) by the CPU based on the priority of the current task.

-Use Case: Non-critical tasks like keyboard input or printer status.

2. Non-Maskable Interrupts (NMI):

-Definition: High-priority interrupts that cannot be disabled. They are used for critical events that must be addressed
immediately.

-Use Case: Hardware failures like power supply issues or system crashes.

Advantages of Interrupts: -Efficient use of the CPU.

-Ensures immediate attention to important events.

Disadvantages of Interrupts: -Adds complexity to the system design.

-Handling interrupts involves context switching, which can slightly slow performance.
C. Programmed I/O (PIO)

• Definition: In Programmed I/O, the CPU directly controls data transfer between the device and memory. The CPU
issues commands and waits for the device to complete each operation.

How It Works:

1. CPU sends a command to the device.


2. CPU waits or continuously checks the device’s status (polling) until the operation completes.
3. Data is transferred, and the CPU moves to the next task.

Example: Reading a block of data from a disk directly by the CPU.

Advantages: -Simple to implement.

-Works well for low-speed devices.

Disadvantages: -CPU is heavily involved, wasting time during the data transfer.

-Not suitable for high-speed data transfer.

D. Interrupt-Driven I/O

• Definition: In interrupt-driven I/O, the device notifies the CPU via an interrupt when it is ready for data transfer,
eliminating the need for polling.

How It Works:

1. CPU initiates the I/O operation and continues executing other tasks.
2. The device sends an interrupt to the CPU when it is ready.
3. The CPU pauses its current task, processes the interrupt, and resumes the task.

Example: A network card generates an interrupt when a packet arrives, signaling the CPU to process the packet.

Advantages: -More efficient than PIO, as the CPU isn’t tied up waiting.

-Better for multitasking systems.


Disadvantages: -Context switching during interrupts can slightly degrade performance.

-Complex implementation compared to PIO.
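The pause-handle-resume sequence can be sketched with a toy CPU that dispatches registered handlers. The `CPU` class, the "NET" interrupt line, and the log entries are all invented for illustration:

```python
class CPU:
    """Toy CPU that runs its own work but services interrupts when raised."""
    def __init__(self):
        self.handlers = {}       # interrupt line -> handler routine
        self.log = []

    def register(self, irq, handler):
        self.handlers[irq] = handler

    def raise_interrupt(self, irq, payload):
        self.log.append("paused")            # context switch into the handler
        self.handlers[irq](payload)
        self.log.append("resumed")           # return to the interrupted task

cpu = CPU()
cpu.register("NET", lambda pkt: cpu.log.append("handled " + pkt))
cpu.log.append("working")                    # CPU busy with its own task
cpu.raise_interrupt("NET", "packet-42")      # the network card signals arrival
print(cpu.log)   # → ['working', 'paused', 'handled packet-42', 'resumed']
```

The key difference from polling is that the CPU was doing its own work ("working") until the device itself triggered the handler.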

E. Direct Memory Access (DMA)

-Definition: Direct Memory Access allows a device to transfer data directly to or from memory without involving the
CPU for every byte of data.

-How It Works:

1. The CPU initiates the DMA transfer by providing the necessary parameters (source, destination, and size).
2. The DMA controller handles the transfer, freeing the CPU for other tasks.
3. Once the transfer completes, the DMA controller notifies the CPU using an interrupt.

Example: Transferring a large file from a hard disk to memory without CPU intervention.

Advantages: -High-speed data transfer.

-Reduces CPU overhead, allowing it to handle other tasks.

Disadvantages: -Adds complexity to the system.

-Requires a DMA controller, which is additional hardware.
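The three steps above can be sketched in miniature: the "CPU" only supplies the parameters, the transfer happens as one bulk copy, and completion is signaled through a callback standing in for the interrupt. `dma_transfer` and its parameter names are illustrative:

```python
def dma_transfer(src, dst, dst_offset, size, on_complete):
    """Toy DMA controller: one bulk copy with no per-byte CPU involvement,
    then completion signaled through an interrupt-style callback."""
    dst[dst_offset:dst_offset + size] = src[:size]   # step 2: the transfer itself
    on_complete()                                     # step 3: "interrupt" the CPU

memory = bytearray(16)
disk_block = bytes(range(8))     # pretend this arrived from a disk controller
events = []
# Step 1: the CPU only supplies source, destination and size.
dma_transfer(disk_block, memory, dst_offset=4, size=8,
             on_complete=lambda: events.append("dma-done"))
print(memory[4:12], events)
```

Contrast this with programmed I/O, where the CPU would move each byte itself; here the loop body is a single slice assignment done "by the controller".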

6. Discuss the difference between the following.


A. Synchronous and Asynchronous I/O
B. Port mapped and Memory mapped I/O

A. Synchronous vs. Asynchronous I/O:

• Synchronous I/O: CPU waits for the I/O to complete before continuing.
• Asynchronous I/O: The CPU can perform other tasks while waiting for the I/O operation to finish.
B. Port-Mapped vs. Memory-Mapped I/O:

• Port-Mapped I/O: Devices are assigned a separate address space, distinct from the system's memory address space.
• Memory-Mapped I/O: Devices share the system’s memory space, making it accessible like regular memory.

Key Differences:

Feature              | Port-Mapped I/O                   | Memory-Mapped I/O
---------------------|-----------------------------------|------------------------------
Address Space        | Separate from memory space        | Shared with memory space
Programming          | Slightly more complex             | Simpler
Hardware Complexity  | Lower                             | Higher
Efficiency           | Slower due to extra instructions  | Faster for unified access
Address Space Size   | Limited (e.g., 256 or 64K ports)  | Larger (based on memory size)

7. Explain the role of Clock and Timers in operating system operations.


Clocks and timers in an operating system help manage time-related functions.
Task Scheduling: Ensure processes get equal CPU time through time slicing.
Time Tracking: Maintain system time and timestamps for logs.
Process Coordination: Synchronize events, delays, and periodic tasks.
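Timer-driven time slicing can be sketched as a round-robin loop. `round_robin`, the quantum value, and the job names are invented for illustration; each dequeue models the timer interrupt firing and the scheduler switching tasks:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Timer-driven time slicing: each job runs `quantum` ticks, then the
    timer interrupt hands the CPU to the next job in the ready queue."""
    queue = deque(jobs.items())          # (name, remaining ticks)
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)               # this job gets the CPU for one slice
        remaining -= quantum
        if remaining > 0:                # unfinished: back to the end of the queue
            queue.append((name, remaining))
    return order

print(round_robin({"P1": 3, "P2": 2, "P3": 1}, quantum=2))
# → ['P1', 'P2', 'P3', 'P1']
```

P1 needs two slices because its 3 ticks exceed the 2-tick quantum, so it reappears at the end once every other job has had a turn.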
8. Accessing files from disk is often slower than accessing data in memory. One mechanism to
enhance file access performance is Caching. Explain the concept of Caching, Cache hit and
Cache miss.

• Caching: Caching is a technique used to store frequently accessed data in a faster storage area (the cache), so it can be
retrieved quickly without accessing slower storage like disks.
o Cache Hit: Data is found in the cache, speeding up access.
o Cache Miss: Data is not in the cache, requiring slower retrieval from disk.
• Benefit: Significantly improves performance by reducing access times.
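The hit/miss distinction can be sketched with a small read cache. `FileCache` is a hypothetical class; a dict stands in for the slow disk:

```python
class FileCache:
    """Read cache sketch: hits are served from memory, misses go to 'disk'."""
    def __init__(self, disk):
        self.disk = disk             # slow backing store (a dict stands in here)
        self.cache = {}              # fast in-memory copies
        self.hits = self.misses = 0

    def read(self, path):
        if path in self.cache:
            self.hits += 1           # cache hit: no disk access needed
            return self.cache[path]
        self.misses += 1             # cache miss: fetch from disk, keep a copy
        data = self.disk[path]
        self.cache[path] = data
        return data

fc = FileCache({"a.txt": b"alpha", "b.txt": b"beta"})
fc.read("a.txt"); fc.read("a.txt"); fc.read("b.txt")
print(fc.hits, fc.misses)   # → 1 2  (the second a.txt read is the only hit)
```

The first read of each file is necessarily a miss; only repeated reads benefit, which is why caching pays off for frequently accessed data.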
