CH 16.1 Purpose of OS
5.1 Operating System
• Defining an Operating System:
⚬ Acts as an intermediary between users and the computer hardware.
⚬ Example: Think of the OS as a translator between you (the user) and a foreign
machine, making sure your commands are understood and executed.
• User Interface:
⚬ Enables user interaction with applications and software.
⚬ Example: The desktop environment where you can click on icons to open programs.
⚬ Simplifies the complexity of hardware operations for users.
⚬ Example: You don't need to know how to write printer drivers to print a document;
the OS handles this for you.
• Resource Management Details:
⚬ Optimizes and monitors resource utilization to prevent bottlenecks.
⚬ Manages Input/Output operations seamlessly.
⚬ Example: Prioritizing processes so your video call application receives enough CPU
power to function smoothly during a call.
Direct Memory Access (DMA)
• Conceptual Overview:
⚬ DMA is a feature that allows certain hardware subsystems to access main
memory independently of the CPU, enhancing system efficiency.
• Standard Data Transfer Problem:
⚬ Normally, a device sends data to the CPU to be written to memory, which
requires CPU cycles and can slow down processing.
⚬ Example: A USB drive needs to transfer data to the computer; without DMA, the
CPU would be directly involved, managing every bit of data, which is inefficient.
• Cycles in DMA Transfer:
⚬ With DMA, each transfer needs only two bus cycles, one to read the data and one to
write it to memory, rather than the CPU managing every step of the process (see the
sketch below).
⚬ Example: Think of it as the difference between an assembly line (DMA) and a
single craftsman (CPU) doing all the work.
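As a rough illustration of the difference, the toy Python sketch below simulates both approaches. It is not real driver or hardware code; the buffer contents, sizes, and step counts are invented for the example.

```python
# Minimal conceptual sketch (illustrative only): contrasting a CPU-managed
# copy with a DMA-style block transfer. The device and main memory are
# simulated with plain Python lists.

DEVICE_BUFFER = list(range(1024))   # pretend this is data arriving from a USB drive
main_memory = [0] * 1024

def cpu_managed_transfer(device, memory):
    """The CPU reads each word from the device and writes it to memory,
    spending two CPU-involved steps per word."""
    cpu_steps = 0
    for i, word in enumerate(device):
        value = word        # CPU reads from the device
        memory[i] = value   # CPU writes to memory
        cpu_steps += 2
    return cpu_steps

def dma_transfer(device, memory):
    """The DMA controller copies the whole block straight into memory.
    The CPU only sets up the transfer and handles one interrupt at the end."""
    memory[:len(device)] = device   # block copy performed without the CPU
    cpu_steps = 2                   # set up the transfer + service the interrupt
    return cpu_steps

print("CPU-managed:", cpu_managed_transfer(DEVICE_BUFFER, main_memory), "CPU steps")
print("With DMA:   ", dma_transfer(DEVICE_BUFFER, main_memory), "CPU steps")
```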
Purpose of Scheduling
• Scheduling algorithms are essential for multitasking, allowing multiple tasks to appear to run
simultaneously by sharing the processor's time.
• To ensure the processor's time is distributed equitably among all processes.
• To manage the allocation of peripherals (like printers and scanners) fairly among processes.
• To manage memory resources fairly, ensuring that no single process monopolizes the system
memory.
• To ensure that higher-priority tasks are given precedence and are executed sooner.
• To guarantee that all processes have the opportunity to finish in a reasonable amount of time.
• To minimize the amount of time users must wait for their tasks to be completed.
• To keep the CPU busy at all times, ensuring that no CPU cycles are wasted.
• To service the largest possible number of jobs in a given amount of time, maximizing
throughput.
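These goals are usually judged with measurable quantities such as waiting time, turnaround time, and throughput. The short Python sketch below is illustrative only: the process names, arrival times, and burst times are invented, and it assumes the jobs run to completion in arrival order.

```python
# Minimal sketch (illustrative only): how scheduling goals map to metrics.
# The process data below is invented for the example.

processes = [
    # (name, arrival_time, burst_time) -- times in arbitrary units
    ("P1", 0, 5),
    ("P2", 1, 3),
    ("P3", 2, 8),
]

time = 0
waiting_total = 0
turnaround_total = 0

# Run the processes in arrival order, each to completion.
for name, arrival, burst in processes:
    start = max(time, arrival)           # CPU may be idle until the process arrives
    finish = start + burst
    waiting_total += start - arrival     # time spent in the ready queue
    turnaround_total += finish - arrival # time from arrival to completion
    time = finish

n = len(processes)
print("Average waiting time:   ", waiting_total / n)
print("Average turnaround time:", turnaround_total / n)
print("Throughput:             ", n / time, "jobs per time unit")
```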
States
Ready
• The process is loaded in the queue, awaiting CPU allocation.
• It's not currently executing but is queued for processor time.
Running
• Dispatched from the ready queue by the scheduler.
• The process is actively executing on the CPU.
• It uses the allocated CPU time slice given by the operating system.
Blocked
• Awaiting an event (like I/O completion) to proceed with execution.
• It's suspended from execution until the event is resolved.
• Example: a print job waiting while the printer is out of ink.
Transition
Ready to Running
• Triggered when a process gets CPU time, either because it's the next in line in the ready queue or a
higher priority process preempts the current one.
• The operating system shifts control of the CPU to this process.
Running to Ready
• Occurs when a process's allocated time slice is exhausted, and it's moved back to the
ready queue for the next opportunity to execute.
Running to Blocked
• Happens when the running process initiates an I/O operation, pausing its execution until the
operation is complete.
Transition
Blocked to Ready/Running
• A blocked process cannot transition directly to running; it must first enter the ready state when its
blocking event (like I/O) concludes.
• The operating system then decides its subsequent allocation to the CPU based on the current
scheduling policy.
Ready to Blocked
• This direct transition is not feasible because initiating an I/O operation requires the
process to be in the running state initially. A ready process isn't actively executing to
initiate such operations.
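The rules above can be summarised as a small state machine. The Python sketch below is a simplified model, not an OS implementation; the Process class and the set of allowed transitions are assumptions that simply encode the transitions described in these notes.

```python
# Minimal sketch (illustrative only): a process-state machine that permits
# only the transitions described above and rejects everything else.

from enum import Enum, auto

class State(Enum):
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()

# Allowed transitions: Ready->Running, Running->Ready,
# Running->Blocked, Blocked->Ready. Nothing else.
ALLOWED = {
    (State.READY, State.RUNNING),    # dispatched by the low-level scheduler
    (State.RUNNING, State.READY),    # time slice exhausted
    (State.RUNNING, State.BLOCKED),  # process starts an I/O operation
    (State.BLOCKED, State.READY),    # I/O completes
}

class Process:
    def __init__(self, name):
        self.name = name
        self.state = State.READY

    def move_to(self, new_state):
        if (self.state, new_state) not in ALLOWED:
            raise ValueError(f"{self.name}: illegal transition "
                             f"{self.state.name} -> {new_state.name}")
        self.state = new_state

p = Process("P1")
p.move_to(State.RUNNING)    # Ready -> Running: given CPU time
p.move_to(State.BLOCKED)    # Running -> Blocked: waiting on I/O
p.move_to(State.READY)      # Blocked -> Ready: I/O finished
# p.move_to(State.BLOCKED)  # would raise: Ready -> Blocked is not allowed
```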
The Scheduler
High-Level Scheduler
• Its role is to select which processes are loaded into the ready queue from the backing store, essentially
deciding which processes are brought into main memory to await execution.
Low-Level Scheduler
• Determines which of the ready processes should be allocated the CPU. The decision is
based on a process's position in the queue or its priority level, influencing which process
is transitioned to the running state.
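One simplified way to see the division of labour between the two levels is the Python sketch below. It is illustrative only: the job names, the ready-queue limit, and the selection rules are invented assumptions, with the high-level scheduler admitting jobs into a bounded ready queue and the low-level scheduler dispatching from its front.

```python
# Minimal sketch (illustrative only): a two-level scheduler.

from collections import deque

backing_store = deque(["JobA", "JobB", "JobC", "JobD", "JobE"])
ready_queue = deque()
MAX_READY = 3   # how many processes fit in main memory at once

def high_level_scheduler():
    """Admit jobs from the backing store into the ready queue
    until main memory (the ready queue) is full."""
    while backing_store and len(ready_queue) < MAX_READY:
        ready_queue.append(backing_store.popleft())

def low_level_scheduler():
    """Pick the next ready process to be given the CPU
    (here simply the front of the queue)."""
    return ready_queue.popleft() if ready_queue else None

high_level_scheduler()
print("Ready queue:", list(ready_queue))       # JobA, JobB, JobC admitted
print("Dispatched: ", low_level_scheduler())   # JobA gets the CPU
```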
Scheduling Algorithms
First-Come, First Served
• Processes are scheduled in the order they arrive in the ready queue, operating much like a queue in a
supermarket.
Round Robin
• Allocates CPU time in fixed time slots, rotating through the processes in the ready queue. This
method is similar to taking turns in a group discussion, ensuring everyone has a chance to speak.
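To contrast the two policies, the sketch below runs the same set of jobs under each. The process names, burst times, and time slice are invented for the example, and all jobs are assumed to arrive at time 0.

```python
# Minimal sketch (illustrative only): FCFS vs Round Robin on the same jobs.

from collections import deque

jobs = [("P1", 10), ("P2", 4), ("P3", 6)]   # (name, CPU burst)

def fcfs(jobs):
    """Run each job to completion in arrival order."""
    time, finish = 0, {}
    for name, burst in jobs:
        time += burst
        finish[name] = time
    return finish

def round_robin(jobs, quantum=2):
    """Give each job a fixed time slice in turn until all finish."""
    queue = deque(jobs)
    time, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining))   # back of the ready queue
        else:
            finish[name] = time               # job completed
    return finish

print("FCFS completion times:       ", fcfs(jobs))
print("Round robin completion times:", round_robin(jobs, quantum=2))
```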