
CH 16.1 Purpose of OS

The document provides an overview of system software, focusing on the role of operating systems, user interfaces, resource management, and multitasking. It explains concepts like Direct Memory Access (DMA), the kernel's function, and various scheduling algorithms essential for efficient process management. Additionally, it discusses the states of processes and the transitions between them, highlighting the importance of scheduling in optimizing CPU utilization.


System Software

5.1 Operating System
• Defining an Operating System:
⚬ Acts as an intermediary between users and the computer hardware.
⚬ Example: Consider the OS like a translator between you (the user) and a foreign
machine, making sure your commands are understood and executed.
• User Interface:
⚬ Enables user interaction with applications and software.
⚬ Example: The desktop environment where you can click on icons to open programs.
⚬ Simplifies the complexity of hardware operations for users.
⚬ Example: You don't need to know how to write printer drivers to print a document;
the OS handles this for you.
• Resource Management Details:
⚬ Optimizes and monitors resource utilization to prevent bottlenecks.
⚬ Manages Input/Output operations seamlessly.
⚬ Example: Prioritizing processes so your video call application receives enough CPU
power to function smoothly during a call.
Direct Memory Access (DMA)
• Conceptual Overview:
⚬ DMA is a feature that allows certain hardware subsystems to access main
memory independently of the CPU, enhancing system efficiency.
• Standard Data Transfer Problem:
⚬ Normally, a device sends data to the CPU to be written to memory, which
requires CPU cycles and can slow down processing.
⚬ Example: A USB drive needs to transfer data to the computer; without DMA, the
CPU would be directly involved, managing every bit of data, which is inefficient.
• Cycles in DMA Transfer:
⚬ With DMA, only 2 cycles are needed: one to read and one to write, compared to
the CPU managing every step of the process.
⚬ Example: Think of it as the difference between an assembly line (DMA) and a
single craftsman (CPU) doing all the work.
Direct Memory Access (DMA)

• DMA Operational Steps:


⚬ DMA initiates the data transfer.
⚬ CPU is free to carry out other tasks while DMA manages the data transfer.
⚬ After the transfer completes, DMA sends an interrupt signal to the CPU to inform it of the
completion.
⚬ Example: Similar to receiving a text message that your package has been delivered while
you're working on something else.
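The hand-off described above can be sketched in Python with two threads: one plays the DMA controller copying data into memory, while the main thread (the "CPU") keeps doing other work until the completion event fires. This is a minimal illustration, not real hardware behavior; the names `dma_transfer`, `memory`, and `device_buffer` are made up for the sketch.

```python
import threading
import time

memory = []                        # stands in for main memory
device_buffer = list(range(8))     # data waiting on the device
transfer_done = threading.Event()  # plays the role of the completion interrupt

def dma_transfer():
    # The "DMA controller" copies data without involving the CPU thread.
    for word in device_buffer:
        memory.append(word)        # one read cycle + one write cycle per word
        time.sleep(0.002)          # simulate per-word transfer time
    transfer_done.set()            # "interrupt": tell the CPU the transfer is done

cpu_work_done = 0
threading.Thread(target=dma_transfer).start()

while not transfer_done.is_set():
    cpu_work_done += 1             # CPU is free to do other work meanwhile
    time.sleep(0.001)

print(memory)                      # all 8 words arrived without the CPU copying them
```

The `Event` mirrors the interrupt in the slide: the CPU does not poll the device itself, it just reacts when signalled.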
Kernel

• It is a crucial part of the Operating System.


• Responsible for communication between hardware, software, and memory.
• Manages processes, devices, and memory.
• Application Access Through Kernel:
⚬ Applications, such as a camera app, must request access to hardware
through the kernel.
⚬ The kernel evaluates and grants permission for hardware access.
Kernel

• Abstraction of Hardware Complexity:


⚬ The operating system, via the kernel, abstracts hardware complexities,
providing a simpler interface for users. (Example: Users interact with
high-level camera controls instead of direct hardware commands.)
• Use of Device Drivers:
⚬ Utilizes device drivers to manage hardware operations, ensuring smooth
performance and compatibility.
Multitasking

• Enables a user to perform more than one computer task at a time.


• Processor Capability:
⚬ Although the processor can do only one task at a time, multitasking is
achieved by rapidly switching the CPU among processes, which creates
the illusion of simultaneous execution.
• Scheduling for Multitasking:
⚬ Scheduling algorithms are used to decide which processes should be
carried out and when, to ensure an efficient multitasking environment
without process conflict (e.g., avoiding deadlocks).
• Resource Utilization:
⚬ Multitasking optimizes the use of computer resources by monitoring each
state of the process and managing them effectively.
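The rapid-switching idea above can be sketched with Python generators: each generator stands in for a process, and a simple loop plays the role of the OS, giving each one a single step (a "time slice") per turn. The names `process`, `ready_queue`, and `trace` are illustrative only.

```python
def process(name, steps):
    # Each yield is one "time slice" worth of work for this process.
    for i in range(steps):
        yield f"{name} step {i}"

ready_queue = [process("A", 3), process("B", 3)]
trace = []

while ready_queue:
    proc = ready_queue.pop(0)      # take the next process from the ready queue
    try:
        trace.append(next(proc))   # run it for one time slice
        ready_queue.append(proc)   # then send it back to the ready queue
    except StopIteration:
        pass                       # process has finished; drop it

print(trace)                       # steps of A and B are interleaved
```

Even though only one step ever executes at a time, the interleaved trace is why the two "processes" appear to run simultaneously.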
Multitasking
• Kernel's Role in Execution:
⚬ Overlaps the execution of each process based on scheduling algorithms, making it
appear as if many processes are executed at the same time.
• Types of Scheduling:
⚬ Preemptive Scheduling:
■ If a higher priority process arrives, the CPU can be reallocated to it, interrupting
the current process if necessary. This allows for responsive systems that can
handle high-priority tasks quickly (e.g., system interrupts for urgent computing
tasks).
⚬ Non-Preemptive Scheduling:
■ A process retains resources until it has completed, and cannot be interrupted by
other processes, leading to a more predictable, but less flexible system (e.g.,
batch processing tasks).
Multitasking
• Resource Allocation:
⚬ In preemptive scheduling, resources are allocated to a process for a limited time, allowing
for flexible and dynamic resource management.
⚬ In non-preemptive scheduling, once allocated, resources are not taken away until the
process voluntarily relinquishes them or completes its execution.
• Flexibility vs Rigidity:
⚬ Preemptive scheduling is a more flexible form of scheduling, accommodating urgent tasks
efficiently.
⚬ Non-preemptive scheduling is a more rigid form of scheduling, suitable for tasks with a
predictable processing time.
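The flexibility difference can be made concrete with a small simulation, assuming one long job and one short urgent job: under non-preemptive FCFS the urgent job must wait for the long job to finish, while a preemptive policy (shortest-remaining-time-first, used here as the example preemptive scheduler) lets it jump in. Job tuples and function names are made up for this sketch.

```python
def non_preemptive_fcfs(jobs):
    """Run each (name, arrival, burst) job to completion in arrival order."""
    t, finish = 0, {}
    for name, arrival, burst in sorted(jobs, key=lambda j: j[1]):
        t = max(t, arrival) + burst   # job holds the CPU until it completes
        finish[name] = t
    return finish

def preemptive_srtf(jobs):
    """Each time unit, run the arrived job with the least remaining time."""
    remaining = {name: burst for name, _, burst in jobs}
    arrival = {name: a for name, a, _ in jobs}
    t, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:
            t += 1
            continue
        n = min(ready, key=lambda n: remaining[n])  # may preempt the current job
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            del remaining[n]
            finish[n] = t
    return finish

jobs = [("long", 0, 10), ("urgent", 2, 1)]
print(non_preemptive_fcfs(jobs))  # urgent finishes at t=11, after the long job
print(preemptive_srtf(jobs))      # urgent preempts and finishes at t=3
```

The long job finishes at the same time either way; preemption only changes how quickly the urgent job is serviced, which is exactly the responsiveness trade-off described above.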
Necessity of Scheduling Algorithms

• Scheduling algorithms are essential for multitasking, allowing multiple tasks to appear to
run simultaneously.
• To ensure the processor's time is distributed equitably among all processes.
• To manage the allocation of peripherals (like printers and scanners) fairly among processes.
• To manage memory resources fairly, ensuring that no single process monopolizes the system
memory.
• To ensure that higher-priority tasks are given precedence and are executed sooner.
• To guarantee that all processes have the opportunity to finish in a reasonable amount of time.
• To minimize the amount of time users must wait for their tasks to be completed.
• To keep the CPU busy at all times, ensuring that no CPU cycles are wasted.
• To service the largest possible number of jobs in a given amount of time, maximizing
throughput.
States
Ready
• The process is loaded in the queue, awaiting CPU allocation.
• It's not currently executing but is queued for processor time.

Running
• The process is dispatched from the ready queue.
• The process is actively executing on the CPU.
• It uses the allocated CPU time slice given by the operating system.

Blocked
• Awaiting an event (like I/O completion) to proceed with execution.
• It's suspended from execution until the event is resolved.
• Example: a print job waiting while the printer is out of ink.
Transition
Ready to Running
• Triggered when a process gets CPU time, either because it's the next in line in the ready queue or a
higher priority process preempts the current one.
• The operating system shifts control of the CPU to this process.

Running to Ready
• Occurs when a process's allocated time slice is exhausted, and it's moved back to the
ready queue for the next opportunity to execute.

Ready/Running to Blocked
• Happens when the process initiates an I/O operation, pausing its execution until the
operation is complete.
Transition

Blocked to Ready/Running
• A blocked process cannot transition directly to running; it must first enter the ready state when its
blocking event (like I/O) concludes.
• The operating system then decides its subsequent allocation to the CPU based on the current
scheduling policy.

Ready to Blocked
• This direct transition is not feasible because initiating an I/O operation requires the
process to be in the running state initially. A ready process isn't actively executing to
initiate such operations.
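The legal and illegal transitions above can be captured as a small lookup table. This is a teaching sketch, not a real kernel API; the state names follow the slides and `ALLOWED` and `transition` are invented for the example.

```python
# Allowed moves between process states, as described in the slides.
ALLOWED = {
    ("ready", "running"),    # scheduler dispatches the process
    ("running", "ready"),    # time slice exhausted
    ("running", "blocked"),  # process initiates an I/O operation
    ("blocked", "ready"),    # I/O completes; must re-enter the ready queue
}

def transition(state, new_state):
    if (state, new_state) not in ALLOWED:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = "ready"
state = transition(state, "running")  # dispatched by the low-level scheduler
state = transition(state, "blocked")  # waits for the printer
state = transition(state, "ready")    # I/O done; back in the ready queue
```

Note that `("blocked", "running")` and `("ready", "blocked")` are deliberately absent from `ALLOWED`, matching the two transitions the slides rule out.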
The Scheduler

High-Level Scheduler
• Its role is to select which processes are loaded into the ready queue from the backing store, essentially
deciding which processes are brought into main memory to await execution.

Low-Level Scheduler
• Determines which of the ready processes should be allocated the CPU. The decision is
based on a process's position in the queue or its priority level, influencing which process
is transitioned to the running state.
Scheduling Algorithms
First-Come, First Served
• Processes are scheduled in the order they arrive in the ready queue, operating much like a queue in a
supermarket.

Shortest Job First


• Prefers processes with the shortest estimated running time, focusing on minimizing waiting time
similar to a fast track service for shorter tasks.

Shortest Remaining Time First


• Dynamically selects the process with the shortest estimated time to completion from the current
pool, akin to constantly re-evaluating the quickest path in navigation apps.

Round Robin
• Allocates CPU time in fixed time slots, rotating through the processes in the ready queue. This
method is similar to taking turns in a group discussion, ensuring everyone has a chance to speak.
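The effect of ordering on waiting time can be shown with a few lines of Python, assuming three jobs that all arrive at time 0. The burst times are made up for illustration; the point is only that running the same jobs shortest-first lowers the average wait.

```python
def avg_waiting_time(bursts):
    """Jobs run back-to-back in the given order; each job waits for the
    total burst time of everything that ran before it."""
    wait, elapsed = 0, 0
    for burst in bursts:
        wait += elapsed
        elapsed += burst
    return wait / len(bursts)

bursts = [6, 8, 3]                      # arrival order -> First-Come, First Served
fcfs = avg_waiting_time(bursts)         # waits 0 + 6 + 14 = 20 -> about 6.67
sjf = avg_waiting_time(sorted(bursts))  # waits 0 + 3 + 9 = 12 -> 4.0
print(fcfs, sjf)                        # SJF gives the lower average wait
```

Shortest Remaining Time First and Round Robin add preemption on top of this idea, trading a little bookkeeping for responsiveness.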
Transition to Uni
