Assignment 1 of OS

The document covers various aspects of system design and architecture, including the roles of multiple device controllers, symmetric vs asymmetric multiprocessing, and clustering types. It also discusses memory management techniques like virtual memory and DMA, as well as the importance of interrupts in operating systems. Key differences between character and block devices, along with the significance of interrupt vectors for efficient handling, are also highlighted.


Assignment # 1

Answer all questions in a clear and to-the-point manner. Avoid over-explaining.

System Design and Architecture

1. What is the purpose of using multiple device controllers in a computer system?

The purpose of using multiple device controllers in a computer system is to:

 Manage different I/O devices independently (e.g., keyboard, disk, printer), allowing
each device to operate concurrently.

 Improve system performance by enabling parallel I/O operations without waiting for
one device to complete.

 Reduce CPU overhead by offloading device-specific operations to dedicated controllers.

 Enhance modularity and fault isolation, so issues with one device/controller don’t affect
others.

In short, they allow efficient and organized communication between the operating system and
various hardware devices.

2. Explain the difference between symmetric and asymmetric multiprocessing.

Symmetric Multiprocessing (SMP):

 All processors are identical and share a single memory and I/O system.

 Each processor runs the operating system and performs tasks independently.

 Workload is evenly distributed among all processors.

 Example: Modern multi-core CPUs.

Asymmetric Multiprocessing (AMP):

 Processors are assigned specific roles—typically one main processor controls the system,
others handle specific tasks.

 Only the master processor runs the OS; the others may not have full OS access.

 Simpler design but less flexible and efficient.

 Example: Some embedded systems.

Key Difference:
In SMP, all processors are peers; in AMP, one processor is in control and others are
subordinates.

3. Explain the difference between symmetric and asymmetric clustering.

Clustering is the technique of connecting multiple computers (nodes) to work together as a
single system to improve performance, reliability, and scalability.

Key Points:

 Purpose: Ensures high availability, load balancing, and fault tolerance.

 How it works: If one node fails, others can take over its tasks (failover).

 Types:

o Load-balancing cluster: Distributes workloads across multiple active nodes.

o Failover cluster: Standby nodes take over if an active node fails.

In short: Clustering allows multiple systems to function cooperatively for better efficiency and
system uptime.

Symmetric Clustering:

 All nodes (servers) in the cluster are active and share the workload.

 Each node runs applications and monitors other nodes.

 Provides high availability and load balancing.

 Example: Web server clusters handling user requests in parallel.

Asymmetric Clustering:

 One node is active while the other(s) are in standby mode.

 Standby node(s) take over only if the active node fails.

 Simpler but may underutilize resources.

 Example: Failover systems for critical applications.


Key Difference:
In symmetric clustering, all nodes actively handle tasks. In asymmetric clustering, only one is
active while others wait to take over in case of failure.

4: What is the difference between a core and a processor?

Processor (CPU):

 The main chip in a computer responsible for executing instructions.

 May contain one or multiple cores.

Core:

 An individual processing unit within a processor.

 Each core can execute its own thread independently.

Key Difference:

 A processor is the physical chip, while a core is one of the execution units inside it.
 For example, a quad-core processor has four cores, allowing it to handle four tasks
simultaneously.
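
The core count is visible from user space; a minimal Python sketch (note this reports logical cores, which may include SMT/hyper-threads rather than only physical cores):

```python
import os

# Number of logical cores the OS exposes to the scheduler.
logical_cores = os.cpu_count()
print(f"This machine exposes {logical_cores} logical core(s)")
```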

5: In a personal computer, which would be more suitable: a dual-core processor or a
dual-processor setup? Justify your answer.

A dual-core processor is more suitable for a personal computer.

Justification:

 Cost-effective: Cheaper than installing two separate processors.

 Lower power consumption: Consumes less energy and generates less heat.

 Efficient for everyday tasks: Easily handles multitasking, web browsing, office work, and
media playback.

 Better compatibility: Most personal computers and operating systems are optimized for
multi-core processors, not multi-processor setups.

In summary: A dual-core processor offers the right balance of performance, cost, and energy
efficiency for personal computer use.
6: If designing an operating system for an embedded system, what policies would you
implement for the following aspects:

a) Scheduling

b) Memory Management

c) Power Management

d) Reliability and Security

e) I/O Management

If designing an OS for an embedded system, here are the recommended policies:

a) Scheduling:

 Real-time or priority-based scheduling.

 Ensure timely response for critical tasks (e.g., in automotive or medical systems).

 Use algorithms like Rate Monotonic (RM) or Earliest Deadline First (EDF) for real-time
systems.

b) Memory Management:

 Static memory allocation to avoid fragmentation and unpredictability.

 Minimal use of dynamic memory; use fixed-size memory pools.

 No virtual memory (typically) to reduce overhead and latency.

c) Power Management:

 Implement sleep modes and CPU frequency scaling.

 Use event-driven models to keep the processor idle when possible.

 Reduce peripheral activity during idle states.

d) Reliability and Security:

 Use watchdog timers to reset the system on faults.

 Memory protection to isolate critical code.

 Use secure boot and code signing to prevent unauthorized code execution.

 Harden the system against physical tampering and buffer overflows.

e) I/O Management:

 Use interrupt-driven I/O for responsiveness and power efficiency.

 Implement buffering for smooth data flow.

 Ensure device-specific drivers are lightweight and optimized.

These policies ensure the embedded system remains efficient, predictable, and secure, which is
essential for its specialized functions.
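
The fixed-size memory pool policy from (b) can be sketched in a few lines. This is a toy illustration, not a real RTOS allocator: all memory is reserved up front, so allocation is O(1), failure is deterministic, and fragmentation cannot occur.

```python
class FixedPool:
    """Toy fixed-size block pool (illustrative only)."""
    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        self.free = list(range(num_blocks))          # indices of free blocks
        self.storage = bytearray(block_size * num_blocks)

    def alloc(self):
        if not self.free:
            return None                              # exhausted: fail fast, never fragment
        return self.free.pop()

    def release(self, idx):
        self.free.append(idx)

pool = FixedPool(block_size=64, num_blocks=4)
blocks = [pool.alloc() for _ in range(4)]
assert pool.alloc() is None          # deterministic out-of-memory, no fragmentation
pool.release(blocks[0])
assert pool.alloc() == blocks[0]     # freed block is immediately reusable
```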

Memory Management and Controllers

7. What is the role of device drivers in the operating system?

Device drivers act as a bridge between the operating system and hardware devices.

Role in the OS:

 Translate OS commands into device-specific instructions.

 Allow the OS to communicate with hardware (e.g., keyboard, printer, disk).

 Abstract hardware differences, so applications don’t need to know hardware details.

 Handle interrupts, I/O operations, and error reporting for the devices.

In short: Device drivers enable the OS to control and interact with hardware in a standardized
and efficient way.
8: Explain what NUMA architecture is and how it affects OS memory management.

NUMA (Non-Uniform Memory Access) is a computer memory design used in multiprocessor
systems where memory access time depends on the memory location relative to a processor.

Key Features:

 Each processor has its own local memory.

 A processor can access local memory faster than remote memory (memory attached to
another processor).

Effect on OS Memory Management:

 The OS must be NUMA-aware to optimize performance.

 It should allocate memory close to the processor that will use it to reduce latency.
 Helps in improving scalability and performance in large multi-core systems.
 Poor memory placement can lead to bottlenecks and slower performance.

In summary: NUMA requires smarter memory management by the OS to maintain high
performance in multi-CPU systems.

9: What is the role of virtual memory, and how does it extend the capabilities of RAM?

Virtual memory is a memory management technique that gives each process the illusion of
having its own large, continuous block of memory.

Role of Virtual Memory:

 Allows programs to use more memory than physically available RAM by using disk
space (paging or swapping).
 Provides process isolation, enhancing security and stability.
 Simplifies memory allocation and management for applications.

How It Extends RAM:

 Uses secondary storage (like a hard drive or SSD) to simulate extra memory.
 Swaps data between RAM and disk, making it seem like there's more memory than
actually installed.

In short: Virtual memory extends the system’s usable memory beyond physical RAM, enabling
multitasking and efficient memory use.
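
Demand paging can be glimpsed from user space: a large anonymous mapping reserves virtual address space without immediately consuming that much physical RAM. A Python sketch:

```python
import mmap

# Reserve 256 MiB of virtual address space; physical pages are only
# committed when a page is first touched (demand paging).
SIZE = 256 * 1024 * 1024
region = mmap.mmap(-1, SIZE)     # anonymous mapping, no file behind it
region[0] = 1                    # touching one byte faults in a single page
region_len = len(region)
region.close()
```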
10: What is DMA? Why is it needed?

DMA (Direct Memory Access)

DMA is a system feature that allows peripherals (like disks, GPUs, or network cards) to
transfer data directly to/from memory without CPU intervention, improving speed and
efficiency.

Why is DMA Needed?

1. Reduces CPU load – Frees the CPU for other tasks instead of managing data transfers.
2. Faster transfers – DMA controllers handle bulk data more efficiently than the CPU.
3. Better performance – Critical for high-speed I/O (e.g., SSDs, video streaming).

Example Uses:

 Reading/writing data to storage (HDD/SSD).
 Sending/receiving network packets.
 Audio/video data streaming.

In short, DMA speeds up data transfers and makes computing systems more efficient.

11: How does a controller inform the device driver that it has finished its operation?

When a controller (e.g., disk, network, or GPU controller) finishes an operation, it notifies the
device driver using one of the following methods:

1. Interrupts (Most Common Method)

 The controller triggers a hardware interrupt to the CPU.

 The CPU pauses its current task and executes the driver’s Interrupt Service Routine
(ISR) to handle the completion.
 Used for real-time response (e.g., disk I/O, network packets).

2. Polling

 The driver repeatedly checks a status register in the controller to see if the operation is
done.
 Less efficient (wastes CPU cycles), but used in simple systems where interrupts are
unavailable.

3. DMA Completion Notification

 If DMA was used, the DMA controller sends an interrupt when the transfer is done.
 The driver then processes the data in memory.

4. Callback Mechanisms (Software-Level)

 Some high-level drivers use asynchronous I/O with callbacks, where the OS signals
completion to the driver.

Why Interrupts are Preferred

 Efficient – CPU isn’t wasted on polling.
 Low Latency – Driver reacts immediately.

Summary

Controllers typically use interrupts to notify drivers of task completion, ensuring fast and
efficient operation handling.
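
The polling alternative from method 2 can be modeled in a few lines of Python. FakeController and the DONE bit are invented for illustration, not a real controller interface; the point is the busy-wait loop that burns CPU cycles checking a status register:

```python
DONE = 0x01      # illustrative "operation complete" status bit

class FakeController:
    """Stand-in for a device controller with a status register."""
    def __init__(self, ticks_until_done):
        self.status = 0
        self._ticks = ticks_until_done
    def tick(self):                       # models the device making progress
        self._ticks -= 1
        if self._ticks <= 0:
            self.status |= DONE

def poll_until_done(ctrl, max_spins=1000):
    spins = 0
    while not (ctrl.status & DONE):       # busy-wait: wastes CPU cycles
        ctrl.tick()
        spins += 1
        if spins > max_spins:
            raise TimeoutError("device never completed")
    return spins

ctrl = FakeController(ticks_until_done=5)
assert poll_until_done(ctrl) == 5         # CPU spun 5 times doing no useful work
```

An interrupt-driven design avoids those wasted spins by letting the device announce completion itself.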

12: Differentiate between character devices and block devices.

1. Character Devices

 Data Transfer: Handle data one character (byte) at a time (e.g., keyboards, mice,
serial ports).
 Access Method: Sequential access (cannot randomly seek data).
 Buffering: Usually unbuffered or use small buffers.
 Examples:
o Keyboard (/dev/input/)
o Mouse (/dev/mouse)
o Serial ports (/dev/ttyS*)
o Sound cards (/dev/snd/*)

2. Block Devices

 Data Transfer: Handle data in fixed-size blocks (e.g., 512B, 4KB).


 Access Method: Random access (can read/write any block directly).
 Buffering: Use large buffers for better performance.
 Examples:
o Hard drives (/dev/sda, /dev/nvme0n1)
o SSDs (/dev/sd*)
o USB drives (/dev/sdb1)
o RAM disks (/dev/ram*)

Key Differences

Feature       | Character Devices          | Block Devices
Data Unit     | Byte-by-byte               | Fixed-size blocks
Access Method | Sequential                 | Random access
Buffering     | Minimal or none            | Heavy buffering (for speed)
Performance   | Slower (per-byte overhead) | Faster (bulk transfers)
Usage         | Interactive devices        | Storage devices

Summary

 Character devices are for streaming/sequential data (e.g., keyboards).
 Block devices are for storage with random access (e.g., disks).
 The Linux kernel treats them differently in /dev/ (e.g., /dev/tty vs /dev/sda).
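
On a Unix-like system the distinction is recorded in each device file's mode bits; a small Python check (assumes /dev/null exists, as on any Linux system — a block device such as /dev/sda would report S_ISBLK instead):

```python
import os
import stat

# The inode mode bits tell the kernel which driver interface to use.
mode = os.stat("/dev/null").st_mode
assert stat.S_ISCHR(mode)       # /dev/null is a character device
assert not stat.S_ISBLK(mode)   # ...and not a block device
print("/dev/null is a character device")
```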

Interrupts

13. What happens when an interrupt occurs?

When an interrupt is triggered (by hardware, software, or exceptions), the CPU follows these
steps:

1. Interrupt Trigger

 Hardware Interrupt: Generated by devices (e.g., keyboard press, disk I/O completion).
 Software Interrupt: Requested by programs (e.g., system calls).
 Exception: Caused by errors (e.g., division by zero, page fault).

2. CPU Response

 Finishes the current instruction (unless it’s a non-maskable interrupt).

 Saves the current state (program counter, registers) onto the stack.
 Disables further interrupts (if maskable) to prevent nested handling.

3. Interrupt Handling

 CPU checks the Interrupt Vector Table (IVT) or Interrupt Descriptor Table
(IDT) to find the Interrupt Service Routine (ISR).
 Executes the ISR (a kernel/driver function that handles the interrupt).
4. Post-Interrupt Actions

 Restores saved state (registers, program counter) from the stack.


 Re-enables interrupts (if masked earlier).
 Resumes normal execution from where it left off.

Key Points

 Hardware Interrupts allow devices to notify the CPU asynchronously.

 Software Interrupts (like system calls) allow controlled entry into kernel mode.
 Exceptions force the CPU to handle errors or special events (e.g., page faults).

Example

When a keyboard key is pressed:

1. The keyboard controller sends an interrupt.
2. The CPU pauses the running program and saves its state.
3. The CPU executes the keyboard driver’s ISR to read the key.
4. The CPU restores the program and continues execution.

Summary

Interrupts ensure efficient CPU usage by allowing immediate responses to events without
constant polling. The CPU suspends current work, handles the interrupt, and
resumes seamlessly.
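
Hardware interrupts can't be raised from user space, but POSIX signals follow the same suspend–handle–resume pattern, so they make a workable analogy. A Python sketch (the handler plays the role of the ISR; assumes a Unix-like system with SIGUSR1):

```python
import signal

events = []

def handler(signum, frame):            # plays the role of the ISR
    events.append(int(signum))

signal.signal(signal.SIGUSR1, handler)
events.append("before")
signal.raise_signal(signal.SIGUSR1)    # the "interrupt" fires here
events.append("after")                 # control returned seamlessly
assert events == ["before", int(signal.SIGUSR1), "after"]
```

The normal flow was suspended exactly once, the handler ran, and execution resumed where it left off — the same shape as the four steps above.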

14. What are interrupts and why is interrupt handling crucial for OS performance?
briefly discuss.

An interrupt is a signal sent to the CPU by hardware or software indicating an event that
needs immediate attention. Interrupts temporarily halt the CPU's current task, forcing it to
execute a special routine (Interrupt Service Routine - ISR) to handle the event before
resuming normal operation.

Types of Interrupts:

1. Hardware Interrupts – Triggered by external devices (e.g., keyboard press, disk I/O
completion).
2. Software Interrupts – Generated by programs (e.g., system calls like read(), write()).
3. Exceptions – Caused by CPU errors (e.g., division by zero, page fault).
Why Interrupt Handling is Crucial for OS Performance?

1. Efficient CPU Utilization
o Without interrupts, the CPU would waste cycles polling devices for status updates.
o Interrupts allow the CPU to work on other tasks until an event occurs.
2. Responsiveness
o Critical events (e.g., keyboard input, network packets) are handled immediately,
improving user experience.
3. Concurrency & Multitasking
o Interrupts allow the OS to switch tasks quickly, enabling smooth multitasking.
4. Device Communication
o Devices (disks, NICs, GPUs) rely on interrupts to signal operation completion,
ensuring smooth I/O operations.
5. Error Handling
o Exceptions (like segmentation faults) are caught via interrupts, allowing the OS
to prevent crashes and recover gracefully.

Example:

 When a disk read completes, the disk controller sends an interrupt → CPU pauses
current work → OS processes the data → CPU resumes execution.

Summary

Interrupts are essential for:

✅ Avoiding CPU wastage (no busy waiting)
✅ Fast response to hardware/software events
✅ Stable multitasking & I/O operations
Without proper interrupt handling, an OS would be slow, inefficient, and unresponsive.

15. What is an interrupt vector? How does the interrupt vector help optimize the
process of interrupt handling? briefly discuss.

An interrupt vector is a table (or array) that stores memory addresses of Interrupt Service
Routines (ISRs). Each entry corresponds to a specific interrupt type, allowing the CPU to
quickly locate and execute the correct handler when an interrupt occurs.

How It Works:

1. When an interrupt is triggered, the device or CPU provides an interrupt number (e.g.,
IRQ line for hardware interrupts).
2. The CPU uses this number as an index in the Interrupt Vector Table
(IVT) or Interrupt Descriptor Table (IDT) (in modern systems).
3. The table entry contains the address of the ISR to execute.
4. The CPU jumps to this address, runs the ISR, and then resumes normal execution.

How Interrupt Vectors Optimize Interrupt Handling

1. Fast Lookup
o Instead of searching for handlers dynamically, the CPU directly indexes the
vector table for instant access.
2. Prioritization
o Critical interrupts (e.g., hardware faults) can be assigned higher-priority vector
entries for faster response.
3. Modularity & Scalability
o New devices/drivers can register their ISRs in the table without modifying core
OS code.
4. Efficient Multitasking
o The OS can manage multiple interrupts without CPU polling, improving system
throughput.
5. Hardware/Software Uniformity
o Both hardware and software interrupts (e.g., system calls) use the same
mechanism, simplifying design.

Example:

 A keyboard press generates IRQ 1 → CPU checks entry 1 in the IVT → Jumps to the
keyboard driver’s ISR.

Summary

The interrupt vector:

✅ Speeds up ISR lookup (no searching)
✅ Enables prioritization (critical interrupts first)
✅ Keeps the OS flexible (easy to add new handlers)
Without it, interrupt handling would be slow and disorganized, hurting system performance.
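
The mechanism can be modeled as a table indexed by interrupt number whose entries are handler addresses — here, Python functions standing in for ISR pointers. IRQ numbers and handler names are illustrative:

```python
log = []

def keyboard_isr():
    log.append("keyboard")

def disk_isr():
    log.append("disk")

def default_isr():
    log.append("unhandled")

# The "vector table": interrupt number -> handler address.
IVT = {1: keyboard_isr, 14: disk_isr}

def dispatch(irq):
    IVT.get(irq, default_isr)()     # O(1) indexed lookup, no searching

dispatch(1)
dispatch(14)
dispatch(7)                         # unregistered IRQ falls to the default handler
assert log == ["keyboard", "disk", "unhandled"]
```

Registering a new driver is just adding an entry to the table — no change to the dispatch logic, which is the modularity point above.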

16. What happens if two interrupts occur at the same time? briefly discuss.

When two interrupts occur at the same time, the system resolves the conflict
using interrupt prioritization and masking. Here’s what happens:
1. Interrupt Prioritization

 Each interrupt has a priority level (e.g., hardware faults > disk I/O > keyboard input).
 The CPU checks priorities and serves the higher-priority interrupt first.

2. Interrupt Masking

 While handling one interrupt, the CPU masks (disables) lower-priority interrupts.
 Higher-priority interrupts can still preempt the current ISR (nested interrupts).

3. Sequential Handling

 After finishing the higher-priority ISR, the CPU unmasks lower-priority interrupts and
processes the pending one.

Example Scenario

 Interrupt A (High priority: Disk I/O completion)
 Interrupt B (Low priority: Keyboard press)

What Happens?

1. The CPU detects both interrupts simultaneously.
2. The disk I/O ISR runs first (higher priority).
3. The keyboard interrupt waits (masked until the disk ISR finishes).
4. After the disk ISR completes, the keyboard ISR executes.
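
A toy Python model of that arbitration (priority levels and names are invented for illustration; lower number = higher priority, as on many interrupt controllers):

```python
PRIORITY = {"disk_io": 1, "keyboard": 5}       # illustrative priority levels

def next_interrupt(pending):
    """Pick the highest-priority pending interrupt."""
    return min(pending, key=PRIORITY.__getitem__)

served = []
pending = {"keyboard", "disk_io"}              # both arrive "simultaneously"
while pending:
    irq = next_interrupt(pending)
    pending.remove(irq)
    served.append(irq)                         # run its ISR to completion

assert served == ["disk_io", "keyboard"]       # disk is served first
```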

17. What is the difference between maskable and non-maskable interrupts? Provide an
example where each might be used. briefly discuss.

Maskable vs. Non-Maskable Interrupts (NMI)

Feature         | Maskable Interrupts                    | Non-Maskable Interrupts (NMI)
Can be blocked? | Yes (can be disabled by the CPU)       | No (always processed)
Priority        | Lower priority                         | Highest priority
Handling        | Can be delayed or ignored temporarily  | Must be handled immediately
Triggered by    | Common devices (keyboard, disk, timer) | Critical failures (hardware errors, power loss)

Examples of Usage

1. Maskable Interrupt

 Example: A keyboard press generates an interrupt.
 Why Maskable?
o The OS can temporarily disable interrupts during critical tasks (e.g., kernel
operations).
o Prevents lower-priority interrupts from disrupting high-priority processes.

2. Non-Maskable Interrupt (NMI)

 Example: A memory parity error (RAM corruption detected).
 Why Non-Maskable?
o Indicates a critical system failure that must not be ignored.
o Ensures the OS takes immediate action (e.g., logging the error, shutting down
safely).

Key Differences

✔ Maskable Interrupts → Used for normal device I/O (flexible handling).
✔ Non-Maskable Interrupts → Reserved for emergency events (unavoidable).
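
The delivery rule reduces to a few lines; this is a model only (interrupt names are illustrative), showing that a set mask defers maskable interrupts while an NMI always gets through:

```python
def deliverable(irq, interrupts_masked):
    """Model of CPU delivery rules for one interrupt line."""
    if irq == "NMI":
        return True                 # non-maskable: always processed
    return not interrupts_masked    # maskable: honored only when unmasked

assert deliverable("keyboard", interrupts_masked=False)
assert not deliverable("keyboard", interrupts_masked=True)   # deferred
assert deliverable("NMI", interrupts_masked=True)            # cannot be blocked
```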

System Calls and Dual-Mode Operations

18. What is the difference between a system call and a function call? briefly discuss.

Feature    | System Call                                                    | Function Call
Execution  | Runs in kernel mode (privileged)                               | Runs in user mode (unprivileged)
Purpose    | Requests OS services (e.g., file I/O, process control)         | Performs program-specific logic (e.g., math operations, string manipulation)
Overhead   | High (context switch between user/kernel mode)                 | Low (no mode switching)
Invocation | Uses software interrupts (e.g., int 0x80 in x86) or syscall/sysenter | Direct jump or call instruction
Safety     | Validated by OS (prevents unauthorized access)                 | No OS mediation (trusts the program)
Examples   | open(), fork(), write()                                        | printf(), strlen(), malloc()

Key Differences

1. Privilege Level
o System calls switch to kernel mode for secure OS access.
o Function calls stay in user mode.
2. Performance
o System calls are slower due to mode switching and OS checks.
o Function calls are fast (no kernel interaction).
3. Usage Context
o System calls = OS services (e.g., creating a file).
o Function calls = Program logic (e.g., sorting an array).

Example

 System Call: read() → Asks the OS to fetch data from a file.
 Function Call: strlen() → Computes string length locally.
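
The contrast is visible from Python too: len() is an ordinary function call that never leaves user mode, while os.write()/os.read() are thin wrappers over the write() and read() system calls, which trap into the kernel. A small sketch:

```python
import os

data = b"hello\n"
length = len(data)              # plain function call: no kernel involved

r, w = os.pipe()                # a pipe is a kernel-managed resource
written = os.write(w, data)     # system call: the kernel copies the bytes
assert written == length
assert os.read(r, 64) == data   # read() system call retrieves them
os.close(r)
os.close(w)
```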

19. Explain the three general methods used to pass parameters to the operating system.
Which one is used by the Linux operating system? briefly discuss.

1. Registers
o How it works: Parameters are stored in CPU registers (e.g., EAX, EBX in
x86).
o Pros: Fast (no memory access).
o Cons: Limited by register count.
2. Memory Block (Stack or Fixed Location)
o How it works: Parameters are placed in a stack or a predefined memory
block, and a pointer is passed via a register.
o Pros: Supports many/large parameters.
o Cons: Slower (memory access required).
3. Program Counter (PC) Relative Addressing
o How it works: Parameters are stored right after the system call instruction in
memory.
o Pros: Simple for small programs.
o Cons: Rarely used in modern OSes.

Which Method Does Linux Use?

 Primary method: Registers (for efficiency).

 For complex/large data: Memory block (e.g., struct data passed via pointers).
 Example: On x86-64 Linux:
o System call number → RAX
o Parameters → RDI, RSI, RDX, R10, R8, R9
o Extra parameters → Stack
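
libc's syscall(3) wrapper loads those registers for you, so the convention can be exercised from Python via ctypes. A sketch that assumes x86-64 or aarch64 Linux — syscall numbers are architecture-specific, which is exactly why the number must go in a known register:

```python
import ctypes
import os
import platform

# getpid's syscall number differs per architecture (39 on x86-64, 172 on aarch64).
NR_GETPID = {"x86_64": 39, "aarch64": 172}[platform.machine()]

libc = ctypes.CDLL(None, use_errno=True)   # the process's own libc (Linux)
pid = libc.syscall(NR_GETPID)              # number -> RAX; any args would follow
assert pid == os.getpid()                  # same answer as the normal wrapper
```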

20. Explain the concept of dual-mode operation in operating systems. How do they
contribute to system security and efficiency? briefly discuss.

Dual-mode operation divides CPU execution into two privilege levels to protect the OS
from untrusted user programs:

1. Kernel Mode (Privileged Mode)
o Full access to hardware (e.g., CPU, memory, I/O devices).
o Used by the OS kernel for critical tasks (e.g., process scheduling, disk
operations).
2. User Mode (Unprivileged Mode)
o Restricted access to hardware.
o Used by applications (e.g., browsers, games).
o Must request OS services via system calls to perform privileged operations.

How Dual-Mode Enhances Security & Efficiency

1. Security
 Prevents unauthorized access: User programs cannot directly modify hardware or
OS data structures.
 Isolates processes: Bugs/crashes in user programs don’t corrupt the kernel.
 Controlled resource access: System calls validate requests (e.g., file permissions).

2. Efficiency

 Reduces overhead: The kernel handles hardware management, avoiding redundant
checks in apps.
 Enables multitasking: The OS can safely switch between user processes.

Example

 A game (user mode) tries to access the hard disk → Must invoke the OS (kernel
mode) via read().
 The OS validates the request → If permitted, performs the operation → Returns
results to the game.

Key Mechanism: Mode Bit

 A hardware flag (e.g., in the CPU) toggles between modes.
 0 = Kernel mode, 1 = User mode.
 Switching modes occurs during:
o System calls (user → kernel).
o Interrupts/exceptions (user → kernel).
o OS returning control to user programs (kernel → user).

21. Explain the two general approaches to implementing commands in an operating
system. Compare the approach where the command interpreter contains the code to
execute commands with the approach used by UNIX, where commands are
implemented through system programs. Discuss the advantages and disadvantages
of each approach. briefly discuss.

1. Built-in Command Interpreter (Monolithic Approach)

 How it works: The command interpreter (e.g., shell) directly contains the code to
execute commands (e.g., cd, echo).
 Examples: Early MS-DOS, some embedded systems.

Advantages:
 Faster execution (no need to launch separate programs).
 Tighter integration with the shell (easier error handling).

Disadvantages:

 Less flexible (hard to add new commands without modifying the shell).
 Bloat (large interpreter due to built-in commands).
 No reusability (commands can’t be used outside the shell).

2. External System Programs (UNIX Approach)

 How it works: Commands (e.g., ls, grep) are separate executable files stored
in /bin, /usr/bin, etc. The shell simply launches them.
 Examples: Linux, macOS, modern UNIX-like systems.

Advantages:

 Modularity: Easy to add/update commands without changing the shell.
 Reusability: Commands can be called by scripts, other programs, or different shells.
 Smaller shell: The interpreter stays minimal (only handles parsing/launching).

Disadvantages:

 Slightly slower (overhead of process creation for each command).
 Dependent on PATH: Requires proper environment setup.

Key Comparison

Feature     | Built-in Interpreter           | UNIX (External Programs)
Speed       | Faster (no process creation)   | Slower (fork/exec overhead)
Flexibility | Rigid (hard to extend)         | Highly modular (easy updates)
Shell Size  | Larger (contains all commands) | Minimal (only core logic)
Reusability | Limited to shell               | Works system-wide (scripts, pipes)
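
The UNIX approach is the classic fork/exec/wait pattern; a condensed sketch on a Unix-like system, using the standard true command as a stand-in for ls or grep (the "shell" here contains no code for the command itself):

```python
import os

pid = os.fork()
if pid == 0:                                   # child: become the command
    os.execvp("true", ["true"])                # found via PATH, like ls or grep
    os._exit(127)                              # reached only if exec failed
_, status = os.waitpid(pid, 0)                 # parent: wait, like a shell does
exit_code = os.waitstatus_to_exitcode(status)
assert exit_code == 0
```

Replacing "true" with any other program name changes the command run without touching this code — the modularity advantage in the table above.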
22. Which of the following operations should be restricted to privileged mode? Give
reasons. briefly discuss.

a. Set the timer value
b. Read the clock
c. Clear memory
d. Disable interrupts
e. Modify entries in the device status table
f. Access an I/O device

The following operations must be restricted to privileged (kernel) mode to ensure system
security and stability:

a. Set the timer value

Reason:

 Controls scheduling and task switching.
 Malicious changes could disrupt multitasking or cause denial-of-service (DoS).

b. Read the clock

Not necessarily privileged (can be allowed in user mode).

Reason:

 Reading time is harmless (no security risk).
 Some systems allow user programs to access clock values (e.g., time() in C).

c. Clear memory

Reason:

 Could erase critical OS data or other processes’ memory.
 Must be controlled to prevent crashes/data loss.

d. Disable interrupts

Reason:

 Prevents CPU from handling critical events (e.g., hardware failures).
 Could freeze the system or cause data corruption.

e. Modify entries in the device status table

Reason:

 Controls hardware access (e.g., disk, network).
 Unauthorized changes could crash devices or breach security.

f. Access an I/O device

Reason:

 Direct hardware access risks data corruption or leaks (e.g., reading another user’s
files).
 Must be mediated by the OS (via system calls like read()/write()).

Summary Table

Operation                  | Privileged? | Reason
Set the timer value        | Yes         | Prevents scheduling attacks.
Read the clock             | No          | No security risk.
Clear memory               | Yes         | Protects OS/process data.
Disable interrupts         | Yes         | Avoids system freezes.
Modify device status table | Yes         | Prevents unauthorized hardware control.
Access an I/O device       | Yes         | Ensures secure hardware mediation.
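
Point (b) is easy to confirm: reading the clock works from an ordinary unprivileged process, with no system administrator rights involved:

```python
import time

# An unprivileged user-mode call returns the time directly
# (the analogue of time() in C).
now = time.time()
assert now > 0
print(f"epoch seconds: {now:.0f}")
```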
