CSE-301. Operating System

System calls enable user processes to request services from the operating system, which manages processes, memory, and secondary storage. Operating systems also provide system programs for file, process, and device management, while command interpreters facilitate user interaction. The layered approach to system design enhances modularity and abstraction but may introduce performance overhead and complexity.


Purpose of System Calls:

Answer: System calls allow user-level processes to request services from the operating system.
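For example, a minimal C sketch (POSIX assumed) in which a user process asks the kernel to perform output through the write() system call instead of touching the terminal hardware itself:

#include <string.h>     /* strlen() */
#include <unistd.h>     /* write() - the system-call wrapper */

int main(void)
{
    const char *msg = "Hello from user space\n";

    /* The process cannot drive the display hardware directly; it traps
     * into the kernel, which performs the output on its behalf. */
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}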

Five Major Activities of an Operating System in Process Management:


Answer:
a. Creation and deletion of user and system processes
b. Suspension and resumption of processes
c. Process synchronization mechanisms
d. Process communication mechanisms
e. Deadlock handling mechanisms

Three Major Activities of an Operating System in Memory Management:


Answer:
a. Track which parts of memory are in use and by whom
b. Decide which processes to load when space becomes available
c. Allocate and deallocate memory as needed

Three Major Activities of an Operating System in Secondary-Storage Management:


Answer:
a. Free-space management
b. Storage allocation
c. Disk scheduling

System Programs: System programs are special types of software that provide a basic environment for running application
programs and managing the hardware of a computer. They act as a support layer between the operating system and the
user/application programs.
Examples of System Programs:
 File Explorer (Windows)
 Terminal / Bash (Linux/macOS)
 Task Manager
Purpose of System Programs:
 File Management – Handles creation, deletion, and manipulation of files and directories.
 Process Management – Manages execution, scheduling, and termination of processes.
 Device Management – Controls and coordinates input/output devices.
 System Monitoring – Monitors system performance and resource usage.
 Security and Protection – Ensures data security and user access control.
 Communication Support – Enables data exchange between users and programs.
 Program Execution – Loads and runs application programs.

Command Interpreter: A Command Interpreter is a program that takes commands from the user and tells the computer's
operating system to perform the requested tasks. It is also known as a shell.
Purpose of Command Interpreter:
 Acts as a bridge between the user and the operating system.
 Receives commands from the user.
 Translates those commands into actions.
 Executes programs or system functions as instructed.
 Displays output or error messages to the user.

Command Interpreter Usually Separate from the Kernel: The command interpreter is usually kept separate from the kernel
for the following reasons:
 Flexibility: Users can choose or change the shell without modifying the kernel.
 Safety and Stability: Keeping it separate prevents user command errors from affecting the core of the operating
system.
 Modularity: Separating components makes the system easier to maintain and update.
 User-Level Program: The shell is a user-level program that interacts with the kernel through system calls rather than
being part of the kernel itself.

Layered Approach: The layered approach is a system design method where the system is divided into multiple layers, each
with a specific function. Each layer only interacts with the layer directly below or above it, creating a clear separation of
concerns. This makes the system easier to develop, maintain, and understand.

Main Advantage of the Layered Approach to System Design: The main advantage of the layered approach is its modularity
and abstraction, which make the system easier to understand, maintain, and extend.
Key Benefits:
 Modularity: Each layer handles a specific task, allowing independent development and debugging.
 Abstraction: Hides lower-level complexity, simplifying higher-layer design.
 Ease of Maintenance: Updates can be made in one layer without affecting others.
 Scalability: New layers or features can be added easily.
 Flexibility: Supports integration of new technologies without redesigning the whole system.
 Hierarchical Structure: Enhances clarity and teamwork by organizing the system logically.

Disadvantages of Using the Layered Approach:


 Performance Overhead: Multiple layers add processing delays as data passes through each layer.
 Reduced Flexibility: Strict layering blocks direct communication between non-adjacent layers, limiting optimizations.
 Increased Complexity: More layers can complicate design, debugging, and interface management.
 Unclear Boundaries: Some functions overlap layers, causing ambiguity in responsibilities.
 Inflexibility for Special Systems: Not ideal for real-time or resource-constrained systems needing direct hardware
access.
 Higher Resource Use: Each layer may maintain its own data structures, increasing memory consumption.
Five Services Provided by an Operating System
a. Program Execution: The operating system loads a program’s contents into memory and starts its execution. User-level
programs cannot be trusted to manage CPU time properly, so the OS controls CPU allocation.
b. I/O Operations: The OS handles communication with devices like disks and tapes at a low level. Users simply specify the
device and operation, and the system converts this into device-specific commands. User programs are not trusted to access
only authorized devices or to use them correctly.
c. File-System Manipulation: The OS manages file creation, deletion, allocation, and naming. It tracks disk space usage and
ensures that blocks are allocated and freed correctly. Protection checks prevent unauthorized access. User programs cannot
reliably manage these tasks or enforce security.
d. Communications: The OS manages message passing between systems by converting messages into packets, sending
them through the network controller, and reassembling them at the destination. It handles packet ordering and error correction.
User programs cannot coordinate network access or filter packets effectively.
e. Error Detection: Error detection happens at both hardware and software levels. The OS checks data transfers for corruption
and verifies data integrity on storage media. It also monitors system consistency, like allocated versus free disk blocks. A global
error-handling program (the OS) manages errors, so individual processes do not need extensive error handling code.

Why Some Systems Store the OS in Firmware and Others on Disk


Firmware Storage:
 Faster booting due to direct access from non-volatile memory.
 Ideal for embedded systems with a simple, lightweight OS integrated into the hardware.
 More reliable as firmware is less prone to corruption, especially in unstable power conditions.
 Better security since firmware is harder to tamper with and supports secure boot.
 Suitable for devices with limited storage capacity.
Disk Storage:
 Allows easy updates and upgrades without hardware changes.
 Provides larger storage capacity for a complex OS and applications.
 Supports complex systems like PCs and servers that need a full-featured OS.
 Enables multi-boot systems, allowing multiple OS installations.
 Generally cost-effective and widely available for high-capacity needs.

Designing a System to Allow Choice of Operating Systems at Boot:


 Boot Loader: Use a boot loader program (like GRUB or Windows Boot Manager) that runs first when the system
starts. It presents a menu listing all installed operating systems.
 Multiple OS Installation: Install each OS on a separate partition or disk so they don’t overwrite each other.
 Configuration File: The boot loader uses a configuration file to know where each OS is located and how to load it.
 User Selection: When the computer boots, the boot loader shows the list of OS options, allowing the user to choose
which one to start.
 Default and Timeout: The boot loader can automatically boot a default OS after a timeout if no choice is made.

What the Bootstrap Program Needs to Do


 Initialize Hardware: Perform basic checks and initialize essential hardware components like CPU, memory, and
input/output devices.
 Locate the Operating System: Find the OS kernel in a predetermined location, usually on disk or firmware.
 Load the OS Kernel into Memory: Read the OS kernel from storage into main memory.
 Transfer Control to the OS: Pass execution to the loaded OS so it can take over system control and continue
startup.

Three Main Purposes of an Operating System:


Answer:
Convenient Program Execution: Provides an environment for users to run programs efficiently and easily on the hardware.
Resource Allocation: Manages and allocates hardware resources (CPU, memory, I/O) fairly and efficiently among programs.
Control Program: It serves two major functions:
a. Supervises execution of programs to prevent errors or misuse
b. Manages and controls the operation of input/output (I/O) devices

When Should an Operating System "Waste" Resources?


It is appropriate for an operating system to "waste" resources (like CPU time, memory, or storage) in situations where the goal
is to improve user convenience, system responsiveness, or overall performance.

Examples of Purposeful Resource "Wasting":


1. Idle Processes or Background Services: The OS may run background processes or keep resources reserved for potential
use, which may seem wasteful but improves responsiveness.
2. Preloading or Caching: Loading commonly used programs or data into memory in advance uses extra memory but speeds
up access.
3. GUI-Based Systems: Graphical interfaces use more resources than command-line interfaces but enhance usability and user
experience.
4. Multiprogramming and Multitasking: Keeping several processes in memory at once may use more memory, but increases
CPU utilization and system throughput.

Why It's Not Really Wasteful:


Such resource usage is intentional and done to achieve better performance, responsiveness, and user satisfaction. The
apparent "waste" actually leads to more efficient overall system use when viewed from the user's or system's perspective.

Main Difficulty in Writing an Operating System for a Real-Time Environment:


The main difficulty is ensuring that tasks are completed within strict time constraints.
Explanation: In a real-time system, the operating system must:
 Respond to inputs or events within a guaranteed time limit
 Schedule and manage processes deterministically
 Handle hardware interrupts and I/O operations quickly
 Avoid delays caused by resource contention or unpredictable behavior
How Kernel Mode and User Mode Provide Basic Protection:
The distinction between kernel mode and user mode provides basic protection by limiting what instructions and operations a
program can perform. In kernel mode, the CPU can execute all instructions and access all hardware directly. In user mode, the
CPU restricts programs from executing certain privileged instructions or accessing hardware devices directly. This separation
prevents user programs from accidentally or maliciously interfering with critical system resources, ensuring system stability and
security.
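As an illustration, a small C sketch (assuming an x86-64 Linux machine) that attempts the privileged hlt instruction from user mode; the CPU raises a fault and the kernel terminates the process instead of letting it halt the machine:

#include <stdio.h>

int main(void)
{
    printf("about to execute a privileged instruction in user mode...\n");

    /* hlt may only be executed in kernel mode; in user mode the CPU raises
     * a general-protection fault and the kernel kills the process
     * (typically reported as a segmentation fault). */
    __asm__ volatile ("hlt");

    printf("never reached\n");
    return 0;
}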

Two Difficulties of Using an Unmodifiable Memory Partition for the OS


Inability to Update or Patch the OS: Since the OS memory partition cannot be modified, applying updates, bug fixes, or security
patches becomes very difficult or impossible without replacing the entire hardware or memory chip.
Limited Flexibility for OS Operations: Some operating system functions require modifying certain parts of the OS code or data
during runtime (e.g., loading drivers, managing system tables). An unmodifiable partition prevents these necessary dynamic
changes, limiting OS functionality and adaptability.

Two Possible Uses of Multiple CPU Modes


 Different Privilege Levels for Security: CPUs can have several modes (e.g., user mode, supervisor mode, hypervisor
mode) to enforce varying levels of privilege. This allows better control over access to sensitive resources and
protects the system from faulty or malicious code.
 Support for Virtualization: Additional modes can be used to run virtual machines efficiently by isolating the host OS,
guest OS, and hypervisor, each operating at different privilege levels to ensure secure and stable virtualization.

How Timers Could Be Used to Compute the Current Time:

Answer: Program a hardware timer to interrupt the CPU at a fixed interval. On every interrupt the operating system's handler increments a tick counter, and the current time is computed as the time recorded at boot plus the number of ticks multiplied by the tick interval.
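A user-space C sketch of the same idea (POSIX assumed), using an interval timer and SIGALRM to stand in for the hardware timer interrupt:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define TICK_MS 10                          /* the timer fires every 10 ms */

static volatile sig_atomic_t ticks = 0;

static void on_tick(int sig)                /* stands in for the timer interrupt handler */
{
    (void)sig;
    ticks++;                                /* one more interval has elapsed */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_tick;
    sigaction(SIGALRM, &sa, NULL);          /* install the "interrupt handler" */

    struct itimerval tv;
    memset(&tv, 0, sizeof tv);
    tv.it_value.tv_usec    = TICK_MS * 1000;   /* first expiry  */
    tv.it_interval.tv_usec = TICK_MS * 1000;   /* then periodic */
    setitimer(ITIMER_REAL, &tv, NULL);      /* program the periodic timer */

    while (ticks < 100)                     /* let about one second of ticks accumulate */
        pause();                            /* sleep until the next tick arrives */

    /* current time = time recorded at boot + ticks * tick length */
    printf("elapsed ~ %ld ms\n", (long)ticks * TICK_MS);
    return 0;
}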

Two Reasons Why Caches Are Useful:


Faster Access: Caches store frequently used data closer to the CPU, reducing access time compared to slower main memory
or storage devices.
Improved Performance: By reducing the need to repeatedly access slower hardware (like disk or RAM), caches significantly
boost system speed and efficiency.

Problems Caches Solve:


Reduce latency in data access.
Decrease load on slower storage devices.
Minimize redundant data fetches, improving system responsiveness.

Problems Caches Cause:


Coherency issues: Keeping data consistent between the cache and the main memory or device can be difficult.
Complexity: Cache management (e.g., replacement policies, synchronization) increases system design complexity.
Overhead: Maintaining the cache requires extra memory and processing.
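To make the trade-off concrete, here is an illustrative C sketch (the idea only, not how a hardware cache is built) of a tiny direct-mapped, write-back cache in front of a slow backing array: hits avoid the slow store, but a dirty cached value and the backing store disagree until the line is written back, which is exactly the coherency problem noted above.

#include <stdbool.h>
#include <stdio.h>

#define STORE_SIZE  1024
#define CACHE_LINES 8

static int backing_store[STORE_SIZE];          /* the "slow" device */

struct line { bool valid, dirty; int tag, data; };
static struct line cache[CACHE_LINES];         /* the "fast" memory */

static int *lookup(int addr)
{
    struct line *l = &cache[addr % CACHE_LINES];
    if (!(l->valid && l->tag == addr)) {       /* miss */
        if (l->valid && l->dirty)
            backing_store[l->tag] = l->data;   /* write back the evicted line */
        l->valid = true;
        l->dirty = false;
        l->tag   = addr;
        l->data  = backing_store[addr];        /* slow fetch from the device */
    }
    return &l->data;                           /* the hit path is cheap */
}

static void cached_write(int addr, int v)
{
    *lookup(addr) = v;
    cache[addr % CACHE_LINES].dirty = true;    /* the device copy is now stale */
}

int main(void)
{
    cached_write(42, 7);
    printf("cache sees %d, backing store still sees %d\n",
           *lookup(42), backing_store[42]);    /* 7 vs 0: the coherency gap */
    return 0;
}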

Why Not Make the Cache as Large as the Device It Caches?


Cost: High-speed cache memory (like SRAM or SSD) is much more expensive than slower storage (like HDD).
Technology limits: Cache is built with faster but costlier and less dense memory, making large-scale caching impractical.
Functionality: Caches are designed for speed, not for storing massive volumes of data. Replacing a device with a cache of the
same size would sacrifice key features like long-term durability, persistence, and cost-efficiency.

Distinguish Between Client-Server and Peer-to-Peer Models of Distributed Systems


Answer: In the client-server model, there is a clear distinction between the client and the server. The client requests services,
and the server provides them. The server is typically centralized and holds the main resources or data.
In contrast, the peer-to-peer (P2P) model does not enforce strict roles. All nodes are considered peers, and each can act as
both a client and a server. A node can request services from other peers and also provide services in return.
Example: Recipe Sharing System
 In a client-server model, all recipes are stored on a central server. If a user (client) wants a recipe, it must request it
from that server.
 In a peer-to-peer model, any peer node can request a recipe from other peers. Other nodes that have the recipe can
provide it. Thus, each peer may both request (client) and provide (server) recipes.

System Calls Required to Start a New Process in UNIX: To start a new process, a UNIX shell (or command interpreter)
typically uses the following two main system calls:
fork()
 Creates a new process by duplicating the calling (parent) process.
 The new process is called the child and is an exact copy of the parent, except for a few differences like the PID.
 Both the parent and child continue execution after the fork() call.
exec() (family of calls, e.g., execl(), execvp(), etc.)
 Replaces the child process's memory space with a new program.
 Loads the executable file of the command to be run and starts its execution.
 Does not return if successful—it replaces the current process image.
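A minimal C sketch (POSIX assumed) of how a shell might launch a single command with these two calls; the command "ls -l" is only an example:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *argv[] = { "ls", "-l", NULL };   /* the command the user typed (example) */

    pid_t pid = fork();                    /* duplicate the shell */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {                        /* child: become the requested program */
        execvp(argv[0], argv);
        perror("execvp");                  /* reached only if exec failed */
        _exit(EXIT_FAILURE);
    }
    waitpid(pid, NULL, 0);                 /* parent: wait for the command to finish */
    return 0;
}

A real shell simply repeats this fork/exec/wait cycle in a loop, once per command line.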

Using the program shown in Figure 3.30, explain what the output will be at LINE A.
Answer: The result is still 5, as the child updates its copy of value. When control returns to the parent, its value remains at 5.
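Figure 3.30 itself is not reproduced here; the following C sketch (an assumed reconstruction of the usual shared-variable fork example) shows the idea behind the answer: fork() gives the child its own copy of value, so the child's update never reaches the parent.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 5;

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {                        /* child: modifies only its own copy */
        value += 15;
        return 0;
    }
    else if (pid > 0) {                    /* parent */
        wait(NULL);
        printf("PARENT: value = %d\n", value);   /* LINE A: prints 5 */
    }
    return 0;
}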

Including the initial parent process, how many processes are created by the program shown in Figure 3.31?

Answer: Eight processes are created.
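Every existing process executes each fork(), so the process count doubles with each call: 1, then 2, then 4, then 8 (including the original parent). A C sketch of the assumed structure of Figure 3.31, three unconditional forks:

#include <unistd.h>

int main(void)
{
    fork();    /* 2 processes exist after this line */
    fork();    /* 4 processes */
    fork();    /* 8 processes, counting the original parent */
    return 0;
}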

Three Major Complications of Concurrent Processing in an Operating System


Concurrent (simultaneous) processing enables multiple tasks or processes to run at the same time. While powerful, it
introduces several complications for operating system design and reliability:
1. Race Conditions: A race condition occurs when two or more processes access shared resources (like memory or files) at the
same time, and the result depends on the timing of their execution. Without proper coordination, this can lead to inconsistent or
incorrect data.
Example: Two apps writing to the same file at the same time may corrupt the file’s content.
2. Deadlocks: A deadlock occurs when two or more processes are each waiting for the other to release a resource, and none
of them can proceed. This can cause the system or applications to freeze indefinitely.
Example: Process A holds Resource X and waits for Resource Y, while Process B holds Y and waits for X.
3. Increased Complexity in Synchronization: The OS must ensure that processes or threads coordinate correctly when
accessing shared data or resources. Designing proper synchronization (e.g., using locks, semaphores) is complex and error-
prone.
Example: Incorrect use of synchronization tools may lead to bugs like data corruption or program crashes.
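A short C sketch (POSIX threads assumed) showing both the race and its fix: two threads increment a shared counter, and without the mutex some increments are lost, while with it the final value is always 200000.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);         /* remove this pair to observe the race */
        counter++;                         /* read-modify-write on shared data */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);    /* 200000 when properly synchronized */
    return 0;
}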

Exactly Once Semantics in RPC and Lost ACK Message:


Answer: The "exactly once" semantic ensures that a remote procedure call (RPC) is executed exactly one time. The general
algorithm uses acknowledgments (ACKs) combined with timestamps (or incremental counters) so the server can distinguish
duplicate requests.
Sequence of Events:
1. The client sends the RPC request with a timestamp to the server and starts a timeout timer.
2. The client then waits for one of two events: (a) an ACK from the server confirming the RPC was executed, or (b) expiry of the timeout.
3. If the timeout expires, the client assumes the RPC was not performed and retransmits the request with a later timestamp.
Handling Lost ACK:
The ACK might be lost due to network issues even though the server already executed the RPC.
When the server receives the retransmitted RPC, it uses the timestamp to detect that it is a duplicate request and does not
execute the RPC again.
Instead, the server resends the ACK to inform the client that the RPC has been performed.
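An illustrative C sketch (not a real RPC framework) of the server-side bookkeeping: assuming each logical request carries a stable identifier the server can use to recognize retransmissions, the procedure runs only on the first arrival and the cached ACK is re-sent for duplicates.

#include <stdio.h>

#define MAX_SEEN 128

struct ack { long id; int result; };

static struct ack seen[MAX_SEEN];          /* requests already executed + their ACKs */
static int n_seen = 0;

static int remote_procedure(int arg)       /* the work the RPC performs */
{
    printf("executing the procedure\n");   /* printed once, not twice */
    return arg * 2;
}

static struct ack serve(long id, int arg)  /* returns the ACK to send back */
{
    for (int i = 0; i < n_seen; i++)
        if (seen[i].id == id)
            return seen[i];                /* duplicate: re-send the ACK, do not re-run */

    struct ack a = { id, remote_procedure(arg) };   /* first arrival: execute exactly once */
    if (n_seen < MAX_SEEN)
        seen[n_seen++] = a;
    return a;
}

int main(void)
{
    struct ack a1 = serve(42, 10);         /* original request */
    struct ack a2 = serve(42, 10);         /* retransmission after a lost ACK */
    printf("ACK results: %d and %d\n", a1.result, a2.result);
    return 0;
}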
