ESE - Solved QB - Operating System
Increase efficiency.
Reduce costs.
Improve quality.
Better control over operations.
Reduce times, and thus reduce the production and delivery times of the
service.
Process scheduling allows the OS to allocate CPU time to each process.
Another important reason to use a process scheduling system is that it
keeps the CPU busy at all times, which results in shorter response times
for programs.
1. Threat
2. Attack
Operating systems can be targeted with Denial of Service attacks, where the
goal is to overwhelm the system's resources, making it unavailable to
legitimate users. This can be achieved through network-based attacks,
resource exhaustion, or other methods.
Batch processing was very popular in the 1970s. Jobs were executed
in batches. People used to have a single computer known as a mainframe.
Users of batch operating systems do not interact directly with the
computer. Each user prepares a job using an offline device such as punch
cards and submits it to the computer operator. Jobs with similar
requirements are grouped and executed as a group to speed up processing.
Once the programmers have left their programs with the operator, the operator
sorts the programs with similar needs into batches.
The batch operating system groups jobs that perform similar functions. These
job groups are treated as a batch and executed together. A computer
system with this operating system performs the following batch processing
activities:
In the simple batched system there is no direct interaction between the user
and the computer. The user creates the job on the punch cards and submits
it to the computer operator. Then the operator makes batches and the
computer starts to execute them sequentially.
Multiple jobs are executed by the CPU by switching between them, and the
switches occur so frequently that the user can receive an immediate response.
For example, in transaction processing, the processor executes each user
program in a short burst or quantum of computation; i.e., if n users are present,
each user gets a time quantum. Whenever the user submits a command,
the response time is a few seconds at most.
o MTS
o Lynx
o QNX
o VxWorks etc.
RTOS is used in real-time applications that must work within specific deadlines.
The common application areas of real-time operating systems are given below.
In Hard RTOS, all critical tasks must be completed within the specified time
duration, i.e., within the given deadline. Not meeting the deadline would result
in critical failures such as damage to equipment or even loss of human life.
For Example,
A soft RTOS tolerates a few delays from the operating system. In this
kind of RTOS, a deadline is assigned to a particular job, but a delay for a
small amount of time is acceptable. So deadlines are handled softly by this
kind of RTOS.
For Example,
This type of system is used in online transaction systems and live stock-price
quotation systems.
14. Describe Virtual Machine (VM). What are the characteristics of the
virtual machines?
VMs are isolated from the rest of the system, and multiple VMs can exist on a
single piece of hardware, like a server. That is, a VM is a simulated image of
application software and an operating system that is executed on a host computer
or a server.
Each VM has its own operating system and software, and the host facilitates the
resources for the virtual computers.
Ability to move the virtual computers between physical host computers
as self-contained, holistically integrated files.
The diagram below shows the difference between a single OS with no VMs
and multiple OSes with VMs −
A process can be split into many threads. For example, in a browser,
each tab can be viewed as a thread. MS Word uses many threads: formatting
text in one thread, processing input in another thread, etc.
Need of Thread:
o It takes far less time to create a new thread in an existing process than to
create a new process.
o Threads share common data, so they do not need to use Inter-Process
Communication.
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process (a short sketch
follows below).
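As a brief sketch of these points, assuming a POSIX system, two pthreads can update shared data directly, with only a mutex for coordination (the loop count and names here are illustrative):

```c
#include <pthread.h>
#include <stdio.h>

/* Shared data: visible to every thread of the process,
   so no inter-process communication is needed.         */
static int counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* protect the shared counter */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    /* Creating threads is much cheaper than creating processes:
       the threads share one address space instead of copying it. */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   /* prints 200000 */
    return 0;
}
```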
Types of Threads
User-level thread
The operating system does not recognize user-level threads. User-level threads
are easy to implement, and they are implemented by the user. If a user-level
thread performs a blocking operation, the whole process is blocked. The kernel
knows nothing about user-level threads and manages them as if they were
single-threaded processes.
Examples: Java thread, POSIX threads, etc.
1. User-level threads are easier to implement than kernel-level threads.
2. User-level threads can be used on operating systems that do not support
threads at the kernel level.
3. They are faster and more efficient.
1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.
The operating system recognizes kernel-level threads. In the kernel-level
thread model, there is a thread control block and a process control block in
the system for each thread and process. Kernel-level threads are implemented
by the operating system: the kernel knows about all the threads and manages them.
Components of Threads
1. Program counter
2. Register set
3. Stack space
Here are some key aspects of shell programming, along with its advantages
and disadvantages:
3. Security Concerns: Shell scripts may pose security risks if not written
carefully. Poorly written scripts may inadvertently expose vulnerabilities
or allow unauthorized access.
1. Process State: Indicates the current state of the process (e.g., running,
ready, blocked, terminated).
2. CPU Registers: The values of the CPU registers for the process. This
includes the contents of general-purpose registers, the program counter,
and other relevant registers.
3. Priority: The priority level of the process, which may be used by the
scheduler to determine the order in which processes are executed.
4. Open Files: A list of files that the process has opened during its
execution.
5. I/O Status Information: Information about the I/O devices the process
is using, including open files, status of I/O operations, etc.
When a context switch occurs (i.e., the operating system switches from
executing one process to another), the contents of the CPU registers are
saved into the PCB of the currently running process, and the PCB of the
new process to be executed is loaded into the CPU registers. This allows
the operating system to seamlessly switch between processes.
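A PCB is a kernel data structure; the following simplified C sketch is purely illustrative (field names, sizes, and types are assumptions, not taken from any particular OS):

```c
#include <stdint.h>

#define MAX_OPEN_FILES 16

enum proc_state { READY, RUNNING, BLOCKED, TERMINATED };

/* Simplified process control block (illustrative only). */
struct pcb {
    int             pid;              /* process identifier          */
    enum proc_state state;            /* current process state       */
    uint64_t        program_counter;  /* saved PC on context switch  */
    uint64_t        registers[16];    /* saved general registers     */
    int             priority;         /* scheduling priority         */
    int             open_files[MAX_OPEN_FILES]; /* file descriptors  */
    struct pcb     *next;             /* link in a scheduler queue   */
};
```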
1. Kernel:
- The kernel is the core component of the operating system.
- It provides essential services for all other parts of the operating system.
- It manages system resources, such as CPU, memory, and I/O devices.
- The kernel is responsible for handling system calls and managing the
overall system state.
2. Device Drivers:
- Device drivers are specialized modules that enable the operating
system to communicate with hardware devices.
- They provide an abstraction layer between the hardware and the rest of
the operating system, allowing software to interact with devices without
needing to understand their low-level details.
3. System Libraries:
- System libraries contain reusable, standardized code that applications
can use to perform common tasks.
4. File System:
- The file system organizes and manages files on storage devices.
- It provides a hierarchical structure for organizing data, and it includes
mechanisms for file storage, retrieval, and access control.
- File systems are crucial for managing data persistence and supporting
various storage devices.
5. Process Management:
- Process management involves creating, scheduling, and terminating
processes.
- The operating system manages the execution of multiple processes,
ensuring they share resources efficiently and run concurrently.
6. Memory Management:
- Memory management is responsible for allocating and deallocating
memory for processes.
- It includes mechanisms such as virtual memory, which allows
processes to use more memory than physically available by swapping data
between RAM and storage.
7. I/O Management:
- I/O management handles input and output operations, allowing
processes to communicate with external devices.
First-Come, First-Served (FCFS) and Shortest Job First (SJF) are two fundamental
scheduling algorithms used in operating systems to manage the execution of processes. They
differ in their approach to prioritizing processes and determining the order in which they are
run.
FCFS, also known as the FIFO (First In, First Out) algorithm, is a non-preemptive
scheduling algorithm that prioritizes processes based on their arrival time. The process that
arrives first is the first to be executed, and this order is maintained until all processes have
finished.
For example:
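Suppose, with illustrative values, that processes P1, P2, and P3 arrive at time 0
in that order, with burst times of 24 ms, 3 ms, and 3 ms. Under FCFS, P1 runs
first and waits 0 ms, P2 waits 24 ms, and P3 waits 27 ms, so the average waiting
time is (0 + 24 + 27) / 3 = 17 ms. Had the arrival order been P2, P3, P1, the
average waiting time would be only (0 + 3 + 6) / 3 = 3 ms; this sensitivity to
one long job at the front of the queue is the convoy effect.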
Shortest Job First (SJF) is a scheduling algorithm used in operating systems for
managing the execution of processes. The key idea behind SJF is to prioritize
processes based on their burst time—the time it takes for a process to execute
from start to finish. The process with the shortest burst time is scheduled first.
For example:
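A minimal C sketch of non-preemptive SJF, assuming all processes arrive at time 0 (the burst times are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

/* Non-preemptive SJF with all jobs arriving at time 0:
   sort by burst time, then accumulate waiting times.   */
static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {6, 8, 7, 3};             /* illustrative burst times */
    int n = sizeof burst / sizeof burst[0];
    qsort(burst, n, sizeof burst[0], cmp);  /* shortest job first */

    int t = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += t;   /* each job waits for all shorter jobs before it */
        t += burst[i];
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    /* sorted bursts 3,6,7,8 -> waits 0,3,9,16 -> average 7.00 */
    return 0;
}
```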
Synchronization Problems
These problems are used for testing nearly every newly proposed
synchronization scheme. The following problems of synchronization are
considered as classical problems:
Dining-Philosophers Problem
The Dining Philosophers Problem states that K philosophers are seated around
a circular table with one chopstick between each pair of philosophers.
A philosopher may eat if he can pick up the two chopsticks adjacent to him.
Each chopstick may be picked up by either of its adjacent philosophers, but
not both. This problem involves the allocation of limited resources to a group
of processes in a deadlock-free and starvation-free manner.
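One common deadlock-free strategy (not the only one) is resource ordering: every philosopher picks up the lower-numbered chopstick first. A compact sketch with POSIX semaphores, where N and the number of rounds are illustrative:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5

static sem_t chopstick[N];   /* one binary semaphore per chopstick */

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;
    int first  = left < right ? left  : right;  /* lower-numbered chopstick  */
    int second = left < right ? right : left;   /* higher-numbered chopstick */
    for (int round = 0; round < 3; round++) {
        sem_wait(&chopstick[first]);    /* acquire in a global order ...    */
        sem_wait(&chopstick[second]);   /* ... so no circular wait can form */
        printf("philosopher %d eats\n", i);
        sem_post(&chopstick[second]);
        sem_post(&chopstick[first]);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        sem_init(&chopstick[i], 0, 1);  /* each chopstick starts free */
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```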
1. Mutual Exclusion:
- Semaphore: Semaphores are integer variables used to control access to
critical sections. They have two standard operations: wait (decrement) and
signal (increment). A semaphore can be used to ensure that only one
process at a time can execute a critical section.
2. Hardware Instructions:
- Test-and-Set (TAS) and Compare-and-Swap (CAS): These atomic
hardware instructions provide a way to implement mutual exclusion. TAS
sets a variable and returns its previous value, while CAS updates a variable
only if its current value matches an expected value (see the sketch after
this list).
3. Monitors:
- A monitor is a high-level abstraction that encapsulates shared data and
procedures that operate on that data. Only one process can execute a
monitor procedure at a time, ensuring mutual exclusion.
4. Condition Variables:
- Condition variables let a process wait (typically inside a monitor) until
a particular condition on shared data becomes true; another process signals
the variable when the condition changes.
5. Semaphores:
- Besides mutual exclusion, semaphores can be used for process
synchronization. Counting semaphores allow a specified number of
processes to access a resource simultaneously.
6. Message Passing:
- Processes can communicate and synchronize using message passing.
Synchronization is achieved by sending and receiving messages at
appropriate points in the execution.
7. Deadlock Handling:
- Deadlocks can occur when two or more processes are unable to proceed
because each is waiting for the other to release a resource. Techniques like
deadlock detection, prevention, and recovery are employed to handle
deadlocks.
8. Barrier Synchronization:
- Barriers are synchronization mechanisms that allow multiple processes
to wait for each other at a predefined point in the program. Once all
processes reach the barrier, they are released simultaneously.
9. Readers-Writers Problem:
- In situations where multiple processes need to access shared data, the
readers-writers problem arises. Solutions involve providing exclusive
access to writers while allowing multiple readers to access the data
concurrently.
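As an illustration of the hardware-instruction approach in item 2 above, a test-and-set spinlock can be sketched with C11 atomics (a minimal sketch; it busy-waits, so it is not production-quality):

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

/* Spin until test-and-set returns 0 (the previous value):
   exactly one thread observes the 0, giving mutual exclusion. */
void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy-wait while another thread holds the lock */
}

void release(void) {
    atomic_flag_clear(&lock);   /* set the flag back to 0 */
}
```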
1. Mutual Exclusion:
- Condition: At least one resource must be held in a non-shareable mode,
meaning that only one process can use the resource at a time.
- Description: A process holding such a resource prevents others from
accessing it.
2. Hold and Wait:
- Condition: A process must be holding at least one resource while waiting
to acquire additional resources that are currently held by other processes.
- Description: Processes may hold resources and wait for others, without
releasing the held resources.
3. No Preemption:
- Condition: Resources cannot be preempted (taken away) from a
process; they must be explicitly released by the process holding them.
- Description: A resource can be released only voluntarily by the process
holding it, after that process has completed its task.
4. Circular Wait:
- Condition: There must be a circular chain of two or more processes,
each waiting for a resource held by the next one in the chain.
- Description: A cycle of waiting occurs among processes, with each
waiting for a resource held by the next.
These four conditions together are known as the Coffman conditions, and
they are necessary for the occurrence of a deadlock. If all four conditions
are present simultaneously, a deadlock can occur.
1. External Fragmentation:
- Definition: External fragmentation occurs when free memory or
storage is scattered throughout the system, but there is not enough
contiguous free space to satisfy a memory or storage request.
- Causes:
- Allocation and Deallocation of Variable-Sized Blocks: As processes
or files are allocated and deallocated, memory or storage becomes divided
into small, non-contiguous chunks. Over time, these fragments can
accumulate.
- Variable Partition Sizes: If the memory or storage is divided into
variable-sized partitions, the remaining small gaps between partitions can
lead to external fragmentation.
2. Internal Fragmentation:
- Definition: Internal fragmentation occurs when a process or file is
allocated more memory or storage space than it actually needs, and the
excess space is wasted.
- Causes:
- Fixed Partition Sizes: If the memory or storage is divided into fixed-
sized partitions, a process may be allocated a partition larger than its actual
size. The difference between the allocated space and the actual space
needed is internal fragmentation.
Examples:
- External fragmentation: after many variable-sized allocations and
deallocations, 100 KB of free memory may remain only as scattered 10 KB
holes, so a request for 50 KB of contiguous space fails.
- Internal fragmentation: an 18 KB process placed in a fixed 32 KB
partition wastes the remaining 14 KB inside the partition.
Impact:
- Wasted memory or storage, allocation failures despite sufficient total
free space, and extra overhead for compaction or bookkeeping.
1. Memory Partitioning:
- The entire available memory is initially considered a single, large
block.
- This block is then recursively split into smaller blocks, each being a
power of 2 in size.
2. Power of 2 Blocks:
- The block sizes are typically 2^0, 2^1, 2^2, 2^3, and so on.
- Each block is assigned an address and labelled with its size.
3. Buddy Assignment:
- When a process requests a specific amount of memory, the system
allocates a block that is at least as large as the requested size.
- If the allocated block is larger than needed, it is split into two buddies
of half the size.
- The system keeps track of which blocks are currently allocated and
which are free.
4. Merging Buddies:
- When a process releases memory, the system checks if the adjacent
block is also free and of the same size (i.e., the buddy).
- If both blocks are free and have the same size, they are merged back
into a larger block (a worked example follows this list).
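As a worked illustration (sizes assumed for the example): in a 64 KB pool, a
request for 20 KB is rounded up to the next power of 2, which is 32 KB. The
64 KB block is split into two 32 KB buddies, one of which is allocated; the
12 KB difference between the 32 KB block and the 20 KB request is internal
fragmentation. When the block is later freed and its 32 KB buddy is also free,
the two are merged back into the original 64 KB block.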
1. Internal Fragmentation: The buddy system may still have some internal
fragmentation, especially when a process is allocated a block that is larger
than its exact size.
1. Sequential Access:
- In sequential access, data is read or written in a linear fashion from the
beginning to the end of the file.
2. Random Access:
- Random access allows direct access to any specific location within the
file.
- Each block or record within the file has a unique address or index, and
you can jump to any location without going through the entire file
sequentially.
- Random access is suitable for applications that need quick and direct
access to specific pieces of data.
- Example: Accessing data in a database file using indexing.
The choice of file access method depends on factors such as the nature of
the data, the type of application, and the performance requirements.
Different file systems and programming languages provide support for
various access methods.
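A small C sketch of both access methods (the file name and record size are assumptions for illustration):

```c
#include <stdio.h>

int main(void) {
    FILE *f = fopen("data.bin", "rb");   /* illustrative file name */
    if (!f) return 1;

    /* Sequential access: read records from start to end. */
    char rec[64];                        /* illustrative record size */
    while (fread(rec, sizeof rec, 1, f) == 1) {
        /* process each record in order */
    }

    /* Random access: jump straight to record number 10. */
    long n = 10;
    fseek(f, n * (long)sizeof rec, SEEK_SET);
    if (fread(rec, sizeof rec, 1, f) == 1) {
        /* process only record 10, without reading earlier records */
    }

    fclose(f);
    return 0;
}
```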
1. First-Come-First-Serve (FCFS):
- Principle: Requests are serviced in the order they arrive in the disk
queue.
- Advantage: Simple and easy to implement.
- Disadvantage: Can lead to poor performance, especially when there is
a mix of short and long I/O requests (the "convoy effect").
2. LOOK Algorithm:
- Principle: Similar to SCAN but does not go all the way to the end of
the disk. It reverses direction when there are no more requests in the
current direction.
- Advantage: Reduces response time for requests close to the current
position of the disk arm.
- Disadvantage: Similar to SCAN, it may cause starvation for requests at
one end of the disk.
3. C-LOOK Algorithm:
- Principle: Similar to C-SCAN but without servicing requests in
between the jumps from one end to the other.
- Advantage: Reduces arm movement and provides a more predictable
service time.
- Disadvantage: Similar to C-SCAN, it may result in increased response
time for requests near the disk's ends.
1. Process Management:
- Process Creation and Termination: The OS is responsible for creating,
scheduling, and terminating processes, which are instances of running
programs.
- Process Scheduling: The OS manages the execution of multiple
processes, determining which process gets access to the CPU at any
given time.
2. Memory Management:
- Memory Allocation: The OS allocates and deallocates memory for
processes, ensuring efficient utilization of available memory.
- Virtual Memory: Many operating systems support virtual memory,
allowing processes to use more memory than physically available by
swapping data between RAM and storage.
3. Device Management:
- Device Drivers: The OS interacts with hardware devices through
device drivers, enabling communication between applications and
hardware components.
4. User Interface:
- Command-Line Interface (CLI) or Graphical User Interface (GUI):
The OS provides a user interface for interaction, allowing users to
communicate with the system and run applications.
5. Networking:
- Network Protocols and Services: The OS supports networking
protocols and services, enabling communication between computers on a
network.
- Network Configuration: It manages network configurations, including
IP addresses, routing tables, and network settings.
6. Error Handling:
- Fault Tolerance: The OS may include features to detect and recover
from errors, ensuring system stability.
- Error Logging: It logs system errors and events for diagnostics and
troubleshooting.
1. CPU Utilization:
- Objective: Keep the CPU as busy as possible.
- Rationale: High utilization means the processor spends little time idle,
making full use of the hardware.
2. Throughput:
- Objective: Maximize the number of processes that are completed per
unit of time.
- Rationale: Increased throughput means more tasks are finished in a
given timeframe, improving the overall system performance.
3. Turnaround Time:
- Objective: Minimize the time taken to execute a process from the
submission to its completion.
- Rationale: Short turnaround time indicates quicker response and better
user satisfaction.
4. Waiting Time:
- Objective: Minimize the total time processes spend waiting in the ready
queue.
- Rationale: Reducing waiting time enhances system responsiveness and
efficiency.
5. Response Time:
- Objective: Minimize the time it takes for a system to respond to a user's
input.
- Rationale: Fast response times improve user experience and
interactivity.
6. Fairness:
- Objective: Provide fair access to CPU resources for all processes.
- Rationale: Ensures that no process is unfairly starved of CPU time,
promoting equitable resource allocation.
7. Predictability:
- Objective: Achieve consistent and predictable performance.
8. Balancing Workload:
- Objective: Distribute CPU workload evenly among processes and
system resources.
- Rationale: Balancing workload prevents certain processes from
monopolizing resources, leading to better overall system performance.
1. Sending a Message:
- Definition: Sending a message involves a process or a component
generating a message and transmitting it to another process or component.
- Process: The sending process creates a message containing
information, data, or instructions to be communicated.
- Transmission: The message is then transmitted through a
communication channel, which could be shared memory, a network, or any
other communication medium.
- Destination: The message is delivered to the intended recipient process
or component.
2. Receiving a Message:
- Definition: Receiving a message involves a process or a component
waiting for and accepting a message from another process or component.
- Waiting: The receiving process enters a state of waiting or polling until
a message arrives.
- Arrival: Upon the arrival of a message, the receiving process retrieves
the message from the communication channel or message queue.
- Processing: The receiving process then processes the content of the
received message, using the information, data, or instructions as needed.
These two basic operations of sending and receiving messages form the
foundation of message passing as a communication paradigm. Message
passing enables communication and coordination between independent
processes, allowing them to exchange information and synchronize their
activities. This communication method is essential for building distributed
and concurrent systems, where multiple processes or components need to
collaborate to achieve a common goal. The effectiveness of message
passing often depends on the underlying communication infrastructure and
the mechanisms used to ensure reliable and orderly communication
between processes.
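As a minimal sketch, a Unix pipe provides exactly these two operations between a parent and a child process (the message text is illustrative):

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) return 1;    /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {               /* child: the sender */
        close(fd[0]);
        const char *msg = "hello";   /* illustrative message */
        write(fd[1], msg, strlen(msg) + 1);   /* send the message */
        close(fd[1]);
        _exit(0);
    }

    /* parent: the receiver */
    close(fd[1]);
    char buf[32];
    ssize_t n = read(fd[0], buf, sizeof buf); /* blocks until a message arrives */
    if (n > 0) printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}
```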
1. Isolation of Processes:
- Purpose: Prevent interference between processes and ensure that each
process operates independently.
- Explanation: By assigning separate memory partitions to different
processes, memory partitioning helps isolate the address spaces of
individual processes. This isolation ensures that a process cannot
unintentionally access or modify the memory used by another process.
2. Protection:
- Purpose: Provide a level of protection for each process's memory space.
- Explanation: Memory partitioning allows the operating system to set
access permissions and boundaries for each partition. This helps prevent
unauthorized access to or modification of memory regions, enhancing the
security and stability of the system.
3. Ease of Implementation:
- Purpose: Provide a straightforward and practical approach to memory
management.
- Explanation: Memory partitioning is a relatively simple technique
compared to other memory management methods. It is easy to implement
and manage, making it suitable for various systems, especially those with
fixed-size partitions.
In demand paging, the operating system loads only the necessary pages of a
program into memory at runtime, instead of loading the entire program into
memory at the start.
36. List out the most important issues in the design of a real time
operating system.
1. Predictability:
- Challenge: Ensuring predictable and deterministic behavior in terms of
task execution times and response times to events.
- Importance: Predictability is crucial for meeting hard or soft real-time
requirements where deadlines must be consistently met.
2. Task Scheduling:
- Challenge: Developing efficient and deterministic scheduling
algorithms to prioritize and schedule tasks based on their deadlines and
priorities.
- Importance: Proper task scheduling is essential for meeting timing
requirements and maximizing system utilization.
3. Interrupt Handling:
- Challenge: Minimizing interrupt latency and ensuring that high-
priority interrupts can preempt lower-priority ones promptly.
- Importance: Reducing interrupt latency is critical for responding to
external events in a timely manner.
4. Resource Management:
- Challenge: Managing and allocating system resources such as CPU,
memory, and I/O devices efficiently and predictably.
- Importance: Proper resource management is vital for meeting real-time
constraints and preventing resource contention.
5. Concurrency Control:
- Challenge: Implementing mechanisms for managing concurrent access
to shared resources while maintaining data consistency.
- Importance: Effective concurrency control is necessary to prevent data
corruption and ensure that tasks meet their deadlines.
6. Communication Mechanisms:
- Challenge: Designing efficient communication mechanisms for inter-
task communication and synchronization.
- Importance: Effective communication is crucial for coordinating tasks
and sharing information in a real-time system.
7. Memory Management:
- Challenge: Implementing memory management strategies that
minimize fragmentation and provide predictable memory access times.
- Importance: Efficient memory management is vital for maintaining
system stability and meeting timing constraints.
8. Fault Tolerance:
- Challenge: Incorporating mechanisms for detecting and recovering
from faults to ensure system reliability.
- Importance: Fault tolerance is critical in safety-critical applications
where system failures can have severe consequences.
9. Clock Management:
- Challenge: Ensuring accurate and precise timekeeping for tasks and
events in the system.
- Importance: Accurate timekeeping is essential for meeting deadlines
and coordinating time-sensitive activities.
Example:
Let's take a disk with 180 tracks (0-179) and the disk queue having input/output
requests in the following order: 75, 90, 40, 135, 50, 170, 65, 10. The initial head
position of the Read/Write head is 45, and it is moving toward the left-hand
side. Find the total number of track movements of the Read/Write head using
the SCAN algorithm.
Solution:
The head starts at track 45 and moves left, servicing 40 and 10, then reaches
the end of the disk at track 0; it then reverses and moves right, servicing 50,
65, 75, 90, 135, and 170.
Total track movements = (45 − 0) + (170 − 0) = 45 + 170
= 215
FCFS stands for First-Come-First-Serve. It is the simplest of all the disk
scheduling algorithms. It is an OS disk scheduling algorithm that services the
queued requests in the order in which they arrive in the disk queue: the request
that arrives first is serviced first. It is managed with a FIFO queue.
Example:
Let's take a disk with 180 tracks (0-179) and the disk queue having input/output
requests in the following order: 75, 90, 40, 135, 50, 170, 65, 10. The initial head
position of the Read/Write head is 45. Find the total number of track movements of
the Read/Write head using the FCFS algorithm.
Solution:
Head movement: 45 → 75 → 90 → 40 → 135 → 50 → 170 → 65 → 10
Total track movements = 30 + 15 + 50 + 95 + 85 + 120 + 105 + 55
= 555
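The same computation can be sketched as a small C loop over the request queue (values taken from the example above):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int queue[] = {75, 90, 40, 135, 50, 170, 65, 10};
    int n = sizeof queue / sizeof queue[0];
    int pos = 45, total = 0;          /* initial head position */

    for (int i = 0; i < n; i++) {     /* service requests in arrival order */
        total += abs(queue[i] - pos);
        pos = queue[i];
    }
    printf("total track movements = %d\n", total);   /* prints 555 */
    return 0;
}
```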
1. Blocking:
- Definition: Blocking refers to the state in which a process is temporarily
stopped or "blocked" while waiting for an I/O operation to complete.
- Explanation: When a process initiates an I/O operation, it may have to wait
until the operation is finished before continuing its execution. During this
waiting period, the process is said to be blocked. Blocking is typical in
synchronous I/O operations, where the process directly waits for the I/O to
complete before proceeding.
2. Buffering:
- Definition: Buffering involves temporarily storing data in a buffer before it
is consumed or processed by a program or transmitted to an I/O device.
- Explanation: Instead of directly transferring data between processes and I/O
devices, a buffer is used as an intermediate storage area. This allows for more
efficient handling of data transfers, especially when there is a difference in data
transfer rates between the producer (e.g., a process) and the consumer (e.g., an
I/O device). Buffering helps decouple the production and consumption rates,
reducing the likelihood of blocking due to speed mismatches (a short sketch
follows below).
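A minimal sketch of buffering in C, assuming illustrative file names and a 4 KB buffer: data is produced into an intermediate buffer in large chunks and consumed from it, decoupling the two transfer rates:

```c
#include <stdio.h>

int main(void) {
    FILE *in = fopen("input.bin", "rb");    /* illustrative producer */
    FILE *out = fopen("copy.bin", "wb");    /* illustrative consumer */
    if (!in || !out) return 1;

    char buf[4096];                         /* intermediate buffer */
    size_t n;
    /* Each fread fills the buffer in one large, efficient transfer;
       the fwrite then consumes from the buffer at its own pace.    */
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);

    fclose(in);
    fclose(out);
    return 0;
}
```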
Key Differences:
- Timing:
- Blocking: Occurs during the actual I/O operation, where the process waits
for the operation to complete.
- Buffering: Involves storing data in an intermediate buffer, often before or
after the I/O operation.
- Concurrency:
- Blocking: Can lead to inefficiencies, as the process is halted until the I/O
operation finishes.
- Buffering: Supports concurrent execution, allowing processes to continue
working while data is being transferred in the background.
- Example:
- Blocking: A process reading from a file may be blocked until the requested
data is read from the storage device.
- Buffering: A printer spooler may use buffering to store print jobs
temporarily, allowing the printer to continue processing jobs even if the
printing speed is slower than the data generation speed.