
Operating Systems

3rd SEM Exam – Important Questions

MODULE – 2

1. What is interprocess communication?

Ans:
Interprocess Communication (IPC): Interprocess communication refers to the methods and
techniques used by processes in a computer system to share information, coordinate their
actions, and collaborate on tasks. Processes executing in a system can be categorized as either
independent or cooperating processes:

1. Independent Processes:
• These are processes that operate in isolation and do not interact with other processes. They
cannot affect other processes or be affected by them.
• Independent processes run concurrently but do not share resources or communicate with each
other.
2. Cooperating Processes:
• These are processes that can influence or be influenced by other processes executing in the
system.
• Cooperating processes may share resources, exchange data, or synchronize their activities to
achieve common goals.

Reasons for Cooperation Among Processes:

1. Information Sharing:
• Processes may need to access shared resources such as files or databases. IPC facilitates sharing
information among processes, ensuring that data is accessible to all users simultaneously.
2. Computation Speedup:
• By dividing a problem into smaller sub-tasks, multiple processes can work on them
simultaneously, potentially reducing the overall execution time. This is particularly advantageous
when multiple processors or cores are available.
3. Modularity:
• Systems can be structured into modular components (modules), with each module performing
a specific task. IPC allows modules to communicate and exchange information, enabling modular
and flexible system design.
4. Convenience:
• IPC provides a convenient means for processes, including those belonging to a single user, to
work on multiple tasks concurrently. By sharing information and resources, processes can
perform diverse activities efficiently.

Overall, IPC facilitates collaboration and interaction among processes, enabling efficient resource
utilization, faster computation, modularity, and convenience in system operation. It plays a vital
role in modern computing environments, supporting various applications and scenarios where
processes need to cooperate and communicate effectively.
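
As an illustration (not part of the original notes), the following C sketch shows one common IPC
mechanism on POSIX systems: a parent process sends a short message to a cooperating child
process through an anonymous pipe. The message text and buffer size are arbitrary choices made
for this example.

/* Minimal sketch: IPC between a parent and a cooperating child via a POSIX pipe. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                      /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                 /* child: cooperating process that reads */
        close(fd[1]);               /* child does not write */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
    } else {                        /* parent: writes the shared information */
        close(fd[0]);               /* parent does not read */
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                 /* wait for the child to finish */
    }
    return 0;
}

Shared memory and message passing are the two general IPC models; the pipe above behaves like
a simple message-passing channel between related processes.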

3. Differentiate client-server computing and peer-to-peer computing.

Ans:
3. Discuss the implementation of IPC using a message-passing system in detail.
4. Explain the multithreading models with a neat diagram.
Ans:
7. Explain the various benefits of multithreading programming.
Ans:
The benefits of multithreading programming:

1. Faster Execution: Multithreading allows tasks to run concurrently, speeding up overall execution.
2. Responsive Applications: With multithreading, applications remain responsive even when
performing intensive tasks.
3. Efficient Resource Use: Multithreading optimizes CPU and memory usage by running multiple
tasks simultaneously.
4. Simplified Design: It simplifies code organization by breaking tasks into smaller, more
manageable threads.
5. Improved Throughput: Multithreading enhances system throughput by processing multiple tasks
at once.
6. Adaptability: Multithreading makes applications adaptable to varying hardware resources, scaling
performance as needed.
7. Asynchronous Operations: It enables asynchronous processing, allowing tasks to run
independently for faster completion.
8. Error Isolation: Multithreading isolates errors to specific threads, preventing them from affecting
the entire application.
9. Task Parallelism: It supports parallel execution of independent tasks, speeding up overall
processing.
10. Real-Time Processing: Multithreading prioritizes critical tasks for real-time performance without
delaying other operations.

These points summarize how multithreaded programming improves application performance,
responsiveness, and resource utilization.
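
As a minimal sketch (assuming a POSIX system with pthreads; not part of the original notes), the
following C program illustrates the computation-speedup and task-parallelism points: two worker
threads each sum half of an array concurrently, and the main thread combines their partial
results. Compile with gcc -pthread.

/* Minimal sketch: splitting a sum across two POSIX threads. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static long data[N];

struct range { long start, end, sum; };

static void *partial_sum(void *arg) {
    struct range *r = (struct range *)arg;
    r->sum = 0;
    for (long i = r->start; i < r->end; i++)
        r->sum += data[i];
    return NULL;
}

int main(void) {
    for (long i = 0; i < N; i++) data[i] = 1;      /* dummy workload */

    struct range halves[2] = { {0, N / 2, 0}, {N / 2, N, 0} };
    pthread_t tid[2];

    /* run both halves concurrently, then wait for both to finish */
    for (int t = 0; t < 2; t++)
        pthread_create(&tid[t], NULL, partial_sum, &halves[t]);
    for (int t = 0; t < 2; t++)
        pthread_join(tid[t], NULL);

    printf("total = %ld\n", halves[0].sum + halves[1].sum);
    return 0;
}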

15. Explain the CPU scheduling criteria.

Ans:
CPU Scheduling
• Four situations under which CPU scheduling decisions take place:
1). When a process switches from the running state to the waiting state. For ex: an I/O request.
2). When a process switches from the running state to the ready state. For ex: when an interrupt
occurs.
3). When a process switches from the waiting state to the ready state. For ex: completion of I/O.
4). When a process terminates.
Scheduling under 1 and 4 is non-preemptive. Scheduling under 2 and 3 is preemptive.
SCHEDULING CRITERIA:
The choice of algorithm for a particular situation depends on the properties of the various
algorithms. Many criteria have been suggested for comparing CPU-scheduling algorithms. The
criteria include the following:

CPU Utilization:

1. Keep CPU busy: Aim to maximize CPU usage to ensure efficient processing.
2. Range of utilization: Ideally, CPU utilization should range from 40% to 90% in real systems.
3. Conceptual range: Theoretical CPU utilization can range from 0% to 100%.
4. System load: Higher CPU utilization indicates a heavily loaded system, while lower utilization
suggests a lightly loaded system.

Throughput:

1. Measure of work: Throughput measures the number of processes completed per unit of time.
2. Work completion rate: Higher throughput means more work is being done in a given time frame.
3. Varying rates: Throughput can vary from one process per hour for long tasks to ten processes per
second for short tasks.
4. Indication of system efficiency: Increased throughput indicates efficient CPU utilization and task
completion.

Turnaround Time:
1. Process completion time: Turnaround time measures the time taken to complete a process from
submission to completion.
2. Inclusive interval: Includes the time spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.
3. Importance: Lower turnaround time indicates faster process completion and improved system
efficiency.
4. Efficiency indicator: Monitoring turnaround time helps evaluate the effectiveness of CPU scheduling
algorithms.

Waiting Time:

1. Time spent in queue: Waiting time measures the duration a process spends in the ready queue.
2. Queue delay: It reflects the time a process waits for CPU allocation after arriving in the ready queue.
3. Impact on performance: Lower waiting time implies reduced idle time and faster task execution.
4. Scheduling impact: CPU scheduling algorithms aim to minimize waiting time to enhance system
responsiveness and performance.

Response Time:

1. Time to start responding: Response time measures the time it takes from submitting a request until
the system starts producing the first response.
2. Interactive system measure: In interactive systems, response time is crucial for providing a smooth
user experience.
3. Early output availability: Processes may start producing output early and continue computing while
previous results are being output to the user.
4. Importance: Response time reflects the system's ability to promptly react to user requests and
provide timely feedback, enhancing user satisfaction and system usability.
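
To make the turnaround-time and waiting-time definitions concrete, here is a small C sketch (with
made-up arrival and burst times, not taken from the notes) that computes both metrics for three
processes scheduled first-come, first-served: waiting time is the time spent in the ready queue,
and turnaround time is completion time minus submission time.

/* Minimal sketch: waiting and turnaround times for a FCFS schedule.
   Arrival and burst times are hypothetical values chosen for illustration. */
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2};      /* hypothetical arrival times */
    int burst[]   = {5, 3, 8};      /* hypothetical CPU burst times */
    int n = 3, clock = 0;
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {   /* FCFS: processes run in arrival order */
        if (clock < arrival[i]) clock = arrival[i];
        int waiting    = clock - arrival[i];          /* time spent in the ready queue */
        int completion = clock + burst[i];
        int turnaround = completion - arrival[i];     /* submission to completion */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
        total_wait += waiting;
        total_tat  += turnaround;
        clock = completion;
    }
    printf("avg waiting=%.2f avg turnaround=%.2f\n", total_wait / n, total_tat / n);
    return 0;
}

For these sample values the averages come out to 3.33 time units of waiting and 8.67 time units
of turnaround.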
PROBLEMS
