Osy Model Ans DH

The document discusses various components of operating systems, including memory management, I/O system management, process management, and file management. It also explains concepts like process control blocks, scheduling algorithms, inter-process communication, and memory management techniques such as dynamic relocation and swapping. Additionally, it covers user-level and kernel-level threads, their advantages and disadvantages, and operating system tools like user management and performance monitoring.


Q.3 Explain components of OS.
ANS:
1. Main Memory Management:
This component manages the system's RAM, ensuring efficient allocation and deallocation of memory
to programs. It tracks which parts of memory are in use and by whom. It prevents programs from
accessing each other’s memory without permission. This ensures smooth and secure multitasking.
2. I/O System Management:
The OS controls input and output devices like keyboards, mice, and printers. It uses I/O drivers
and buffers to manage data transfer between hardware and software. This ensures efficient and
error-free communication. It abstracts hardware complexity, making it easier for programs to
access I/O devices.
3. Process Management:
The OS manages all running programs (processes), handling creation, scheduling, and
termination. It ensures fair allocation of CPU time to processes. It also manages inter-process
communication and prevents deadlocks. This allows multiple processes to run smoothly on the
system.
4. File Management:
This component organizes and manages data in files and directories on storage devices. It
provides functions like creating, reading, writing, and deleting files. The OS also controls
access permissions to ensure data security. It keeps track of file locations and manages storage
efficiently.

2. Explain process control block (PCB) with diagram.


Q. Compare time-shared OS and multiprogramming OS.

Time-Shared OS | Multiprogramming OS
Focuses on time-sharing. | Focuses on CPU efficiency.
Interactive systems. | Batch processing systems.
Uses time slices (quantum). | Switches processes on I/O wait.
Requires preemption. | Can be non-preemptive.
For multi-user tasks. | For multiple programs.
Examples: UNIX, Windows. | Examples: OS/360, early UNIX.

Q. Explain 4 types of system calls.


Ans: Already answered in Question 2.

Q. Define: Process, Program.
A process is defined as an entity which represents the basic unit of work to be implemented in the system.
A program is a piece of code, which may be a single line or millions of lines.

Q. Explain with diagram single-level directory structure and two-level directory structure with advantages and disadvantages.

Q.4
1. Explain Round Robin algorithm with a suitable example.

Definition:
Round Robin (RR) is a CPU scheduling algorithm that assigns a fixed time slice
(or quantum) to each process in the ready queue. Processes are executed in a
circular order, ensuring that no single process monopolizes the CPU.

Round Robin is a CPU scheduling method (a small simulation is sketched after this list).

• Each process gets a fixed time slice (time quantum) to run.
• If the process isn't done in that time, it goes to the back of the ready queue.
• The CPU then gives the next process a turn.
• This continues until all processes are finished.
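
A minimal sketch of this rule, assuming all processes arrive at time 0; the process names, burst times and the quantum of 2 are hypothetical values chosen only for illustration:

from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin scheduling for processes that all arrive at time 0.

    bursts  -- dict mapping process name to CPU burst time
    quantum -- fixed time slice given to each process per turn
    Returns the completion time of each process.
    """
    remaining = dict(bursts)          # CPU time still needed by each process
    queue = deque(bursts)             # ready queue, in arrival order
    time, completion = 0, {}

    while queue:
        p = queue.popleft()                   # next process gets the CPU
        run = min(quantum, remaining[p])      # run for one quantum or less
        time += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = time              # process finished
        else:
            queue.append(p)                   # not finished: back of the queue
    return completion

# Hypothetical example: three processes with burst times 5, 3 and 8, quantum 2.
print(round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=2))

Since every process arrives at time 0 here, each process's turnaround time equals its printed completion time.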
Q. With neat diagram explain inter-process communication model.
1. Shared Memory
Shared memory allows processes to share a portion of memory for communication (a minimal code sketch follows the list below).
• Processes write data to this memory and read from it directly.
• It is faster because there’s no OS intervention during data exchange.
• Both processes must be on the same computer.
• Requires synchronization to prevent data overwriting or conflicts.
• Example: Process A writes sensor data to shared memory; Process B reads it
for analysis.
• Shared memory is best for large, frequent data sharing.
• Coordination tools like semaphores or mutexes are often needed.
• It requires the OS to allocate and manage the shared memory area.
• Used in applications like real-time systems or databases.
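
A minimal sketch of the shared-memory model using Python's multiprocessing.shared_memory module; the block size and the "23.5C" sensor value are hypothetical, and joining the writer before starting the reader stands in for a real semaphore:

from multiprocessing import Process, shared_memory

def writer(name):
    # Process A: attach to the existing shared block and write sensor data into it.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"23.5C"          # hypothetical sensor reading
    shm.close()

def reader(name):
    # Process B: attach to the same block and read the data directly.
    shm = shared_memory.SharedMemory(name=name)
    print("read from shared memory:", bytes(shm.buf[:5]).decode())
    shm.close()

if __name__ == "__main__":
    # The OS allocates the shared region; both processes attach to it by name.
    shm = shared_memory.SharedMemory(create=True, size=16)
    w = Process(target=writer, args=(shm.name,)); w.start(); w.join()
    r = Process(target=reader, args=(shm.name,)); r.start(); r.join()
    shm.close(); shm.unlink()       # release the shared region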
2. Message Passing
Message passing allows processes to communicate by sending and receiving messages (a minimal code sketch follows the list below).
• It doesn’t require shared memory between processes.
• The operating system manages the message exchange.
• Messages can contain data like commands or information.
• Communication happens through functions like send() and
receive().
• It is useful for processes running on different computers.
• Example: In a chat app, a message from user A goes to user B.
• It is slower because the OS controls the communication process.
• Synchronization is simpler as messages are handled in order.
• Used in distributed systems like client-server models.
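
A minimal sketch of the message-passing model; multiprocessing.Queue is used here only as a stand-in for the send()/receive() primitives, and the message text is hypothetical:

from multiprocessing import Process, Queue

def user_a(q):
    # Sender: the message is handed to the OS-managed channel, like send().
    q.put("hello from user A")

def user_b(q):
    # Receiver: blocks until a message arrives, like receive().
    print("user B got:", q.get())

if __name__ == "__main__":
    q = Queue()                      # message channel managed for us, no shared memory needed
    a = Process(target=user_a, args=(q,))
    b = Process(target=user_b, args=(q,))
    a.start(); b.start()
    a.join(); b.join()

In a real distributed (client-server) system the channel would be a network socket rather than an in-machine queue, but the send/receive pattern is the same.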

Q. Explain the following terms with respect to memory management: i) Dynamic relocation ii) Swapping
i) Dynamic Relocation: When a program gets swapped out to disk, it is not always possible that, when it is swapped back into main memory, it occupies its previous memory location, since that location may still be occupied by another process. We may need to relocate the process to a different area of memory. Thus there is a possibility that a program may be moved in main memory due to swapping.
ii) Swapping: Swapping is a mechanism in which a process can be temporarily swapped out of main memory (moved) to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory.

Q. Describe how a context switch is executed by the operating system.

• When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process. This task is known as a context switch.
• A context switch happens when the CPU changes from running one process to another. It allows the operating system to run multiple processes at the same time.
• First, the OS pauses the current process and saves its state (such as its registers and current work) in the process control block (PCB).
• Next, the OS picks a new process to run from the ready queue and loads that process's saved state. The CPU then starts running the new process.
• This switch happens very quickly, and users don't notice it. Context switching helps share CPU time but takes a small amount of extra time.
A simplified sketch of these steps appears below.
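
A toy sketch of the save/restore steps above; the PCB fields and the single "pc" register are hypothetical, and this only illustrates the idea, not how a real kernel is written:

def context_switch(cpu, ready_queue):
    """Illustrative steps of a context switch (not a real kernel).

    cpu         -- dict holding the 'registers' of the currently running process
    ready_queue -- list of PCBs (dicts) waiting for the CPU
    """
    old_pcb = cpu["current"]
    old_pcb["saved_registers"] = dict(cpu["registers"])   # 1. save old state into its PCB
    old_pcb["state"] = "ready"
    ready_queue.append(old_pcb)                           # old process returns to the ready queue

    new_pcb = ready_queue.pop(0)                          # 2. scheduler picks the next process
    cpu["registers"] = dict(new_pcb["saved_registers"])   # 3. load the new process's saved state
    new_pcb["state"] = "running"
    cpu["current"] = new_pcb                              # 4. CPU now runs the new process
    return cpu

# Hypothetical PCBs with a single fake register each.
p1 = {"pid": 1, "state": "running", "saved_registers": {"pc": 100}}
p2 = {"pid": 2, "state": "ready",   "saved_registers": {"pc": 200}}
cpu = {"current": p1, "registers": {"pc": 105}}
context_switch(cpu, [p2])
print(cpu["current"]["pid"], cpu["registers"])   # now running process 2 at its saved pc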

Q. Explain partitioning and its types.


1. Static Partitioning (Fixed):
• This is the oldest and simplest technique used to put more than one process in the main memory.
• In static partitioning, the memory is divided into fixed-sized parts when the
computer starts.
• Each part can hold one program (process) at a time.
• The size of the parts cannot change later.
• If a program is smaller than a partition, some memory is wasted.
• If a program is too big, it cannot fit into the partition.
2. Dynamic Partitioning (Variable)
• In dynamic partitioning, the memory is divided based on the size of the
programs.
• Each program gets exactly the amount of memory it needs.
• This method reduces memory waste compared to static partitioning.
• However, over time, gaps of unused memory (called fragmentation) can
appear.
• The operating system may need to rearrange memory to fix these gaps (an allocation sketch follows below).
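A small sketch of dynamic-partition allocation, assuming the common first-fit placement strategy (first-fit is an illustrative choice, not stated in the answer above); the hole list and request size are hypothetical:

def first_fit_allocate(holes, request):
    """Allocate `request` KB from the first free hole big enough (first-fit).

    holes   -- list of (start, size) tuples describing free memory, in address order
    request -- size of the arriving process in KB
    Returns (start_address, new_hole_list), or (None, holes) if nothing fits.
    """
    for i, (start, size) in enumerate(holes):
        if size >= request:
            new_holes = list(holes)
            leftover = size - request
            if leftover:
                new_holes[i] = (start + request, leftover)  # shrink the hole
            else:
                del new_holes[i]                            # hole used up exactly
            return start, new_holes
    return None, holes                                      # external fragmentation: no hole fits

# Hypothetical free holes of 100 KB, 50 KB and 300 KB.
holes = [(0, 100), (200, 50), (400, 300)]
addr, holes = first_fit_allocate(holes, 120)
print(addr, holes)    # the 120 KB process goes into the 300 KB hole at address 400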

Q.5 Attempt any 2 (6M)


1. Explain LRU page replacement algorithm for the following reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1. Calculate the page faults.

• The LRU (Least Recently Used) algorithm manages pages in memory when there isn't enough space to load a new page. It replaces the page that hasn't been used for the longest time. Here's how it works:
• Memory Frames: The computer has a limited number of spaces (called frames) to store pages.
• Page Request: When a page is requested:
◦ If the page is already in memory, it's called a hit (no replacement needed).
◦ If the page is not in memory, it's a page fault, and the new page must be loaded.
• Replace a Page:
◦ If all frames are full, the algorithm looks for the page that was used longest ago and removes it to make space.

Steps to Follow:
• A reference string tells the order in which pages are requested:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
• Assume the number of frames (spaces for pages in memory). Let’s take 3 frames
for this example.
• Start placing pages in the frames as they are requested.
• If the requested page is already in memory, it’s a hit (no page fault).
• If the page is not in memory, it’s a page fault and the page must be loaded.
• When memory is full, use the LRU rule to remove the page that was used least recently (a simulation for this reference string is sketched below).
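A minimal simulation of the LRU procedure above, using the 3 frames assumed in the example; on the given reference string it reports 12 page faults:

def lru_page_faults(reference_string, num_frames):
    """Count page faults for LRU replacement with the given number of frames."""
    frames = []                     # kept in order: least recently used first
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)     # hit: just refresh its recency
        else:
            faults += 1             # page fault
            if len(frames) == num_frames:
                frames.pop(0)       # evict the least recently used page
        frames.append(page)         # this page is now the most recently used
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_page_faults(ref, 3))      # 12 page faults with 3 frames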
2. Optimal and FIFO page replacement for the reference string:
5, 6, 7, 8, 9, 7, 8, 5, 9, 7, 8, 7, 9, 6, 5, 6
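
The worked answer is not shown here; as a sketch only (assuming 3 frames, which the question does not state), FIFO and Optimal can be simulated in the same style as the LRU code above:

def fifo_page_faults(refs, num_frames):
    """FIFO: evict the page that has been in memory the longest."""
    frames, faults = [], 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)            # oldest page leaves first
            frames.append(page)
    return faults

def optimal_page_faults(refs, num_frames):
    """Optimal: evict the page whose next use is farthest in the future."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            future = refs[i + 1:]
            # A page never used again gets distance infinity, so it is replaced first.
            victim = max(frames, key=lambda p: future.index(p) if p in future else float("inf"))
            frames[frames.index(victim)] = page
    return faults

refs = [5, 6, 7, 8, 9, 7, 8, 5, 9, 7, 8, 7, 9, 6, 5, 6]
print("FIFO:", fifo_page_faults(refs, 3), "Optimal:", optimal_page_faults(refs, 3))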

Q. The jobs are scheduled for execution as follows:


Q. Explain the following terms with respect to scheduling:
i) CPU utilization ii) Throughput iii) Turnaround time iv) Waiting time
• CPU utilization: In multiprogramming the main objective is to keep CPU as busy
as possible. CPU utilization can range from 0 to 100 percent.
• Throughput: It is the number of processes that are completed per unit time. It is a measure of the work done in the system. Throughput depends on the execution time required for each process.
• Turnaround time: The time interval from the submission of a process to the completion of that process is called turnaround time. It is the sum of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O operations.
• Waiting time: It is the sum of the time periods a process spends waiting in the ready queue. When a process is selected from the job pool, it is loaded into main memory. A process waits in the ready queue until the CPU is allocated to it.

Q.6 Attempt any TWO of the following:


1. What is the average turnaround time for the following processes using:
i) FCFS scheduling algorithm
ii) SJF non preemptive scheduling algorithm
iii) Round Robin Scheduling algorithm

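The process table for this question is not reproduced above, so the following is a sketch only: assuming all processes arrive at time 0 and using hypothetical burst times, average turnaround time for FCFS and non-preemptive SJF can be computed as below (Round Robin turnaround can be read off the completion times returned by the round_robin sketch earlier).

def avg_turnaround(bursts, order):
    """Average turnaround time when all processes arrive at time 0.

    bursts -- dict of process name -> burst time
    order  -- the order in which the CPU runs the processes to completion
    """
    time, total = 0, 0
    for p in order:
        time += bursts[p]        # completion time of p
        total += time            # turnaround = completion - arrival (arrival is 0 here)
    return total / len(order)

# Hypothetical processes (the question's actual table is not reproduced here).
bursts = {"P1": 6, "P2": 8, "P3": 3}
fcfs_order = ["P1", "P2", "P3"]                 # FCFS: order of arrival
sjf_order = sorted(bursts, key=bursts.get)      # SJF: shortest burst first
print("FCFS:", avg_turnaround(bursts, fcfs_order))
print("SJF :", avg_turnaround(bursts, sjf_order))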

Q. Explain user-level threads and kernel-level threads with their advantages and disadvantages.
User-Level Threads

In a user thread, all of the work of thread management is done by the application and the kernel is not
aware of the existence of threads.

The thread library contains code for creating and destroying threads, for passing message and data
between threads, for scheduling thread execution and for saving and restoring thread contexts.

The application begins with a single thread and begins running in that thread. User-level threads are generally fast to create and manage.
Advantages:
1. Faster to create and manage.
2. Less overhead.
3. Thread switching does not require kernel-mode privileges.
4. User-level threads can run on any operating system.
5. Scheduling can be application specific.

Disadvantages:
1. Limited use of multi-core processors: the kernel sees the process as a single unit, so user-level threads cannot run on multiple cores at once.
2. Blocking issue: if one user-level thread makes a blocking system call, the entire process is blocked.
3. No true parallelism.
4. It is not appropriate for a multiprocessor system.

Kernel-Level Threads
In kernel-level threads, thread management is done by the kernel, so the operating system is aware of every thread.
Advantages:
1. Better use of multi-core processors.
2. Independent execution of threads.
3. The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
4. If one thread in a process is blocked, the kernel can schedule another thread of the same process.
5. Kernel routines themselves can be multithreaded.

Disadvantages:
1. Kernel threads are generally slower to create and manage than user threads.
2. Transfer of control from one thread to another within the same process requires a mode switch to the kernel.
3. More overhead.
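
For illustration only (not part of the model answer): on CPython, each thread created with the threading module is backed by a kernel-level thread, so the OS schedules each one independently. A minimal sketch:

import threading

def worker(name):
    # Each call runs in its own thread; the OS can schedule or block each one
    # independently of the others.
    print(f"{name} running in thread {threading.get_ident()}")

threads = [threading.Thread(target=worker, args=(f"T{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()          # wait for all threads to finish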

Q. Enlist the operating system tools. Explain any two in detail.
Ans: Following are the operating system tools:
• User Management
• Security Policy
• Device Management
• Performance Monitor

User Management:
• This tool helps you control who can use the computer.
• You can create user accounts with usernames and passwords.
• You can decide what each user can do, like what files they can open or what programs they can
use.
• For example, an admin user can have full control of the system, while a regular user may only
use some programs.
• This tool helps keep the computer safe by making sure only the right people can use it.
• Security Policy:
• This tool helps protect the computer from hackers or unwanted access.
• It lets you set rules for things like passwords, firewalls, and who can use certain parts of the
system.
• You can make sure users have strong passwords and limit access to important files.
• This helps keep your computer and data safe.
• Device Management:
• This tool helps manage the devices connected to the computer, like printers, keyboards, and
monitors.
• It checks if the devices are working correctly.
• If a device isn’t working, it can help install the right software (called drivers) to make it work.
• It helps you fix problems with devices and make sure they run smoothly.
• Performance Monitor:
• This tool checks how well the computer is working.
• It looks at how much memory, CPU, and other parts of the system are being used.
• It can show if the computer is slow or if something is using too many resources, like memory or
CPU.
• This tool helps find problems and makes the computer work better by showing what needs
fixing.

Q. Explain multithreading models in detail.


Many-to-One Model:
• In the Many-to-One model, many user-level threads are mapped to a single kernel thread, so they all run inside one process.
• All the threads share the same memory and resources.
• The problem is that if one thread blocks, it can stop the whole program.
• The operating system doesn't manage the individual threads, so it can't run them on different processors to speed up the work.
• Example: Imagine many workers doing the same job, but they are all in one room. If one worker stops, everyone stops.

Advantages:
1) Fast switching: switches quickly between threads because it doesn't involve the OS.
2) Low overhead: uses fewer system resources.
Disadvantages:
1) Blocking issue: if one thread is blocked, all threads can be blocked.
2) No multi-core use: only one thread runs at a time, so it doesn't use multi-core processors effectively.

One-to-One Model:
• In the One-to-One model, each user-level thread is mapped to its own kernel thread.
• The operating system knows about these threads and can run them on different processors.
• This helps make the program run faster because it can use more than one processor at the same time.
• If one thread has a problem, only that thread stops, and the others keep working.
• Example: Imagine one boss giving work to many workers. If one worker has a problem, the other workers can still keep working.

Advantages:
1) Better performance: the threads can run on multiple processors (CPUs) at the same time, which makes the program faster and more efficient.
2) No full program failure: if one thread has a problem or stops, the other threads can continue working. This makes the program more reliable.
3) Efficient use of resources: the operating system manages the threads, so it can balance the workload and use system resources (like CPU and memory) more effectively.

Disadvantages:
1) More overhead.
2) Slower switching.
