Assignment 1201

The document outlines various types of operating systems, including batch, time-sharing, distributed, real-time, network, and mobile operating systems, along with their structures such as monolithic, layered, microkernel, modular, and client-server models. It also discusses CPU scheduling algorithms, interprocess communication mechanisms, the importance of the Process Control Block (PCB), file access methods, and I/O control strategies. Each section provides detailed explanations and examples relevant to the functioning and management of operating systems.


ROLL NUMBER – 2414106893

PROGRAMME

BACHELOR OF COMPUTER APPLICATION (BCA)

SEMESTER II

COURSE CODE & NAME DCA1201- OPERATING SYSTEM

NAME- NIKHIL AHIRWAR

ASSIGNMENT SET – 2

Ans 1-

Types of Operating Systems -

1. Batch Operating Systems--


1- Executes batches of jobs without user interaction.
2- Used in environments where tasks are repetitive.
3- Examples: IBM mainframe systems.
2. Time-Sharing Operating Systems--
1- Allows multiple users to share system resources simultaneously.
2- Provides a quick response by allocating time slices to tasks.
3- Examples: UNIX, Multics.
3. Distributed Operating Systems--
1- Manages a group of independent computers and makes them appear as a single
system.
2- Promotes resource sharing and fault tolerance.
3- Examples: Google’s Kubernetes, Amoeba.
4. Real-Time Operating Systems (RTOS)--
1- Designed for applications requiring strict timing constraints.
2- Common in embedded systems and industrial control systems.
3- Types:
 Hard RTOS-- Guarantees task completion within deadlines.
 Soft RTOS-- Tries to meet deadlines but without strict guarantees.
4- Examples: VxWorks, FreeRTOS.
5. Network Operating Systems (NOS)--
1- Manages and coordinates resources over a network.
2- Provides functionalities like file sharing, remote login, and printer access.
3- Examples: Microsoft Windows Server, Novell NetWare.
6. Mobile Operating Systems--
1- Optimized for mobile devices with touchscreen interfaces.
2- Provides support for mobile-specific features like GPS, cameras, and apps.
3- Examples: Android, iOS.

Operating System Structures--


Operating systems are structured to manage resources efficiently and support various
functionalities. Common structures include--

1. Monolithic Structure
1- All OS services run in kernel space as a single process.
2- Simple and fast but less modular and difficult to debug.
2. Layered Structure
1- Divides the OS into layers, each built on top of the previous one.
2- Ensures modularity and simplifies debugging.
3. Microkernel Structure
1- Only essential services (e.g., process management, memory management) run in the kernel.
2- Other services run in user space, improving reliability and security.
4. Modular Structure
1- Uses modules that can be dynamically loaded or unloaded, combining
monolithic and microkernel features.
2- Provides flexibility and ease of extension.
5. Client-Server Model
1- OS functions are implemented as separate servers interacting with clients.
2- Promotes distribution and fault tolerance.

Ans 2 –

CPU Scheduling Algorithms--

CPU scheduling is the process of determining which process in the ready queue will be
allocated the CPU. The choice of algorithm affects system performance, responsiveness, and
throughput. Key algorithms include:

1. First-Come, First-Served (FCFS)

1- Processes are executed in the order of their arrival.


2- Simple but can cause the convoy effect, where short processes wait behind longer ones.
3- Non-pre-emptive.
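A minimal sketch of how FCFS waiting times come out, assuming three hypothetical processes that all arrive at time 0 (the burst times below are illustrative, not from the text):

```python
# Sketch: average waiting time under FCFS, assuming all processes
# arrive at time 0 with the (hypothetical) burst times below.
bursts = [24, 3, 3]  # P1, P2, P3 in arrival order

waiting = []
elapsed = 0
for b in bursts:
    waiting.append(elapsed)  # each process waits for all earlier bursts
    elapsed += b

avg_wait = sum(waiting) / len(waiting)
print(waiting, avg_wait)  # [0, 24, 27] -> average 17.0
```

Note how the long first burst makes the two short jobs wait 24 units each: that is the convoy effect in numbers.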

2. Shortest Job Next (SJN) / Shortest Job First (SJF)

1- Selects the process with the shortest CPU burst time.


2- Optimal in minimizing average waiting time.
3- Non-pre-emptive or pre-emptive (Shortest Remaining Time First, SRTF).
4- Requires knowledge of burst times, which is often estimated.
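SJF can be sketched the same way: sort by burst time, then accumulate waits. The process names and burst values below are assumed for illustration:

```python
# Sketch: SJF order and average waiting time, assuming all processes
# arrive at time 0 with known (hypothetical) burst times.
bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
order = sorted(bursts, key=bursts.get)   # shortest burst first

waiting, elapsed = {}, 0
for p in order:
    waiting[p] = elapsed                 # wait = sum of earlier bursts
    elapsed += bursts[p]

avg = sum(waiting.values()) / len(waiting)
print(order, avg)  # ['P4', 'P1', 'P3', 'P2'] -> average 7.0
```

Compare with the FCFS example: the same idea of accumulating earlier bursts, but the sort is what minimizes the average wait.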

3. Priority Scheduling

1- Assigns a priority to each process; the highest priority process is executed first.
2- Can be pre-emptive or non-pre-emptive.
3- Starvation is a risk for low-priority processes; solved by aging.

4. Round Robin (RR)


1- Each process gets a fixed time slice (quantum) for execution, then goes back to the
ready queue if not completed.
2- Pre-emptive by design, ensuring fairness among processes.
3- Performance depends on the length of the time quantum.
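The time-slice behaviour above can be sketched as a small simulation (hypothetical burst times, all processes assumed to arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin; return per-process completion times.
    Assumes all processes arrive at time 0 (a simplification)."""
    queue = deque((pid, b) for pid, b in enumerate(bursts))
    time = 0
    finish = [0] * len(bursts)
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)   # run for one quantum at most
        time += run
        if remaining > run:
            queue.append((pid, remaining - run))  # back of ready queue
        else:
            finish[pid] = time
    return finish

print(round_robin([10, 5, 8], quantum=4))  # [23, 17, 21]
```

Changing `quantum` in this sketch shows the trade-off directly: a very large quantum degenerates into FCFS, a very small one spends most of the time cycling the queue.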

5. Multilevel Queue Scheduling

1- Divides processes into different queues based on priority or type (e.g., foreground vs.
background).
2- Each queue has its own scheduling algorithm.
3- Fixed or dynamic priorities can determine inter-queue scheduling.

6. Multilevel Feedback Queue Scheduling

1- Allows processes to move between queues based on their behaviour and execution
time.
2- Adaptive and responsive to process requirements.
3- More complex to implement.

7. Earliest Deadline First (EDF)

1- Common in real-time systems.


2- Processes are prioritized based on the closest deadline.
3- Can handle dynamic priorities efficiently.

Importance of Scheduling:

1. Maximizes CPU Utilization:


Ensures the CPU is always executing tasks, reducing idle time.
2. Improves Throughput:
Increases the number of processes completed in a given time.
3. Reduces Waiting Time and Turnaround Time:
Balances resource allocation to minimize delays.
4. Promotes Fairness:
Distributes CPU time equitably among processes, preventing starvation.
5. Supports Multitasking:
Enables multiple processes to share CPU resources effectively, improving user
experience.
6. Optimizes System Performance:
By using appropriate algorithms, scheduling can achieve system goals like
responsiveness, energy efficiency, or real-time constraints.

Efficient scheduling ensures the system meets user expectations and aligns with specific
workload demands.

Ans 3 –

Interprocess Communication (IPC)


Interprocess Communication (IPC) refers to mechanisms that allow processes to
communicate and synchronize their actions when running concurrently in an operating
system. IPC is essential for the coordination of tasks in both single-core and multi-core
systems. Processes can share data, send messages, or use shared memory to coordinate their
actions.

Types of IPC Mechanisms

1. Message Passing:
1. Processes exchange messages using system calls such as send() and receive().
2. Useful for distributed systems.
3. Examples: Pipes, Message Queues, and Sockets.
2. Shared Memory:
1. A portion of memory is shared between processes.
2. Faster than message passing because it avoids kernel intervention after initial
setup.
3. Requires synchronization to avoid data inconsistency.
4. Example: Shared Memory Segment (shmget, shmat in UNIX).
3. Others:
1. Semaphores: For synchronization and mutual exclusion.
2. Signals: To notify processes of events.
3. File Systems: Processes can communicate via shared files.

Critical-Section Problem

The critical-section problem occurs in concurrent programming when multiple processes


access shared resources simultaneously, leading to data inconsistency or corruption.

Critical Section:

a- A part of the code where shared resources are accessed.


b- Must be executed mutually exclusively, i.e., only one process at a time can execute its
critical section.

Requirements for a Solution:

1. Mutual Exclusion: No two processes can be in their critical sections simultaneously.


2. Progress: Processes not in the critical section must not prevent others from entering
their critical section.
3. Bounded Waiting: A process should not wait indefinitely to enter its critical section.

Use of Semaphores

Semaphores are synchronization primitives used to manage access to shared resources and
solve the critical-section problem. A semaphore is essentially a variable that is used to signal
processes to coordinate their execution.

Types of Semaphores:
1. Binary Semaphore (Mutex):
1- Can only take values 0 and 1.
2- Used for mutual exclusion.
2. Counting Semaphore:
1- Takes non-negative integer values.
2- Used to control access to resources with multiple instances.

Operations on Semaphores:

1. Wait (P):

P(S): while (S <= 0); S = S - 1;

1 Decrements the semaphore.


2 If the semaphore value is 0, the process waits.
2. Signal (V):

V(S): S = S + 1;

1 Increments the semaphore.


2 Wakes up a waiting process if any.
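The wait (P) and signal (V) operations can be illustrated with Python's `threading.Semaphore`. This is an assumed example (counting semaphore initialized to 2, five competing threads), not taken from the text:

```python
import threading, time

# Sketch: a counting semaphore initialized to 2, so at most two
# threads may use the "resource" at once.
sem = threading.Semaphore(2)
active = 0        # threads currently inside the guarded section
peak = 0          # highest value 'active' ever reached
lock = threading.Lock()

def use_resource():
    global active, peak
    sem.acquire()              # wait (P): blocks while the count is 0
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)           # hold the resource briefly
    with lock:
        active -= 1
    sem.release()              # signal (V): increments the count, wakes a waiter

threads = [threading.Thread(target=use_resource) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak concurrent users:", peak)  # never exceeds 2
```

With `threading.Semaphore(1)` the same code behaves as a binary semaphore (mutex).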

Advantages of Using Semaphores:

1. Simple and efficient for synchronization.


2. Can solve complex synchronization problems like producer-consumer, reader-writer,
and dining philosopher problems.

Limitations:

1. Risk of deadlock if semaphores are misused.


2. Starvation: A process may wait indefinitely if higher-priority processes continuously
occupy the semaphore.
3. Harder to debug compared to higher-level synchronization constructs.

Ans 4-

Process Control Block (PCB)

The Process Control Block (PCB) is a data structure used by the operating system to manage
information about a process. Each process has its own PCB, which is created when the
process is created and destroyed when the process terminates.

Information in a PCB

The PCB contains the following critical pieces of information:


1. Process Identification Information:
a. Process ID (PID): A unique identifier for the process.
b. Parent Process ID: Identifies the parent process.
c. User ID (UID): Identifies the user who owns the process.
2. Process State:
a. Indicates whether the process is ready, running, waiting, or terminated.
3. Program Counter (PC):
a. Stores the address of the next instruction to be executed.
4. CPU Registers:
a. Holds the current state of the CPU, including accumulator, index registers,
stack pointer, etc., to resume the process after a context switch.
5. Memory Management Information:
a. Base and limit registers.
b. Page tables or segment tables, depending on the memory management scheme.
6. I/O Status Information:
a. List of open files.
b. Pointers to I/O devices allocated to the process.
7. Scheduling and Priority Information:
a. Process priority.
b. Information about the scheduling queue the process belongs to.
8. Accounting Information:
a. CPU usage, execution time, and other statistics.
b. Process creation time.
9. Interprocess Communication Information:
a. Details about messages sent/received or shared memory.
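The fields listed above can be pictured as a record type. This is a simplified, assumed sketch; a real kernel's PCB (e.g., Linux's `task_struct`) carries far more state:

```python
from dataclasses import dataclass, field

# Sketch: a simplified PCB as a record. Field names follow the list
# above; values and defaults are illustrative only.
@dataclass
class PCB:
    pid: int                              # process ID
    parent_pid: int                       # parent process ID
    state: str = "ready"                  # ready / running / waiting / terminated
    program_counter: int = 0              # address of next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    open_files: list = field(default_factory=list)  # I/O status information
    priority: int = 0                     # scheduling information

pcb = PCB(pid=101, parent_pid=1)
pcb.state = "running"                     # dispatcher moves it to running
print(pcb.pid, pcb.state)
```

On a context switch, the OS saves the CPU registers and program counter into this structure and later restores them, which is exactly why the PCB must exist per process.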

Importance of the PCB

 Facilitates Context Switching: Enables the operating system to save and restore the
state of a process during context switching.
 Process Tracking: Helps the OS keep track of each process, its state, and resources.
 Resource Allocation: Ensures proper resource management by storing information
about allocated resources.

Monitors

Monitors are high-level synchronization constructs used in concurrent programming to


manage access to shared resources. They combine synchronization and data encapsulation in
a single construct.

Characteristics of Monitors

1. Encapsulation:
a- A monitor is a collection of procedures, variables, and data structures grouped
into a single module.
b- The shared data is accessible only through monitor procedures, ensuring
encapsulation.
2. Automatic Mutual Exclusion:
a- Only one process can execute a monitor procedure at a time, ensuring mutual
exclusion without explicit locking by the programmer.
3. Condition Variables:
a- Monitors include condition variables for synchronization:
1- Wait (condition): The calling process is blocked until the condition is
signalled.
2- Signal (condition): Wakes up one waiting process.
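These characteristics can be sketched with Python's `threading.Condition`, which bundles a lock (the monitor's automatic mutual exclusion) with wait/signal. The bounded counter below is an assumed example:

```python
import threading

# Sketch: a monitor-style bounded counter. Entering 'with self.cond'
# gives the monitor's mutual exclusion; wait()/notify() play the role
# of wait(condition)/signal(condition).
class BoundedCounter:
    def __init__(self, limit):
        self.limit = limit
        self.value = 0
        self.cond = threading.Condition()   # lock + condition variable

    def increment(self):
        with self.cond:                     # enter the monitor
            while self.value >= self.limit:
                self.cond.wait()            # wait(condition): block until signalled
            self.value += 1
            self.cond.notify()              # signal(condition): wake one waiter

    def decrement(self):
        with self.cond:
            while self.value <= 0:
                self.cond.wait()
            self.value -= 1
            self.cond.notify()
```

The `while` (rather than `if`) around each `wait()` re-checks the condition after waking, the standard discipline for monitors with signal-and-continue semantics.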

Role of Monitors

Monitors play a crucial role in solving synchronization problems like the critical-section
problem. They:

1. Simplify Synchronization:
a. Provide a clean abstraction for managing shared resources.
b. Automatically handle mutual exclusion.
2. Avoid Deadlocks and Race Conditions:
a. Structured access to shared resources reduces programming errors that could
lead to deadlocks or race conditions.
3. Solve Classical Synchronization Problems:
a. Used to implement solutions for producer-consumer, readers-writers, and
dining philosopher’s problems.

Ans 5-

File Access Methods

File access methods define how data is accessed, read, or written in a file. Different access
methods are suited for different applications and storage structures.

1. Sequential Access:

a- Description:
1- Data in the file is accessed in a linear, sequential manner, from the beginning
to the end.
2- Common for text files and simple data streams.
b- Operations:
1- Read next: Reads the next block of data.
2- Write next: Appends data at the end.
3- Rewind: Moves to the beginning of the file.
c- Advantages:
1- Simple and efficient for batch processing.
2- Minimal overhead.
d- Disadvantages:
1- Inefficient for random access or frequent updates.

2. Direct (or Random) Access:

a- Description:
1- Allows reading or writing to any location in the file without traversing
previous data.
2- Suitable for databases or applications needing non-linear data access.
b- Operations:
1- read (n): Reads from a specific location n.
2- write (n): Writes at a specific location n.
3- seek(n): Moves the file pointer to the nth position.
c- Advantages:
1- Efficient for applications requiring frequent updates or non-sequential
processing.
d- Disadvantages:
1- Requires additional metadata to track file structure.
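The seek(n)/read(n) operations above map directly onto file APIs. A minimal sketch with fixed-size records (the record size and file name are assumed):

```python
import os, tempfile

# Sketch: direct (random) access with fixed-size records, using seek()
# to jump straight to the n-th record without reading earlier data.
RECORD = 8  # bytes per record (assumed)

path = os.path.join(tempfile.mkdtemp(), "records.bin")
with open(path, "wb") as f:
    for i in range(5):
        f.write(i.to_bytes(RECORD, "little"))   # records 0..4

with open(path, "rb") as f:
    f.seek(3 * RECORD)                          # seek(n): jump to record 3
    data = int.from_bytes(f.read(RECORD), "little")
print(data)  # 3
```

Fixed-size records are what make the arithmetic `n * RECORD` possible; variable-length records are one reason direct access needs the extra metadata mentioned above.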

3. Indexed Access:

a- Description:
1- Uses an index to map file locations to data blocks.
2- Provides both sequential and random access.
b- Operations:
1- Search using the index to locate a block, then read/write data.
c- Advantages:
1- Faster access compared to sequential or direct methods for indexed data.
2- Efficient for large datasets with known structure.
d- Disadvantages:
1- Additional overhead for maintaining the index.
2- Index corruption can lead to data inaccessibility.

4. Cluster Access (Specialized):

a- Description:
1- Groups files or data blocks into clusters to optimize access.
2- Common in big data and distributed systems.
b- Advantages:
1- High performance for distributed systems.
c- Disadvantages:
1- More complex management.

I/O Control Strategies

I/O Control Strategies define how an operating system manages input/output devices,
schedules operations, and transfers data between memory and devices.

1. Programmed I/O (Polling):

a- Description:
1- The CPU actively checks the I/O device's status to see if it is ready to
send/receive data.
2- Data is directly transferred between CPU and I/O device.
b- Advantages:
1- Simple and easy to implement.
c- Disadvantages:
1- Wastes CPU time on constant polling.
2- Inefficient for high-speed or high-volume I/O.
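The busy-wait nature of polling can be sketched with a simulated device (the readiness threshold is assumed; a real driver would repeatedly read a device status register):

```python
# Sketch: programmed I/O as busy-wait polling on a simulated device.
status_checks = 0
device_ready_after = 3      # device becomes ready on the 3rd poll (assumed)

def device_ready():
    """Stand-in for reading a device status register."""
    global status_checks
    status_checks += 1
    return status_checks >= device_ready_after

while not device_ready():   # the CPU spins here, doing no useful work
    pass                    # keep polling

print("polled", status_checks, "times before the device was ready")
```

Every iteration of that loop is CPU time spent on nothing but checking status, which is precisely the inefficiency that interrupt-driven I/O removes.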

2. Interrupt-Driven I/O:

a- Description:
1- I/O devices signal the CPU via interrupts when they are ready.
2- The CPU processes other tasks while waiting for the interrupt.
b- Advantages:
1- Reduces CPU idle time.
2- Efficient for real-time systems.
c- Disadvantages:
1- Overhead in handling frequent interrupts.
2- Complex implementation.

3. Direct Memory Access (DMA):


a- Description:
1- Data is transferred directly between I/O devices and memory without CPU
intervention.
2- The CPU sets up the DMA controller and continues other tasks.
b- Advantages:
1- Frees up CPU for other processing.
2- Efficient for large data transfers.
c- Disadvantages:
1- Requires dedicated hardware (DMA controller).
2- Complex to implement.

4. Spooling:

a- Description:
1- Stands for Simultaneous Peripheral Operations Online.
2- Data is stored in a buffer (disk) and then sent to the I/O device, enabling
concurrent operations.
b- Advantages:
1- Improves efficiency by allowing multiple jobs to use I/O devices.
c- Disadvantages:
1- Introduces additional storage overhead.

5. Buffered I/O:

a- Description:
1- Data is temporarily stored in memory buffers before transferring
between devices or processes.
b- Advantages:
1- Smooths data flow and reduces waiting time.
c- Disadvantages:
1- Requires additional memory for buffering.

6. Asynchronous I/O:

a- Description:
1- I/O operations are performed in parallel with program execution.
2- The program does not block and can continue other tasks.
b- Advantages:
1- High efficiency in multitasking environments.
c- Disadvantages:
1- Requires complex programming and management.

Ans 6-

Paging and Segmentation

Paging and segmentation are memory management techniques used to handle the allocation
of memory to processes. Both aim to ensure efficient and secure memory usage, but they
differ in their approach.

Paging-

Paging divides the logical memory of a process and physical memory into fixed-size units
called pages and frames, respectively.

Key Concepts in Paging:

1. Pages:
a- Fixed-size blocks of the logical address space of a process.
2. Frames:
a- Fixed-size blocks in physical memory.
3. Page Table:
a- A data structure maintained by the OS to map logical page numbers to
physical frame numbers.

How Paging Works:

1. The logical address is divided into:


a- Page number: Index into the page table.
b- Page offset: Specifies the exact location within the page.
2. When a process requests data, the OS uses the page table to translate the logical
address to the corresponding physical address.

Advantages:
1- Eliminates External Fragmentation: Since pages and frames are of fixed size, there
is no unused memory between allocations.
2- Efficient Memory Use: Allows processes to use non-contiguous physical memory.

Disadvantages:

1- Internal Fragmentation: If a process does not fully utilize the last page, the unused
space within that page leads to waste.
2- Overhead: Managing the page table incurs additional memory and computational
overhead.

Segmentation

Segmentation divides the memory into variable-sized segments based on logical divisions in
a program, such as functions, data arrays, or objects.

Key Concepts in Segmentation-

1. Segments:
a- Logical divisions of a process, each with its own base and limit address.
2. Segment Table:
a- Maps each segment to its starting address (base) and length (limit) in
physical memory.

How Segmentation Works:

1. The logical address is divided into:


a- Segment number: Index into the segment table.
b- Offset: Specifies the location within the segment.
2. The OS checks whether the offset is within the segment limit, then translates it to a
physical address.

Advantages:

1- Logical Representation: Segments map directly to program structures, making them


easier to understand and manage.
2- Efficient Access: Useful for code sharing and dynamic memory allocation.

Disadvantages:

1- External Fragmentation: As segments are of variable size, unused memory gaps can
form between segments.
2- Complexity: Managing variable-sized segments is more complex than fixed-size
pages.

Page Map Table


The Page Map Table (Page Table) is crucial for implementing paging. It contains mappings
from logical page numbers to physical frame numbers.

Structure of a Page Table:

1- Each entry has:


a- Page Number: Logical page identifier.
b- Frame Number: Physical memory frame corresponding to the page.
c- Flags: Attributes like valid/invalid bit, read/write permissions, etc.

Example:

Page Number    Frame Number
0              5
1              9
2              2

Translation Process:

1. Extract the page number from the logical address.


2. Look up the frame number in the page table.
3. Combine the frame number with the page offset to compute the physical address.
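The three translation steps can be sketched using the example page table given earlier, assuming a hypothetical page size of 1024 bytes:

```python
# Sketch: logical-to-physical address translation with the example
# page table from this answer, assuming a 1024-byte page size.
PAGE_SIZE = 1024
page_table = {0: 5, 1: 9, 2: 2}    # page number -> frame number

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)  # split into page + offset
    frame = page_table[page]                   # KeyError models an invalid page
    return frame * PAGE_SIZE + offset          # recombine with the frame

print(translate(1050))  # page 1, offset 26 -> frame 9 -> 9242
```

In hardware this lookup is done by the MMU, usually through a TLB cache rather than a dictionary, but the arithmetic is the same.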

Internal and External Fragmentation

Internal Fragmentation:

1- Definition: Unused memory within an allocated unit (e.g., a page or block).


2- Occurs in Paging:

a- When a process does not fully utilize the allocated page, the unused space in
that page leads to internal fragmentation.

External Fragmentation:

1- Definition: Unused memory between allocated units that cannot be utilized due to
size constraints.
2- Occurs in Segmentation:

a- Variable-sized segments can leave gaps between allocations, which


may not fit new requests.
