OS Semester Questions Answer Key
ANSWER KEY
PART-A (10*2=20 Marks)
An operating system (OS) is system software that manages computer hardware, software
resources, and provides common services for computer programs. It acts as an intermediary
between users and the computer hardware.
A system call provides the interface between a running program and the OS. A user
program requests services from the OS through system calls.
List the categories of system calls:
Process control
File management
Device management
Information maintenance
Communication
The four necessary conditions for deadlock:
Mutual Exclusion
Hold and Wait
No Preemption
Circular Wait
5. What is the Purpose of Page Tables?
The purpose of page tables in an operating system is to map virtual addresses to physical
addresses in memory. A page table serves as a data structure used by the memory
management unit (MMU) to:
Translate a virtual page number (from a program's address space)
Into a physical frame number (in actual RAM)
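A minimal sketch of the translation, assuming a 4 KB page size and hypothetical frame numbers:

```python
# Virtual-to-physical address translation with a one-level page
# table. The page size and frame numbers are illustrative only.
PAGE_SIZE = 4096  # 4 KB pages

# page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_address):
    vpn = virtual_address // PAGE_SIZE      # virtual page number
    offset = virtual_address % PAGE_SIZE    # offset within the page
    frame = page_table[vpn]                 # MMU lookup; a miss would page-fault
    return frame * PAGE_SIZE + offset

# virtual address 4100 = page 1, offset 4 -> frame 2, offset 4 = 8196
print(translate(4100))
```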
6. Define Thrashing
Thrashing is a condition in an operating system where the system spends more time swapping
pages in and out of memory than executing actual processes. Thrashing occurs when a process
(or set of processes) requires more memory than is available, causing constant page faults and
frequent use of the swap space (disk), leading to severe performance degradation.
The file directory structure is how an operating system organizes and manages files on a
storage device. It helps users and the OS locate, access, and manage files efficiently.
Single-Level Directory
Two-Level Directory
Tree-Structured Directory
Acyclic Graph Directory
General Graph Directory
Free space management refers to the techniques used by the operating system to track and
manage unused disk blocks so they can be efficiently allocated to files or directories when
needed.
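One common free-space technique is a bitmap with one bit per block; a minimal first-fit sketch (details vary by file system):

```python
class FreeSpaceBitmap:
    """One bit per disk block: True = free, False = allocated."""

    def __init__(self, nblocks):
        self.free = [True] * nblocks

    def allocate(self):
        for i, is_free in enumerate(self.free):   # first-fit scan
            if is_free:
                self.free[i] = False
                return i
        raise RuntimeError("disk full")

    def release(self, block):
        self.free[block] = True

bm = FreeSpaceBitmap(8)
a = bm.allocate()    # block 0
b = bm.allocate()    # block 1
bm.release(a)        # block 0 becomes free again
c = bm.allocate()    # first-fit reuses block 0
print(a, b, c)
```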
Kernel
System Libraries
System Utilities
Shell
File System
User Applications
PART-B (5*16=80 Marks)
The structure of an operating system defines how different components and modules
are organized and how they interact to perform system-level tasks.
Types of OS Structures:
Monolithic Structure:
Definition: In this structure, the entire OS (kernel, device drivers, memory management, file
system, etc.) runs in a single address space.
Advantages:
Fast communication between components since everything runs in the same space.
Disadvantages:
A bug in any component can crash the entire system; the large single kernel is hard to maintain and extend.
Layered Structure:
Definition: The OS is divided into layers, where each layer only communicates with the
layer directly above or below it.
Advantages:
Easier to maintain and debug since each layer handles a specific task.
Disadvantages:
Performance overhead, since a request may have to pass through several layers.
Microkernel Structure:
Definition: Only essential services (e.g., communication, scheduling) are part of the kernel,
while other functions (e.g., file systems, device drivers) run in user space.
Advantages:
Better fault isolation (if a service crashes, it doesn’t affect the whole system).
Disadvantages:
Performance overhead from message passing between user-space services and the kernel.
Client-Server Structure:
Definition: The system is divided into client processes and server processes, where the
clients request services, and the servers fulfill those requests.
Advantages:
Modularity; servers can run on the same machine or be distributed across a network.
Disadvantages:
Communication overhead between clients and servers.
Hybrid Structure:
Definition: Combines features from both monolithic and microkernel designs, seeking to
leverage the advantages of both.
Advantages:
Balances the performance of a monolithic design with the modularity and fault isolation of a microkernel.
Disadvantages:
Complexity in implementation.
The operation of an operating system refers to how it handles tasks and resources to provide
a stable and efficient computing environment.
Process Management:
Definition: The OS manages the execution of processes, ensuring that each process gets
enough CPU time and other resources.
Tasks:
Process Scheduling: Decides which process gets the CPU based on scheduling algorithms
(e.g., Round Robin, Priority Scheduling).
Memory Management:
Definition: The OS handles the allocation and deallocation of memory (RAM) for processes.
Tasks:
Virtual Memory: Uses disk space to simulate additional RAM, enabling processes to use
more memory than physically available.
Page Replacement: Manages which pages of memory should be swapped in and out of
physical memory.
Memory Protection: Ensures that processes don’t access each other's memory space.
Example: The Page Table maps virtual addresses to physical memory locations.
File Management:
Definition: The OS manages files, directories, and storage devices, ensuring efficient file
creation, access, and deletion.
Tasks:
File Allocation: Manages how data is stored on the disk (e.g., contiguous, linked, indexed).
File Metadata Management: Maintains file attributes (name, size, date modified).
Device Management:
Definition: The OS controls hardware devices like disks, printers, and network interfaces.
Tasks:
Device Drivers: The OS uses device drivers to communicate with hardware devices.
Example: Disk Scheduling algorithms like FCFS, SSTF, or LOOK help manage read/write
operations to storage.
Security and Protection:
Definition: The OS ensures that unauthorized users do not access the system and its
resources.
Tasks:
Authentication: Verifies users (e.g., login process).
Example: A user account may have restricted access to specific files or devices.
User Interface:
Definition: The OS provides an interface through which users can interact with the system.
Types:
Command-Line Interface (CLI): Users interact through text-based commands (e.g., Linux
terminal).
Graphical User Interface (GUI): Users interact with graphical elements (e.g., Windows,
macOS).
Example: In Linux, the terminal is used to interact with the OS via commands, while in
Windows, you might interact using icons and windows.
Networking:
Definition: The OS manages network communication between systems.
Tasks:
TCP/IP Stack: Implements communication protocols for data transmission over the internet.
Routing and Addressing: Manages the addressing and routing of data between network
devices.
1. Kernel
Definition: The core component of the OS that directly interacts with hardware.
Functions:
Types:
Monolithic Kernel
Microkernel
Hybrid Kernel
2. Process Management
Tasks:
Element: Process Control Block (PCB) holds process-related information like PID, state,
registers, etc.
3. Memory Management
Definition: Controls and coordinates the computer's memory, allocating and deallocating
memory space as needed.
Functions:
Techniques:
Paging, Segmentation
4. File Management
Definition: Manages how data is stored and retrieved from storage devices.
Responsibilities:
File creation, deletion, reading, writing
Directory structure management (single-level, tree, DAG, etc.)
File permissions and access control
Elements:
Directory structure
5. Device Management (I/O Management)
Definition: Manages all input/output devices like keyboard, mouse, printer, disk drives, etc.
Functions:
Element: Uses device drivers to abstract hardware and provide a standard interface
6. Secondary Storage Management
Functions:
Space allocation
Free space management
Disk scheduling (e.g., SSTF, LOOK)
File system integrity
7. Security and Protection
Definition: Protects data and resources from unauthorized access and maintains system
integrity.
Functions:
Elements:
8. User Interface
Types:
CLI (Command Line Interface): User types commands (e.g., Bash, PowerShell).
GUI (Graphical User Interface): User interacts via windows, icons, etc.
Function:
9. Networking Component
Functions:
10. Command Interpreter (Shell)
Types:
Text-based (CLI)
Visual (GUI)
Functions:
Examples:
12. (a) Explain any four CPU scheduling algorithms with examples.
1. First-Come, First-Served (FCFS) Scheduling
Description: Processes are executed in the order in which they arrive. Non-preemptive.
Example:
Process Arrival Time Burst Time
P1 0 ms 5 ms
P2 1 ms 3 ms
P3 2 ms 8 ms
Gantt Chart:
| P1 | P2 | P3 |
|----|----|------|
0 5 8 16
Waiting Times:
P1 = 0
P2 = 5 - 1 = 4
P3 = 8 - 2 = 6
Average = (0 + 4 + 6) / 3 = 3.33 ms
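The FCFS calculation above can be reproduced with a short simulation:

```python
# FCFS: run processes to completion in arrival order.
def fcfs(procs):   # procs: list of (name, arrival, burst)
    time, waits = 0, {}
    for name, at, bt in sorted(procs, key=lambda p: p[1]):
        time = max(time, at)      # CPU may idle until the process arrives
        waits[name] = time - at   # waiting time = start - arrival
        time += bt
    return waits

w = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)])
print(w)   # {'P1': 0, 'P2': 4, 'P3': 6}; average = 10/3 ≈ 3.33 ms
```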
2. Shortest Job First (SJF) Scheduling
Description: The process with the shortest burst time among those that have already arrived is scheduled next.
Non-preemptive.
Example:
Process Arrival Time Burst Time
P1 0 ms 6 ms
P2 1 ms 4 ms
P3 2 ms 2 ms
P4 3 ms 1 ms
Gantt Chart:
| P1 | P4 | P3 | P2 |
|----|----|----|-------|
0 6 7 9 13
Waiting Times:
P1 = 0
P4 = 6 - 3 = 3
P3 = 7 - 2 = 5
P2 = 9 - 1 = 8
Average = (0 + 3 + 5 + 8) / 4 = 4 ms
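The non-preemptive SJF schedule above can likewise be simulated:

```python
# Non-preemptive SJF: at each completion, pick the shortest burst
# among the processes that have already arrived.
def sjf(procs):   # procs: list of (name, arrival, burst)
    pending, time, waits = list(procs), 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:
            time = min(p[1] for p in pending)   # idle until next arrival
            continue
        job = min(ready, key=lambda p: p[2])    # shortest burst wins
        waits[job[0]] = time - job[1]
        time += job[2]
        pending.remove(job)
    return waits

w = sjf([("P1", 0, 6), ("P2", 1, 4), ("P3", 2, 2), ("P4", 3, 1)])
print(w)   # {'P1': 0, 'P4': 3, 'P3': 5, 'P2': 8}; average = 4 ms
```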
3. Round Robin (RR) Scheduling
Description: Each ready process gets the CPU for a fixed time slice (quantum).
Preemptive: If a process doesn't finish in its time slice, it goes to the back of the
queue.
Example:
Time Quantum = 2 ms
Process Arrival Time Burst Time
P1 0 ms 5 ms
P2 1 ms 3 ms
P3 2 ms 1 ms
Gantt Chart:
| P1 | P2 | P3 | P1 | P2 | P1 |
|-----|----|-----|----|-----|-----|
0 2 4 5 7 8 9
Waiting Times (waiting = turnaround − burst):
P1 = (9 - 0) - 5 = 4
P2 = (8 - 1) - 3 = 4
P3 = (5 - 2) - 1 = 2
Average = (4 + 4 + 2) / 3 ≈ 3.33 ms
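A small simulation of the round-robin schedule (quantum = 2 ms); it assumes the common convention that a process arriving at the same instant as a preemption enters the queue ahead of the preempted process:

```python
from collections import deque

def rr(procs, quantum):
    arrivals = sorted(procs, key=lambda p: p[1])   # (name, arrival, burst)
    remaining = {name: bt for name, _, bt in arrivals}
    queue, done, time, i = deque(), {}, 0, 0

    def admit(now):                # move newly arrived processes into the queue
        nonlocal i
        while i < len(arrivals) and arrivals[i][1] <= now:
            queue.append(arrivals[i][0]); i += 1

    while i < len(arrivals) or queue:
        if not queue:
            time = arrivals[i][1]  # CPU idles until the next arrival
        admit(time)
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        admit(time)                # arrivals enter before the preempted process
        if remaining[name]:
            queue.append(name)     # unfinished: back of the queue
        else:
            done[name] = time      # finished: record completion time
    return done

procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)]
done = rr(procs, 2)
waits = {n: done[n] - at - bt for n, at, bt in procs}
print(done, waits)   # completions P3=5, P2=8, P1=9; waits P1=4, P2=4, P3=2
```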
4. Priority Scheduling
Description: The process with the highest priority (here, the lowest priority number) is scheduled first.
Example (Non-Preemptive):
Process Arrival Time Burst Time Priority
P1 0 ms 4 ms 3
P2 1 ms 3 ms 1
P3 2 ms 1 ms 2
Gantt Chart:
| P1 | P2 | P3 |
|-----|----|-----|
0 4 7 8
Average Waiting Time
P1 = 0
P2 = 4 - 1 = 3
P3 = 7 - 2 = 5
Average = (0 + 3 + 5) / 3 ≈ 2.67 ms
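The priority example can be checked with a simulation (lower number = higher priority):

```python
# Non-preemptive priority scheduling.
def priority_np(procs):   # procs: list of (name, arrival, burst, priority)
    pending, time, waits = list(procs), 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:
            time = min(p[1] for p in pending)   # idle until next arrival
            continue
        job = min(ready, key=lambda p: p[3])    # lowest number = highest priority
        waits[job[0]] = time - job[1]
        time += job[2]
        pending.remove(job)
    return waits

w = priority_np([("P1", 0, 4, 3), ("P2", 1, 3, 1), ("P3", 2, 1, 2)])
print(w)   # {'P1': 0, 'P2': 3, 'P3': 5}; average ≈ 2.67 ms
```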
Deadlock Avoidance
Simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need.
The deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition.
• Resource-allocation state is defined by the number of available and allocated resources, and
the maximum demands of the processes.
Safe State
• When a process requests an available resource, the system must decide if
immediate allocation leaves the system in a safe state.
The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes
in the system such that, for each Pi, the resources that Pi can still request can be satisfied by
the currently available resources plus the resources held by all the Pj, with j < i.
• That is:
– If Pi resource needs are not immediately available, then Pi can wait until all Pj have finished.
– When Pj is finished, Pi can obtain needed resources, execute, return allocated resources, and
terminate.
– When Pi terminates, Pi+1 can obtain its needed resources, and so on.
Avoidance Algorithms
• Single instance of a resource type: use a resource-allocation graph.
• Multiple instances of a resource type: use the banker's algorithm.
Banker’s Algorithm
Multiple instances.
Each process must a priori claim maximum use.
When a process requests a resource it may have to wait.
When a process gets all its resources it must return them in a finite amount of time.
Let n = number of processes, and m = number of resources types.
Available: Vector of length m. If available [j] = k, there are k instances of resource
type Rj available.
Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of
resource type Rj .
Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k
instances of
Rj.
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to
complete its
task.
Example of Banker’s Algorithm
5 processes P0 through P4;
3 resource types:
A (10 instances), B (5 instances), and C (7 instances).
Snapshot at time T0 (the standard textbook snapshot; the Need column is computed from the other two):

Process  Allocation  Max     Need    Available
         A B C       A B C   A B C   A B C
P0       0 1 0       7 5 3   7 4 3   3 3 2
P1       2 0 0       3 2 2   1 2 2
P2       3 0 2       9 0 2   6 0 0
P3       2 1 1       2 2 2   0 1 1
P4       0 0 2       4 3 3   4 3 1

• The system is in a safe state, since the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.
Need[i,j] = Max[i,j] − Allocation[i,j]
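The safety algorithm can be sketched as follows. The Need matrix is the one above; the Allocation matrix and Available vector are taken from the standard textbook snapshot that yields this Need matrix (an assumption if your snapshot differs). The scan order below happens to find <P1, P3, P4, P0, P2>, a different but equally valid safe sequence:

```python
# Banker's algorithm safety check (values from the standard snapshot).
need  = {"P0": [7, 4, 3], "P1": [1, 2, 2], "P2": [6, 0, 0],
         "P3": [0, 1, 1], "P4": [4, 3, 1]}
alloc = {"P0": [0, 1, 0], "P1": [2, 0, 0], "P2": [3, 0, 2],
         "P3": [2, 1, 1], "P4": [0, 0, 2]}
available = [3, 3, 2]

def safe_sequence(need, alloc, available):
    work, finished, seq = list(available), set(), []
    while len(finished) < len(need):
        progressed = False
        for p in need:
            if p in finished:
                continue
            if all(n <= w for n, w in zip(need[p], work)):
                # p can run to completion and release its allocation
                work = [w + a for w, a in zip(work, alloc[p])]
                finished.add(p); seq.append(p); progressed = True
        if not progressed:
            return None   # no process can proceed: unsafe state
    return seq

print(safe_sequence(need, alloc, available))  # ['P1', 'P3', 'P4', 'P0', 'P2']
```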
14.(a) Explain and compare FCFS, SSTF, C-SCAN and C-LOOK disk
scheduling algorithms with an example.
One of the responsibilities of the operating system is to use the hardware efficiently. For the
disk drives, meeting this responsibility means having fast access time and large disk bandwidth.
1. FCFS Scheduling:
The simplest form of disk scheduling is, of course, the first-come, first-served
(FCFS)algorithm. This algorithm is intrinsically fair, but it generally does not provide the
fastest service.
Consider, for example, a disk queue with requests for I/O to blocks on cylinders
98, 183, 37, 122, 14, 124, 65, 67, in that order.
If the disk head is initially at cylinder 53, it will first move from 53 to 98, then to 183,
37, 122, 14, 124, 65, and finally to 67, for a total head movement of 640 cylinders. The
wild swing from 122 to 14 and then back to 124 illustrates the problem with this schedule. If
the requests for cylinders 37 and 14 could be serviced together, before or after the requests
for 122 and 124, the total head movement could be decreased substantially, and performance
could be thereby improved.
2. SSTF (Shortest-Seek-Time-First) Scheduling
Services all the requests close to the current head position before moving the head far away to
service other requests; that is, it selects the request with the minimum seek time from the
current head position.
For the same example (head initially at cylinder 53), total head movement = 236 cylinders.
3. C-SCAN Scheduling
A variant of SCAN designed to provide a more uniform wait time. It moves the head
from one end of the disk to the other, servicing requests along the way. When the head
reaches the other end, however, it immediately returns to the beginning of the disk, without
servicing any requests on the return trip.
4. C-LOOK Scheduling
Both SCAN and C-SCAN move the disk arm across the full width of the disk. In
C-LOOK, the arm goes only as far as the final request in each direction; then it reverses
direction immediately, without going all the way to the end of the disk.
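The four algorithms can be compared on the example queue (head at cylinder 53) with a short script; the C-SCAN sketch below assumes cylinders 0–199 and counts the full return sweep:

```python
# Head starts at cylinder 53; requests as in the example above.
requests = [98, 183, 37, 122, 14, 124, 65, 67]
head = 53

def fcfs(reqs, head):
    total = 0
    for r in reqs:                       # service in arrival order
        total += abs(head - r); head = r
    return total

def sstf(reqs, head):
    reqs, total = list(reqs), 0
    while reqs:                          # always pick the nearest request
        r = min(reqs, key=lambda x: abs(x - head))
        total += abs(head - r); head = r; reqs.remove(r)
    return total

def cscan(reqs, head, maxcyl=199):
    low = [r for r in reqs if r < head]
    total = maxcyl - head                # sweep up to the last cylinder
    if low:
        total += maxcyl                  # unserviced return sweep to 0
        total += max(low)                # then up to the last low request
    return total

def clook(reqs, head):
    up = [r for r in reqs if r >= head]
    low = [r for r in reqs if r < head]
    total = (max(up) - head) if up else 0    # only as far as the last request
    if low:
        total += max(up) - min(low)          # jump down to the lowest request
        total += max(low) - min(low)         # then service upward
    return total

print(fcfs(requests, head), sstf(requests, head),
      cscan(requests, head), clook(requests, head))
# FCFS gives 640 and SSTF gives 236, matching the figures in the text.
```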
14.(b) Briefly describe the services which are provided by I/O system
Kernel I/O Subsystem
Kernels provide many services related to I/O.
One way that the I/O subsystem improves the efficiency of the computer is by scheduling
I/O operations.
Another way is by using storage space in main memory or on disk, via techniques
called buffering, caching, and spooling.
I/O Scheduling:
To determine a good order in which to execute the set of I/O requests.
Uses:
a) It can improve overall system performance.
b) It can share device access fairly among processes.
c) It can reduce the average waiting time for I/O to complete.
The I/O scheduler rearranges the order of the queue to improve the overall system
efficiency and the average response time experienced by applications.
Buffering:
Buffer: A memory area that stores data while they are transferred between two devices or
between a device and an application.
Reasons for buffering:
a) To cope with a speed mismatch between the producer and consumer of a data stream.
b) To adapt between devices that have different data-transfer sizes.
c) To support copy semantics for application I/O.
Copy semantics: Suppose that an application has a buffer of data that it wishes to write to
disk. It calls the write() system call, providing a pointer to the buffer and an integer
specifying the number of bytes to write.
After the system call returns, what happens if the application changes the contents of
the buffer? With copy semantics, the version of the data written to disk is guaranteed to be
the version at the time of the application system call, independent of any subsequent changes
in the application's buffer.
A simple way that the operating system can guarantee copy semantics is for the write()
system call to copy the application data into a kernel buffer before returning control to the
application. The disk write is performed from the kernel buffer, so that subsequent changes to
the application buffer have no effect.
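A toy sketch of this behaviour (the "kernel buffer" and "disk" here are simulated lists and variables, not real kernel structures):

```python
# Copy semantics: the kernel-buffer copy means later changes to the
# application buffer do not affect what is written.
kernel_buffer = None
disk = []

def write_with_copy_semantics(app_buffer):
    global kernel_buffer
    # copy into a "kernel buffer" before returning to the caller
    kernel_buffer = bytes(app_buffer)

def flush_to_disk():
    disk.append(kernel_buffer)   # the disk write uses the kernel copy

app_buffer = bytearray(b"version-1")
write_with_copy_semantics(app_buffer)
app_buffer[:] = b"version-2"     # application mutates its buffer afterwards
flush_to_disk()
print(disk)   # [b'version-1'] -- the version at the time of the call
```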
Caching
A cache is a region of fast memory that holds copies of data. Access to the cached
copy is more efficient than access to the original.
Cache vs. buffer: A buffer may hold the only existing copy of a data item, whereas a cache
just holds a copy, on faster storage, of an item that resides elsewhere.
Spooling:
A spool is a buffer that holds output for a device, such as a printer, that cannot accept
interleaved data streams.
The OS provides a control interface that enables users and system administrators:
a) To display the queue,
b) To remove selected jobs before they are printed,
c) To suspend printing while the device is serviced.
15. (a) Explain in detail about how process is managed and scheduled in
Linux.
PROCESS MANAGEMENT
UNIX process management separates the creation of processes and the running of a new
program into two distinct operations.
✦ The fork system call creates a new process.
✦ A new program is run after a call to exec.
Under UNIX, a process encompasses all the information that the operating system must
maintain to track the context of a single execution of a single program.
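The fork/exec split can be sketched on a POSIX system; here the child simply exits instead of calling an exec function:

```python
import os

# fork() duplicates the calling process; the child could then call
# one of the exec family (e.g. os.execvp) to run a new program.
# Here the child just exits with a known status. POSIX-only.

pid = os.fork()
if pid == 0:
    # child: same program image, new PID
    os._exit(7)
else:
    # parent: wait for the child and collect its exit status
    _, status = os.waitpid(pid, 0)
    print("child exited with", os.WEXITSTATUS(status))
```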
Under Linux, process properties fall into three groups:
o process’s identity,
o environment, and
o context.
Process Identity
A process identity consists mainly of the following items:
Process ID (PID). Each process has a unique identifier. The PID is used to specify the
process to the operating system when an application makes a system call to signal, modify, or
wait for the process.
Credentials. Each process must have an associated user ID and one or more group IDs that
determine the rights of a process to access system resources and files.
Personality. Personalities are primarily used by emulation libraries to request that system
calls be compatible with certain varieties of UNIX.
Namespace. Each process is associated with a specific view of the file system hierarchy,
called its namespace. Most processes share a common namespace and thus operate on a
shared file-system hierarchy. Processes and their children can, however, have different
namespaces, each with a unique file-system hierarchy—their own root directory and set of
mounted file systems.
Process Environment
The process's environment is inherited from its parent, and is composed of two
null-terminated vectors:
✦ The argument vector lists the command-line arguments used to invoke the running
program; it conventionally starts with the name of the program itself.
✦ The environment vector is a list of “NAME=VALUE” pairs that associates named
environment variables with arbitrary textual values. Passing environment variables among
processes and inheriting variables by a process’s children are flexible means of passing
information to components of the user mode system software.
The environment-variable mechanism provides a customization of the operating
system that can be set on a per-process basis, rather than being configured for the system as a
whole.
Process Context
Process context is the state of the running program at any one time; it changes constantly.
Process context includes the following:
Scheduling context. The most important part of the process context is its scheduling context
—the information that the scheduler needs to suspend and restart the process.
Accounting. The kernel maintains accounting information about the resources currently
being consumed by each process and the total resources consumed by the process in its entire
lifetime so far.
File table. The file table is an array of pointers to kernel file structures representing open
files. When making file-I/O system calls, processes refer to files by an integer, known as a
file descriptor (fd), that the kernel uses to index into this table.
File-system context. The file-system context includes the process’s root directory, current
working directory, and namespace.
Signal-handler table. UNIX systems can deliver asynchronous signals to a process in
response to various external events. The signal-handler table defines the action to take in
response to a specific signal. Valid actions include ignoring the signal, terminating the
process, and invoking a routine in the process’s address space.
Virtual memory context. The virtual memory context describes the full contents of a
process’s private address space.
Process and Threads
Linux uses the same internal representation for processes and threads; a thread is simply a
new process that happens to share the same address space as its parent.
A distinction is only made when a new thread is created by the clone system call.
✦ fork creates a new process with its own entirely new process context
✦ clone creates a new process with its own identity, but that is allowed to share the data
structures of its parent
Using clone gives an application fine-grained control over exactly what is shared between
two threads.
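The effect of a shared address space can be seen with ordinary threads: a write by one thread is visible to the other, which mirrors what clone with shared-VM flags provides at the kernel level:

```python
import threading

# Threads share their process's address space, so a mutation made by
# one thread is visible to the main thread (same object, not a copy).

shared = {"counter": 0}

def worker():
    shared["counter"] += 1      # mutates memory the main thread also sees

t = threading.Thread(target=worker)
t.start()
t.join()
print(shared["counter"])   # 1
```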
SCHEDULING
The scheduling algorithm used for routine time-sharing tasks is the Completely Fair
Scheduler (CFS).
o CFS provides,
▪ load balancing among processing cores.
The Linux scheduler is a preemptive, priority-based algorithm with two separate priority
ranges: a real-time range from 0 to 99 and a nice value ranging from −20 to 19. Smaller nice
values indicate higher priorities.
CFS introduced a new scheduling algorithm called fair scheduling that eliminates
time slices. Instead of time slices, all processes are allotted a proportion of the processor’s
time.
CFS calculates how long a process should run as a function of the total number of
runnable processes. To start, CFS says that if there are N runnable processes, then each
should be afforded 1/N of the processor’s time.
CFS then adjusts this allotment by weighting each process’s allotment by its nice
value. Processes with the default nice value have a weight of 1—their priority is unchanged.
Processes with a smaller nice value (higher priority) receive a higher weight, while processes
with a larger nice value (lower priority) receive a lower weight. CFS then runs each process
for a "time slice" proportional to the process's weight divided by the total weight of all
runnable processes.
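A sketch of this proportional allotment; the weight formula 1024 / 1.25**nice is an approximation of the kernel's nice-to-weight table (roughly a factor of 1.25 per nice step), used here only for illustration:

```python
# Each runnable process receives period * weight / total_weight of
# CPU time; the weights come from an approximate nice-to-weight rule.

def weight(nice):
    return 1024 / (1.25 ** nice)

def time_slices(nice_values, period_ms=20):
    weights = [weight(n) for n in nice_values]
    total = sum(weights)
    return [period_ms * w / total for w in weights]

# three runnable processes: nice 0 (default), -5 (higher), +5 (lower)
slices = time_slices([0, -5, 5])
print([round(s, 2) for s in slices])   # slices always sum to the full period
```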
Real-Time Scheduling
Linux implements the two real-time scheduling classes required by POSIX.1b:
o First-come, first-served (FCFS), and
o Round-robin.
In both scheduling classes, each process has a priority in addition to its scheduling class. The
scheduler always runs the process with the highest priority. Among processes of equal
priority, it runs the process that has been waiting longest.
Kernel Synchronization
A request for kernel-mode execution can occur in two ways:
o A running program may request an operating system service, either explicitly via a system
call, or implicitly, for example, when a page fault occurs.
o A device driver may deliver a hardware interrupt that causes the CPU to start executing a
kernel-defined handler for that interrupt. Kernel synchronization requires a framework
that will allow the kernel's critical sections to run without interruption by another critical
section.
Linux uses two techniques to protect critical sections:
1. Normal kernel code is non-preemptible
– when a timer interrupt is received while a process is executing a kernel system-service
routine, the kernel's need_resched flag is set so that the scheduler will run once the system
call has completed and control is about to be returned to user mode.
2. The second technique applies to critical sections that occur in interrupt service
routines.
- By using the processor’s interrupt control hardware to disable interrupts during a critical
section, the kernel guarantees that it can proceed without the risk of concurrent access of
shared data structures.
To avoid performance penalties, Linux’s kernel uses a synchronization architecture that
allows long critical sections to run without having interrupts disabled for the critical section’s
entire duration.
Interrupt service routines are separated into a top half and a bottom half.
o The top half is a normal interrupt service routine, and runs with recursive interrupts
disabled.
o The bottom half is run, with all interrupts enabled, by a miniature scheduler that ensures
that bottom halves never interrupt themselves.
o This architecture is completed by a mechanism for disabling selected bottom halves while
executing normal, foreground kernel code.
o Each level may be interrupted by code running at a higher level, but will never be
interrupted by code running at the same or a lower level.
o User processes can always be preempted by another process when a time- sharing
scheduling interrupt occurs
Symmetric Multiprocessing
• The Linux 2.0 kernel was the first stable Linux kernel to support symmetric multiprocessor
(SMP) hardware, allowing separate processes to execute in parallel on separate processors.
• In version 2.2 of the kernel, a single kernel spinlock (sometimes termed BKL for “big
kernel lock”) was created to allow multiple processes (running on different processors) to be
active in the kernel concurrently.
• Drawbacks of BKL: contention for the single lock meant that only one processor at a time
could make progress through kernel code, which scaled poorly as processor counts grew.
• Later releases of the kernel made the SMP implementation more scalable by splitting this
single kernel spinlock into multiple locks, each of which protects only a small subset of the
kernel’s data structures.
Android architecture is organized into the following layers:
1. Linux kernel,
2. Native libraries (middleware),
3. Android Runtime,
4. Android Framework,
5. Applications.
1) Linux kernel
It is the heart of the Android architecture, sitting at the lowest layer. The Linux kernel is
responsible for device drivers, power management, memory management, device
management and resource access.
2) Native Libraries
On top of the Linux kernel there are native libraries such as WebKit, OpenGL, FreeType,
SQLite, Media, the C runtime library (libc), etc. The WebKit library is responsible for
browser support, SQLite for databases, FreeType for font support, and Media for playing and
recording audio and video formats.
3) Android Runtime
The Android runtime contains the core libraries and the DVM (Dalvik Virtual Machine),
which is responsible for running Android applications. The DVM is like the JVM, but it is
optimized for mobile devices: it consumes less memory and provides fast performance.
4) Android Framework
On top of the native libraries and the Android runtime there is the Android framework. It
includes the Android APIs such as UI (User Interface), telephony, resources, locations,
Content Providers (data) and package managers, and it provides many classes and
interfaces for Android application development.
5) Applications
On top of the Android framework there are the applications. All applications, such as home,
contacts, settings, games and browsers, use the Android framework, which in turn uses the
Android runtime and libraries. The Android runtime and native libraries use the Linux kernel.
Android Core Building Blocks
An Android component is simply a piece of code that has a well-defined life cycle, e.g.
Activity, Receiver, Service, etc. The core building blocks or fundamental components of
Android are activities, views, intents, services, content providers, fragments and
AndroidManifest.xml.
Activity
An activity is a class that represents a single screen. It is like a Frame in AWT.
View
A view is the UI element such as button, label, text field etc. Anything that you see is a view.
Intent
Intent is used to invoke components. It is mainly used to:
• Start the service
• Launch an activity
• Display a webpage
• Broadcast a message
Service