
OS Semester Questions Answer Key

The document provides an answer key for an end semester examination in Operating Systems for the academic year 2024-2025. It includes definitions, explanations, and comparisons of key concepts such as operating systems, process states, deadlock conditions, page tables, and various CPU scheduling algorithms. Additionally, it discusses the structure and operation of operating systems, components of Linux systems, and the Banker's Algorithm for deadlock avoidance.


DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Academic Year: 2024-2025 Even Semester


END SEMESTER EXAMINATION APRIL/MAY 2025

ANSWER KEY

Course Code: 22CAPC402    Year/Sem: II/IV


Course Name : OPERATING SYSTEMS

PART-A (10 * 2 = 20 Marks)

ANSWER ALL THE QUESTIONS

1. Define Operating System.

An operating system (OS) is system software that manages computer hardware, software
resources, and provides common services for computer programs. It acts as an intermediary
between users and the computer hardware.

2. List Various system Calls in Operating System.

A system call provides the interface between a running program and the OS. A user program can request services from the OS through system calls.
Categories of system calls:
 File management

 Process Management

 Inter process Communication

 I/O Device Management

 Information Processing & Maintenance
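On POSIX systems, Python's os module exposes thin wrappers over calls in several of these categories; a small illustrative sketch (the file name is an arbitrary choice, not part of the answer):

```python
import os
import tempfile

# Process management: every running program is a process with a PID.
pid = os.getpid()

# File management: open/write/read/close system-call wrappers.
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"hello")
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)
os.remove(path)

# Inter-process communication: a pipe created by the pipe() system call.
r, w = os.pipe()
os.write(w, b"ping")
msg = os.read(r, 4)
os.close(r)
os.close(w)

print(pid, data, msg)
```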

3. Define the Process States.


A process is a program in execution. During its lifecycle, a process moves through the
following states:
 New
 Ready
 Running
 Waiting
 Terminated
4. Name the four conditions for Deadlock.
The four necessary conditions for a deadlock to occur in an operating system are:

 Mutual Exclusion
 Hold and Wait
 No Preemption
 Circular Wait
5. What is the Purpose of Page Tables?
The purpose of page tables in an operating system is to map virtual addresses to physical
addresses in memory. A page table is a data structure used by the memory
management unit (MMU) to:
 Translate a virtual page number (from a program's address space)
 Into a physical frame number (in actual RAM)
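The mapping can be sketched with a toy translation function (the 4 KB page size and the page-table contents below are illustrative assumptions, not from the question):

```python
PAGE_SIZE = 4096  # assume 4 KB pages

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(vaddr):
    """Split a virtual address into (page number, offset), then map the
    page number to a frame number via the page table."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[vpn]  # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

# Virtual address 4100 lies in page 1 at offset 4; page 1 maps to frame 2,
# so the physical address is 2 * 4096 + 4 = 8196.
print(translate(4100))
```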

6. Define Thrashing

Thrashing is a condition in an operating system where the system spends more time swapping
pages in and out of memory than executing actual processes. Thrashing occurs when a process
(or set of processes) requires more memory than is available, causing constant page faults and
frequent use of the swap space (disk), leading to severe performance degradation.

7. Summarize the File directory Structure in OS.

The file directory structure is how an operating system organizes and manages files on a
storage device. It helps users and the OS locate, access, and manage files efficiently.

 Single-Level Directory
 Two-Level Directory
 Tree-Structured Directory
 Acyclic Graph Directory
 General Graph Directory

8. Write short notes on free space management.

Free space management refers to the techniques used by the operating system to track and
manage unused disk blocks so they can be efficiently allocated to files or directories when
needed.

Common Methods of Free Space Management:

 Bit Map (Bit Vector)


 Linked List
 Grouping
 Counting
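As a sketch of the bit-map method, the snippet below scans a small bit vector for the first free block (the convention that 1 means free is an assumption; real systems use either convention):

```python
bitmap = [0, 0, 1, 0, 1, 1, 0, 1]  # 1 = free block, 0 = allocated

def first_free(bitmap):
    """Return the index of the first free block, or -1 if none."""
    for block, bit in enumerate(bitmap):
        if bit == 1:
            return block
    return -1

def allocate(bitmap):
    """Grab the first free block and mark it allocated."""
    block = first_free(bitmap)
    if block >= 0:
        bitmap[block] = 0
    return block

print(allocate(bitmap))  # block 2 is the first free block
```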

9. What are the Components of a Linux System?


A Linux system is made up of several key components that work together to provide a
functional, secure, and efficient operating environment.

 Kernel
 System Libraries
 System Utilities
 Shell
 File System
 User Applications

10. Compare iOS and Android OS

Feature             | iOS                                                        | Android
Security            | Strong security, closed ecosystem                          | More vulnerabilities due to open ecosystem
Performance         | High optimization with Apple hardware/software integration | Varies based on device, but generally fast on high-end devices
User Interface      | Consistent and uniform across all devices                  | Varies by manufacturer and user preferences
Multitasking        | Limited multitasking                                       | Better multitasking; split-screen on many devices
Device Availability | Limited to a few models from Apple                         | Available on a wide range of devices from various brands

PART-B (5 * 16 = 80 Marks)

11. (a) Explain Structure and Operation in detail of Operating System.

Structure and Operation of an Operating System

The operating system (OS) is the backbone of a computer, managing hardware,


software, and various services to provide a smooth and efficient computing experience. Let’s
break down its structure and operation in detail:

1. Structure of an Operating System

The structure of an operating system defines how different components and modules
are organized and how they interact to perform system-level tasks.

Types of OS Structures:

Monolithic Structure:

Definition: In this structure, the entire OS (kernel, device drivers, memory management, file
system, etc.) runs in a single address space.

Advantages:
Fast communication between components since everything runs in the same space.

Efficient in terms of performance.

Disadvantages:

Complex code and hard to maintain.

If one part fails, it may crash the entire system.

Example: Traditional UNIX.

Layered Structure:

Definition: The OS is divided into layers, where each layer only communicates with the
layer directly above or below it.

Advantages:

Easier to maintain and debug since each layer handles a specific task.

Modifications to one layer don't affect others.

Disadvantages:

Potentially slower performance due to added layer communication overhead.

Example: THE operating system, Windows NT.

Microkernel Structure:

Definition: Only essential services (e.g., communication, scheduling) are part of the kernel,
while other functions (e.g., file systems, device drivers) run in user space.

Advantages:

Highly modular: Easy to update or add new components.

Better fault isolation (if a service crashes, it doesn’t affect the whole system).

Disadvantages:

Communication between components can be slower.

Example: Minix, QNX.

Client-Server Structure:

Definition: The system is divided into client processes and server processes, where the
clients request services, and the servers fulfill those requests.
Advantages:

Scalable and can easily distribute tasks across different machines.

Disadvantages:

Overhead in communication between client and server.

Example: Network Operating Systems like Windows Server.

Hybrid Structure:

Definition: Combines features from both monolithic and microkernel designs, seeking to
leverage the advantages of both.

Advantages:

Flexibility and modularity.

Better fault tolerance and performance.

Disadvantages:

Complexity in implementation.

2. Operation of an Operating System

The operation of an operating system refers to how it handles tasks and resources to provide
a stable and efficient computing environment.

Key Operations in an Operating System:

Process Management:

Definition: The OS manages the execution of processes, ensuring that each process gets
enough CPU time and other resources.

Tasks:

Process Scheduling: Decides which process gets the CPU based on scheduling algorithms
(e.g., Round Robin, Priority Scheduling).

Process Creation and Termination: Manages the lifecycle of processes.

Inter-Process Communication (IPC): Facilitates communication between processes using


mechanisms like message passing or shared memory.

Example: A process may be in running, waiting, or terminated states.

Memory Management:
Definition: The OS handles the allocation and deallocation of memory (RAM) for processes.

Tasks:

Virtual Memory: Uses disk space to simulate additional RAM, enabling processes to use
more memory than physically available.

Page Replacement: Manages which pages of memory should be swapped in and out of
physical memory.

Memory Protection: Ensures that processes don’t access each other's memory space.

Example: The Page Table maps virtual addresses to physical memory locations.

File System Management:

Definition: The OS manages files, directories, and storage devices, ensuring efficient file
creation, access, and deletion.

Tasks:

File Allocation: Manages how data is stored on the disk (e.g., contiguous, linked, indexed).

File Access Control: Enforces file permissions (read, write, execute).

File Metadata Management: Maintains file attributes (name, size, date modified).

Example: The inode structure stores file information in Linux.

Device Management:

Definition: The OS controls hardware devices like disks, printers, and network interfaces.

Tasks:

Device Drivers: The OS uses device drivers to communicate with hardware devices.

Device Scheduling: Decides when to allocate resources to devices.

Input/Output (I/O): Handles requests for reading/writing data to storage or peripherals.

Example: Disk Scheduling algorithms like FCFS, SSTF, or LOOK help manage read/write
operations to storage.

Security and Access Control:

Definition: The OS ensures that unauthorized users do not access the system and its
resources.

Tasks:
Authentication: Verifies users (e.g., login process).

Authorization: Grants permissions to perform certain tasks (e.g., reading a file).

Encryption: Protects data confidentiality.

Example: A user account may have restricted access to specific files or devices.

User Interface:

Definition: The OS provides an interface through which users can interact with the system.

Types:

Command-Line Interface (CLI): Users interact through text-based commands (e.g., Linux
terminal).

Graphical User Interface (GUI): Users interact with graphical elements (e.g., Windows,
macOS).

Example: In Linux, the terminal is used to interact with the OS via commands, while in
Windows, you might interact using icons and windows.

Networking:

Definition: The OS manages networking resources to ensure communication between


systems over a network.

Tasks:

TCP/IP Stack: Implements communication protocols for data transmission over the internet.

Socket Programming: Provides interfaces for network communication.

Routing and Addressing: Manages the addressing and routing of data between network
devices.

11.(b)Discuss the Various Components and elements of Operating Systems


in detail.

An Operating System (OS) is a system software that acts as an intermediary between


users and computer hardware. It provides an environment in which users can execute
programs conveniently and efficiently. The OS is made up of several components and
elements, each handling specific tasks essential for the overall functioning of the system.

1. Kernel

Definition: The core component of the OS that directly interacts with hardware.
Functions:

 Process scheduling and management


 Memory management
 File system management
 Device control and I/O
 Interrupt and system call handling

Types:

 Monolithic Kernel
 Microkernel
 Hybrid Kernel

2. Process Management

Definition: Manages processes (running programs) in the system.

Tasks:

 Creation, execution, suspension, and termination of processes.


 CPU scheduling using algorithms (e.g., FCFS, Round Robin).
 Synchronization and communication between processes (IPC).

Element: Process Control Block (PCB) holds process-related information like PID, state,
registers, etc.

3. Memory Management

Definition: Controls and coordinates the computer's memory, allocating and deallocating
memory space as needed.

Functions:

 Keeps track of memory usage.


 Allocates memory to processes.
 Swaps memory between RAM and disk (virtual memory).

Techniques:

Paging, Segmentation

Demand paging, Page replacement algorithms

4. File System Management

Definition: Manages how data is stored and retrieved from storage devices.

Responsibilities:
 File creation, deletion, reading, writing
 Directory structure management (single-level, tree, DAG, etc.)
 File permissions and access control

Elements:

File Control Block (FCB)

Directory structure

Disk scheduling and allocation methods

5. Device Management (I/O Management)

Definition: Manages all input/output devices like keyboard, mouse, printer, disk drives, etc.

Functions:

 Keeps track of all devices (device status table)


 Manages device communication through drivers
 Performs I/O scheduling and buffering

Element: Uses device drivers to abstract hardware and provide a standard interface

6. Secondary Storage Management

Definition: Manages non-volatile storage such as HDDs, SSDs.

Functions:

 Space allocation
 Free space management
 Disk scheduling (e.g., SSTF, LOOK)
 File system integrity

7. Security and Protection

Definition: Protects data and resources from unauthorized access and maintains system
integrity.

Functions:

 User authentication (login)


 Access control (file permissions, process isolation)
 Encryption and auditing

Elements:

Authentication mechanisms (passwords, biometrics)


Authorization and access control lists (ACL)

8. Command Interpreter (Shell)

Definition: Interface between the user and the OS (CLI or GUI).

Types:

CLI (Command Line Interface): User types commands (e.g., Bash, PowerShell).

GUI (Graphical User Interface): User interacts via windows, icons, etc.

Function:

Interprets user commands and communicates them to the OS for execution.

9. Networking Component

Definition: Enables communication between systems over networks.

Functions:

 Implements network protocols (TCP/IP, HTTP)


 Handles network communication, data sharing, and remote access

Element: Socket interface for inter-device communication

10. User Interface (UI)

Definition: Facilitates interaction between user and system

Types:

 Text-based (CLI)
 Visual (GUI)

Functions:

 Allow access to files, applications, and system settings


 Provide graphical tools and icons for easier navigation

11. System Utilities

Definition: Programs that perform maintenance and support tasks.

Examples:

Disk cleanup, virus scanner, system monitor


Backup tools, compilers, text editors

12. (a) Explain any four CPU scheduling algorithms with examples.

1. First-Come, First-Served (FCFS)

Description:

Processes are executed in the order they arrive.

Non-preemptive (once a process starts, it runs to completion).

Example:

Process Arrival Time Burst Time

P1 0 ms 5 ms

P2 1 ms 3 ms

P3 2 ms 8 ms

Gantt Chart:

| P1 | P2 | P3 |
|----|----|------|
0 5 8 16

Average Waiting Time:

P1 = 0

P2 = 5 - 1 = 4

P3 = 8 - 2 = 6
Average = (0 + 4 + 6) / 3 = 3.33 ms
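The waiting-time arithmetic above can be checked with a short sketch (non-preemptive FCFS, processes taken in arrival order):

```python
def fcfs_waiting_times(procs):
    """procs: list of (arrival, burst) pairs sorted by arrival time.
    Returns the waiting time of each process."""
    clock, waits = 0, []
    for arrival, burst in procs:
        clock = max(clock, arrival)    # CPU may sit idle until arrival
        waits.append(clock - arrival)  # time spent in the ready queue
        clock += burst                 # non-preemptive: run to completion
    return waits

waits = fcfs_waiting_times([(0, 5), (1, 3), (2, 8)])
print(waits, sum(waits) / len(waits))  # waits [0, 4, 6]; average 3.33 ms
```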

2. Shortest Job Next (SJN) / Shortest Job First (SJF)

Description:

Executes the process with the shortest burst time next.

Non-preemptive.

Example:
Process Arrival Time Burst Time

P1 0 ms 6 ms

P2 1 ms 4 ms

P3 2 ms 2 ms

P4 3 ms 1 ms

Gantt Chart:

| P1 | P4 | P3 | P2 |
|----|----|----|-------|
0 6 7 9 13

Average Waiting Time:

P1 = 0

P4 = 6 - 3 = 3

P3 = 7 - 2 = 5

P2 = 9 - 1 = 8
Average = (0 + 3 + 5 + 8) / 4 = 4 ms
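The selection rule (shortest burst among the processes that have already arrived) can be sketched as:

```python
def sjf_schedule(procs):
    """Non-preemptive SJF. procs: dict of name -> (arrival, burst).
    Returns the execution order and each process's waiting time."""
    remaining = dict(procs)
    clock, order, waits = 0, [], {}
    while remaining:
        ready = [p for p, (arr, _) in remaining.items() if arr <= clock]
        if not ready:                   # CPU idle until the next arrival
            clock = min(arr for arr, _ in remaining.values())
            continue
        name = min(ready, key=lambda p: remaining[p][1])  # shortest burst
        arrival, burst = remaining.pop(name)
        order.append(name)
        waits[name] = clock - arrival
        clock += burst
    return order, waits

order, waits = sjf_schedule({"P1": (0, 6), "P2": (1, 4), "P3": (2, 2), "P4": (3, 1)})
print(order, waits)  # order P1, P4, P3, P2; average wait = 4 ms
```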

3. Round Robin (RR)

Description:

Each process gets a fixed time quantum (e.g., 2 ms).

Preemptive: If a process doesn’t finish in its time slice, it goes to the back of the
queue.

Example:

Time Quantum = 2 ms

Process Arrival Time Burst Time

P1 0 ms 5 ms

P2 1 ms 3 ms

P3 2 ms 1 ms

Gantt Chart:
| P1 | P2 | P3 | P1 | P2 | P1 |
|-----|----|-----|----|-----|-----|
0 2 4 5 7 8 9

Average Waiting Time:

P1 = 9 - 0 - 5 = 4

P2 = 8 - 1 - 3 = 4

P3 = 5 - 2 - 1 = 2
Average = (4 + 4 + 2) / 3 = 3.33 ms

(Total burst time is 5 + 3 + 1 = 9 ms, so the schedule ends at t = 9; waiting time = completion time - arrival time - burst time.)
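The time-slicing above can be simulated with a ready queue. This is a sketch; the tie-breaking convention (a process arriving at the same instant another is preempted enters the queue first) is an assumption:

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, arrival, burst) sorted by arrival time.
    Returns the Gantt chart and each process's completion time."""
    pending = list(procs)
    remaining = {name: burst for name, _, burst in procs}
    queue, clock, gantt, completion = deque(), 0, [], {}

    def admit(t):
        while pending and pending[0][1] <= t:
            queue.append(pending.pop(0)[0])

    admit(0)
    while queue or pending:
        if not queue:               # CPU idle until the next arrival
            clock = pending[0][1]
            admit(clock)
        name = queue.popleft()
        run = min(quantum, remaining[name])
        gantt.append((name, clock, clock + run))
        clock += run
        remaining[name] -= run
        admit(clock)                # arrivals queue before the preempted one
        if remaining[name] > 0:
            queue.append(name)
        else:
            completion[name] = clock
    return gantt, completion

gantt, completion = round_robin([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)], 2)
print([slot[0] for slot in gantt], completion)
```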

4. Priority Scheduling

Description:

Processes are executed based on priority.

Lower number = higher priority.

Can be preemptive or non-preemptive.

Example (Non-Preemptive):

Process Arrival Time Burst Time Priority

P1 0 ms 4 ms 3

P2 1 ms 3 ms 1

P3 2 ms 1 ms 2

Gantt Chart:

| P1 | P2 | P3 |
|-----|----|-----|
0 4 7 8
Average Waiting Time

P1 = 0

P2 = 4 - 1 = 3
P3 = 7 - 2 = 5
Average = (0 + 3 + 5) / 3 = 2.67 ms

12.(b) Explain Banker's Algorithm for deadlock avoidance with a suitable example.

Deadlock Avoidance
• The simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need.
• The deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition.
• Resource-allocation state is defined by the number of available and allocated resources, and
the maximum demands of the processes.
Safe State
• When a process requests an available resource, the system must decide if immediate
allocation leaves the system in a safe state.
• The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes
in the system such that for each Pi, the resources that Pi can still request can be satisfied by
the currently available resources plus the resources held by all the Pj, with j < i.
• That is:
– If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished.
– When Pj is finished, Pi can obtain the needed resources, execute, return the allocated
resources, and terminate.
– When Pi terminates, Pi+1 can obtain its needed resources, and so on.
Avoidance algorithms
• Single instance of a resource type: use a resource-allocation graph.
• Multiple instances of a resource type: use the Banker's algorithm.
Banker’s Algorithm

Multiple instances.
Each process must a priori claim maximum use.
When a process requests a resource it may have to wait.
When a process gets all its resources it must return them in a finite amount of time.
Let n = number of processes, and m = number of resources types.
Available: Vector of length m. If available [j] = k, there are k instances of resource
type Rj available.
Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of
resource type Rj .
Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k
instances of
Rj.
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to
complete its
task.
Example of Banker’s Algorithm
5 processes P0 through P4;
3 resource types:
A (10 instances), B (5 instances), and C (7 instances).

Snapshot at time T0:


Allocation Max Available
ABC ABC ABC
P0 010 753 332
P1 200 322
P2 302 902
P3 211 222
P4 002 433

The content of the matrix Need is defined to be Max – Allocation.


Need
ABC

P0 743
P1 122
P2 600
P3 011
P4 431
• The system is in a safe state, since the sequence <P1, P3, P4, P2, P0> satisfies the safety
criteria (where Need[i,j] = Max[i,j] – Allocation[i,j]).
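The safety check can be sketched in Python (a simplified version of the Banker's safety algorithm, using the snapshot data above):

```python
def is_safe(available, max_need, allocation):
    """Banker's safety algorithm: return a safe sequence of process
    indices, or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work, finished, sequence = list(available), [False] * n, []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pi can run to completion and return its allocation.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None
    return sequence

seq = is_safe([3, 3, 2],
              [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
              [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]])
print(seq)  # a safe sequence exists, so the snapshot is in a safe state
```

Note that more than one safe sequence can exist: the scan above finds <P1, P3, P4, P0, P2>, which is valid alongside the <P1, P3, P4, P2, P0> given in the answer.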

13.(a) Explain any one contiguous memory allocation technique with an example.

First-Fit Memory Allocation


The First Fit method works by searching through memory blocks from the
beginning, looking for the first block that can fit the process.
 For any process Pn, the operating system looks through the available memory blocks.
 It allocates the first block that is free and large enough to hold the process.
In simple terms, First Fit simply finds the first available block that can fit the process and
assigns it there. It’s a straightforward and fast way to allocate memory.

Advantages of First Fit Algorithm


The First Fit algorithm in operating systems offers several benefits:
 It is straightforward to implement and easy to understand, making it ideal for systems
with limited computational power.
 Memory can be allocated quickly when a suitable free block is found at the start.
 When processes have similar memory sizes, First Fit can help minimize fragmentation
by utilizing the first available block that fits the process.
Disadvantages of First Fit Algorithm

Despite its advantages, the First Fit algorithm has a few downsides:
 Over time, First Fit can lead to both external fragmentation, where small free memory
blocks are scattered, and internal fragmentation, where allocated memory exceeds the
process’s requirement, wasting space.
 It may not always allocate memory in the most efficient manner, leading to suboptimal
use of available memory.
 For large processes, First Fit can be less efficient, as it may need to search through
numerous smaller blocks before finding an appropriate one, which can slow down
memory allocation.
EXAMPLE:
Given memory partitions of 100K, 500K, 200K, 300K, and 600K (in order), how
would each of the First-fit, Best-fit, and Worst-fit algorithms place processes of 212K, 417K,
112K, and 426K (in order)? Which algorithm makes the most efficient use of memory?

Solution:

First-Fit: 212K is put in 500K partition.

417K is put in 600K partition.

112K is put in 288K partition (new partition 288K = 500K - 212K).

426K must wait.
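The worked example can be reproduced with a short sketch (holes are mutated in place; the leftover of a split block becomes a new, smaller hole):

```python
def first_fit(holes, requests):
    """Place each request in the first hole large enough for it.
    Returns the hole index used per request, or None if it must wait."""
    placements = []
    for size in requests:
        for i, free in enumerate(holes):
            if free >= size:
                holes[i] = free - size  # leftover becomes a smaller hole
                placements.append(i)
                break
        else:
            placements.append(None)     # no hole big enough: must wait
    return placements

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, [212, 417, 112, 426]))  # [1, 4, 1, None]
print(holes)
```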

13.(b) Explain the first-in, first-out (FIFO) page replacement algorithm and the optimal
page replacement algorithm with an example and diagrams.

FIFO page replacement algorithm


 A simple and obvious page replacement strategy is FIFO, i.e. first-in-first-out.
 As new pages are brought in, they are added to the tail of a queue, and the page at the head
of the queue is the next victim.
Example:
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
No. of available frames = 3 (3 pages can be in memory at a time per process)

No. of page faults = 15


Drawback:
FIFO page replacement algorithm performance is not always good.
To illustrate this, consider the following example:
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
 If the no. of available frames = 3, then the no. of page faults = 9.
 If the no. of available frames = 4, then the no. of page faults = 10.
 Here the no. of page faults increases when the no. of frames increases. This is called
Belady’s Anomaly.
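A short fault counter makes the anomaly easy to verify (a sketch of FIFO replacement):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults for a reference string under FIFO replacement."""
    resident, order, faults = set(), deque(), 0
    for page in refs:
        if page in resident:
            continue
        faults += 1
        if len(resident) == frames:       # memory full: evict oldest page
            resident.discard(order.popleft())
        resident.add(page)
        order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 faults vs 10 faults
```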
Optimal page replacement algorithm
 Belady's anomaly led to the search for an optimal page-replacement algorithm: one that
yields the lowest possible page-fault rate and does not suffer from Belady's anomaly.
 This algorithm is simply: "Replace the page that will not be used for the longest time in the
future."
Example:
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
No.of available frames = 3

No. of page faults = 9


Drawback:
It is difficult to implement as it requires future knowledge of the reference string.
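OPT can be simulated only when the whole reference string is known in advance, which is exactly why it is impractical online; a sketch:

```python
def optimal_faults(refs, frames):
    """Count page faults under the optimal policy: on a fault with full
    memory, evict the resident page used farthest in the future."""
    resident, faults = set(), 0
    for i, page in enumerate(refs):
        if page in resident:
            continue
        faults += 1
        if len(resident) == frames:
            future = refs[i + 1:]
            # A page never referenced again has "infinite" next-use distance.
            victim = max(resident,
                         key=lambda p: future.index(p) if p in future else float("inf"))
            resident.discard(victim)
        resident.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))  # 9 page faults, as in the example above
```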

14.(a) Explain and compare FCFS, SSTF, C-SCAN and C-LOOK disk
scheduling algorithms with an example.
One of the responsibilities of the operating system is to use the hardware efficiently. For the
disk drives, this means having fast access time and large disk bandwidth.
1. FCFS Scheduling:
The simplest form of disk scheduling is, of course, the first-come, first-served
(FCFS)algorithm. This algorithm is intrinsically fair, but it generally does not provide the
fastest service.
Consider, for example, a disk queue with requests for I/O to blocks on cylinders
98, 183, 37, 122, 14, 124, 65, 67, in that order.
If the disk head is initially at cylinder 53, it will first move from 53 to 98, then to 183,
37, 122, 14, 124, 65, and finally to 67, for a total head movement of 640 cylinders. The
wild swing from 122 to 14 and then back to 124 illustrates the problem with this schedule. If
the requests for cylinders 37 and 14 could be serviced together, before or after the requests
for 122 and 124, the total head movement could be decreased substantially, and performance
could be thereby improved.
2. SSTF (shortest-seek-time-first)Scheduling
Service all the requests close to the current head position, before moving the head far away to
service other requests. That is selects the request with the minimum seek time from the
current head position.
Total head movement = 236 cylinders

3.C-SCAN Scheduling
A variant of SCAN designed to provide a more uniform wait time. It moves the head
from one end of the disk to the other, servicing requests along the way. When the head
reaches the other end, however, it immediately returns to the beginning of the disk, without
servicing any requests on the return trip.
4.C-LOOK Scheduling
Both SCAN and C-SCAN move the disk arm across the full width of the disk. In
C-LOOK, the arm goes only as far as the final request in each direction. Then, it reverses direction
immediately, without going all the way to the end of the disk.
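The totals quoted in this answer (640 cylinders for FCFS and 236 for SSTF, head starting at cylinder 53) can be checked with a sketch:

```python
def total_movement(start, order):
    """Sum of absolute head movements while servicing cylinders in order."""
    movement, pos = 0, start
    for cyl in order:
        movement += abs(cyl - pos)
        pos = cyl
    return movement

def sstf_order(start, requests):
    """Repeatedly pick the pending request closest to the head."""
    pending, order, pos = list(requests), [], start
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(total_movement(53, queue))                  # FCFS: 640 cylinders
print(total_movement(53, sstf_order(53, queue)))  # SSTF: 236 cylinders
```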

14.(b) Briefly describe the services which are provided by I/O system
Kernel I/O Subsystem
Kernels provide many services related to I/O.
 One way that the I/O subsystem improves the efficiency of the computer is by scheduling
I/O operations.
 Another way is by using storage space in main memory or on disk, via techniques
called buffering, caching, and spooling.
I/O Scheduling:
To determine a good order in which to execute the set of I/O requests.
Uses:
a) It can improve overall system performance,

b) It can share device access fairly among processes, and

c) It can reduce the average waiting time for I/O to complete.


Implementation: OS developers implement scheduling by maintaining a queue of requests for
each device.
1. When an application issues a blocking I/O system call,

2. The request is placed on the queue for that device.

3. The I/O scheduler rearranges the order of the queue to improve the overall system
efficiency and the average response time experienced by applications.
Buffering:
Buffer: A memory area that stores data while they are transferred between two devices or
between a device and an application.
Reasons for buffering:
a) To cope with a speed mismatch between the producer and consumer of a data stream.

b) To adapt between devices that have different data-transfer sizes.

c) To support copy semantics for application I/O.

Copy semantics: Suppose that an application has a buffer of data that it wishes to write to
disk. It calls the write () system call, providing a pointer to the buffer and an integer
specifying the number of bytes to write.
After the system call returns, what happens if the application changes the contents of
the buffer? With copy semantics, the version of the data written to disk is guaranteed to be
the version at the time of the application system call, independent of any subsequent changes
in the application's buffer.
A simple way that the operating system can guarantee copy semantics is for the write()
system call to copy the application data into a kernel buffer before returning control to the
application. The disk write is performed from the kernel buffer, so that subsequent changes to
the application buffer have no effect.
Caching
A cache is a region of fast memory that holds copies of data. Access to the cached
copy is more efficient than access to the original.
Cache vs. buffer: A buffer may hold the only existing copy of a data item, whereas a cache
just holds a copy, on faster storage, of an item that resides elsewhere.

When the kernel receives a file I/O request,


1. The kernel first accesses the buffer cache to see whether that region of the file is already

available in main memory.


2. If so, a physical disk I/O can be avoided or deferred. Also, disk writes are accumulated in
the buffer cache for several seconds, so that large transfers are gathered to allow efficient
write schedules.
Spooling and Device Reservation:
Spool: A buffer that holds output for a device, such as a printer, that cannot accept
interleaved data streams. A printer can serve only one job at a time, but several applications
may wish to print their output concurrently, without having their output mixed together.

The OS provides a control interface that enables users and system administrators:
a) To display the queue,

b) To remove unwanted jobs before those jobs print,

c) To suspend printing while the printer is serviced, and

and so on.

Device reservation provides exclusive access to a device:
□ System calls for allocation and de-allocation
□ Watch out for deadlock
Error Handling:
• An operating system that uses protected memory can guard against many kinds of hardware
and application errors.
• The OS can recover from disk read errors, device unavailability, and transient write failures.

• Most return an error number or code when I/O request fails

System error logs hold problem reports

15. (a) Explain in detail about how process is managed and scheduled in
Linux.
PROCESS MANAGEMENT

UNIX process management separates the creation of processes and the running of a new
program into two distinct operations.
✦ The fork system call creates a new process.

✦ A new program is run after a call to execve.

Under UNIX, a process encompasses all the information that the operating system must
maintain to track the context of a single execution of a single program.
Under Linux, process properties fall into three groups:
o process’s identity,

o environment, and
o context.

Process Identity
A process identity consists mainly of the following items:
Process ID (PID). Each process has a unique identifier. The PID is used to specify the
process to the operating system when an application makes a system call to signal, modify, or
wait for the process.
Credentials. Each process must have an associated userID and one or more group IDs that
determine the rights of a process to access system resources and files.
Personality. Personalities are primarily used by emulation libraries to request that system
calls be compatible with certain varieties of UNIX.
Namespace. Each process is associated with a specific view of the file system hierarchy,
called its namespace. Most processes share a common namespace and thus operate on a
shared file-system hierarchy. Processes and their children can, however, have different
namespaces, each with a unique file-system hierarchy—their own root directory and set of
mounted file systems.
Process Environment
The process’s environment is inherited from its parent, and is composed of two
null-terminated vectors:
✦ The argument vector lists the command-line arguments used to invoke the running
program; conventionally starts with the name of the program itself
✦ The environment vector is a list of “NAME=VALUE” pairs that associates named
environment variables with arbitrary textual values. Passing environment variables among
processes and inheriting variables by a process’s children are flexible means of passing
information to components of the user mode system software.
The environment-variable mechanism provides a customization of the operating
system that can be set on a per-process basis, rather than being configured for the system as a
whole.
Process Context
Process context is the state of the running program at any one time; it changes constantly.
Process context includes the following:
Scheduling context. The most important part of the process context is its scheduling context
—the information that the scheduler needs to suspend and restart the process.
Accounting. The kernel maintains accounting information about the resources currently
being consumed by each process and the total resources consumed by the process in its entire
lifetime so far.
File table. The file table is an array of pointers to kernel file structures representing open
files. When making file-I/O system calls, processes refer to files by an integer, known as a
file descriptor (fd), that the kernel uses to index into this table.
File-system context. The file-system context includes the process’s root directory, current
working directory, and namespace.
Signal-handler table. UNIX systems can deliver asynchronous signals to a process in
response to various external events. The signal-handler table defines the action to take in
response to a specific signal. Valid actions include ignoring the signal, terminating the
process, and invoking a routine in the process’s address space.
Virtual memory context. The virtual memory context describes the full contents of a
process’s private address space.
Process and Threads
Linux uses the same internal representation for processes and threads; a thread is simply a
new process that happens to share the same address space as its parent.
A distinction is made only when a new thread is created by the clone system call.
✦ fork creates a new process with its own entirely new process context.
✦ clone creates a new process with its own identity, but one that is allowed to share the data structures of its parent.
Using clone gives an application fine-grained control over exactly what is shared between
two threads.
SCHEDULING
Linux supports preemptive multitasking.
Scheduling is the job of allocating CPU time to different tasks within an operating system. While scheduling is normally thought of as the running and interrupting of processes, in Linux it also includes the running of various kernel tasks. Running kernel tasks encompasses both tasks that are requested by a running process and tasks that execute internally on behalf of a device driver.
The Linux scheduler contains:
• Run queue: A run queue (rq) is created for each processor (CPU). Each run queue contains a list of runnable processes on a given processor.
• Scheduler class: An extensible hierarchy of scheduler modules. These modules encapsulate scheduling-policy details and are called from the scheduler core without the core code assuming too much about them.
• Load balancer: In an SMP environment each CPU has its own run queue, and these queues may become unbalanced over time. A run queue with no tasks leaves its associated CPU idle, which does not take full advantage of symmetric multiprocessing. The load balancer addresses this issue: it is invoked whenever the system needs to schedule tasks, and if the run queues are unbalanced it pulls tasks from the busiest processors to idle processors.
Process Scheduling
Linux has two separate process-scheduling algorithms:
o A time-sharing algorithm for fair, preemptive scheduling among multiple processes.
o An algorithm designed for real-time tasks.
The scheduling algorithm used for routine time-sharing tasks is the Completely Fair Scheduler (CFS). CFS provides increased support for SMP, processor affinity, and load balancing.
The Linux scheduler is a preemptive, priority-based algorithm with two separate priority ranges: a real-time range from 0 to 99 and a nice-value range from −20 to 19. Smaller nice values indicate higher priorities.
CFS introduced a new scheduling algorithm called fair scheduling that eliminates
time slices. Instead of time slices, all processes are allotted a proportion of the processor’s
time.
CFS calculates how long a process should run as a function of the total number of
runnable processes. To start, CFS says that if there are N runnable processes, then each
should be afforded 1/N of the processor’s time.
CFS then adjusts this allotment by weighting each process’s allotment by its nice
value. Processes with the default nice value have a weight of 1—their priority is unchanged.
Processes with a smaller nice value (higher priority) receive a higher weight, while processes
with a larger nice value (lower priority) receive a lower weight. CFS then runs each process
for a “time slice” proportional to the process’s weight divided by the total weight of all
runnable processes.
Real-Time Scheduling
Linux implements the two real-time scheduling classes required by POSIX.1b:
o First-come, first-served (FCFS)
o Round-robin
In both classes, each process has a priority in addition to its scheduling class. The scheduler always runs the process with the highest priority; among processes of equal priority, it runs the one that has been waiting longest.
Kernel Synchronization
A request for kernel-mode execution can occur in two ways:
o A running program may request an operating system service, either explicitly via a system
call, or implicitly, for example, when a page fault occurs.
o A device driver may deliver a hardware interrupt that causes the CPU to start executing a kernel-defined handler for that interrupt.
Kernel synchronization requires a framework that will allow the kernel’s critical sections to run without interruption by another critical section.
Linux uses two techniques to protect critical sections:
1. Normal kernel code is nonpreemptible:
– when a timer interrupt is received while a process is executing a kernel system-service routine, the kernel’s need_resched flag is set so that the scheduler will run once the system call has completed and control is about to be returned to user mode.
2. The second technique applies to critical sections that occur in interrupt service routines.
- By using the processor’s interrupt control hardware to disable interrupts during a critical
section, the kernel guarantees that it can proceed without the risk of concurrent access of
shared data structures.
To avoid performance penalties, Linux’s kernel uses a synchronization architecture that
allows long critical sections to run without having interrupts disabled for the critical section’s
entire duration.
Interrupt service routines are separated into a top half and a bottom half.
o The top half is a normal interrupt service routine, and runs with recursive interrupts
disabled.
o The bottom half is run, with all interrupts enabled, by a miniature scheduler that ensures
that bottom halves never interrupt themselves.
o This architecture is completed by a mechanism for disabling selected bottom halves while
executing normal, foreground kernel code.
o Each level may be interrupted by code running at a higher level, but will never be
interrupted by code running at the same or a lower level.
o User processes can always be preempted by another process when a time-sharing scheduling interrupt occurs.
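The split described above can be summarized as a schematic sketch (pseudocode only, not real kernel source; the routine names are invented for illustration):

```
top_half_irq_handler():            /* runs with recursive interrupts disabled */
    acknowledge_device()
    copy_urgent_data_to_buffer()
    mark_bottom_half_pending()     /* defer the long-running work */

bottom_half():                     /* run later, with all interrupts enabled */
    process_buffered_data()        /* may be interrupted by any top half,
                                      but never by another bottom half */
```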
Symmetric Multiprocessing
• The Linux 2.0 kernel was the first stable Linux kernel to support symmetric multiprocessor
(SMP) hardware, allowing separate processes to execute in parallel on separate processors.
• In version 2.2 of the kernel, a single kernel spinlock (sometimes termed BKL for “big
kernel lock”) was created to allow multiple processes (running on different processors) to be
active in the kernel concurrently.
• Drawbacks of the BKL:
o It provides a very coarse level of locking granularity,
o resulting in poor scalability on machines with many processors and processes.
• Later releases of the kernel made the SMP implementation more scalable by splitting this
single kernel spinlock into multiple locks, each of which protects only a small subset of the
kernel’s data structures.
15.(b) Explain the Design Goals and the Architecture of Android with a neat sketch.
Android architecture, or the Android software stack, is categorized into five parts:
1. Linux kernel
2. Native libraries (middleware)
3. Android Runtime
4. Application Framework
5. Applications
1) Linux Kernel
The Linux kernel is the heart of the Android architecture and sits at the root of the stack. It is responsible for device drivers, power management, memory management, device management, and resource access.
2) Native Libraries
On top of the Linux kernel sit native libraries such as WebKit, OpenGL, FreeType, SQLite, Media, and the C runtime library (libc). The WebKit library provides browser support, SQLite provides the database, FreeType provides font support, and the Media libraries handle playing and recording audio and video formats.
3) Android Runtime
The Android runtime contains the core libraries and the DVM (Dalvik Virtual Machine), which is responsible for running Android applications. The DVM is like the JVM but is optimized for mobile devices: it consumes less memory and provides fast performance.
4) Android Framework
On top of the native libraries and the Android runtime sits the Android framework. It includes Android APIs such as UI (user interface), telephony, resources, locations, content providers (data), and package managers, and it provides many classes and interfaces for Android application development.
5) Applications
At the top of the stack are the applications. All applications, such as Home, Contacts, Settings, games, and browsers, use the Android framework, which in turn uses the Android runtime and native libraries; these, in turn, rest on the Linux kernel.
Android Core Building Blocks
An Android component is simply a piece of code that has a well-defined life cycle, e.g., Activity, Receiver, Service. The core building blocks, or fundamental components, of Android are activities, views, intents, services, content providers, fragments, and AndroidManifest.xml.
Activity
An activity is a class that represents a single screen. It is like a Frame in AWT.
View
A view is the UI element such as button, label, text field etc. Anything that you see is a view.
Intent
Intent is used to invoke components. It is mainly used to:
• Start a service
• Launch an activity
• Display a web page
• Display a list of contacts
• Broadcast a message
Service
FACULTY IN-CHARGE HOD