
JINNAH UNIVERSITY FOR WOMEN

NAME: NISA IQBAL

STUDENT ID: juw25546

DEPARTMENT: CS & SE

CLASS: SOFTWARE ENGINEERING

COURSE TITLE: OPERATING SYSTEM

SUBMISSION DATE: 07-OCT-2023

COURSE IN CHARGE: MISS SADIA

ASSIGNMENT: 01
QUESTION NO. 1

Consider the following set of processes, with the length of the CPU burst given in milliseconds:

Process    Burst Time    Priority
P1         2             2
P2         1             1
P3         8             4
P4         4             2
P5         5             3

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
a. Draw four Gantt charts that illustrate the execution of these processes using the following
scheduling algorithms: FCFS, SJF, nonpreemptive priority (a larger priority number implies a
higher priority), and RR (quantum = 2).
b. What is the turnaround time of each process for each of the scheduling algorithms in part a?
c. What is the waiting time of each process for each of these scheduling algorithms?
d. Which of the algorithms results in the minimum average waiting time (over all processes)?
ANSWER: 1
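A worked solution, assuming all processes arrive at time 0 in the order P1-P5 and that ties between equal-priority processes are broken by arrival order:

a. Gantt charts:

FCFS:      | P1 | P2 |      P3      |   P4   |   P5   |
           0    2    3             11       15       20

SJF:       | P2 | P1 |   P4   |   P5   |      P3      |
           0    1    3        7       12             20

Priority:  |      P3      |   P5   | P1 |   P4   | P2 |
           0              8       13   15       19   20

RR (q=2):  | P1 | P2 | P3 | P4 | P5 | P3 | P4 | P5 | P3 | P5 | P3 |
           0    2    3    5    7    9   11   13   15   17   18   20

b. Turnaround time (equal to completion time, since all processes arrive at time 0):

Process    FCFS    SJF    Priority    RR
P1         2       3      15          2
P2         3       1      20          3
P3         11      20     8           20
P4         15      7      19          13
P5         20      12     13          18

c. Waiting time (turnaround time minus burst time):

Process    FCFS    SJF    Priority    RR
P1         0       1      13          0
P2         2       0      19          2
P3         3       12     0           12
P4         11      3      15          9
P5         15      7      8           13

d. Average waiting times: FCFS = 31/5 = 6.2 ms, SJF = 23/5 = 4.6 ms, priority = 55/5 = 11.0 ms, RR = 36/5 = 7.2 ms. SJF gives the minimum average waiting time.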
QUESTION NO. 2

Discuss briefly.

1. The actions taken by a kernel to context-switch between processes


2. The role of the init process on UNIX and Linux systems in regard to process termination
ANSWER: 2

1. CONTEXT-SWITCHING BETWEEN PROCESSES:

Context-switching is a fundamental operation performed by the kernel of an operating system to switch the CPU's execution from one process to another. The key actions taken by the kernel during a context switch are:

a. Saving Process State (Context): When the kernel decides to switch from one process to
another (e.g., due to a time slice expiration or an event like I/O completion), it must save the
state of the currently running process. This state includes the values of CPU registers, program
counter, and other relevant information.
b. Loading Process State: The kernel then loads the saved state of the next process to be
executed. This includes setting up the CPU registers and program counter to the values stored
for that process. This action essentially restores the process to the point where it left off.
c. Updating Process Control Block (PCB): The kernel maintains a data structure called the Process
Control Block (PCB) for each process. It updates the PCB of both the outgoing and incoming
processes to reflect the context switch. This includes information like process state, priority,
memory pointers, etc.
d. Switching Memory Context: If the processes involved in the context switch have different
memory address spaces (as in the case of virtual memory systems), the memory management
unit's context must also be updated to map the appropriate memory pages for the new
process.
e. Actual Switch: Finally, the scheduler dispatches the selected process: the kernel transfers control to it, and the CPU resumes executing instructions from that process's restored program counter. A sketch of these steps follows.
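A minimal C sketch of the data structures and call sequence these steps imply; every name here is illustrative rather than taken from a real kernel, and the save/restore/address-space routines would be architecture-specific assembly in practice:

```c
/* Sketch: illustrative PCB layout and context-switch sequence.
   All names are hypothetical; real kernels implement the register
   save/restore in architecture-specific assembly. */
#include <stdint.h>

enum proc_state { RUNNING, READY, WAITING };

typedef struct context {
    uint64_t regs[16];          /* general-purpose registers */
    uint64_t pc;                /* program counter           */
    uint64_t sp;                /* stack pointer             */
    uint64_t flags;             /* processor status word     */
} context_t;

typedef struct pcb {
    int             pid;
    enum proc_state state;
    context_t       ctx;        /* saved CPU context                */
    uint64_t       *page_table; /* root of this process's mappings  */
} pcb_t;

/* Architecture-specific primitives (assembly in a real kernel). */
void save_context(context_t *ctx);        /* step a: capture registers  */
void load_context(const context_t *ctx);  /* step b: restore and resume */
void switch_address_space(uint64_t *pt);  /* step d: e.g., reload CR3   */

void context_switch(pcb_t *prev, pcb_t *next) {
    save_context(&prev->ctx);                /* step a                     */
    prev->state = READY;                     /* step c: update both PCBs   */
    next->state = RUNNING;
    switch_address_space(next->page_table);  /* step d                     */
    load_context(&next->ctx);                /* step e: CPU now runs next  */
}
```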

2. ROLE OF THE INIT PROCESS IN PROCESS TERMINATION (UNIX/LINUX):


In UNIX and Linux systems, the `init` process plays a critical role in managing the termination of
processes. Specifically:
a. Reaping Orphaned Processes: When a process terminates, it remains a "zombie" until its parent collects its exit status with wait(). If the parent terminates first, the orphaned child is re-parented to `init` (PID 1). `Init` routinely calls wait() on its adopted children to collect their exit status and free their process-table entries, preventing zombie processes from accumulating and wasting system resources (a demonstration sketch follows at the end of this answer).
b. Starting System Services: `init` is responsible for starting and managing various system services
and daemons during system start-up. These services are essential for the proper functioning of
the system.
c. System Shutdown: During system shutdown, `init` is responsible for coordinating the orderly
termination of all running processes and system services. It ensures that processes are
terminated gracefully, preventing data loss or corruption.
Overall, the `init` or equivalent process plays a crucial role in process management and system
initialization on UNIX and Linux systems, ensuring the orderly startup and termination of
processes.
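A small POSIX C sketch (Linux assumed) that makes the re-parenting visible; on a classic UNIX the orphan's new parent is PID 1, while modern Linux may instead report the PID of a designated subreaper such as a systemd user instance:

```c
/* Sketch (POSIX, Linux assumed): a child orphaned by its parent's exit is
   adopted by init, so the parent PID the child observes changes to 1
   (or to a subreaper's PID on modern Linux). */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                      /* child */
        sleep(2);                        /* give the parent time to exit */
        printf("child %d: adoptive parent is %d\n",
               (int)getpid(), (int)getppid());
        _exit(0);
    }
    printf("parent %d exiting; child %d is orphaned\n",
           (int)getpid(), (int)pid);
    return 0;                            /* parent exits without wait()ing */
}
```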

QUESTION NO. 3

Reference Book: Silberschatz, A., Galvin, P. B., & Gagne, G., Operating System Concepts, 9th Edition.

1. What are open-source operating systems? How do they differ from the closed-source
approach? Discuss the advantages and disadvantages of both. Give at least three examples
of open-source and closed-source operating systems.
2. What is a computing environment? Discuss the different computing environments covered
in the book, with examples for each.
3. What is the purpose of the bootstrap program when multiple operating systems are
installed on partitions of a single system?
4. The root directory is one of the most significant directories in the Unix/Linux directory
structure. List three unique characteristics of the root directory that are not associated
with other directories in the Unix/Linux environment.
5. Contrast between Windows and Linux operating system environments.
6. Answer the following practice exercises.
- Keeping in mind the various definitions of operating system, consider whether the operating
system should include applications such as web browsers and mail programs. Argue both
that it should and that it should not, and support your answers.
- How does the distinction between kernel mode and user mode function as a rudimentary
form of protection (security) system?
- Some CPUs provide for more than two modes of operation. What are two possible uses of
these multiple modes?
- Give two reasons why caches are useful. What problems do they solve? What problems do
they cause? If a cache can be made as large as the device for which it is caching (for
instance, a cache as large as a disk), why not make it that large and eliminate the device?
- Many SMP systems have different levels of caches; one level is local to each processing core,
and another level is shared among all processing cores. Why are caching systems designed
this way?
- Discuss, with examples, how the problem of maintaining coherence of cached data
manifests itself in the following processing environments:
a. Single-processor systems
b. Multiprocessor systems
c. Distributed systems

ANSWER: 3

1. What are open-source operating systems? How do they differ from the closed-source approach?
Discuss the advantages and disadvantages of both. Give at least three examples of open-source
and closed-source operating systems.

Open Source vs. Closed Source Operating Systems:

Open-Source Operating Systems: Open-source operating systems are those whose source code is
available for anyone to view, modify, and distribute. The community often collaborates to improve
and enhance these systems. Advantages include transparency, flexibility, and a large user/developer
community. Examples include Linux, FreeBSD, and OpenBSD.

Advantages of Open Source:

 Transparency: Users can inspect and modify the source code, which promotes trust and
security.
 Community Support: A large user and developer community can lead to rapid bug fixes and
feature enhancements.
 Cost: Often free to use, reducing licensing costs.

Disadvantages of Open Source:

 Limited Commercial Support: May lack official support options available in closed source
systems.
 Compatibility: Some proprietary software may not be available on open-source platforms.
 Complexity: Customization requires technical expertise.

Closed Source Operating Systems: Closed source, or proprietary, operating systems have their
source code kept confidential by the organization that develops them. They are usually distributed
as binary-only versions. Advantages include better support and integration, but users have limited
visibility into the inner workings of the system. Examples include Microsoft Windows, macOS, and
IBM z/OS.
Advantages of Closed Source:

 Vendor Support: Typically, official support and maintenance options are available.
 Compatibility: Generally better compatibility with commercial software.
 User-Friendly: Often designed with ease of use in mind.

Disadvantages of Closed Source:

 Lack of Transparency: Users have limited insight into how the system works.
 Licensing Costs: Typically, closed source operating systems involve licensing fees.
 Limited Customization: Less flexibility for customization and modification.

2. What is a computing environment? Discuss the different computing environments covered in the
book, with examples for each.

Computing Environment:

A computing environment refers to the combination of hardware and software components and
configurations that provide a specific set of capabilities for users. The book "Operating System
Concepts" covers various computing environments, including:

Batch Processing: Used for processing large volumes of data in batches without user interaction.
Examples include early mainframe systems.

Time-Sharing Systems: Allows multiple users to share a computer's resources simultaneously,
offering interactive computing. Examples include UNIX and Linux.

Multiprocessor Systems: Utilizes multiple processors to enhance performance and parallel
processing. Examples include modern servers and supercomputers.

Distributed Systems: Combines multiple computers and networks to provide a unified computing
environment. Examples include cloud computing platforms.

3. What is the purpose of the bootstrap program when multiple operating systems are installed on
partitions of a single system?
Purpose of Bootstrap Program for Multiple Operating Systems:

The bootstrap program is responsible for booting the computer and loading the operating system.
When multiple operating systems are installed on separate partitions of a single system, the
bootstrap program presents the user with a menu of the installed systems during the boot process,
then loads and transfers control to whichever operating system the user selects. For example, GRUB
on Linux systems builds such a menu from configured entries, each pointing at a kernel image on a
particular partition, and can chain-load another partition's boot loader (such as the Windows boot
manager).

4. The root directory is one of the most significant directories in the Unix/Linux directory
structure. List three unique characteristics of the root directory that are not associated with
other directories in the Unix/Linux environment.

Unique Characteristics of Root Directory in Unix/Linux:

Three unique characteristics of the root directory (/) in Unix/Linux are:

No Parent: The root directory is the top-level directory and has no parent; its ".." entry refers
back to the root itself. It serves as the starting point for the entire file system hierarchy.

Full Path Reference: All files and directories in Unix/Linux are referenced by their paths relative to
the root directory. For example, /home/user/documents/file.txt specifies the full path from the root
to the file.

System Critical: The root directory contains essential system files and directories, such as /bin
(executables), /etc (system configuration), and /var (variable data), making it a critical part of
the system.

5.Contrast between Windows and Linux operating system environments.

Contrast between Windows and Linux Operating Systems:

Windows and Linux are two distinct operating systems with several differences:

Kernel: Windows uses the Windows NT kernel, while Linux uses the Linux kernel.

Licensing: Windows is primarily proprietary and requires licensing fees, while Linux is open source
and often free to use.

User Interface: Windows typically has a graphical user interface (GUI) as its primary interface, while
Linux offers various GUIs (e.g., GNOME, KDE) but can also be used in a command-line interface (CLI).

Software Ecosystem: Windows has a wide range of commercial software support, while Linux relies
on open-source software and may have limited support for some commercial applications.

Filesystem: Windows uses NTFS (New Technology File System) as its primary filesystem, whereas
Linux supports multiple filesystems like ext4, XFS, and more.

Security: Linux is known for its robust security features and is often used in server environments,
while Windows has security measures like Windows Defender.

Updates: Windows updates are typically managed by Microsoft, while Linux distributions offer
centralized package-management systems for updates.

6.Answer the following practice exercises.

Practice exercises 1.4, 1.5, 1.8, 1.10, 1.22, and 1.24 are answered below.

1.4 Keeping in mind the various definitions of operating system, consider whether the
operating system should include applications such as web browsers and mail programs.
Argue both that it should and that it should not, and support your answers.

Should Include Applications (Web Browsers and Mail Programs):

 User Convenience: Including web browsers and mail programs in the operating system
provides users with immediate access to essential tools. It simplifies the user experience and
reduces the need for additional installations.

 Seamless Integration: When web browsers and mail programs are tightly integrated into the
OS, it can lead to a more cohesive user experience. Features like single sign-on and
automatic updates become easier to manage.

 Security: Centralized control over these applications can enhance security measures. The OS
can enforce security policies and ensure that these apps are up-to-date with the latest
security patches.

Should Not Include Applications (Web Browsers and Mail Programs):


 Bloat and Choice: Including these applications may bloat the operating system, making it
larger and potentially slower. Users may prefer different web browsers and email clients,
and forcing pre-installed choices may limit user freedom.

 Resource Allocation: Installing additional software consumes system resources, which can
lead to slower performance on devices with limited capabilities. Users should have the
flexibility to choose which applications to install based on their needs.

 Monopoly Concerns: Pre-installing web browsers and mail programs can create a monopoly
for specific software providers, potentially stifling competition and innovation in these
domains.

1.5 How does the distinction between kernel mode and user mode function as a rudimentary form
of protection (security) system?

The distinction between kernel mode and user mode serves as a rudimentary form of
protection and security system by enforcing privilege levels and limiting what user
applications can do on a computer system. Here's a concise explanation of how it works:

 Kernel Mode: In kernel mode, the operating system has full control over the hardware and
can execute privileged instructions. The kernel manages system resources, hardware
interfaces, and sensitive operations. Access to critical resources and memory areas is
unrestricted in kernel mode.

 User Mode: User applications run in user mode, where they have limited access to system
resources and hardware. They cannot execute privileged instructions directly or access
certain memory areas reserved for the kernel. This separation prevents user applications
from interfering with critical system functions and compromising system stability and
security.

In essence, the distinction between kernel mode and user mode acts as a protective barrier,
ensuring that only authorized system components (in kernel mode) can perform privileged
operations, while user applications are isolated and restricted from potentially harmful
actions. This separation enhances system stability, prevents unauthorized access to sensitive
resources, and reduces the risk of system vulnerabilities and security breaches.
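A minimal C sketch (x86-64 Linux assumed) that makes this barrier concrete: HLT is a privileged instruction, so executing it from user mode raises a general-protection fault, which the kernel delivers to the process as a fatal signal instead of letting it halt the CPU.

```c
/* Sketch: user-mode protection in action (x86-64 Linux assumed).
   HLT may only be executed in kernel mode (ring 0); attempting it
   here causes a general-protection fault, delivered as SIGSEGV. */
#include <stdio.h>

int main(void) {
    printf("attempting a privileged instruction from user mode...\n");
    __asm__ volatile("hlt");  /* privileged: the CPU traps to the kernel */
    printf("never reached\n");
    return 0;
}
```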

1.8 Some CPUs provide for more than two modes of operation. What are two possible uses of these
multiple modes?
CPUs that provide more than two modes of operation offer flexibility and enhanced security
features for various computing scenarios. Two possible uses of these multiple modes are:

1. Privilege Levels and Protection:

Multiple modes can be used to implement different privilege levels or protection domains
in the CPU. Each mode represents a different level of privilege, with higher levels having
more control and access rights than lower levels. This enables:

 Operating System Isolation: The CPU can have a mode reserved for the operating system
(e.g., kernel mode) and another for user applications (e.g., user mode). In user mode,
applications have limited access to system resources and are protected from interfering
with critical OS functions. In kernel mode, the OS has full access to hardware resources.

 Security: Extra modes can be used to create additional security layers, such as secure
modes for cryptographic operations or trusted execution environments (e.g., Intel SGX or
ARM TrustZone). These modes protect sensitive data and code from unauthorized access.

 Virtualization: Hypervisors use multiple modes to manage virtual machines (VMs). Each VM
runs in its own mode, isolating it from other VMs and the host OS. This isolation ensures that
VMs cannot interfere with each other's operation.

2. Enhanced Features and Specialized Execution:

Additional modes can be used to provide specialized execution environments with specific
features or configurations. These modes facilitate:

 System Management Mode (SMM): Some CPUs have a mode dedicated to system
management. SMM is typically used for system management tasks like power management,
hardware diagnostics, and firmware updates. It runs at a higher privilege level than the OS,
ensuring that these critical tasks can be executed reliably.

 Virtual Machine Extensions: CPUs that support virtualization often have modes specifically
designed to enhance virtualization features. For instance, Intel VT-x and AMD-V include
modes that enable efficient hardware virtualization support, allowing virtual machines to
run with minimal overhead.
 Secure Execution Environments: Some CPUs offer secure execution modes (e.g., Intel SGX,
ARM TrustZone) where applications can execute code in a highly protected environment,
safeguarding against various attacks, including memory tampering.

These specialized modes allow CPUs to cater to a wide range of applications, from general-purpose
computing to highly secure and virtualized environments.

1.10 Give two reasons why caches are useful. What problems do they solve? What problems do
they cause? If a cache can be made as large as the device for which it is caching (for instance, a
cache as large as a disk), why not make it that large and eliminate the device?

Two Reasons Caches Are Useful:


 Latency Reduction: Caches are useful for reducing the latency of data access. They store
frequently accessed data closer to the processor, allowing much quicker retrieval compared to
fetching data from slower levels of the memory hierarchy, such as main memory or disk.

 Bandwidth Optimization: Caches help optimize the use of available data bandwidth. By
storing frequently used data in a cache, the system can reduce the need to fetch the same
data repeatedly from slower storage devices, which would otherwise consume significant
bandwidth.
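To put rough numbers on the latency benefit (illustrative figures, not from the text): the effective (average) memory access time is hit time + miss rate × miss penalty. With a 1 ns cache hit time, a 100 ns miss penalty, and a 95% hit rate, the average access takes 1 + 0.05 × 100 = 6 ns, versus 100 ns for every access without the cache, roughly a 16× improvement.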

Problems Caches Solve:

 High Latency: Caches solve the problem of high latency associated with accessing slower
storage, such as main memory or disk, by providing faster access to frequently used data.

 Bandwidth Bottlenecks: Caches address bandwidth bottlenecks by reducing the volume of data
traffic between the processor and primary storage, making more efficient use of available
bandwidth.

Problems Caches Cause:
 Cache Coherence: Caches can introduce cache coherence problems in multi-processor
systems, where multiple copies of data may exist in different caches, leading to potential
data inconsistencies.

 Complexity: Managing caches adds complexity to system design, including cache replacement
policies, cache coherence protocols, and ensuring data consistency.

 Resource Utilization: Large caches can consume significant resources like power and silicon
area, which may not be cost-effective or practical.

Why Not Make Caches as Large as the Device They Cache:


Making caches as large as the devices they cache, such as having a cache as large as a disk, is
impractical for several reasons:

 Cost: Large caches require significant hardware resources, leading to higher manufacturing
costs. The cost-effectiveness of a system is crucial in determining the cache size.

 Diminishing Returns: Beyond a certain point, increasing cache size does not significantly
improve performance. The principle of diminishing returns applies, where the benefit gained
from additional cache space diminishes relative to the cost.

 Resource Management: Allocating a cache as large as the device it caches can lead to
inefficient resource utilization. Not all data is frequently accessed, so dedicating such
extensive resources to caching may be wasteful.

 Latency vs. Capacity Trade-off: Larger caches reduce miss rates, but lookup latency grows with
cache size, so an extremely large cache would lose much of the speed advantage that motivates
caching in the first place.

 Volatility: A cache is typically built from volatile memory, so even a disk-sized cache could
not eliminate the disk itself: the disk's contents must survive power loss.
1.22 Many SMP systems have different levels of caches; one level is local to each processing core,
and another level is shared among all processing cores. Why are caching systems designed this way?

SMP (Symmetric Multiprocessing) systems with multiple levels of caches, including local caches for
each processing core and a shared cache among all processing cores, are designed this way to
achieve a balance between low-latency access to frequently used data and efficient sharing of data
among multiple cores. Here's why this design is employed:

1. Latency Reduction:
 Local Caches: Each processing core has its own local cache, typically known as the L1 and L2
caches. These caches are physically close to the core, which means they provide low-latency
access to data. When a core requests data, it can often find it in its local cache without
having to access the shared memory hierarchy, resulting in faster data retrieval.

2. Cache Hierarchy:
 Hierarchical Approach: The use of multiple cache levels (e.g., L1, L2, L3) forms a cache
hierarchy. Data initially fetched from the main memory can be stored in the closest cache
(L1), and if needed by another core, it can be shared more efficiently from the shared cache
(L3) before having to go to the main memory. This hierarchical approach optimizes data
access by minimizing the distance data must travel.

3. Data Sharing:
 Shared Cache: The shared cache (e.g., L3) allows efficient sharing of frequently accessed
data among all cores. In a multi-core system, processes running on different cores may need
access to the same data or may want to communicate through shared variables. Having a
shared cache ensures that this shared data is readily available to all cores, reducing the need
for expensive main memory accesses and enhancing inter-core communication.

4. Reducing Bus Traffic:
 Local vs. Shared Access: When multiple cores access the same data, they can do so from the
shared cache rather than fetching it from the main memory individually. This approach
reduces the overall bus traffic and contention, which can lead to better system performance
and reduced power consumption.

5. Cache Coherence:
 Cache Coherence Protocols: Multi-level caching systems are designed with cache coherence
protocols (e.g., MESI, MOESI) that manage the consistency of data across all levels of cache.
These protocols ensure that when one core modifies a piece of data, other cores see the
updated value, maintaining data integrity.
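Order-of-magnitude figures (typical of a modern x86 core, assumed here purely for illustration) show why the hierarchy pays off: an L1 hit costs roughly 4 cycles, a shared L3 hit roughly 40, and a DRAM access roughly 200. A core that finds data another core recently wrote in the shared L3 pays about 40 cycles instead of about 200, while data it reuses privately stays in its own L1 at about 4 cycles.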

1.24 Discuss, with examples, how the problem of maintaining coherence of cached data manifests
itself in the following processing environments: a. Single-processor systems b. Multiprocessor
systems c. Distributed systems

Maintaining coherence of cached data is a critical issue in various processing environments,
including single-processor systems, multiprocessor systems, and distributed systems. It revolves
around ensuring that multiple copies of data held in different caches or memory locations are
consistent and up-to-date. Here's how the problem manifests itself in each of these environments,
along with examples:

a. Single-Processor Systems:
In single-processor systems, there is typically only one CPU with one cache. However, certain
scenarios can still lead to cache coherence problems:

Example: Consider a single-processor system with a CPU cache and main memory. If the CPU reads a
value from memory into its cache and another peripheral (e.g., a hardware device) writes to the
same memory location, the cached value becomes outdated. Any subsequent reads by the CPU from
its cache will yield the stale data, leading to inconsistent results.
b. Multiprocessor Systems:
In multiprocessor systems, where multiple CPUs share access to memory, the problem of cache
coherence becomes more complex due to multiple caches holding copies of the same data:

Example: Imagine a multiprocessor system with two CPUs, each having its cache. If both CPUs read a
shared memory location into their respective caches and one CPU updates the data, the other CPU's
cache becomes outdated. Without cache coherence mechanisms, the system may produce
inconsistent results when the CPUs read from their caches.

c. Distributed Systems:
Distributed systems extend the problem of cache coherence across networked machines, making it
even more challenging to maintain data consistency:

Example: In a distributed system, multiple nodes may cache copies of data from a centralized
database. If one node updates the data and other nodes still hold the old cached copies, data
inconsistency arises. For instance, consider a distributed e-commerce application where the product
price is cached on various servers. If one server updates the price, but others still serve cached
pages, customers may see incorrect prices.

Methods to Address Cache Coherence:

To address cache coherence problems, various methods and protocols have been developed,
including:

 Write-through and Write-back Caches: These cache designs determine when to update
main memory, which can impact cache coherence. Write-through caches immediately
update memory, while write-back caches do so at a later time.
 Cache Coherence Protocols: Protocols like MESI (Modified, Exclusive, Shared, Invalid) and
MOESI (Modified, Owned, Exclusive, Shared, Invalid) help maintain cache coherence in
multiprocessor systems by managing cache states and coordinating updates.
 Distributed Cache Management: Distributed systems use techniques like cache invalidation
or cache replication with synchronization mechanisms to ensure data consistency across
nodes.
 Transactional Memory: Some modern processors offer transactional memory support,
which simplifies cache coherence by providing atomic operations that ensure data integrity.

Overall, cache coherence is a crucial concern in computing environments where multiple copies of
data are held in caches, and addressing it requires careful design and synchronization mechanisms to
avoid data inconsistencies.
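A minimal C11 sketch of coherent flag passing between two threads (POSIX threads assumed; compile with cc -std=c11 -pthread). The hardware coherence protocol propagates the store to `ready` between the cores' caches, and the release/acquire pairing orders the payload write before the flag becomes visible:

```c
/* Sketch: inter-thread flag passing with C11 atomics (POSIX threads assumed).
   The cache coherence protocol (e.g., MESI) propagates the store to `ready`
   between core-local caches; release/acquire orders the payload write
   relative to the flag. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload;              /* ordinary shared data    */
static atomic_int ready = 0;     /* coherence-managed flag  */

static void *producer(void *arg) {
    (void)arg;
    payload = 42;                /* write the data first */
    atomic_store_explicit(&ready, 1, memory_order_release);
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                        /* spin until the store becomes visible */
    printf("payload = %d\n", payload);   /* guaranteed to print 42 */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```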
QUESTION NO. 4

Explain the differences in how much the following scheduling algorithms discriminate in favor of
short processes:

a. FCFS
b. RR
c. Multilevel feedback queues

ANSWER: 4

a) FCFS (First-Come, First-Served):


- FCFS is a non-preemptive scheduling algorithm, meaning that once a process starts
executing, it continues until it finishes or gets blocked.
- In FCFS, the order of execution is based on the arrival time of processes. The first process
that arrives is the first one to be scheduled.
- FCFS does not discriminate in favor of short processes. It doesn't take into account the
length of the processes when making scheduling decisions. Therefore, it may not be efficient
in terms of turnaround time for shorter processes if longer processes arrive first.
b) RR (Round Robin):
- RR is a preemptive scheduling algorithm that allocates a fixed time slice (quantum) to each
process before moving to the next one in a circular manner.
- RR gives equal opportunities to all processes, regardless of their execution time. Every
process gets a chance to execute for a fixed quantum before being put back in the queue.
- While RR doesn't explicitly favor shorter processes, it ensures that no process monopolizes
the CPU for an extended period. As a result, shorter processes are not unfairly delayed by
longer ones. However, RR may not be the most efficient for very short processes, as the
overhead of context switching between processes can become significant.
c) Multilevel Feedback Queues:
- Multilevel feedback queues are a dynamic scheduling algorithm that uses multiple queues
with different priorities.
- Processes start in a high-priority queue and move to lower-priority queues if they don't
complete within a certain time or if they require I/O operations.
- Discrimination in favor of short processes can be achieved through the use of priority-based
queues. Short processes may have higher initial priorities and can quickly execute in the
higher-priority queues.
- Longer processes may get demoted to lower-priority queues, which means they will have to
wait longer for their turn, further favoring shorter processes (see the example below).
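An illustrative example (numbers assumed, not from the text): suppose P1 (burst 100 ms) arrives just ahead of P2 (burst 1 ms). Under FCFS, P2 waits the full 100 ms. Under RR with a 10 ms quantum, P2 gets the CPU within the first 10 ms and finishes in its first quantum. Under multilevel feedback queues, P2 likewise completes inside its first quantum in the top queue while P1 is repeatedly demoted, so short processes are favored most by multilevel feedback queues, then RR, and least by FCFS.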
