Cycle 1 Answer Key
8 What are the uses of job queues, ready queues and device queues?
As a process enters the system, it is put into a job queue. This queue consists of
all processes in the system.
The processes that reside in main memory, ready and waiting to execute, are
kept on a list called the ready queue.
The list of processes waiting for a particular I/O device is kept in that device's queue.
Example: First Come First Serve (FCFS), Shortest Job First (SJF) (nonpreemptive),
Round Robin, Shortest Remaining Time First (SRTF).
10 What are the various scheduling criteria for CPU scheduling?
The various scheduling criteria are:
• CPU utilization
• Throughput
• Turnaround time
• Waiting time
• Response time
PART - B (3 x 10 = 30 MARKS)
ANSWER ANY THREE QUESTIONS
1 Explain different operating system structures with a neat sketch.
1 OPERATING SYSTEM STRUCTURE:
The operating systems are large and complex. A common approach is to partition the task
into small components, or modules, rather than have one monolithic system.
The structure of an operating system can take one of the following forms.
Simple structure
Layered approach
Microkernels
Modules
Hybrid systems
1.Simple structure:
Simple structured operating systems do not have a well-defined structure. These systems are
simple, small, and limited.
Example: MS-DOS.
In MS-DOS, the interfaces and levels of functionality are not well separated.
In MS-DOS, application programs are able to access the basic I/O routines. This can cause the
entire system to crash when a user program fails.
2.Layered approach:
A system can be made modular in many ways. One method is the layered approach, in which the
operating system is broken into a number of layers (levels). The bottom layer (layer 0) is the
hardware; the highest (layer N) is the user interface.
An operating-system layer is an implementation of an abstract object made up
of data and the operations that can manipulate those data. The main advantage of the layered
approach is simplicity of construction and debugging. The layers are selected so that each uses
functions (operations) and services of only lower-level layers. Each layer is implemented only with
operations provided by lower-level layers. A layer does not need to know how these operations are
implemented; it needs to know only what these operations do.
The major difficulty with the layered approach involves appropriately defining the various
layers because a layer can use only lower-level layers.
3.Microkernels:
In the mid-1980s, researchers at Carnegie Mellon University developed an operating system
called Mach that modularized the kernel using the microkernel approach.
This method structures the operating system by removing all nonessential components from
the kernel and implementing them as system and user-level programs.
Microkernels provide minimal process and memory management, in addition to a
communication facility. The main function of the microkernel is to provide communication between
the client program and the various services that are also running in user space.
The client program and service never interact directly. Rather, they communicate indirectly
by exchanging messages with the microkernel.
One benefit of the microkernel approach is that it makes extending the operating system easier. All
new services are added to user space and consequently do not require modification of the kernel.
The performance of a microkernel can suffer due to increased system-function overhead.
4.Modules:
The best current methodology for operating-system design involves using loadable kernel
modules.
The kernel has a set of core components and links in additional services via modules, either
at boot time or during run time.
The kernel provides core services while other services are implemented dynamically, as the
kernel is running.
Linking services dynamically is more convenient than adding new features directly to the kernel,
which would require recompiling the kernel every time a change was made.
5.Hybrid Systems:
The Operating System combines different structures, resulting in hybrid systems that address
performance, security, and usability issues.
They are monolithic, because having the operating system in a single address space provides very
efficient performance. However, they are also modular, so that new functionality can be
dynamically added to the kernel.
System calls provide an interface between a program and the operating system, enabling
programs to request services from the OS kernel. System calls allow user-level applications to
perform tasks such as file operations, process management, memory management, and device I/O.
There are several types of system calls, each serving a specific purpose. Here's an overview of the
various types of system calls with examples:
Process control: These system calls manage processes, including process creation, termination, and scheduling.
o fork(): Creates a new process by duplicating the calling process. The new process is
called the child process.
o exec(): Replaces the current process with a new program.
o exit(): Terminates the calling process and returns an exit status to the parent
process.
o wait(): Makes the parent process wait for the child process to terminate before
proceeding.
o getpid(): Returns the process ID of the calling process.
o kill(): Sends a signal to a process, often used to terminate or pause it.
File management: These system calls provide services related to file manipulation, such as opening,
reading, writing, and closing files.
o open(): Opens a file and returns a file descriptor.
o read(): Reads data from a file.
o write(): Writes data to a file.
o close(): Closes an open file.
o lseek(): Moves the file pointer to a specific position in the file.
o unlink(): Deletes a file from the filesystem.
o stat(): Retrieves information about a file (e.g., size, permissions, timestamps).
Device management: These system calls are responsible for managing devices like printers, hard
drives, and network interfaces.
Memory management: These system calls are used to manage memory allocation, deallocation, and
other memory-related operations.
o mmap(): Maps files or devices into memory, allowing processes to access the
content directly in memory.
o brk(): Adjusts the program's data space (used for memory allocation).
o sbrk(): Increases or decreases the program's data space by a specified amount.
o munmap(): Unmaps a region of memory previously mapped by mmap().
Communication: These system calls provide mechanisms for communication between processes,
including interprocess communication (IPC).
Information maintenance: These system calls are responsible for providing information about the
system, process status, and the environment.
Network communication: These system calls allow processes to communicate over a network (for
instance, through TCP/IP sockets).
1. Multiprocessor Organization
Symmetric Multiprocessing (SMP): In this system, all processors share a common memory space
and are treated equally. Each processor has access to the same main memory, and all processors can
execute tasks independently or cooperatively.
Advantages:
Simple design.
Processors can work on different tasks, improving system throughput.
Disadvantages:
Contention for the shared memory bus can limit scalability, and maintaining cache coherence adds hardware complexity.
Asymmetric Multiprocessing (AMP): In this system, one processor (the master processor)
controls the system, while the other processors (slave processors) execute specific tasks assigned by
the master.
Advantages:
The design is simpler, since only the master processor accesses the system data structures, reducing the need for synchronization.
Disadvantages:
The master processor can become a bottleneck and is a single point of failure.
Example:
A typical example of a multiprocessor system is a server farm with multiple processors handling
different parts of the workload. These servers may work together to serve a web page, handle
database queries, and perform calculations.
2. Multicore Organization
Definition: A multicore system refers to a single processor (CPU) that contains multiple cores on a
single chip. Each core is essentially a smaller, independent processor that can handle its own task or
thread. Multicore processors enable parallelism by allowing multiple instructions to be executed
simultaneously, improving overall performance.
Multiple Processing Units on a Single Chip: A multicore CPU contains multiple cores
that share the same physical chip. Each core has its own execution unit, cache, and registers
but shares resources like the system bus.
Parallelism: Each core can execute different instructions simultaneously, allowing
programs to run multiple threads or tasks in parallel. This is particularly beneficial for
applications that can be divided into smaller tasks, such as video editing, rendering, and
scientific computations.
Improved Energy Efficiency: Unlike adding multiple processors, multicore systems are
more power-efficient because they reduce the need for multiple separate chips.
Advantages:
Cores on the same chip communicate through shared caches much faster than separate processors can, and a single multicore chip consumes less power and board space than multiple single-core chips.
14 Consider the following set of processes, with the length of the CPU burst time given in ms:
Process Arrival Time BurstTime Priority
P1 0 8 3
P2 1 4 2
P3 2 9 1
Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, priority
and RR (quantum = 2) scheduling. Also calculate waiting time and turnaround time for each
scheduling algorithms.
1. FCFS
The FCFS scheduling algorithm executes processes in the order they arrive, without preemption.
Process Arrival Time Burst Time CT TAT WT
P1 0 8 8 8 0
P2 1 4 12 11 7
P3 2 9 21 19 10
GANTT CHART
| P1 | P2 | P3 |
0 8 12 21
AVG TAT = (8 + 11 + 19) / 3 = 12.67 ms
AVG WT = (0 + 7 + 10) / 3 = 5.67 ms
2. SJF (nonpreemptive)
At time 0 only P1 has arrived, so it runs first; at time 8 both P2 (4 ms) and P3 (9 ms) are
waiting, so the shorter job P2 runs next. The resulting order is identical to FCFS.
Process Arrival Time Burst Time CT TAT WT
P1 0 8 8 8 0
P2 1 4 12 11 7
P3 2 9 21 19 10
GANTT CHART
| P1 | P2 | P3 |
0 8 12 21
AVG TAT = 12.67 ms
AVG WT = 5.67 ms
3. PRIORITY (nonpreemptive, lower number = higher priority)
At time 0 only P1 has arrived, so it runs first; at time 8, P3 (priority 1) is chosen over P2
(priority 2).
PROCESS AT BT CT TAT WT
P1 0 8 8 8 0
P2 1 4 21 20 16
P3 2 9 17 15 6
GANTT CHART
| P1 | P3 | P2 |
0 8 17 21
AVG TAT = (8 + 20 + 15) / 3 = 14.33 ms
AVG WT = (0 + 16 + 6) / 3 = 7.33 ms
4. RR (QUANTUM = 2)
PROCESS AT BT CT TAT WT
P1 0 8 18 18 10
P2 1 4 10 9 5
P3 2 9 21 19 10
GANTT CHART
| P1 | P2 | P3 | P1 | P2 | P3 | P1 | P3 | P1 | P3 |
0 2 4 6 8 10 12 14 16 18 21
AVG TAT = (18 + 9 + 19) / 3 = 15.33 ms
AVG WT = (10 + 5 + 10) / 3 = 8.33 ms