OS
Unit - 1
1. What are the key concepts of an operating system, and how do they contribute
to system performance?
Ans -
An Operating System (OS) is system software that acts as an interface between the user and the
computer hardware. It manages hardware resources and provides services for computer
programs. Some fundamental OS concepts include multitasking, multiprogramming, multi-user
operation, and multithreading.
1. Multitasking - It allows several processes to share a single CPU; the OS switches the CPU among
them so rapidly that they appear to run at the same time, which keeps the system responsive.
2. Multiprogramming - It is an extension of batch processing in which the CPU is always kept busy.
Each process needs two types of system time: CPU time and I/O time.
In a multiprogramming environment, when one process performs its I/O, the CPU can start
executing other processes. Therefore, multiprogramming improves the efficiency of the
system.
3. Multi-user - A multi-user operating system permits several users to access a single system
running a single operating system. These systems are frequently quite complex, because they
must manage the tasks of the various users connected to them. Users usually sit at terminals or
computers connected to the system via a network, along with other shared machines such as
printers.
4. Multithreading - It is a capability of the CPU that permits multiple threads to run independently
while sharing the same process resources. A thread is a self-contained sequence of instructions
that can run within the same parent process alongside other threads (see the sketch below).
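As a minimal sketch (assuming a POSIX system, compiled with -lpthread; the thread names and counter are made-up for illustration), the C program below runs two threads inside one process. Both threads see the same global counter, which is exactly the "shared process resources" mentioned above, and a mutex keeps their updates orderly:

#include <pthread.h>
#include <stdio.h>

/* Shared data: both threads belong to the same process, so they see
   the same global variable. */
static int shared_counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    const char *name = (const char *)arg;
    for (int i = 0; i < 3; i++) {
        pthread_mutex_lock(&lock);      /* protect the shared resource */
        shared_counter++;
        printf("%s incremented counter to %d\n", name, shared_counter);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");
    pthread_join(t1, NULL);             /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("final counter = %d\n", shared_counter);
    return 0;
}

Because both threads live in the same address space, no copying is needed to share the counter; this is what makes threads cheaper than separate processes for cooperative work.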
2. Explain the different types of operating systems.
Ans -
Operating systems are classified based on their structure and functionality. The main types
include:
1. Batch Operating System: In the 1970s, batch processing was very popular. In this technique,
similar types of jobs were batched together and executed one batch at a time. People used to have
a single large computer, called a mainframe. In a batch operating system, access is given
to more than one person; users submit their respective jobs to the system for execution.
3. What is an operating system? Explain its main functions.
Ans -
An OS is basically an organized set of programs that manages the computer's hardware and
software resources. It acts as an intermediary between users and the computer hardware and
ensures that operations and interactions between applications and hardware components are
efficient. Essentially, the OS allows smooth functioning, providing a stable and consistent
environment for other software to execute. Its main functions include:
1. Process Management: It creates, schedules, and terminates processes and allocates CPU time
among them so that multiple programs can run concurrently.
2. Memory Management: It keeps track of main memory, allocates and frees memory for
processes, and handles techniques such as paging and swapping.
3. File System Management: It provides the organization and control of files on any storage
device. It gives a structuring mechanism (directories) and provides access control through the
concept of permissions (a short sketch of reading permission bits appears after this list).
4. Device Management: The OS controls communication between software and hardware devices.
This includes device drivers, management of input/output, and efficient utilization of
peripheral devices such as printers and scanners.
5. Security & Access Control: It prevents unauthorized users or processes from accessing
system resources through user authentication, encryption, and related mechanisms.
6. User Interface (UI): The OS provides users with a means of interacting with the system,
either through a Graphical User Interface (GUI) or a Command-Line Interface (CLI).
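As a minimal sketch of the permission idea from point 3 (assuming a POSIX system; the file name notes.txt is hypothetical), the C program below asks the file system for a file's permission bits with stat():

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    /* Ask the file system for metadata about a (hypothetical) file. */
    if (stat("notes.txt", &st) != 0) {
        perror("stat");
        return 1;
    }
    /* st_mode carries the permission bits the OS checks on every access. */
    printf("owner may read : %s\n", (st.st_mode & S_IRUSR) ? "yes" : "no");
    printf("owner may write: %s\n", (st.st_mode & S_IWUSR) ? "yes" : "no");
    printf("others may read: %s\n", (st.st_mode & S_IROTH) ? "yes" : "no");
    return 0;
}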
4. Explain the architecture of an operating system and its main components.
Ans -
The architecture of an operating system defines how its components interact and manage
system resources. The primary components include:
1. Kernel: The core part of the OS responsible for process management, memory
management, and hardware communication. It operates in privileged mode and interacts
directly with hardware.
2. Shell: The user interface that allows interaction with the OS through commands or graphical
elements.
3. File System: Manages data storage, file organization, and retrieval operations.
4. Device Drivers: Act as translators between the OS and hardware components like printers
and network devices.
5. Process Scheduler: Controls CPU allocation by prioritizing processes and managing task
execution.
The OS architecture varies based on its design, such as monolithic (single large program),
layered (separate layers for different tasks), microkernel (minimal core functions), and hybrid (a
combination of architectures). The efficiency of these components determines the OS's overall
performance.
5. What are system programs and system calls? Explain.
Ans -
System programs and system calls are essential components that facilitate communication
between the OS and applications.
1. System Programs: System programming may be defined as the act of creating system
software using system programming languages. A system program offers an
environment in which programs may be developed and run. In simple terms, system
programs serve as a link between the user interface (UI) and system calls. Some system
programs are only user interfaces, while others are complex; for instance, a compiler is a
complicated piece of system software.
2. System Calls: A system call is the method by which a program interacts with the OS; it is the
technique by which a program requests a service from the OS kernel. The
Application Programming Interface (API) helps connect OS functions with user programs. It
serves as a bridge between a process and the OS, enabling user-level programs to request OS
services. System calls are carried out in kernel mode, and any software that consumes system
resources must use system calls (a minimal sketch follows).
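A minimal sketch (assuming a POSIX system) of user code requesting kernel services: write() asks the OS to send bytes to standard output and getpid() asks the OS for the process identifier; both C library wrappers ultimately trap into the kernel through system calls:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* write() requests the kernel to copy bytes to file descriptor 1 (stdout). */
    const char msg[] = "hello from user space\n";
    write(STDOUT_FILENO, msg, strlen(msg));

    /* getpid() requests this process's identifier from the kernel. */
    printf("my process id is %d\n", (int)getpid());
    return 0;
}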
6. How does an operating system provide security and protection?
Ans -
Security and protection are vital functions of an OS to prevent unauthorized access, data
corruption, and malicious attacks. The OS implements multiple security mechanisms, including:
1. User Authentication: Ensures only authorized users can access the system using
passwords, biometrics, or security tokens.
2. Access Control Lists (ACLs): Define permissions for users and processes, restricting
access to files, memory, and system resources.
3. Data Encryption: Protects sensitive data by converting it into an unreadable form so that it
stays confidential even if it is intercepted.
4. Firewalls: Filter incoming and outgoing network traffic to block unauthorized connections
to the system.
5. Intrusion Detection Systems (IDS): Monitor system activities for malicious behavior
and unauthorized access attempts.
6. Process Isolation: Prevents processes from interfering with each other, ensuring system
stability and protection against malware.
Unit - 2
1. What is a Process? Explain its different states.
Ans -
A process is the basis of all computation. Although a process is closely related to program
code, it is not the same as the program itself. A process is an "active" entity, in contrast to the
program, which is sometimes thought of as a "passive" entity. During its lifetime, a process
passes through the following states:
1. New
A program that is about to be picked up by the OS and brought into main memory is called a new
process.
2. Ready
Whenever a process is created, it directly enters the ready state, in which it waits for the CPU
to be assigned. The OS picks new processes from secondary memory and puts them in main
memory.
The processes that are ready for execution and reside in main memory are called ready-state
processes. There can be many processes present in the ready state.
3. Running
One of the processes from the ready state will be chosen by the OS depending upon the
scheduling algorithm. Hence, if we have only one CPU in our system, the number of running
processes at any given time will always be one.
4. Block or Wait
From the running state, a process can make the transition to the block or wait state depending
upon the scheduling algorithm or the intrinsic behavior of the process, for example when it
requests I/O and must wait for the device.
5. Completion or Termination
When a process finishes its execution, it enters the termination state; the OS then deallocates
its resources and deletes its process control block (PCB).
2. What is process scheduling? Explain the different types of schedulers.
Ans -
Process scheduling is the mechanism used by the operating system to manage the execution of
multiple processes by allocating CPU time efficiently. It ensures maximum utilization of the CPU
by switching between processes when necessary. The three primary types of schedulers are:
1. Long-Term Scheduler (Job Scheduler): Selects processes from the job pool and loads them
into main memory, thereby controlling the degree of multiprogramming.
2. Short-Term Scheduler (CPU Scheduler): Selects one of the ready processes and allocates the
CPU to it; it runs very frequently.
3. Medium-Term Scheduler: Temporarily swaps processes out of main memory and later brings
them back, reducing the degree of multiprogramming when memory is scarce.
3. Explain the various operations performed on processes.
Ans -
Processes in an operating system can undergo various operations to manage execution
efficiently. Process Creation allows a parent process to create child processes using system calls
like fork(). Process Termination occurs when a process completes execution or is forcefully
ended. Process Suspension (or Swapping) temporarily removes a process from main memory to
free up resources. Process Resumption restores a suspended process back into memory. Process
Synchronization ensures orderly execution when multiple processes share resources to avoid
conflicts. Process Communication enables processes to exchange data using mechanisms like
Interprocess Communication (IPC), shared memory, or message passing. These operations help
in effective process management in multitasking environments.
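A minimal sketch in C (assuming a POSIX system) of process creation and termination as described above: the parent calls fork() to create a child, the child terminates with an exit status, and the parent waits to collect that status:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* process creation */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child process. */
        printf("child: pid=%d, parent=%d\n", (int)getpid(), (int)getppid());
        exit(42);                       /* process termination with a status */
    }
    /* Parent process: block until the child terminates. */
    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFEXITED(status))
        printf("parent: child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}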
4. What are CPU scheduling criteria, and why are they important?
Ans -
CPU scheduling criteria are the factors used to evaluate and compare different scheduling
algorithms to ensure efficient process execution. The key criteria include CPU Utilization, which
measures how effectively the CPU is kept busy; Throughput, the number of processes
completed per unit of time; Turnaround Time, the total time taken for a process from
submission to completion; Waiting Time, the time a process spends in the ready queue before
getting CPU time; Response Time, the time taken from process submission to the first response;
and Fairness, ensuring all processes get fair CPU access. These criteria are crucial for optimizing
system performance and user experience.
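As a small illustrative sketch (the burst times are made-up values; first-come-first-served order with all processes arriving at time 0 is assumed), the C program below shows how waiting time and turnaround time are computed from CPU burst times:

#include <stdio.h>

int main(void)
{
    /* Hypothetical CPU burst times for three processes, served in order. */
    int burst[] = {5, 3, 8};
    int n = 3;
    int waiting = 0, total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = waiting + burst[i];   /* turnaround = waiting + burst */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
        total_wait += waiting;
        total_turnaround += turnaround;
        waiting += burst[i];                   /* next process waits for all earlier bursts */
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           (double)total_wait / n, (double)total_turnaround / n);
    return 0;
}

With these made-up bursts the averages come out to about 4.33 and 9.67 time units; a scheduler that lowers these numbers while keeping CPU utilization and throughput high scores better on the criteria above.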
5. What is CPU scheduling?
Ans -
CPU scheduling is the process by which the operating system decides which process will be
executed using the resources of the CPU. A process may also have to wait due to the absence or
unavailability of a resource. The aim of scheduling is to make the fullest possible use of the
Central Processing Unit.
6. Differentiate between preemptive and non-preemptive scheduling.
Ans -
Preemptive Scheduling -
Preemptive scheduling is a method that may be used when a process switches from a running
state to a ready state or from a waiting state to a ready state. The resources are assigned to the
process for a particular time and then taken away. If the process still has remaining CPU
burst time, it is placed back in the ready queue, where it remains until it is given a chance to
execute again.
When a high-priority process arrives in the ready queue, it doesn't have to wait for the running
process to finish its burst time. Instead, the running process is interrupted in the middle of its
execution and placed in the ready queue until the high-priority process has used the resources.
Non-Preemptive Scheduling -
Non-preemptive scheduling is a method that may be used when a process terminates or
switches from a running to a waiting state. Once the processor is assigned to a process, it
keeps the processor until the process terminates or reaches a waiting state. When the processor
starts executing a process, it must complete it before executing another process; it may not be
interrupted in the middle. When a non-preemptive process with a high CPU burst time is
running, the other processes have to wait for a long time, which increases the average waiting
time in the ready queue.
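As a small illustrative example with made-up numbers: suppose P1 arrives at time 0 with a burst of 10 units and P2 arrives at time 1 with a burst of 2 units. Under non-preemptive scheduling, P1 runs to completion at time 10 and only then does P2 run, finishing at time 12; P2 waits 9 units, so the average waiting time is (0 + 9) / 2 = 4.5 units. Under a preemptive policy such as shortest-remaining-time-first, P1 is interrupted at time 1, P2 runs and finishes at time 3, and P1 then resumes and finishes at time 12; P1 waits 2 units and P2 waits 0, so the average waiting time drops to 1 unit, at the cost of an extra context switch.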