OS -

Unit -1:

1. What are the key concepts of an operating system, and how do they contribute
to system performance?

Ans -
An Operating System (OS) is system software that acts as an interface between the user and
the computer hardware. It manages hardware resources and provides services for computer
programs. Fundamental OS concepts include multitasking, multiprogramming, multiuser
operation, and multithreading.

1. Multitasking - Multitasking extends the multiprogramming concept and enables several
programs to run at what appears to be the same time. It gives the user the ability to do more
than one task on the same system at a given time by sharing system resources such as CPU
time among different processes. Because the CPU shifts between tasks quickly, they seem to be
running in parallel, which assures efficient use of the CPU.

2. Multiprogramming - Multiprogramming is an extension of batch processing in which the CPU is
always kept busy. Each process needs two types of system time: CPU time and I/O time.

In a multiprogramming environment, when a process performs its I/O, the CPU can start the
execution of other processes. Therefore, multiprogramming improves the efficiency of the
system.

3. Multi-user - A multi-user operating system permits several users to access a single system
running a single operating system. These systems are frequently quite complex because they
must manage the tasks that the various connected users require. Users usually sit at terminals
or computers connected to the system via a network, along with shared devices such as
printers.

4. Multithreading - Multithreading is a function of the CPU that permits multiple threads to run
independently while sharing the same process resources. A thread is an independent sequence
of instructions that can run within the same parent process as other threads.

Multithreading allows many parts of a program to run concurrently.
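
As a rough illustration (not part of the original notes), the following C sketch uses POSIX
threads to run two threads that share a single counter inside the same process; the thread
names and the mutex-protected counter are illustrative choices, assuming a system with
pthreads available.

/* Minimal sketch: two POSIX threads sharing one counter of the same process.
 * Compile with: gcc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;               /* lives in the shared process address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    const char *name = arg;                  /* per-thread argument */
    for (int i = 0; i < 3; i++) {
        pthread_mutex_lock(&lock);           /* protect the shared resource */
        shared_counter++;
        printf("%s incremented counter to %d\n", name, shared_counter);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");
    pthread_join(t1, NULL);                  /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("final counter: %d\n", shared_counter);
    return 0;
}

Running it shows both threads interleaving while updating the same shared variable, which is
exactly the "shared process resources" point made above.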


2. What are the different types of Operating Systems, and how do they function?

Ans -
Operating systems are classified based on their structure and functionality. The main types
include:

1. Batch Operating System: Batch processing was widespread in the 1970s. In this technique,
similar types of jobs were batched together and executed as a group. At the time, people
typically shared a single large computer, which was called a mainframe. In a batch operating
system, access is given to more than one person; users submit their respective jobs to the
system for execution.

2. Time-Sharing Operating System: A time-sharing operating system allows multiple users to
access the system concurrently by allocating a small time slice, or quantum, to each task. The
CPU switches between the tasks so rapidly that users feel their programs are running
concurrently.

3. Distributed Operating System: A distributed operating system is not installed on a single
machine; it is divided into parts, and these parts are loaded onto different machines. A part of
the distributed operating system is installed on each machine so that the machines can
communicate with one another.

4. Network Operating System (NOS): A network operating system allows computers to
communicate and share resources over a network. It acts as the backbone of networking,
ensuring that different devices (such as computers, printers, and servers) can connect and
interact seamlessly.

5. Real-Time Operating System (RTOS): A real-time operating system is designed for tasks
that must be completed under very strict time constraints. In such systems, each task has a
clear deadline, and failing to complete it by that deadline can bring about significant
consequences.

3. What are the primary services provided by an Operating System?

Ans -
An OS is an organized set of programs that manages the computer's hardware and software
resources. It acts as an intermediary between users and the computer hardware and ensures
that interactions between applications and hardware components are efficient. Essentially, the
OS provides a stable and consistent environment in which other software can execute. Its
primary services include:

1. Process Management: The OS manages the processes within a system, which includes
process creation, scheduling, and termination. It further encompasses process synchronization
and deadlock handling.

2. Memory Management: The OS handles the allocation and deallocation of memory according
to the requirements of each program and ensures optimal usage of RAM. Techniques such as
paging, segmentation, and virtual memory are employed for this purpose (a small paging sketch
follows this list).

3. File System Management: The OS organizes and controls files wherever they are stored. It
provides a structuring mechanism and enforces access control through permissions.

4. Device Management: The OS controls communication between hardware and software. This
includes device drivers, management of input/output, and efficient utilization of peripheral
devices such as printers and scanners.

5. Security & Access Control: The OS prevents unauthorized users or processes from accessing
system resources through mechanisms such as user authentication and encryption.

6. User Interface (UI): The OS provides users with a means of interacting with the system,
either through a Graphical User Interface (GUI) or a Command-Line Interface (CLI).
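
As mentioned under Memory Management above, paging splits a logical address into a page
number and an offset and maps the page through a page table. The following C sketch is a
hypothetical illustration of that arithmetic; the 4 KB page size and the page-table contents are
invented for the example.

/* Hedged sketch of paging arithmetic, assuming a 4 KB page size and a tiny
 * hard-coded page table; real MMUs and OS page tables are far more elaborate. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u   /* assumed page size: 4 KB */

int main(void) {
    /* hypothetical page table: page number -> frame number */
    uint32_t page_table[] = { 5, 9, 2, 7 };

    uint32_t logical  = 6500;                        /* example logical address */
    uint32_t page     = logical / PAGE_SIZE;         /* which page the address falls in */
    uint32_t offset   = logical % PAGE_SIZE;         /* offset within that page */
    uint32_t frame    = page_table[page];            /* frame chosen by the OS */
    uint32_t physical = frame * PAGE_SIZE + offset;  /* final physical address */

    printf("logical %u -> page %u, offset %u -> physical %u\n",
           logical, page, offset, physical);
    return 0;
}

Here logical address 6500 falls in page 1 with offset 2404, which the table maps to frame 9,
giving physical address 9 * 4096 + 2404 = 39268.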

4. Explain the architecture of an Operating System and its components.

Ans -
The architecture of an operating system defines how its components interact and manage
system resources. The primary components include:

1. Kernel: The core part of the OS responsible for process management, memory
management, and hardware communication. It operates in privileged mode and interacts
directly with hardware.

2. Shell: The user interface that allows interaction with the OS through commands or graphical
elements.
3. File System: Manages data storage, file organization, and retrieval operations.

4. Device Drivers: Act as translators between the OS and hardware components like printers
and network devices.

5. Process Scheduler: Controls CPU allocation by prioritizing processes and managing task
execution.

The OS architecture varies based on its design, such as monolithic (single large program),
layered (separate layers for different tasks), microkernel (minimal core functions), and hybrid (a
combination of architectures). The efficiency of these components determines the OS's overall
performance.

5. What are System Programs and System Calls in an Operating System?

Ans -
System programs and system calls are essential components that facilitate communication
between the OS and applications.

1. System Programs: System programming is the act of creating system software using system
programming languages. A system program offers an environment in which application
programs can be developed and run. In simple terms, system programs serve as a link between
the user interface (UI) and system calls. Some system programs are simple user interfaces,
while others are complex; a compiler, for instance, is a complicated piece of system software.

2. System Calls: A system call is the method by which a program interacts with the OS kernel. It
is the technique by which a computer program requests a service from the kernel. The
Application Programming Interface (API) connects OS functions with user programs, and the
system call serves as a bridge between a process and the OS, enabling user-level programs to
request OS services. System calls execute in kernel mode, and any software that consumes
system resources must use them.

Common system call categories include:

1. Process Control: fork(), exit(), wait()

2. File Management: open(), read(), write(), close()

3. Device Management: ioctl(), read(), write()

4. Information Maintenance: getpid(), alarm(), sleep()
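
The short C sketch below (an illustrative example, assuming a POSIX system) exercises calls
from three of these categories: process control with fork(), wait() and exit(), file management
with open(), write() and close(), and information maintenance with getpid(). The file name
child.txt is chosen only for the example.

/* Illustrative sketch of a few POSIX system calls from the categories above. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                      /* process control: create a child */
    if (pid == 0) {
        /* child: file management system calls */
        int fd = open("child.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd >= 0) {
            const char msg[] = "written by the child process\n";
            write(fd, msg, sizeof(msg) - 1);
            close(fd);
        }
        printf("child pid: %d\n", getpid()); /* information maintenance */
        exit(0);                             /* process control: terminate */
    } else if (pid > 0) {
        wait(NULL);                          /* parent waits for the child */
        printf("parent pid: %d, child finished\n", getpid());
    }
    return 0;
}

The parent creates a child, the child writes one line into the file and exits, and the parent
blocks in wait() until the child has terminated.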

6. How does an Operating System ensure security and protection?

Ans -
Security and protection are vital functions of an OS to prevent unauthorized access, data
corruption, and malicious attacks. The OS implements multiple security mechanisms, including:

1. User Authentication: Ensures only authorized users can access the system using
passwords, biometrics, or security tokens.

2. Access Control Lists (ACLs): Define permissions for users and processes, restricting
access to files, memory, and system resources.

3. Encryption: Protects data by converting it into an unreadable format using cryptographic
algorithms.

4. Firewall & Network Security: Prevents unauthorized access to networked systems by
filtering incoming and outgoing traffic.

5. Intrusion Detection Systems (IDS): Monitors system activities for malicious behavior
and unauthorized access attempts.

6. Process Isolation: Prevents processes from interfering with each other, ensuring system
stability and protection against malware.

Unit - 2
1. What is a Process? Explain its different states.

Ans -
A process is the basis of all computation. Although a process is closely related to program
code, it is not the same thing as the code itself. A process is an "active" entity, in contrast to
the program, which is thought of as a "passive" entity.

1. New
A program that is about to be picked up by the OS and brought into main memory is called a
new process.

2. Ready
Whenever a process is created, it enters the ready state, in which it waits for the CPU to be
assigned. The OS picks new processes from secondary memory and puts them in main memory.

The processes that are ready for execution and reside in main memory are called ready-state
processes. There can be many processes present in the ready state.

3. Running
One of the processes from the ready state is chosen by the OS according to the scheduling
algorithm. Hence, if we have only one CPU in our system, the number of running processes at
any given time will always be one.

4. Blocked or waiting
From the running state, a process transitions to the blocked or waiting state when it must wait
for an event such as I/O completion; it cannot continue executing until that event occurs.
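
The toy C sketch below is not how an OS actually stores states; it simply models the state
names described above and prints one possible life cycle of a process (admitted, scheduled,
blocked on I/O, then scheduled again).

/* Toy model of the process states described above; purely illustrative. */
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, WAITING } proc_state;

static const char *state_name(proc_state s) {
    switch (s) {
    case NEW:     return "new";
    case READY:   return "ready";
    case RUNNING: return "running";
    case WAITING: return "waiting (blocked)";
    }
    return "unknown";
}

int main(void) {
    /* one possible life cycle: admitted -> scheduled -> blocks on I/O -> rescheduled */
    proc_state path[] = { NEW, READY, RUNNING, WAITING, READY, RUNNING };
    for (unsigned i = 0; i < sizeof(path) / sizeof(path[0]); i++)
        printf("step %u: %s\n", i, state_name(path[i]));
    return 0;
}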

2. What is Process Scheduling? Describe the different types of schedulers.

Ans -
Process scheduling is the mechanism used by the operating system to manage the execution of
multiple processes by allocating CPU time efficiently. It ensures maximum utilization of the CPU
by switching between processes when necessary. The three primary types of schedulers are:

1. Long-term scheduler

The long-term scheduler is also known as the job scheduler. It chooses processes from the pool
(secondary memory) and keeps them in the ready queue maintained in primary memory. The
long-term scheduler mainly controls the degree of multiprogramming. Its purpose is to choose a
balanced mix of I/O-bound and CPU-bound processes from among the jobs present in the pool.

2. Short-term scheduler

The short-term scheduler is also known as the CPU scheduler. It selects one of the jobs from
the ready queue and dispatches it to the CPU for execution. A scheduling algorithm is used to
decide which job will be dispatched. The job of the short-term scheduler can be very critical: if
it selects a job whose CPU burst time is very high, all the other jobs will have to wait in the
ready queue for a long time.

3. Medium-term scheduler

The medium-term scheduler takes care of swapped-out processes. If a running process needs
I/O time to complete, its state must change from running to waiting. The medium-term
scheduler is used for this purpose: it removes the process from the running state to make room
for other processes.

3. What are the different operations that can be performed on processes?

Ans -
Processes in an operating system can undergo various operations to manage execution
efficiently. Process Creation allows a parent process to create child processes using system calls
like fork(). Process Termination occurs when a process completes execution or is forcefully
ended. Process Suspension (or Swapping) temporarily removes a process from main memory to
free up resources. Process Resumption restores a suspended process back into memory. Process
Synchronization ensures orderly execution when multiple processes share resources to avoid
conflicts. Process Communication enables processes to exchange data using mechanisms like
Interprocess Communication (IPC), shared memory, or message passing. These operations help
in effective process management in multitasking environments.
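
As a hedged sketch of interprocess communication on a POSIX system, the following C program
creates a child with fork() and passes it a message through an anonymous pipe, which is one
form of the message passing mentioned above; the message text is arbitrary.

/* Sketch of IPC via an anonymous pipe between a parent and its forked child. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) return 1;          /* fds[0] = read end, fds[1] = write end */

    pid_t pid = fork();                     /* process creation */
    if (pid == 0) {
        /* child: read the message sent by the parent */
        char buf[64] = {0};
        close(fds[1]);                      /* child only reads */
        read(fds[0], buf, sizeof(buf) - 1);
        printf("child received: %s\n", buf);
        close(fds[0]);
        return 0;                           /* process termination */
    }

    /* parent: write the message, then wait for the child */
    close(fds[0]);                          /* parent only writes */
    const char msg[] = "hello from parent";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);                             /* synchronize with child termination */
    return 0;
}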

4. What are CPU scheduling criteria, and why are they important?

Ans -
CPU scheduling criteria are the factors used to evaluate and compare different scheduling
algorithms to ensure efficient process execution. The key criteria include CPU Utilization, which
measures how effectively the CPU is kept busy; Throughput, the number of processes
completed per unit of time; Turnaround Time, the total time taken for a process from
submission to completion; Waiting Time, the time a process spends in the ready queue before
getting CPU time; Response Time, the time taken from process submission to the first response;
and Fairness, ensuring all processes get fair CPU access. These criteria are crucial for optimizing
system performance and user experience.
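
For a single process whose arrival, burst, and completion times are known, the timing criteria
above reduce to two simple formulas: turnaround time = completion time - arrival time, and
waiting time = turnaround time - burst time. The tiny C sketch below just evaluates them for
made-up values.

/* Turnaround and waiting time for one process, with hypothetical numbers. */
#include <stdio.h>

int main(void) {
    int arrival = 0, burst = 5, completion = 12;   /* assumed example values */
    int turnaround = completion - arrival;         /* total time in the system */
    int waiting = turnaround - burst;              /* time spent not executing */
    printf("turnaround = %d, waiting = %d\n", turnaround, waiting);
    return 0;
}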

5. Explain different CPU scheduling algorithms.

Ans -
CPU scheduling is the process of deciding which process gets to execute using the CPU while
other processes are kept waiting, for example because a resource they need is unavailable. The
goal is to make full use of the Central Processing Unit.

First Come First Serve Scheduling Algorithm

First Come First Serve (FCFS) is the simplest CPU scheduling algorithm. The CPU is allotted to
processes strictly in the order in which they arrive in the ready queue, and each process runs
until its CPU burst is finished.
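
A minimal FCFS sketch, assuming three processes that all arrive at time 0 with invented burst
times: each process waits for the total burst time of everything ahead of it, and the program
prints per-process and average waiting and turnaround times.

/* FCFS sketch: serve processes in arrival order (all assumed to arrive at t = 0). */
#include <stdio.h>

int main(void) {
    int burst[] = { 6, 8, 3 };               /* CPU bursts of P1, P2, P3 in arrival order */
    int n = 3;
    int waiting = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = waiting + burst[i]; /* finishes after waiting + its own burst */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
        total_wait += waiting;
        total_tat += turnaround;
        waiting += burst[i];                 /* next process waits for everything before it */
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}

With bursts 6, 8, and 3 this gives waiting times 0, 6, and 14, so the average waiting time is
about 6.67.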

Shortest Job First CPU Scheduling Algorithm

In Shortest Job First (SJF), the CPU is allotted to the process with the smallest CPU burst time
among those waiting in the ready queue, so the algorithm is heavily dependent on the burst
times being known or estimated.

Round Robin CPU Scheduling

Round Robin is a CPU scheduling mechanism that cycles through the ready queue, assigning
each task a specific time slot (quantum). It is essentially the First Come, First Served technique
operated in preemptive mode.
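
A rough Round Robin sketch with an assumed time quantum of 3: remaining burst times are
reduced one slice at a time, and a process that is not finished simply gets another turn on the
next pass (the loop over the array stands in for the circular ready queue).

/* Round Robin sketch: fixed quantum, unfinished processes get another turn. */
#include <stdio.h>

int main(void) {
    int remaining[] = { 5, 3, 7 };          /* remaining bursts of P1, P2, P3 (assumed) */
    int n = 3, quantum = 3, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0) continue;                 /* already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                                   /* run for one quantum or less */
            remaining[i] -= slice;
            printf("t=%2d: ran P%d for %d units\n", time, i + 1, slice);
            if (remaining[i] == 0) {
                done++;
                printf("P%d completed at t=%d\n", i + 1, time);
            }
        }
    }
    return 0;
}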

Priority CPU Scheduling

In priority scheduling, the CPU is allotted to the process with the highest priority among those
in the ready queue; processes with equal priority are typically served in first-come, first-served
order.

6. Compare Preemptive and Non-Preemptive Scheduling.

Ans -
Preemptive Scheduling -
Preemptive scheduling is a method that may be used when a process switches from a running
state to a ready state or from a waiting state to a ready state. The resources are assigned to
the process for a particular time and then taken away. If the process still has remaining CPU
burst time, it is placed back in the ready queue, where it remains until it is given another chance
to execute.

When a high-priority process arrives in the ready queue, it does not have to wait for the running
process to finish its burst time. Instead, the running process is interrupted in the middle of its
execution and placed in the ready queue until the high-priority process has used the resources.

Non-Preemptive Scheduling -
Non-preemptive scheduling is a method that may be used when a process terminates or
switches from a running to a waiting state. Once the processor is assigned to a process, the
process keeps it until it terminates or reaches a waiting state. When the processor starts
executing a process, it must complete it before executing another one and may not be
interrupted in the middle. When a non-preemptive process with a high CPU burst time is
running, the other processes have to wait for a long time, which increases the average waiting
time in the ready queue.
