UNIT-5 Fundamentals of Information Technology (Question and Answers)


QUESTIONS & ANSWERS

FUNDAMENTALS OF INFORMATION TECHNOLOGY


UNIT – V: Operating System

1. What is an Operating system? Explain its functions briefly.


An Operating System (OS) is system software that manages all the resources of the
computer. It acts as an interface between the software and the computer hardware,
and it is designed to manage the overall resources and operations of the machine.
An Operating System is a fully integrated set of specialized programs that handle all the
operations of the computer. It controls and monitors the execution of all other
programs that reside in the computer, including application programs and
other system software. Examples of Operating Systems are
Windows, Linux, macOS, etc.

Operating System Functions


Operating systems (OS) are essential software that manage hardware and software
resources on a computer. Key OS functions include:
1. Process Management: The OS manages processes in a system, including their
creation, scheduling, and termination. It ensures that each process gets enough
resources (e.g., CPU time) and can execute independently without interfering
with other processes.
2. Memory Management: The OS handles the allocation and deallocation of
memory to processes, ensuring efficient use of RAM and preventing conflicts
between processes.
3. File System Management: The OS manages files on storage devices,
providing a way to store, retrieve, and organize files. It handles file
permissions, directories, and access control.
4. Device Management: The OS manages input/output devices like keyboards,
mice, printers, and disks, ensuring that programs can interact with these devices
efficiently.
5. Security and Access Control: The OS ensures data protection through user
authentication, authorization, and encryption. It protects against unauthorized
access and ensures data integrity.
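
To make these functions concrete, here is a small illustrative Python sketch (not part of the original answer; it assumes a Unix-like system where the echo command exists, and demo.txt is just a throwaway file name) in which a user program asks the OS for process creation, file handling, and permission control:

import os
import subprocess

# Process management: ask the OS to create and run a new child process.
completed = subprocess.run(["echo", "hello from a child process"],
                           capture_output=True, text=True)
print("child output:", completed.stdout.strip())

# File system management: create, write, list, and remove a file.
with open("demo.txt", "w") as f:
    f.write("stored via an OS system call\n")
print("directory listing:", os.listdir("."))

# Security and access control: restrict and inspect file permissions.
os.chmod("demo.txt", 0o600)          # owner read/write only
print("permission bits:", oct(os.stat("demo.txt").st_mode & 0o777))
os.remove("demo.txt")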

2. What are the metrics used to measure the system performance of an Operating System?
Measuring system performance in an operating system is a critical task for ensuring
that resources (like CPU, memory, disk, and network) are being used efficiently and
effectively. Several metrics and tools are used to assess performance, depending on
the specific goals and the system in question. Below are the key aspects of measuring
system performance in an operating system:

Measuring System Performance

1. Throughput: The number of tasks or processes completed within a certain time frame.
2. Response Time: The time taken for the system to respond to a user input or
request.
3. CPU Utilization: The percentage of CPU resources being actively used, which
helps identify if the system is under or over-utilized.
4. Latency: The time delay between initiating a request and receiving a response,
especially important in network and real-time systems.
5. System Load: The amount of work the system is currently handling. It
includes both the number of processes in the queue and the amount of time
processes spend in execution.
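
As a rough illustration, the Python sketch below (the 50-request loop and the placeholder computation are assumptions, not a real benchmark) measures throughput and average response time for a batch of small tasks:

import time

def task(n):
    return sum(i * i for i in range(n))        # placeholder work for one request

response_times = []
start = time.perf_counter()
for _ in range(50):                            # 50 simulated requests
    t0 = time.perf_counter()
    task(100_000)
    response_times.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

throughput = 50 / elapsed                      # tasks completed per second
avg_response = sum(response_times) / len(response_times)
print(f"throughput: {throughput:.1f} tasks/s, "
      f"average response time: {avg_response * 1000:.2f} ms")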

3. Explain in detail the Assembler, Compiler and Interpreter.

a) Assembler:
A program that translates assembly language (low-level programming
language) into machine code (binary). Assemblers typically produce an object file that
is later linked into an executable.

The process is typically divided into several stages:

 Lexical analysis: Tokenizes the assembly code into individual instructions and
operands.
 Parsing: Analyses the syntax of the assembly code to ensure it follows the
grammar of the assembly language.
 Code generation: Converts assembly instructions into corresponding machine
code (binary) instructions.
 Linking: If the assembly code references external libraries or modules, the
assembler may need to link those to produce an executable file.

Assemblers are typically provided as command-line tools, and the process involves:

1. Writing the assembly code in a text editor (e.g., hello.asm).


2. Assembling the code with an assembler like nasm, gas, or MASM to generate
an object file (e.g., hello.o or hello.obj).
3. Linking the object file into an executable binary with a linker (e.g., ld, link).
Working of Assembler
An assembler typically divides its work into two passes:
Pass-1
 Define symbols and literals and record them in the symbol table and
literal table respectively.
 Keep track of the location counter.
 Process pseudo-operations.
 Assign memory addresses to variables and labels so that they can be
translated into machine code in the second pass.
Pass-2
 Generate object code by converting each symbolic op-code into its numeric
op-code.
 Generate data for literals and look up the values of symbols.
 Read the source code a second time.
 Translate the source code into the final object code.
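
The two passes can be illustrated with a deliberately tiny assembler sketch in Python; the three-instruction machine, its op-code numbering, and the one-word-per-line layout are invented purely for this example:

# Op-codes of a made-up machine: each instruction becomes opcode*100 + address.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 4}

source = [
    "START: LOAD X",
    "       ADD Y",
    "       STORE Z",
    "       HALT",
    "X:     5",      # data definitions
    "Y:     7",
    "Z:     0",
]

# Pass 1: build the symbol table while tracking the location counter.
symbols, location = {}, 0
for line in source:
    if ":" in line:
        label, line = line.split(":", 1)
        symbols[label.strip()] = location
    location += 1                    # every line occupies one word here

# Pass 2: convert symbolic op-codes and symbols into numeric object code.
object_code = []
for line in source:
    text = line.split(":", 1)[1] if ":" in line else line
    parts = text.split()
    if parts[0] in OPCODES:          # instruction word
        address = symbols[parts[1]] if len(parts) > 1 else 0
        object_code.append(OPCODES[parts[0]] * 100 + address)
    else:                            # data word
        object_code.append(int(parts[0]))

print(symbols)       # {'START': 0, 'X': 4, 'Y': 5, 'Z': 6}
print(object_code)   # [104, 205, 306, 400, 5, 7, 0]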

b) Compiler:
A compiler translates high-level programming languages (like C, C++, or Java) into
machine code or intermediate code (like bytecode). The translation happens before
execution, which can improve performance but requires time to compile.

Key Features of a Compiler


 It translates the whole source code at a time, without omitting any part of it.
 It reports errors only after the entire source has been translated.
 It produces efficient code for execution by the machine.
 It transforms code intelligible by people into a form that can be executed on a
specific hardware platform.
Phases of a Compiler:

a) Lexical Analysis (Tokenization): This is the first step in the compilation
process, where the source code is broken down into a series of tokens. A token
is a sequence of characters that represents a fundamental element of the
language (like a keyword, operator, or identifier).
b) Syntax Analysis (Parsing): In this phase, the tokens generated during lexical
analysis are grouped and checked against the grammar rules of the
programming language to ensure they form valid syntactic structures
(statements, expressions, etc.). This phase produces a syntax tree or parse
tree.

c) Semantic Analysis: This phase checks for semantic errors, such as type
mismatches or undeclared variables. It ensures that a syntactically valid program is
also logically meaningful according to the language's semantics.

d) Intermediate Code Generation: The compiler generates an intermediate code,
which is often a platform-independent representation of the program. This
intermediate code is typically easier to optimize and can be converted into
machine code later.

e) Optimization: This step aims to improve the intermediate code in terms of
execution speed, memory usage, or other performance metrics. Optimizations
can be done at various levels.

f) Code Generation: In this phase, the intermediate code is converted into
machine code specific to the target architecture (e.g., x86, ARM). The output is
typically in the form of assembly code or machine code that can be executed by
the CPU.

g) Code Linking and Assembly: In this phase, the compiler generates an object
file, which is usually a machine code representation of the program that is not
yet executable. If the program contains references to external libraries or other
modules, a linker will be used to combine them into a single executable.
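
As a small illustration of the first phase only, the Python sketch below tokenizes a C-like statement; the token categories and the tiny set of keywords are assumptions made for the example:

import re

TOKEN_SPEC = [
    ("KEYWORD",    r"\b(?:int|return|if|else)\b"),
    ("NUMBER",     r"\d+"),
    ("IDENTIFIER", r"[A-Za-z_]\w*"),
    ("OPERATOR",   r"[=+\-*/]"),
    ("PUNCT",      r"[;(){}]"),
    ("SKIP",       r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(code):
    """Return (kind, text) pairs for every token in the source string."""
    return [(m.lastgroup, m.group()) for m in MASTER.finditer(code)
            if m.lastgroup != "SKIP"]

print(tokenize("int total = count + 42;"))
# [('KEYWORD', 'int'), ('IDENTIFIER', 'total'), ('OPERATOR', '='),
#  ('IDENTIFIER', 'count'), ('OPERATOR', '+'), ('NUMBER', '42'), ('PUNCT', ';')]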
c) Interpreter:
An interpreter directly executes instructions written in a high-level
programming language without converting them to machine code beforehand. It
translates the code line by line, making it slower than compiled code but easier to
debug and more flexible.

How an Interpreter Works:

 Direct Execution: Unlike a compiler, which translates the entire program into
machine code before execution, an interpreter processes the code line by line.
This means that when the interpreter encounters a line of code, it translates and
executes it immediately.
 No Pre-compilation: There's no intermediate machine code file generated (as
in compiled languages). The interpreter directly runs the code, often without
creating an intermediate file at all.
 Slower Performance: Because the code is translated and executed line by line,
interpreted programs tend to run slower than compiled ones. Every time you
run the program, the interpreter must process the code from scratch.

Advantages of Using an Interpreter:

1. Easier Debugging: Since the interpreter processes the code line by line, it can
immediately give you feedback when it encounters an error. This is
particularly useful in development, as you can catch issues as you go along
rather than waiting until the entire program is compiled.
2. Portability: Interpreted code is often more portable because the interpreter
itself is the platform-dependent part, not the program. As long as the
interpreter is available for a given system, the code should work across
different environments.
3. Flexibility: Interpreters tend to provide a flexible, interactive environment. For
instance, with Python or JavaScript, you can write and run small snippets of
code in a REPL (Read-Eval-Print Loop) style without needing to compile and
execute a whole program.
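
A minimal REPL-style sketch in Python illustrates this line-by-line behaviour; the three-command calculator language (SET/ADD/PRINT) is invented for the example, and the error in the last line is reported immediately instead of at the end:

def execute(line, variables):
    parts = line.split()
    if not parts:
        return
    cmd = parts[0].upper()
    if cmd == "SET":                        # SET x 10
        variables[parts[1]] = int(parts[2])
    elif cmd == "ADD":                      # ADD x 5
        variables[parts[1]] += int(parts[2])
    elif cmd == "PRINT":                    # PRINT x
        print(parts[1], "=", variables[parts[1]])
    else:
        print("error on line:", line)       # reported as soon as it is reached

program = ["SET x 10", "ADD x 5", "PRINT x", "JUMP nowhere"]
state = {}
for line in program:                        # translated and executed one line at a time
    execute(line, state)
# Output: x = 15, then an error message for the unknown JUMP command.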

Example of Interpreted Languages:

 Python: Python is an interpreted language. The Python interpreter reads the
code line by line and executes it.
 JavaScript: Typically run inside a browser, JavaScript is interpreted, although
modern JavaScript engines often use Just-in-Time (JIT) compilation techniques
to optimize performance.
 Ruby: Ruby uses an interpreter that directly executes code, similar to Python.
 PHP: PHP code is usually interpreted on the server side when a web page is
requested.

4. Explain the Batch processing method of job allocation to the CPU.

Batch processing in operating systems refers to the execution of a series of jobs
(or tasks) without user interaction, where the jobs are grouped together into batches
and processed sequentially. It was widely used in early computing systems and is still
used today in some specific use cases like data processing, system maintenance, and
large-scale computational tasks.
Key Characteristics of Batch Processing:

1. Non-Interactive:
o Jobs are executed without user intervention during the process. Once a
batch of jobs is submitted, the operating system handles them
automatically, and no further input is required until the jobs are
completed.
2. Sequential Execution:
o The jobs are processed one after the other, in the order they are
submitted. If a job fails, it is generally placed in a queue to be
reprocessed later.
3. Efficiency in Large-Scale Jobs:
o Batch processing is efficient for tasks that don’t require immediate user
input and are repetitive in nature, such as processing payrolls, generating
reports, and performing system backups.
4. Scheduling and Queuing:
o The jobs are typically placed in a queue, and the system executes them
one at a time. The operating system manages this queue to ensure tasks
are executed in an orderly manner.
5. Minimal User Interaction:
o Once the batch job is set up, users are not required to interact with the
system until the job completes or produces results (such as a report).
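
A simple Python sketch of this idea is shown below; the job names and the "fail once, then re-queue" behaviour are made up for illustration:

from collections import deque

def run_job(name):
    # Simulate one job; the "report" job fails on its first attempt only.
    if name == "report" and not run_job.retried:
        run_job.retried = True
        raise RuntimeError("simulated failure")
    print(f"completed: {name}")
run_job.retried = False

queue = deque(["payroll", "report", "backup"])   # the submitted batch
while queue:                                     # processed sequentially, no user input
    job = queue.popleft()
    try:
        run_job(job)
    except RuntimeError:
        print(f"failed: {job}, re-queued for later")
        queue.append(job)                        # reprocess at the end of the batch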

Advantages of Batch Processing:


 Efficiency:
By running multiple jobs in sequence, batch processing maximizes the use of
system resources during idle times (e.g., overnight), ensuring that the CPU, memory,
and storage are being used as efficiently as possible.
 No User Interaction:
This is ideal for tasks that are repetitive and do not require user input, such as daily
reporting, backups, or database processing. This allows users or administrators to
schedule jobs without interrupting their regular work.
 Cost-Effective:
Batch processing makes use of low-demand periods (like overnight) to perform
resource-intensive tasks, reducing system congestion during peak times and
optimizing overall system performance.
 Error Recovery:
Batch systems can be designed to handle errors in a structured manner, such as
retrying failed jobs or producing logs for troubleshooting.

5. Explain the following techniques used by the operating system to increase CPU utilization.
a) Multiprogramming
b) Multitasking
c) Multiprocessing

a) Multiprogramming
A technique where multiple programs are loaded into memory and executed
concurrently by switching between them. The OS keeps track of multiple processes,
but only one process is actively running at any moment. It increases CPU utilization
by ensuring that the CPU is not idle.

Multiprogramming means more than one program can be active at the same time.
Before this concept, only one program could be loaded and run at a time, which was
inefficient because the CPU often sat idle; for example, in a single-tasking system
the CPU is unused while the current program waits for some input/output to finish.
The idea of multiprogramming is to assign the CPU to other processes while the
current process is waiting. This has the below advantages.
1) The user gets the feeling of running multiple applications on a single CPU,
even though the CPU is running one process at a time.
2) The CPU is utilized better.
All modern operating systems like MS Windows, Linux, etc. are multiprogramming
operating systems.
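
A back-of-the-envelope Python calculation (the 20 ms compute / 80 ms I/O figures and the five-job count are assumed numbers) shows why multiprogramming raises CPU utilization:

compute_ms, io_wait_ms = 20, 80      # each job computes 20 ms, then waits 80 ms for I/O

# Single program: the CPU sits idle during the I/O wait.
single_utilization = compute_ms / (compute_ms + io_wait_ms)

# Multiprogramming with 5 such jobs: while one job waits for I/O, the CPU can
# run the other four, so ideally it never idles during the 100 ms window.
jobs = 5
busy = min(jobs * compute_ms, compute_ms + io_wait_ms)
multi_utilization = busy / (compute_ms + io_wait_ms)

print(f"one job:  {single_utilization:.0%} CPU utilization")    # 20%
print(f"{jobs} jobs: {multi_utilization:.0%} CPU utilization")  # 100%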
Features of Multiprogramming
 Needs only a single CPU for implementation.
 Context switching takes place between processes.
 Switching happens when the current process enters a waiting state.
 CPU idle time is reduced.
 High resource utilization.
 High performance.

b) Multitasking:
Multitasking in an operating system refers to the capability of executing
multiple tasks or processes concurrently, improving the efficiency and
responsiveness of the system. It allows the system to manage more than one task at a
time, which can be particularly important in environments where many processes are
running simultaneously, such as desktop computing, servers, and embedded systems.
There are different types of multitasking techniques used in operating systems,
depending on how the tasks are scheduled and executed:
Types of Multitasking:
1. Preemptive Multitasking:
o In preemptive multitasking, the operating system allocates a fixed time
slice (or quantum) to each running process, and then forcibly switches
the processor to another process after this time slice is over.
o This ensures that no single process can monopolize the CPU, providing
more balanced resource usage and system responsiveness.
o Example: Modern versions of Windows, Linux, and macOS use
preemptive multitasking.
2. Cooperative Multitasking:
o In cooperative multitasking, the running process must yield control of
the CPU voluntarily. If a process doesn't yield, it can monopolize the
CPU, causing other tasks to be delayed or even unresponsive.
o This method is less efficient and can lead to system instability, as one
poorly-behaved process could prevent others from running.
o Example: Older versions of Windows (Windows 3.x) and the classic
Mac OS used cooperative multitasking.
3. Multithreading:
o Multithreading is a form of multitasking that allows a single process to
have multiple threads of execution, each of which can run independently
but share the same resources.
o Threads are lightweight compared to processes, and multithreading
allows a program to perform multiple operations concurrently (e.g.,
downloading a file while also updating the user interface).
o Example: Modern applications, such as web browsers and text editors,
use multithreading to improve performance and responsiveness.
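
A short Python sketch of multithreading within one process is given below; the sleep calls stand in for real I/O, and the shared progress dictionary shows threads sharing the same memory:

import threading
import time

progress = {"downloaded": 0}            # shared by all threads of this process

def download():
    for _ in range(5):
        time.sleep(0.1)                 # pretend to fetch one chunk
        progress["downloaded"] += 1

worker = threading.Thread(target=download)
worker.start()                          # background thread runs concurrently

while worker.is_alive():                # the "user interface" keeps responding
    print("still responsive, chunks so far:", progress["downloaded"])
    time.sleep(0.15)
worker.join()
print("finished, total chunks:", progress["downloaded"])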

Key Concepts in Multitasking:


1. Process Scheduling:
o The operating system uses a scheduler to decide which process or
thread should run at any given time. Scheduling algorithms, such as
Round-Robin, Priority Scheduling, and Multilevel Feedback Queues,
determine the order of execution based on factors like priority, fairness,
and CPU time requirements.
2. Context Switching:
o Context switching occurs when the CPU switches from executing one
process or thread to another. The operating system must save the state
(context) of the current process and load the state of the next one. This
involves saving and restoring registers, program counter, and other
essential data.
3. Concurrency and Parallelism:
o Concurrency refers to the ability of an operating system to manage
multiple tasks at the same time, even if only one task is being executed
at any given moment.
o Parallelism, on the other hand, refers to the simultaneous execution of
multiple tasks. This can only happen on a system with multiple CPU
cores or processors.
4. Process vs. Thread:
o A process is an instance of a program that is being executed. It has its
own memory space, program counter, and other resources.
o A thread is the smallest unit of execution within a process. Threads
within the same process share the same memory space, making context
switching between threads faster and more efficient compared to
switching between processes.

Advantages of Multitasking:
 Better CPU Utilization: Multitasking ensures that the CPU is utilized
efficiently, as it can process multiple tasks even when one task is waiting (e.g.,
for I/O operations).
 Improved Responsiveness: In user-facing applications, multitasking allows
for more responsive interfaces (e.g., performing background tasks like saving
data while a user interacts with the interface).
 Resource Sharing: Multitasking allows multiple applications to share system
resources (like memory, CPU time, and I/O devices) in an efficient manner.
c) Multiprocessing:
Multiprocessing involves using multiple CPUs (or cores) to perform tasks in
parallel. This increases processing power, allowing multiple processes to run truly
simultaneously, improving performance for compute-heavy workloads.

Working of Multi-Processing Operating System


 A multi-processing operating system consists of multiple CPUs. Each CPU is
connected to the main memory.
 The task to be performed is divided among all the processors.
 For faster execution and improved performance, each processor is assigned a
specific task.
 Once all the processors have completed their tasks, the results are combined to
produce a single output.
 The allocation of resources for each processor is handled by the operating
system. This process results in better utilization of the available resources and
improved performance.
The working of a multi-processing operating system can be pictured in three layers:
user processes in user space at the top, the operating system (process scheduling and
management) in the middle, and the individual CPUs/cores (e.g., CPU #1 to CPU #4,
Core 1 to Core 4) at the bottom.
The main aim of a multi-processing operating system is to increase the speed of
execution and improve the overall performance of the system. For example, UNIX,
Linux, and Solaris are among the most widely used multi-processing operating systems.
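
As an illustrative sketch (the chunk boundaries and the worker count are arbitrary), the Python example below divides one task among four worker processes and then combines their partial results into a single output, mirroring the steps described above:

from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))    # work assigned to one processor

if __name__ == "__main__":
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(processes=4) as pool:              # one worker process per chunk
        partials = pool.map(partial_sum, chunks)
    print("combined result:", sum(partials))     # single output from all processors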

6. Explain how time-sharing systems are used in processing tasks.


Time-sharing systems divide CPU time into small slices, allocating a slice to each
process in turn. This creates the illusion of simultaneous execution for multiple users
or processes. Time-sharing systems are key in interactive environments like personal
computers, where many tasks are performed simultaneously (e.g., running multiple
applications). As the system rapidly switches from one user to another, each user is
given the impression that the entire computer system is dedicated to its use, although
it is being shared among multiple users.
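
The round-robin slicing behind time sharing can be simulated in a few lines of Python; the 2-unit quantum and the burst times of P1 to P3 are assumed values:

from collections import deque

quantum = 2
ready = deque([("P1", 5), ("P2", 3), ("P3", 4)])   # (process, remaining CPU time)

clock = 0
while ready:
    name, remaining = ready.popleft()
    run = min(quantum, remaining)                  # one time slice
    clock += run
    if remaining - run > 0:
        ready.append((name, remaining - run))      # back of the queue for the next turn
        print(f"t={clock:2d}: {name} used its slice, {remaining - run} units left")
    else:
        print(f"t={clock:2d}: {name} finished")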

Examples of Time-Sharing Operating Systems:

 UNIX: One of the most famous time-sharing operating systems, designed to
handle multiple users and processes.
 Windows: Modern versions of Windows (like Windows 10 or 11) use time-
sharing to manage multiple running applications and processes.
7. Discuss briefly the DOS, Windows and Unix/Linux Operating Systems.
 DOS (Disk Operating System): An early OS for personal computers, DOS is
command-line-based and used to manage files, run programs, and interact with
hardware. It is now mostly obsolete, replaced by more modern operating systems.
o MS-DOS was a popular implementation of DOS, used primarily in the
1980s and early 1990s.

 Windows: A family of graphical operating systems developed by Microsoft. It is
one of the most popular consumer operating systems for personal computers, offering
a user-friendly interface and supporting a wide range of applications.
o Windows 95, 98, XP, 7, 10, 11 represent various versions of Windows,
with improvements in GUI, security, multitasking, and network
capabilities.

 Unix/Linux:
o Unix: A powerful, multi-user, multitasking operating system originally
developed in the 1960s and 70s. It is known for its stability, security,
and scalability. Unix systems are widely used in servers, workstations,
and mainframes.
o Linux: An open-source, Unix-like OS based on the Linux kernel,
developed by Linus Torvalds in 1991. It’s used on a wide range of
devices from personal computers to servers and embedded systems.
Common Linux distributions include Ubuntu, CentOS, Fedora, and
Debian.
o MacOS: A proprietary Unix-based operating system from Apple,
known for its integration with Apple hardware and services.
