UNIT-5 Fundamentals of Information Technology (Question and Answers)
a) Assembler:
A program that translates assembly language (low-level programming
language) into machine code (binary). Assemblers typically produce an object file that
is later linked into an executable.
Lexical analysis: Tokenizes the assembly code into individual instructions and
operands.
Parsing: Analyses the syntax of the assembly code to ensure it follows the
grammar of the assembly language.
Code generation: Converts assembly instructions into corresponding machine
code (binary) instructions.
Linking: If the assembly code references external libraries or modules, the
assembler may need to link those to produce an executable file.
Assemblers are typically provided as command-line tools that carry out these steps automatically.
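As an illustrative sketch, the lexical analysis, parsing, and code generation steps above might look like the following. The two-instruction ISA and its opcodes are invented for this example; no real assembler or instruction set is being described.

```python
# Toy assembler for a made-up ISA: LOAD r, n and ADD r, n (hypothetical opcodes).
OPCODES = {"LOAD": 0x01, "ADD": 0x02}

def assemble(source):
    """Translate toy assembly text into a sequence of machine-code bytes."""
    machine_code = []
    for line in source.strip().splitlines():
        # Lexical analysis: split the line into a mnemonic and its operands.
        mnemonic, rest = line.split(maxsplit=1)
        operands = [tok.strip() for tok in rest.split(",")]
        # Parsing: check the instruction follows the toy grammar.
        if mnemonic not in OPCODES or len(operands) != 2:
            raise SyntaxError(f"bad instruction: {line!r}")
        # Code generation: emit opcode, register number, immediate value.
        reg = int(operands[0].lstrip("r"))
        imm = int(operands[1])
        machine_code += [OPCODES[mnemonic], reg, imm]
    return bytes(machine_code)

print(assemble("LOAD r1, 10\nADD r1, 5").hex())  # 01010a020105
```

A real assembler would also build a symbol table for labels and emit a relocatable object file for the linker; this sketch stops at raw bytes.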
b) Compiler:
A compiler translates high-level programming languages (like C, C++, or Java) into
machine code or intermediate code (like bytecode). The translation happens before
execution, which can improve performance but requires time to compile.
c) Semantic Analysis: This phase checks for semantic errors, such as type
mismatches or undeclared variables. It ensures that the program, while
syntactically correct, is logically meaningful according to the language's semantics.
g) Code Linking and Assembly: In this phase, the compiler generates an object
file, which is usually a machine code representation of the program that is not
yet executable. If the program contains references to external libraries or other
modules, a linker will be used to combine them into a single executable.
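One of the semantic checks described above, detecting variables used before declaration, can be sketched as a toy pass over a simplified program representation. The statement format here (a list of assignment targets and the variables each expression uses) is an assumption for illustration, not any real compiler's intermediate representation.

```python
def check_undeclared(statements):
    """Return variables used before any assignment has declared them.

    Each statement is a (target, used_variables) pair standing in for
    an assignment like `target = <expression using used_variables>`.
    """
    declared, errors = set(), []
    for target, used_vars in statements:
        for name in used_vars:
            if name not in declared:   # used before declaration: semantic error
                errors.append(name)
        declared.add(target)           # the assignment declares its target
    return errors

# x = 1; y = x + z  ->  z is used but never declared
print(check_undeclared([("x", []), ("y", ["x", "z"])]))  # ['z']
```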
c) Interpreter:
An interpreter directly executes instructions written in a high-level
programming language without converting them to machine code beforehand. It
translates the code line by line, making it slower than compiled code but easier to
debug and more flexible.
Direct Execution: Unlike a compiler, which translates the entire program into
machine code before execution, an interpreter processes the code line by line.
This means that when the interpreter encounters a line of code, it translates and
executes it immediately.
No Pre-compilation: There's no intermediate machine code file generated (as
in compiled languages). The interpreter directly runs the code, often without
creating an intermediate file at all.
Slower Performance: Because the code is translated and executed line by line,
interpreted programs tend to run slower than compiled ones. Every time you
run the program, the interpreter must process the code from scratch.
1. Easier Debugging: Since the interpreter processes the code line by line, it can
immediately give you feedback when it encounters an error. This is
particularly useful in development, as you can catch issues as you go along
rather than waiting until the entire program is compiled.
2. Portability: Interpreted code is often more portable because the interpreter
itself is the platform-dependent part, not the program. As long as the
interpreter is available for a given system, the code should work across
different environments.
3. Flexibility: Interpreters tend to provide a flexible, interactive environment. For
instance, with Python or JavaScript, you can write and run small snippets of
code in a REPL (Read-Eval-Print Loop) style without needing to compile and
execute a whole program.
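The line-by-line execution and immediate error feedback described above can be sketched in Python. Using exec on a list of statement strings is purely an illustration of the behaviour; a real interpreter parses the whole source into tokens and a syntax tree before executing it.

```python
def run_line_by_line(lines):
    """Execute statements one at a time, stopping at the first error."""
    env = {}
    for number, line in enumerate(lines, start=1):
        try:
            exec(line, env)   # translate and execute this line immediately
        except Exception as err:
            # Immediate feedback: report the failing line and stop.
            return f"error on line {number}: {err}"
    return env.get("result")

program = ["x = 2", "y = 3", "result = x * y"]
print(run_line_by_line(program))          # 6
print(run_line_by_line(["result = 1/0"])) # error on line 1: division by zero
```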
1. Non-Interactive:
o Jobs are executed without user intervention during the process. Once a
batch of jobs is submitted, the operating system handles them
automatically, and no further input is required until the jobs are
completed.
2. Sequential Execution:
o The jobs are processed one after the other, in the order they are
submitted. If a job fails, it is generally placed in a queue to be
reprocessed later.
3. Efficiency in Large-Scale Jobs:
o Batch processing is efficient for tasks that don’t require immediate user
input and are repetitive in nature, such as processing payrolls, generating
reports, and performing system backups.
4. Scheduling and Queuing:
o The jobs are typically placed in a queue, and the system executes them
one at a time. The operating system manages this queue to ensure tasks
are executed in an orderly manner.
5. Minimal User Interaction:
o Once the batch job is set up, users are not required to interact with the
system until the job completes or produces results (such as a report).
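The queuing and sequential-execution behaviour above can be sketched in Python. This is an illustrative toy, not a real batch system: jobs are plain Python callables, and the retry policy (failed jobs go to a separate queue) is an assumption for the example.

```python
from collections import deque

def run_batch(jobs):
    """Run queued jobs one at a time; failed jobs are queued for reprocessing."""
    queue, retry, results = deque(jobs), deque(), []
    while queue:
        job = queue.popleft()        # strict FIFO: one job at a time
        try:
            results.append(job())    # no user interaction while the job runs
        except Exception:
            retry.append(job)        # failed jobs wait to be reprocessed later
    return results, list(retry)

ok = lambda: "payroll done"
bad = lambda: 1 / 0
print(run_batch([ok, bad]))
```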
a) Multiprogramming
A technique where multiple programs are loaded into memory and executed
concurrently by switching between them. The OS keeps track of multiple processes,
but only one process is actively running at any moment. It increases CPU utilization
by ensuring that the CPU is not idle.
Multiprogramming means more than one program can be active at the same time.
Before multiprogramming, only one program could be loaded and run at a time,
and such systems were inefficient because the CPU sat idle whenever the
current program waited for input/output to finish. The idea of
multiprogramming is to assign the CPU to another ready process while the
current process waits. This has the advantages below.
1) The user gets the impression of running multiple applications on a
single CPU, even though the CPU executes only one process at a time.
2) The CPU is utilized better.
All modern operating systems, such as MS Windows and Linux, are multiprogramming
operating systems.
Features of Multiprogramming
Needs only a single CPU.
Context switching between processes.
Switching happens when the current process enters a waiting state (e.g., for I/O).
CPU idle time is reduced.
High resource utilization.
High performance.
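This switch-on-wait idea can be shown with a toy simulation. Each process is modelled simply as a list of "CPU" and "IO" bursts, and I/O is assumed to complete while other processes use the CPU; this is an illustration of the scheduling idea, not how an operating system implements it.

```python
def simulate(processes):
    """processes maps a name to its list of bursts ('CPU' or 'IO').

    Returns the CPU timeline: which process ran on each tick.
    """
    timeline, ready = [], list(processes.items())
    while ready:
        name, bursts = ready.pop(0)
        # Run this process's CPU bursts until it must wait for I/O.
        while bursts and bursts[0] == "CPU":
            timeline.append(name)      # one tick of CPU time for this process
            bursts.pop(0)
        if bursts:                     # next burst is 'IO': switch away
            bursts.pop(0)              # the I/O overlaps other processes' CPU use
            if bursts:
                ready.append((name, bursts))
    return timeline

# P1 gives up the CPU during its I/O wait, so P2 runs and the CPU never idles.
print(simulate({"P1": ["CPU", "IO", "CPU"], "P2": ["CPU", "CPU"]}))
```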
b) Multitasking:
Multitasking in an operating system refers to the capability of executing
multiple tasks or processes concurrently, improving the efficiency and
responsiveness of the system. It allows the system to manage more than one task at a
time, which can be particularly important in environments where many processes are
running simultaneously, such as desktop computing, servers, and embedded systems.
There are different types of multitasking techniques used in operating systems,
depending on how the tasks are scheduled and executed:
Types of Multitasking:
1. Preemptive Multitasking:
o In preemptive multitasking, the operating system allocates a fixed time
slice (or quantum) to each running process, and then forcibly switches
the processor to another process after this time slice is over.
o This ensures that no single process can monopolize the CPU, providing
more balanced resource usage and system responsiveness.
o Example: Modern versions of Windows, Linux, and macOS use
preemptive multitasking.
2. Cooperative Multitasking:
o In cooperative multitasking, the running process must yield control of
the CPU voluntarily. If a process doesn't yield, it can monopolize the
CPU, causing other tasks to be delayed or even unresponsive.
o This method is less efficient and can lead to system instability, as one
poorly-behaved process could prevent others from running.
o Example: Older versions of Windows (Windows 3.x) and the classic
Mac OS used cooperative multitasking.
3. Multithreading:
o Multithreading is a form of multitasking that allows a single process to
have multiple threads of execution, each of which can run independently
but share the same resources.
o Threads are lightweight compared to processes, and multithreading
allows a program to perform multiple operations concurrently (e.g.,
downloading a file while also updating the user interface).
o Example: Modern applications, such as web browsers and text editors,
use multithreading to improve performance and responsiveness.
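Cooperative multitasking in particular can be illustrated with Python generators: each task runs until it voluntarily yields control back to a toy round-robin scheduler. This illustrates the "must yield" behaviour described above, not how an operating system schedules real processes.

```python
def task(name, steps):
    """A toy task that yields control after each step of work."""
    for i in range(steps):
        yield f"{name} step {i}"       # yielding hands control to the scheduler

def cooperative_scheduler(tasks):
    """Round-robin: resume each task until its next voluntary yield."""
    log, queue = [], list(tasks)
    while queue:
        current = queue.pop(0)
        try:
            log.append(next(current))  # run until the task yields again
            queue.append(current)      # back to the end of the ready queue
        except StopIteration:
            pass                       # task finished; drop it
    return log

print(cooperative_scheduler([task("A", 2), task("B", 1)]))
```

A task that never yields (an infinite loop before the first yield) would hang this scheduler, which is exactly the instability problem of cooperative multitasking noted above.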
Advantages of Multitasking:
Better CPU Utilization: Multitasking ensures that the CPU is utilized
efficiently, as it can process multiple tasks even when one task is waiting (e.g.,
for I/O operations).
Improved Responsiveness: In user-facing applications, multitasking allows
for more responsive interfaces (e.g., performing background tasks like saving
data while a user interacts with the interface).
Resource Sharing: Multitasking allows multiple applications to share system
resources (like memory, CPU time, and I/O devices) in an efficient manner.
c) Multiprocessing:
Multiprocessing involves using multiple CPUs (or cores) to perform tasks in
parallel. This increases processing power, allowing multiple processes to run truly
simultaneously, improving performance for compute-heavy workloads.
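A minimal sketch of this, using Python's standard multiprocessing module: each worker runs in its own process, so compute-heavy tasks can execute on separate cores truly in parallel. The worker function and pool size here are illustrative choices.

```python
from multiprocessing import Pool

def heavy(n):
    """A compute-heavy stand-in: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Up to 4 worker processes; inputs are distributed among them in parallel.
    with Pool(processes=4) as pool:
        results = pool.map(heavy, [10, 100, 1000])
    print(results)
```

The `if __name__ == "__main__":` guard matters here: on platforms where child processes are spawned rather than forked, each worker re-imports the module, and the guard prevents workers from recursively creating pools.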
[Diagram: user processes in user space layered above the operating system]
Unix/Linux:
o Unix: A powerful, multi-user, multitasking operating system originally
developed at Bell Labs in the late 1960s and 1970s. It is known for its stability, security,
and scalability. Unix systems are widely used in servers, workstations,
and mainframes.
o Linux: An open-source, Unix-like OS based on the Linux kernel,
developed by Linus Torvalds in 1991. It’s used on a wide range of
devices from personal computers to servers and embedded systems.
Common Linux distributions include Ubuntu, CentOS, Fedora, and
Debian.
o macOS: A proprietary Unix-based operating system from Apple,
known for its integration with Apple hardware and services.