Operating System Module Answers
The Linux file system structure organizes data hierarchically, making files and resources easier to manage. Some
key advantages for Tech Solutions' IT team:
1. Separation of System and User Data:
o System files are typically under / (root), /bin, /sbin, /lib, while user-specific data is in /home.
o This helps prevent accidental overwriting of important system files by regular users.
2. Security:
o Linux uses file permissions (read, write, execute) and user/group ownership to ensure only authorized personnel can
access sensitive files.
3. Consistency:
o The standard structure ensures that admins can easily locate files and directories across different servers.
o cd /home/username/
mkdir projects
cd projects
touch project.txt
For example, to allow the owner to read and write the file while others can only read it:
chmod 644 project.txt
To safely delete the directory named "completed project" and all its contents, use the following command:
rm -ri "completed project"
Explanation: -r removes the directory and its contents recursively, -i prompts for confirmation before each deletion, and the quotes are required because the name contains a space.
1. Log Files:
o Common locations: /var/log/syslog (or /var/log/messages on some distributions), /var/log/kern.log, and the systemd journal (viewable with journalctl).
o These logs may contain error messages or other details about why the crash occurred.
2. Application-specific Logs:
o Many applications create their own log files (e.g., in /var/log/ or within their own directories). Check these for any
unusual activity or error codes.
3. Resource Usage:
o Check top, htop, or free to ensure there is no excessive CPU, memory, or disk usage that might have caused the crash.
For a new software project, the directory structure can be designed as follows (the subdirectory names below are an illustrative layout):
/projects
└── /project_name
    ├── /src       (source code)
    ├── /docs      (documentation)
    ├── /tests     (test files)
    └── /backups   (local backups)
This bash script will back up the critical directories (/etc, /home, /var/www) to a secure location (e.g., /backups):
#!/bin/bash
BACKUP_DIR="/backups"
DATE=$(date +%F)
BACKUP_FILE="backup_$DATE.tar.gz"
mkdir -p "$BACKUP_DIR"
tar -czf "$BACKUP_DIR/$BACKUP_FILE" /etc /home /var/www
if [ $? -eq 0 ]; then
    echo "Backup successful: $BACKUP_DIR/$BACKUP_FILE"
else
    echo "Backup failed!" >&2
fi
Explanation:
This script creates a backup of /etc, /home, and /var/www in a compressed .tar.gz file.
It checks for errors in the backup process by verifying the exit status ($?) of the tar command.
To list all directories under a given path:
find /path/to/search -type d
o -type d: Finds only directories.
ls -l /path/to/file_or_directory
The long listing shows the permissions, owner, and group of each entry:
o username: Owner.
o groupname: Group.
df -h
-h: Displays sizes in a human-readable format (e.g., GB, MB).
Output includes:
o Filesystem.
o Total size.
o Used space.
o Available space.
o Mounted directory.
No Confirmation: It deletes all files and directories recursively without asking for confirmation.
Irrecoverable Deletions: Once executed, data cannot be recovered without backups.
Risk of Typos: A small typo can lead to deleting unintended directories (e.g., / or /home).
rm -ri /path/to/directory
o Use --preserve-root (the default in modern GNU coreutils), which makes rm refuse to operate recursively on the root directory:
rm -rf --preserve-root /    # refuses to run
Here is a script to find and delete empty files or temporary files (e.g., files ending with .tmp or .bak) safely:
#!/bin/bash
TARGET_DIR="/path/to/project"
LOG_FILE="/var/log/cleanup.log"
DATE=$(date +%F)
echo "[$DATE] Cleanup started" >> "$LOG_FILE"
echo "Finding and deleting temporary files (.tmp, .bak)..." >> "$LOG_FILE"
find "$TARGET_DIR" -type f \( -name "*.tmp" -o -name "*.bak" \) -print -delete >> "$LOG_FILE"
echo "Finding and deleting empty files..." >> "$LOG_FILE"
find "$TARGET_DIR" -type f -empty -print -delete >> "$LOG_FILE"
# Log completion
echo "[$DATE] Cleanup completed" >> "$LOG_FILE"
How It Works
1. Defines the target directory: Replace /path/to/project with the directory to clean.
2. Logs deletions: Creates a log file (/var/log/cleanup.log) to record what was deleted.
3. Deletes empty files: Finds and deletes files with zero size (-empty).
chmod +x cleanup.sh
./cleanup.sh
This ensures unnecessary files are removed systematically and with a record for auditing purposes.
1. Processes valid data: Prevents errors and undefined behavior caused by invalid inputs.
2. Improves security: Rejects malformed or malicious input before it can reach program logic.
3. Maintains functionality: Ensures the program operates as intended with meaningful results.
4. Guides users: Helps users provide correct inputs through clear feedback.
If the system accepts only positive numbers and a negative number is entered, it should:
o Reject the input and display a clear error message.
o Prompt the user to enter a valid positive number again.
#!/bin/bash
divide() {
    local num1=$1
    local num2=$2
    if [ "$num2" -eq 0 ]; then
        echo "Error: Division by zero is not allowed." >&2
        return 1
    fi
    # Perform division
    echo "Result: $((num1 / num2))"
}
# Example Usage
divide 10 2
divide 10 0
#!/bin/bash
evaluate_expression() {
    local expression="$1"
    local result
    result=$(echo "$expression" | bc 2>/dev/null)
    if [ -z "$result" ]; then
        echo "Error: Invalid arithmetic expression." >&2
        return 1
    fi
    echo "Result: $result"
}
# Example Usage
read -p "Enter an arithmetic expression: " expression
evaluate_expression "$expression"
#!/bin/bash
get_numeric_input() {
    while true; do
        read -p "Enter a number: " input
        if [[ "$input" =~ ^[0-9]+$ ]]; then
            echo "You entered: $input"
            break
        else
            echo "Invalid input. Please enter a numeric value."
        fi
    done
}
# Example Usage
get_numeric_input
#!/bin/bash
get_numeric_input_with_limit() {
    local max_attempts=5
    local attempts=0
    while [ "$attempts" -lt "$max_attempts" ]; do
        read -p "Enter a number: " input
        if [[ "$input" =~ ^[0-9]+$ ]]; then
            echo "You entered: $input"
            return 0
        else
            echo "Invalid input. Please try again."
        fi
        (( attempts++ ))
    done
    echo "Maximum attempts reached. Exiting." >&2
    return 1
}
# Example Usage
get_numeric_input_with_limit
4. Processor 4: Updates and maintains the traffic database with real-time information.
Role of Processor 4
Data Consistency: Maintains an up-to-date traffic database that is used by other processors.
Real-Time Updates: Provides accurate traffic conditions for analysis and signal adjustments.
Historical Analysis: Logs traffic data for future optimization and decision-making.
1. Preventing Data Corruption: Shared resources like traffic maps and sensor data can be modified by multiple processors
simultaneously. Locks or semaphores prevent simultaneous write operations, ensuring consistency.
o Example: Processor 1 analyzing congestion and Processor 4 updating the database should not access the same
data concurrently.
2. Ensuring Resource Availability: Semaphores can control access to limited resources (e.g., signal controllers) so tasks are
queued rather than causing conflicts.
3. Avoiding Race Conditions: Locks ensure that tasks requiring the same resource are executed sequentially, avoiding
unpredictable outcomes.
4. Maintaining System Integrity: Prioritized tasks like clearing ambulance routes require guaranteed access to critical
resources, which semaphores facilitate by granting priority access.
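As a minimal illustration of point 1, the sketch below shows how a lock prevents lost updates when two workers modify shared data concurrently. The names (traffic_count, record_vehicles) are illustrative only, not from the original system.

```python
import threading

traffic_count = 0            # shared resource (e.g., vehicles observed)
lock = threading.Lock()

def record_vehicles(n):
    """Simulates one processor updating the shared traffic counter."""
    global traffic_count
    for _ in range(n):
        with lock:           # only one thread may update at a time
            traffic_count += 1

threads = [threading.Thread(target=record_vehicles, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(traffic_count)  # 200000: no updates are lost under the lock
```

Without the lock, both threads can read the same old value and write back the same new one, silently dropping increments.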
Here’s a pseudo-code solution to reallocate tasks dynamically when one processor becomes overloaded during peak traffic
hours:
Dynamic Load Balancing with Task Reallocation
tasks = {
    task: assigned_processor for each monitored task
}

function check_processor_load(processor):
    return processor.current_load()    # percentage, 0-100

function find_idle_processor():
    min_load = 100
    idle_processor = null
    for each processor in processors:
        load = check_processor_load(processor)
        if load < min_load:
            min_load = load
            idle_processor = processor
    return idle_processor

function reallocate_tasks():
    for each (task, assigned_processor) in tasks:
        load = check_processor_load(assigned_processor)
        # If the processor is overloaded (e.g., > 80% load), reassign the task
        if load > 80:
            idle_processor = find_idle_processor()
            if idle_processor != assigned_processor:
                assigned_processor.remove_task(task)
                idle_processor.assign_task(task)
                tasks[task] = idle_processor
        else:
            continue    # processor is healthy; leave the task in place

# Main monitoring loop
while system_is_running:
    reallocate_tasks()
    sleep(monitor_interval)
5. A.
Job Details
Jobs: P, Q, R, S, T, U
Logic:
1. Execute each job in arrival order for a time quantum of 2 minutes.
2. If a process's burst time exceeds 2 minutes, add it to the end of the queue.
Sequence:
P(2), Q(2), R(2), S(2), T(2), U(2), P(2), Q(2), R(2), T(2), U(2), P(2), R(1), U(2), P(2), U(2)
Completion Times:
S = 8, Q = 16, T = 20, R = 25, P = 29, U = 31 (minutes)
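The round-robin sequence above can be checked mechanically. Below is a small Python simulator; the burst times (P=8, Q=4, R=5, S=2, T=4, U=8 minutes) are inferred from the execution sequence shown, since the original burst-time table is not reproduced here.

```python
from collections import deque

def round_robin(bursts, quantum=2):
    """Return the completion time of each job under round-robin."""
    queue = deque(bursts.items())
    t = 0
    completion = {}
    while queue:
        job, remaining = queue.popleft()
        run = min(quantum, remaining)
        t += run
        if remaining > run:
            queue.append((job, remaining - run))  # back of the queue
        else:
            completion[job] = t
    return completion

print(round_robin({"P": 8, "Q": 4, "R": 5, "S": 2, "T": 4, "U": 8}))
# S completes at 8, Q at 16, T at 20, R at 25, P at 29, U at 31
```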
Logic:
Sequence:
Logic:
Sequence:
Key Features:
o High-priority interactive tasks are scheduled first and run to completion.
o Low-priority tasks are executed in round-robin with adjustable time slices based on system load.
Algorithm Logic
1. Queue Assignment:
o Assign tasks to Queue 1 (High Priority) or Queue 2 (Low Priority) based on priority.
2. Queue Scheduling:
o Queue 1 (high priority) is always served first; Queue 2 (low priority) runs round-robin with the current time quantum.
3. Dynamic Adjustments:
o Increase or decrease time quantum for Queue 2 based on batch task load.
# Initialization
Queue1 = []        # high-priority interactive tasks (FCFS)
Queue2 = []        # low-priority batch tasks (round-robin)
TimeQuantum = 2

# Task Assignment
for each task in incoming_tasks:
    if task.priority == HIGH:
        Queue1.append(task)
    else:
        Queue2.append(task)

# Scheduling Loop
while Queue1 or Queue2:
    # Serve the high-priority queue first, to completion
    while Queue1:
        task = Queue1.pop(0)
        execute_task(task)
    # Then give each low-priority task one time slice
    for each task in Queue2:
        execute_task(task, TimeQuantum)
        if task.remaining_time <= TimeQuantum:
            Queue2.remove(task)
        else:
            task.remaining_time -= TimeQuantum
            move_to_end(Queue2, task)
    # Dynamic Adjustment
    if system_load_high():
        TimeQuantum = increase_time_quantum()
    else:
        TimeQuantum = decrease_time_quantum()
In the given scenario where a university server runs multiple services (web server, database server, file-sharing service),
multiprocessing is crucial because:
1. Parallel Execution of Independent Services: Each service can be run as a separate process, which can independently execute
on separate CPU cores. This improves server performance, as tasks like handling web requests, managing database queries, and
processing file-sharing activities can all happen concurrently without waiting for each other.
2. Fault Isolation: If one service (e.g., the database server) crashes, it won’t affect the other services (e.g., web server or file-
sharing service). Each service runs in its own process, ensuring greater stability.
3. Better Resource Utilization: Multiprocessing allows each core of the CPU to handle separate tasks, which improves overall
system throughput and ensures that the server can scale well under heavy load.
Web Servers: Apache, Nginx (runs different processes to handle multiple web requests simultaneously).
Database Servers: MySQL, PostgreSQL (handle multiple queries by running different processes for different client requests).
Operating Systems: Unix, Linux (use multiprocessing to run multiple applications and services simultaneously).
Parallel Computing Systems: Supercomputers, cloud-based systems (e.g., AWS EC2, Google Cloud Compute Engine) that use
multiprocessing to perform complex computations in parallel.
import multiprocessing
import time

def run_service(name, duration):
    print(f"{name} running")
    time.sleep(duration)

if __name__ == "__main__":
    processes = []
    tasks = [("Web Server", 2), ("Database Server", 3), ("File Sharing", 1)]
    for name, duration in tasks:
        p = multiprocessing.Process(target=run_service, args=(name, duration))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
Race Conditions occur when multiple threads or processes access shared resources simultaneously and the final outcome
depends on the order in which they access the resource. This can lead to unpredictable results or corruption of data.
o Example: Two processes (e.g., Web Server and Database Server) updating the same user record in a database
at the same time without synchronization can cause data inconsistency.
1. Locks:
o A lock is a synchronization primitive used to ensure that only one process or thread can access a critical section
(shared resource) at a time.
2. Semaphores:
A semaphore is a signaling mechanism used to control access to a shared resource by multiple processes or threads. It
allows a set number of processes to access the resource concurrently.
3. Mutexes:
A mutex ensures that only one thread or process can access the critical section at a time. It is similar to a lock, but a mutex
can only be released by the thread that acquired it.
4. Condition Variables:
Condition variables are used for synchronization between threads or processes that need to wait for a specific condition to
be met before proceeding.
5. Atomic Operations:
o Atomic operations are indivisible operations that cannot be interrupted. Using atomic operations prevents data
corruption in shared resources.
o Some languages provide built-in atomic types for counters and flags (e.g., AtomicInteger in Java, std::atomic in C++);
in Python, simple shared counters are normally protected with a threading.Lock instead.
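As a short sketch of the semaphore mechanism described in point 2, the following limits a hypothetical shared resource to at most two concurrent holders; the peak counter only exists to verify the limit is respected.

```python
import threading
import time

controller = threading.Semaphore(2)   # at most 2 concurrent holders
active = 0
peak = 0
state_lock = threading.Lock()

def use_controller():
    global active, peak
    with controller:                  # blocks if 2 holders already inside
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)              # simulate work with the resource
        with state_lock:
            active -= 1

threads = [threading.Thread(target=use_controller) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak concurrent holders:", peak)  # never exceeds 2
```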
What is Multi-Threading?
Multi-threading is a programming technique where a process is divided into multiple smaller tasks called threads, which can
run concurrently. Each thread represents a separate flow of control, allowing the program to perform multiple operations at
the same time. In the context of the bank system, multi-threading is useful for handling simultaneous deposit, withdrawal, and
balance check requests without blocking other operations.
pthread_create():
o This function is used to create a new thread. It takes several parameters, including a thread identifier, thread
attributes, the function that the thread will execute, and its arguments.
o Syntax:
o int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine) (void *), void *arg);
pthread_join():
This function is used to wait for a thread to finish execution. It ensures that the main thread or calling thread will block until the
specified thread terminates.
Syntax:
int pthread_join(pthread_t thread, void **retval);
Here’s an implementation of a simple banking system using multiple threads for deposit, withdrawal, and balance check operations:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>
#define MAX_TRANSACTIONS 9
double balance = 1000.0;
pthread_mutex_t lock;
void* deposit(void* arg) {
    double deposit_amount = *(double*)arg;
    pthread_mutex_lock(&lock);
    balance += deposit_amount;
    printf("Deposited $%.2f. New Balance: $%.2f\n", deposit_amount, balance);
    pthread_mutex_unlock(&lock);
    return NULL;
}
void* withdraw(void* arg) {
    double withdraw_amount = *(double*)arg;
    pthread_mutex_lock(&lock);
    if (balance >= withdraw_amount) {
        balance -= withdraw_amount;
        printf("Withdrew $%.2f. New Balance: $%.2f\n", withdraw_amount, balance);
    } else {
        printf("Insufficient funds for withdrawal of $%.2f. Current Balance: $%.2f\n", withdraw_amount, balance);
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}
void* check_balance(void* arg) {
    pthread_mutex_lock(&lock);
    printf("Current Balance: $%.2f\n", balance);
    pthread_mutex_unlock(&lock);
    return NULL;
}
int main() {
    pthread_t threads[MAX_TRANSACTIONS];
    double amounts[MAX_TRANSACTIONS];
    int transaction_count = 0;
    pthread_mutex_init(&lock, NULL);
    for (int i = 0; i < MAX_TRANSACTIONS; i++) {
        amounts[i] = 50.0 * (i + 1);
        if (transaction_count % 3 == 0) {
            pthread_create(&threads[i], NULL, deposit, &amounts[i]);
        } else if (transaction_count % 3 == 1) {
            pthread_create(&threads[i], NULL, withdraw, &amounts[i]);
        } else {
            pthread_create(&threads[i], NULL, check_balance, NULL);
        }
        transaction_count++;
    }
    for (int i = 0; i < MAX_TRANSACTIONS; i++)
        pthread_join(threads[i], NULL);
    pthread_mutex_destroy(&lock);
    return 0;
}
C. Code for Banking Simulation to Detect Transaction Limit and Safely Terminate Threads
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>
#define MAX_TRANSACTIONS 4      /* worker threads */
#define TRANSACTION_LIMIT 20    /* stop after this many transactions */
// Shared data
double balance = 1000.0;
pthread_mutex_t lock;
int transaction_count = 0;
volatile int running = 1;
void* deposit(void* arg) {
    pthread_mutex_lock(&lock);
    balance += 100.0;
    pthread_mutex_unlock(&lock);
    return NULL;
}
void* withdraw(void* arg) {
    pthread_mutex_lock(&lock);
    if (balance >= 50.0) {
        balance -= 50.0;
    } else {
        printf("Insufficient funds for withdrawal of $%.2f. Current Balance: $%.2f\n", 50.0, balance);
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}
void* check_balance(void* arg) {
    pthread_mutex_lock(&lock);
    printf("Current Balance: $%.2f\n", balance);
    pthread_mutex_unlock(&lock);
    return NULL;
}
// Workers run transactions until the manager clears `running`
void* worker(void* arg) {
    while (running) {
        pthread_mutex_lock(&lock);
        int op = transaction_count++ % 3;
        pthread_mutex_unlock(&lock);
        if (op == 0) deposit(NULL);
        else if (op == 1) withdraw(NULL);
        else check_balance(NULL);
        usleep(1000);
    }
    return NULL;
}
// Manager detects the transaction limit and triggers a safe shutdown
void* manager(void* arg) {
    while (running) {
        pthread_mutex_lock(&lock);
        if (transaction_count >= TRANSACTION_LIMIT) {
            printf("Transaction limit reached. Terminating safely.\n");
            running = 0;
        }
        pthread_mutex_unlock(&lock);
        usleep(500);
    }
    return NULL;
}
int main() {
    pthread_t threads[MAX_TRANSACTIONS];
    pthread_t manager_thread;
    pthread_mutex_init(&lock, NULL);
    pthread_create(&manager_thread, NULL, manager, NULL);
    for (int i = 0; i < MAX_TRANSACTIONS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    pthread_join(manager_thread, NULL);
    for (int i = 0; i < MAX_TRANSACTIONS; i++)
        pthread_join(threads[i], NULL);
    pthread_mutex_destroy(&lock);
    return 0;
}
To effectively measure and manage process resource consumption, the IT team can focus on the following key metrics:
1. CPU Usage:
o This metric measures the percentage of CPU time consumed by a process. It can help identify processes that
are CPU-intensive and could potentially affect the performance of other processes.
2. Memory Usage:
o This includes both the total memory used by the process and the breakdown of memory consumption (e.g.,
shared memory, heap memory, stack memory). Memory leaks or excessive memory usage can force swapping,
which degrades system performance.
3. I/O Operations:
o This measures the amount of input and output the process performs (e.g., disk reads/writes, network activity).
Excessive I/O can lead to bottlenecks, especially in disk or network-bound applications.
4. Process Priority:
o This refers to the scheduling priority of a process. Lower-priority processes may delay high-priority ones,
especially when resources are constrained.
5. Process State:
o Monitoring the state of a process (e.g., running, sleeping, waiting for I/O) can help identify when processes are
idle or blocked, preventing the system from fully utilizing available resources.
6. Process Lifetime:
o This refers to the duration a process has been running. Long-running processes could lead to resource
contention and cause inefficiency in resource allocation.
7. Disk Usage:
o Tracking the amount of disk space used by a process or its data files can prevent disk exhaustion,
which leads to performance degradation.
8. Context Switches:
o The number of times the CPU switches from one process to another. A high number of context switches can be
indicative of overhead due to frequent task switching.
9. Throughput:
o This measures the number of tasks or requests a process can complete in a given time period. For servers
handling requests, higher throughput is a key performance metric.
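Several of the metrics above (CPU time, peak memory, context switches) can be sampled for the current process from Python's standard library. This is a sketch; the snapshot keys are made-up names, and ru_maxrss units vary by platform (kilobytes on Linux, bytes on macOS).

```python
import os
import resource

def snapshot():
    """Collect a few per-process metrics for the current process."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "pid": os.getpid(),
        "user_cpu_s": usage.ru_utime,            # CPU time in user mode
        "system_cpu_s": usage.ru_stime,          # CPU time in kernel mode
        "max_rss_kb": usage.ru_maxrss,           # peak resident set size
        "ctx_switches": usage.ru_nvcsw + usage.ru_nivcsw,
    }

before = snapshot()
_ = sum(i * i for i in range(200_000))           # burn a little CPU
after = snapshot()
print(after)
```

For system-wide views, tools like top, pidstat, and iostat expose the same counters without writing code.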
1. Database Server:
o Priority Justification: The database server often handles critical business logic and is directly tied to
application performance. For example, if the database server is slowed down, web applications that rely on
database queries will also be delayed, leading to slow page loads and poor user experience.
o Prioritization Approach: Assign higher priority to the database server to ensure quick access to database
resources. The database server may need to have a real-time or high-priority scheduling class to prevent delays
in database transactions and maintain data consistency.
2. Web Server:
o Priority Justification: While the web server is important, its performance depends heavily on the database
server. Requests that are not time-sensitive (such as static content requests) can be handled at a lower priority
to allow the database server to focus on time-sensitive operations (e.g., complex queries, transactions).
o Prioritization Approach: The web server should be assigned a lower priority than the database server, focusing
on handling user requests that can tolerate some delay (e.g., caching static content, queuing requests).
Unbalanced Load: If one process consumes too many resources (e.g., CPU or memory), it can starve other processes,
leading to inefficiencies.
Inefficient Scheduling Algorithms: Without proper prioritization, low-priority tasks may block high-priority tasks,
leading to delayed responses.
Excessive Context Switching: Frequent switching between processes can result in overhead, reducing overall system
throughput.
Memory Leaks: If processes don't properly release memory, they can consume all available memory, causing swapping
and system slowdown.
High Disk I/O: Processes that continuously read and write to disk without caching or optimizations can degrade overall
system performance.
To improve process scheduling, we need to prioritize real-time tasks while ensuring fair resource allocation for non-critical tasks.
1. Priority Scheduling for Real-Time Tasks:
o Real-time processes should be assigned the highest priority, ensuring they are given CPU time before non-critical processes. These could include tasks like transaction processing or critical alerts.
2. Round-Robin for Non-Real-Time Tasks:
o For non-real-time tasks, we use a round-robin scheduling algorithm. This ensures that each process gets a fair share of CPU time, preventing any one process from hogging the CPU.
3. Dynamic Priority Adjustment:
o When the system load increases, the algorithm adjusts priorities dynamically to allocate more CPU time to high-priority processes and less to lower-priority ones.
Algorithm Steps:
1. Step 1: Identify the real-time processes that must be given priority over others (e.g., database transactions, emergency
responses).
2. Step 2: Assign the highest priority to real-time processes, ensuring they execute first.
3. Step 3: For non-real-time tasks (e.g., web server requests), use a round-robin scheduler to give each process a fixed time
slice.
4. Step 4: If a process exceeds its time slice, it’s placed at the end of the queue, and the next process is scheduled.
5. Step 5: Monitor system load and adjust priority dynamically. If the system becomes overloaded, lower the priority of non-
essential tasks to ensure critical tasks are not delayed.
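The steps above can be sketched in a few lines of Python. Task names and burst times here are illustrative; the function returns the order and length of each CPU slice.

```python
from collections import deque

def schedule(realtime, normal, quantum=2):
    """Real-time tasks run first to completion; the rest are round-robin."""
    trace = []
    for name, burst in realtime:          # Steps 1-2: highest priority first
        trace.append((name, burst))
    queue = deque(normal)                 # Step 3: round-robin queue
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        trace.append((name, run))
        if remaining > run:               # Step 4: back of the queue
            queue.append((name, remaining - run))
    return trace

print(schedule([("db_txn", 3)], [("web_req", 3), ("report", 2)]))
# [('db_txn', 3), ('web_req', 2), ('report', 2), ('web_req', 1)]
```

Step 5 (load-based priority adjustment) would wrap this in a monitoring loop that reorders the queues as load changes.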
Here’s an implementation where the parent process creates a child process using the fork() system call. The child process computes
the Fibonacci sequence and prints it.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
void generate_fibonacci(int n) {
    int a = 0, b = 1, c;
    for (int i = 0; i < n; i++) {
        if (i <= 1) {
            printf("%d ", i);
        } else {
            c = a + b;
            a = b;
            b = c;
            printf("%d ", c);
        }
    }
    printf("\n");
}
int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "Usage: %s <n>\n", argv[0]);
        exit(1);
    }
    int n = atoi(argv[1]);
    pid_t pid = fork();
    if (pid == -1) {
        // Fork failed
        perror("Fork failed");
        exit(1);
    }
    if (pid == 0) {
        // Child computes and prints the sequence
        generate_fibonacci(n);
    } else {
        // Parent waits for the child to finish
        wait(NULL);
    }
    return 0;
}
B. Why Do the Parent and Child Processes Have Different Copies of the value Variable After the fork() Call?
After a fork() call, the parent and child processes have separate memory spaces. The fork() system call creates a duplicate of the
parent's memory space for the child. Both processes will have their own copies of all variables, including value, and any changes
made to a variable in one process will not affect the other process.
Parent Process: After the fork(), the parent can continue its execution. It has its own copy of the memory, including the
variables.
Child Process: The child gets its own copy of the memory, which includes a duplicate of all the parent's variables.
Changes made in the child process (e.g., modifying value) do not affect the parent process’s value, as they reside in
different memory spaces.
This is a result of copy-on-write semantics: both processes initially share the same physical memory pages, but as soon as either
process modifies the memory, a copy of that page is created for that process.
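On Linux, this separation can be demonstrated directly from Python as well. The sketch below uses os.fork; the pipe exists only so the child can report its modified value back to the parent (variable names are illustrative).

```python
import os

value = 5
read_fd, write_fd = os.pipe()

pid = os.fork()
if pid == 0:
    # Child: modifies ITS OWN copy of `value`
    value += 10
    os.write(write_fd, str(value).encode())
    os._exit(0)
else:
    os.waitpid(pid, 0)
    child_value = int(os.read(read_fd, 32))
    # The parent's copy is untouched by the child's change
    print("parent:", value, "child:", child_value)  # parent: 5 child: 15
```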
To allow both the parent and child processes to share the Fibonacci sequence, we can use shared memory. Here is a modified version
of the program where the Fibonacci sequence is written to a shared memory segment.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
#include <sys/shm.h>
#include <sys/ipc.h>
void generate_fibonacci(int n, int *shm_ptr) {
    int a = 0, b = 1, c;
    for (int i = 0; i < n; i++) {
        if (i <= 1) {
            shm_ptr[i] = i;
        } else {
            c = a + b;
            a = b;
            b = c;
            shm_ptr[i] = c;
        }
    }
}
int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "Usage: %s <n>\n", argv[0]);
        exit(1);
    }
    int n = atoi(argv[1]);
    // Create a shared memory segment large enough for n integers
    int shmid = shmget(IPC_PRIVATE, n * sizeof(int), IPC_CREAT | 0666);
    if (shmid == -1) {
        perror("shmget failed");
        exit(1);
    }
    int *shm_ptr = (int *)shmat(shmid, NULL, 0);
    if (shm_ptr == (void *)-1) {
        perror("shmat failed");
        exit(1);
    }
    pid_t pid = fork(); // Create child process
    if (pid == -1) {
        // Fork failed
        perror("Fork failed");
        exit(1);
    }
    if (pid == 0) {
        // Child writes the sequence into shared memory
        generate_fibonacci(n, shm_ptr);
        exit(0);
    } else {
        // Parent process waits for child and then prints Fibonacci sequence
        wait(NULL);
        for (int i = 0; i < n; i++)
            printf("%d ", shm_ptr[i]);
        printf("\n");
        shmdt(shm_ptr);
        shmctl(shmid, IPC_RMID, NULL);
    }
    return 0;
}
Multi-threading is generally more efficient than fork() for tasks like generating the Fibonacci sequence because threads
share memory, leading to lower overhead and faster communication.
fork() involves creating a completely separate process with its own memory space, which is more expensive in terms of
both time and system resources. In contrast, threads are lighter and can share the same memory, making them more
suitable for tasks that involve frequent data exchange, like computing Fibonacci numbers.
However, multi-threading requires careful synchronization to avoid race conditions, which is less of an issue with fork() since
processes are isolated.
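The memory-sharing advantage of threads is easy to see in Python: a thread can append results into a list owned by the main flow of control, with no shared-memory segment or pipe needed. A minimal sketch (function and variable names are illustrative):

```python
import threading

def fib(n, out):
    """Append the first n Fibonacci numbers to `out`."""
    a, b = 0, 1
    for _ in range(n):
        out.append(a)
        a, b = b, a + b

results = []   # shared with the thread directly: no IPC required
t = threading.Thread(target=fib, args=(10, results))
t.start()
t.join()
print(results)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Because only one thread writes to the list here, no lock is needed; with multiple writer threads, a threading.Lock would be required to avoid race conditions, as discussed earlier.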