OS Lab

The document covers various topics related to process management and synchronization in Linux, including child process creation, process scheduling algorithms (FCFS, SJF, Round Robin), and deadlock avoidance using Banker's algorithm. It also includes examples of threading in Python, demonstrating race conditions and process synchronization using locks and semaphores, as well as a producer-consumer problem. Additionally, it discusses capturing live RAM content and analyzing it with Volatility for forensic purposes.

1. Child process creation using Linux basic commands


What is a Child Process?
A child process is a copy of a running program that’s created by another program (called the
parent).
Think of it like this: the parent says, “Hey, I need some help,” and creates a mini version of
itself to do a job.

Basic Linux Commands You Can Try


• ps – Shows current running processes.
• top – Shows all active processes (live view).
• fork() – Not a shell command but the C system call used to create child processes; a minimal Python sketch using the same call is shown below.
• kill PID – Stops a process using its ID.
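
fork() itself is exercised from C, but as a minimal illustrative sketch (assuming a POSIX system with Python 3; not part of the lab's required commands), the same system call can be tried from Python:

import os   # os.fork() requires a POSIX system (Linux/macOS)

pid = os.fork()              # returns 0 in the child, the child's PID in the parent
if pid == 0:
    print(f"Child:  PID={os.getpid()}, parent={os.getppid()}")
    os._exit(0)              # child exits immediately
else:
    print(f"Parent: PID={os.getpid()}, created child {pid}")
    os.waitpid(pid, 0)       # parent waits for the child to finish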

2. Write a program using shell programming to implement an address book with the options given below: a) Create address book. b) View address book. c) Insert a record. d) Delete a record. e) Modify a record. f) Exit.

#!/bin/bash
filename="addressbook.txt"
while true
do
echo ""
echo "===== Address Book Menu ====="
echo "1. Create Address Book"
echo "2. View Address Book"
echo "3. Insert a Record"
echo "4. Delete a Record"
echo "5. Modify a Record"
echo "6. Exit"
echo "=============================="
read -p "Choose an option (1-6): " choice

case $choice in
1)
> $filename
echo "Address book created (cleared if existed)!"
;;
2)
if [ -e $filename ]; then
echo "---- Address Book ----"
cat $filename
else
echo "Address book does not exist. Create it first!"
fi
;;
3)
read -p "Enter Name: " name
read -p "Enter Phone: " phone
read -p "Enter Email: " email
echo "$name | $phone | $email" >> $filename
echo "Record inserted!"
;;
4)
read -p "Enter Name to Delete: " delname
grep -v "^$delname |" $filename > temp.txt && mv temp.txt $filename
echo "Record deleted (if existed)."
;;
5)
read -p "Enter Name to Modify: " modname
grep -v "^$modname |" $filename > temp.txt && mv temp.txt $filename
read -p "Enter New Name: " newname
read -p "Enter New Phone: " newphone
read -p "Enter New Email: " newemail
echo "$newname | $newphone | $newemail" >> $filename
echo "Record modified!"
;;
6)
echo "Exiting. Bye "
break
;;
*)
echo "Invalid option. Try again!"
;;
esac
done
3. Implement process scheduling algorithms.

Scenario: We have 4 processes (all arrive at time 0).

Process   Burst Time (BT)
P1        6
P2        8
P3        7
P4        3

Now let's solve this with FCFS, SJF, and Round Robin.

1. FCFS – First Come First Serve


Processes come in this order: P1, P2, P3, P4
We calculate:
• Waiting Time (WT) = Time process waits in queue
• Turnaround Time (TAT) = WT + BT
Step-by-step:

Process   BT   WT   TAT
P1        6    0    6
P2        8    6    14
P3        7    14   21
P4        3    21   24

Average Waiting Time = (0 + 6 + 14 + 21) / 4 = 10.25
Average Turnaround Time = (6 + 14 + 21 + 24) / 4 = 16.25
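
A short Python sketch (an illustrative addition; it assumes all processes arrive at time 0, as above) that reproduces the FCFS table:

# FCFS: processes run in arrival order; each one waits for all earlier bursts.
processes = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]

time_elapsed = 0
for name, bt in processes:
    wt = time_elapsed            # waiting time = when the CPU becomes free
    tat = wt + bt                # turnaround time = waiting + burst
    print(name, bt, wt, tat)
    time_elapsed += bt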


2. SJF – Shortest Job First
We sort by burst time:

Process   BT
P4        3
P1        6
P3        7
P2        8

Step-by-step:

Process   BT   WT   TAT
P4        3    0    3
P1        6    3    9
P3        7    9    16
P2        8    16   24

Average WT = (0 + 3 + 9 + 16) / 4 = 7.0
Average TAT = (3 + 9 + 16 + 24) / 4 = 13.0

Notice: Waiting time is much lower than FCFS. That's why SJF is better
for batch processing.
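
The same loop reproduces the non-preemptive SJF table once the list is sorted by burst time (again an illustrative sketch with all arrivals at time 0):

# SJF (non-preemptive): sort by burst time, then schedule like FCFS.
processes = sorted([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)],
                   key=lambda p: p[1])

time_elapsed = 0
for name, bt in processes:
    print(name, bt, time_elapsed, time_elapsed + bt)   # name, BT, WT, TAT
    time_elapsed += bt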

3. Round Robin (with Time Quantum = 4)


Process order: P1, P2, P3, P4
Time Quantum = 4
Step-by-step (time moves in chunks of 4):
• P1 runs 4 (2 left) → time = 4
• P2 runs 4 (4 left) → time = 8
• P3 runs 4 (3 left) → time = 12
• P4 runs 3 (done) → time = 15
• P1 runs 2 (done) → time = 17
• P2 runs 4 (done) → time = 21
• P3 runs 3 (done) → time = 24
Now, calculate completion time (CT) → then TAT = CT - Arrival (assume all
arrive at time 0), and WT = TAT - BT
Process   BT   CT   TAT   WT
P1        6    17   17    11
P2        8    21   21    13
P3        7    24   24    17
P4        3    15   15    12

Average WT = (11 + 13 + 17 + 12) / 4 = 13.25
Average TAT = (17 + 21 + 24 + 15) / 4 = 19.25
Round Robin is fair — everyone gets a chance, but waiting time is
higher than SJF.
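
A small simulation sketch (an illustrative addition, all arrivals at time 0) that reproduces the Round Robin trace and table above:

from collections import deque

# Round Robin with time quantum = 4.
quantum = 4
burst = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
remaining = dict(burst)
queue = deque(burst)                 # ready queue in arrival order
time_elapsed = 0
completion = {}

while queue:
    name = queue.popleft()
    run = min(quantum, remaining[name])
    time_elapsed += run
    remaining[name] -= run
    if remaining[name] == 0:
        completion[name] = time_elapsed
    else:
        queue.append(name)           # unfinished process goes to the back

for name in burst:
    tat = completion[name]           # arrival time is 0, so TAT = CT
    wt = tat - burst[name]
    print(name, burst[name], completion[name], tat, wt)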

Final Verdict:

Algorithm     Avg WT   Avg TAT   Best For
FCFS          10.25    16.25     Simple systems
SJF           7.00     13.00     Fast batch jobs
Round Robin   13.25    19.25     Multi-user systems (fair!)


4. Implement Banker's algorithm for deadlock avoidance.

Given:

Process   Allocation (A B C)   Max (A B C)
P0        0 1 0                7 5 3
P1        2 0 0                3 2 2
P2        3 0 2                9 0 2
P3        2 1 1                2 2 2
P4        0 0 2                4 3 3

Available = [3, 3, 2]

Step 1: Calculate Need = Max - Allocation

Process   Need (A B C)
P0        7 4 3
P1        1 2 2
P2        6 0 0
P3        0 1 1
P4        4 3 1

Step 2: Apply Banker's Algorithm (Safety Check)

Available = [3, 3, 2]
We look for a process whose Need ≤ Available.
• P1: Need = [1,2,2] ≤ [3,3,2] → YES!
  ➤ Available += Allocation of P1 = [3+2, 3+0, 2+0] → Available = [5,3,2]
• P3: Need = [0,1,1] ≤ [5,3,2] → YES!
  ➤ Available += Allocation of P3 = [5+2, 3+1, 2+1] → Available = [7,4,3]
• P4: Need = [4,3,1] ≤ [7,4,3] → YES!
  ➤ Available += Allocation of P4 = [7+0, 4+0, 3+2] → Available = [7,4,5]
• P0: Need = [7,4,3] ≤ [7,4,5] → YES!
  ➤ Available += Allocation of P0 = [7+0, 4+1, 5+0] → Available = [7,5,5]
• P2: Need = [6,0,0] ≤ [7,5,5] → YES!
  ➤ Available += Allocation of P2 = [7+3, 5+0, 5+2] → Final Available = [10,5,7]

Safe Sequence:
P1 → P3 → P4 → P0 → P2
This means no deadlock will happen: the system is in a SAFE state.
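
A compact Python sketch (an illustrative addition) of the safety check on the data above:

# Banker's safety algorithm for the example above.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available  = [3, 3, 2]
n, m = len(allocation), len(available)

need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
work = available[:]
finished = [False] * n
sequence = []
safe = True

while len(sequence) < n:
    progressed = False
    for i in range(n):
        if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
            # Process i can finish; it releases its allocation back to the pool.
            work = [work[j] + allocation[i][j] for j in range(m)]
            finished[i] = True
            sequence.append(f"P{i}")
            progressed = True
    if not progressed:
        safe = False
        break

if safe:
    print("Safe sequence:", " -> ".join(sequence))   # P1 -> P3 -> P4 -> P0 -> P2
else:
    print("UNSAFE: no safe sequence exists")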

5. Program to create two threads: one to increment the value of a shared variable and a second to decrement the value of the shared variable. Both threads are executed, so the final value of the shared variable should be the same as its initial value (race condition demonstration).
import threading

shared_variable = 0
iterations = 100000  # large number to increase the chance of a race condition

def increment():
    global shared_variable
    for _ in range(iterations):
        shared_variable += 1

def decrement():
    global shared_variable
    for _ in range(iterations):
        shared_variable -= 1

t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=decrement)
t1.start()
t2.start()
t1.join()
t2.join()

print("Initial value should be: 0")
print(f"Final value is: {shared_variable}")
print(f"Difference: {shared_variable} (this should be 0 without race conditions)")
6. Program to create two threads: one to increment the value of a shared variable and a second to decrement the value of the shared variable. Both threads make use of locks so that only one of the threads is executing in its critical section (process synchronization using mutex locks).

import threading

# Shared variable and lock
shared_variable = 0
iterations = 100000          # large number to demonstrate synchronization
lock = threading.Lock()      # mutex lock

def increment():
    global shared_variable
    for _ in range(iterations):
        lock.acquire()       # acquire lock before critical section
        shared_variable += 1
        lock.release()       # release lock after critical section

def decrement():
    global shared_variable
    for _ in range(iterations):
        lock.acquire()       # acquire lock before critical section
        shared_variable -= 1
        lock.release()       # release lock after critical section

t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=decrement)
t1.start()
t2.start()
t1.join()
t2.join()

print("Initial value: 0")
print(f"Final value: {shared_variable}")
print("Note: With proper synchronization, the final value should always be 0")
7. Program to create two threads: one to increment the value of a shared variable and a second to decrement the value of the shared variable. Both threads make use of a semaphore so that only one of the threads is executing in its critical section (process synchronization using a semaphore).

import threading

shared_variable = 0
iterations = 100000
semaphore = threading.Semaphore(1)

def increment():
    global shared_variable
    for _ in range(iterations):
        semaphore.acquire()
        shared_variable += 1
        semaphore.release()

def decrement():
    global shared_variable
    for _ in range(iterations):
        semaphore.acquire()
        shared_variable -= 1
        semaphore.release()

t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=decrement)
t1.start()
t2.start()
t1.join()
t2.join()

print(f"Initial value: 0")
print(f"Final value: {shared_variable}")

8. Producer-consumer problem demonstration

import threading
import time
import random

BUFFER_SIZE = 5
buffer = []
mutex = threading.Semaphore(1)
empty = threading.Semaphore(BUFFER_SIZE)
full = threading.Semaphore(0)

class Producer(threading.Thread):
    def run(self):
        global buffer
        for i in range(10):
            item = random.randint(1, 100)
            empty.acquire()
            mutex.acquire()
            buffer.append(item)
            print(f"Produced {item}, Buffer: {buffer}")
            mutex.release()
            full.release()
            time.sleep(random.random())

class Consumer(threading.Thread):
    def run(self):
        global buffer
        for i in range(10):
            full.acquire()
            mutex.acquire()
            item = buffer.pop(0)
            print(f"Consumed {item}, Buffer: {buffer}")
            mutex.release()
            empty.release()
            time.sleep(random.random())

producer = Producer()
consumer = Consumer()

producer.start()
consumer.start()

producer.join()
consumer.join()
9. Live RAM Content Capture and Analysis with Volatility

1. Capturing Live RAM

First, let's capture the live RAM content:

On Windows (using DumpIt):

# Download DumpIt from https://www.comae.com/tools/
DumpIt.exe /OUTPUT memory.dmp

On Linux (using LiME):

# Install the LiME kernel module
sudo apt-get install lime-forensics-dkms

# Load the module and dump memory
sudo insmod /lib/modules/$(uname -r)/updates/dkms/lime.ko "path=/tmp/memory.lime format=lime"

2. Volatility Analysis
After capturing memory, analyze with Volatility:
Basic System Information

volatility -f memory.dmp imageinfo
volatility -f memory.dmp --profile=Win10x64_19041 pslist

Process Analysis Commands

# List all processes
volatility -f memory.dmp --profile=Win10x64_19041 pstree

# Examine process memory
volatility -f memory.dmp --profile=Win10x64_19041 memdump -p 1234 -D output/

# Check network connections
volatility -f memory.dmp --profile=Win10x64_19041 netscan

# Check for malicious code injection
volatility -f memory.dmp --profile=Win10x64_19041 malfind -p 1234

# Dump process memory
volatility -f memory.dmp --profile=Win10x64_19041 procdump -p 1234 -D output/

Advanced Analysis

# Check for API hooks
volatility -f memory.dmp --profile=Win10x64_19041 apihooks -p 1234

# Extract command history
volatility -f memory.dmp --profile=Win10x64_19041 cmdscan

# Check registry hives
volatility -f memory.dmp --profile=Win10x64_19041 hivelist
volatility -f memory.dmp --profile=Win10x64_19041 printkey -K "Software\Microsoft\Windows\CurrentVersion\Run"

3. Automated Analysis Script

#!/usr/bin/env python3
import subprocess

def analyze_memory(memory_file, profile):
    commands = {
        "Process Tree": f"volatility -f {memory_file} --profile={profile} pstree",
        "Network Connections": f"volatility -f {memory_file} --profile={profile} netscan",
        "Suspicious Processes": f"volatility -f {memory_file} --profile={profile} psxview",
        "DLL List": f"volatility -f {memory_file} --profile={profile} dlllist",
    }

    for name, cmd in commands.items():
        print(f"\n=== {name} ===")
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        print(result.stdout)

if __name__ == "__main__":
    memory_file = input("Enter memory dump path: ")
    profile = input("Enter Volatility profile: ")
    analyze_memory(memory_file, profile)

Key Features:

1. Memory Acquisition: Tools for both Windows and Linux systems
2. Comprehensive Analysis: Process examination, network activity, code injection detection
3. Automation: Python script for streamlined analysis
4. Forensic Techniques: Registry analysis, command history recovery, API hook detection
