DSCC File

The document outlines practical exercises in Java focusing on Remote Procedure Call (RPC), Remote Method Invocation (RMI), Lamport’s Logical Clock, and Lamport’s Mutual Exclusion Algorithm. Each practical includes an aim, course outcome, theory, algorithm, and code implementation for distributed systems. The expected outcomes for each practical have been successfully attained.

Practical – 1

AIM: To write a program in Java that demonstrates the implementation of Remote Procedure Call (RPC).

Course Outcome: CO1

Software Used: Java SDK

Theory:

Remote Procedure Call (RPC) is a communication paradigm that lets a program invoke a procedure on another machine as if it were a local call. The client marshals (serializes) the procedure name and parameters into a message, sends it to the server over the network, and waits for the result. The server unmarshals the request, executes the requested procedure, and returns the serialized result to the client. In this practical, RPC is simulated in Java using sockets and object streams.

Algorithm:
1. Set Up the Java Project Environment:

- Install the Java Development Kit (JDK).

- Set up an Integrated Development Environment (IDE) like Eclipse or IntelliJ.

- Create a new Java project.

- In the src folder, right-click and select New -> Package, and create the following packages:

• rpc

• client

• server

2. Implement the Server-Side Component:

- In the rpc package, create a new Java class and name it RPCInterface.

- In the server package, create a new class named RPCServer.

- Define an interface that specifies the remote procedures. This interface will be implemented by the server and invoked by the client.

- Implement the server class that provides the actual implementation of the remote procedures defined in the interface.

- Use Java sockets to listen for incoming client connections on a specific port.

- Serialize the results and send them back to the client.

3. Implement the Client-Side Component:

- In the client package, create a new class named RPCClient.

- Create a client program that establishes a connection to the server using sockets.

- Implement methods to marshal (serialize) the procedure parameters and send them to the server.

- Receive the response from the server, unmarshal (deserialize) the data, and display the results.

4. Communication and Execution:

-Ensure that both client and server agree on the data format for communication.

-Handle network exceptions and ensure that the connection is properly closed after communication.

5. Testing the RPC:

- Compile and run the server program.

- Compile and run the client program.

- Test various remote procedure calls by invoking methods from the client and observing the server's response.

CODE:

1. rpc -> RPCInterface

package rpc;

import java.io.Serializable;

// Remote interface listing the procedures a client may invoke
public interface RPCInterface extends Serializable {
    // A remote method to add two numbers
    int add(int a, int b);
}
2. server -> RPCServer
package server;

import rpc.RPCInterface;
import java.io.*;
import java.net.*;

// Server class that implements the RPCInterface
public class RPCServer implements RPCInterface {
    private ServerSocket serverSocket;

    // Implement the add method
    @Override
    public int add(int a, int b) {
        return a + b;
    }

    // Method to start the server
    public void start(int port) {
        try {
            serverSocket = new ServerSocket(port);
            System.out.println("Server started on port " + port);
            while (true) {
                new ClientHandler(serverSocket.accept(), this).start();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Inner class to handle client requests
    private static class ClientHandler extends Thread {
        private Socket clientSocket;
        private RPCServer server;

        public ClientHandler(Socket socket, RPCServer server) {
            this.clientSocket = socket;
            this.server = server;
        }

        public void run() {
            try (ObjectInputStream input = new ObjectInputStream(clientSocket.getInputStream());
                 ObjectOutputStream output = new ObjectOutputStream(clientSocket.getOutputStream())) {
                // Read method name and parameters from the client
                String methodName = (String) input.readObject();
                int a = input.readInt();
                int b = input.readInt();
                // Call the add method if requested
                if (methodName.equals("add")) {
                    int result = server.add(a, b);
                    output.writeInt(result); // Send result back to the client
                    output.flush();
                }
            } catch (IOException | ClassNotFoundException e) {
                e.printStackTrace();
            } finally {
                try {
                    clientSocket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    // Main method to start the server
    public static void main(String[] args) {
        RPCServer server = new RPCServer();
        server.start(8080); // Server will listen on port 8080
    }
}

3. client -> RPCClient


package client;

import java.io.*;
import java.net.*;

// Client class to connect to the RPCServer
public class RPCClient {
    private Socket clientSocket;
    private ObjectOutputStream output;
    private ObjectInputStream input;

    // Connect to the server
    public void connect(String ip, int port) throws IOException {
        clientSocket = new Socket(ip, port);
        output = new ObjectOutputStream(clientSocket.getOutputStream());
        input = new ObjectInputStream(clientSocket.getInputStream());
        System.out.println("Connected to server at " + ip + ":" + port);
    }

    public int callAdd(int a, int b) throws IOException, ClassNotFoundException {
        output.writeObject("add"); // Send method name
        output.writeInt(a);        // Send parameters
        output.writeInt(b);
        output.flush();
        // Get result from the server
        return input.readInt();
    }

    // Disconnect from the server
    public void disconnect() throws IOException {
        input.close();
        output.close();
        clientSocket.close();
    }

    // Main method to test the client
    public static void main(String[] args) {
        RPCClient client = new RPCClient();
        try {
            System.out.println("Name : Abhay Sharma");
            System.out.println("Roll no. : 10420802721");
            client.connect("localhost", 8080);   // Connect to server at localhost:8080
            int result = client.callAdd(10, 20); // Call the 'add' method on server
            System.out.println("Result of 10 + 20 = " + result); // Output result
            client.disconnect();                 // Disconnect from the server
        } catch (IOException | ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}
OUTPUTS:

Expected Outcome attained: Yes


Practical – 2

AIM: To write a program in Java that demonstrates the implementation of Remote Method Invocation (RMI).

Course Outcome: CO2

Software Used: Java SDK

Theory:
Remote Method Invocation (RMI) is a Java-specific API that allows an object running in one Java Virtual Machine (JVM) to
invoke methods on an object running in another JVM. RMI simplifies the development of distributed applications by
allowing remote communication between objects with automatic handling of network communication, data serialization,
and object lifecycle management. RMI involves the following key components:

1. Remote Interface: Defines the methods that can be called remotely. It extends java.rmi.Remote.

2. Remote Object: The implementation of the remote interface. It extends java.rmi.server.UnicastRemoteObject.

3. RMI Registry: A simple naming service that allows clients to obtain references to remote objects.

4. Stub and Skeleton: The stub acts as a proxy on the client side, forwarding method invocations to the remote object. The skeleton on the server side receives the invocations and forwards them to the actual remote object.

Algorithm:
1. Set Up the Java RMI Environment:
- Install the JDK and set up an IDE.
- Ensure the system network settings are configured for RMI communication.
2. Define the Remote Interface:
- Right-click on the src folder -> New -> Java Class -> Interface.
- Name the interface RMIServerInterface.
- Create an interface that extends java.rmi.Remote.
- Declare the methods that will be remotely invoked. Each method should throw
java.rmi.RemoteException.
3. Implement the Remote Object:
- Right-click the src folder -> New -> Java Class.
- Name the class RMIServerImpl.
- Right-click the src folder -> New -> Java Class.
- Name the class RMIServer.
- Implement the remote interface in a class that extends java.rmi.server.UnicastRemoteObject.
- Define the business logic inside the methods.
- Provide a constructor that handles the RemoteException.
4. Set Up the RMI Registry:
- Start the RMI registry using the command rmiregistry in the terminal.
- In the server code, bind the remote object to the RMI registry with a unique name using Naming.rebind().
5. Client Implementation:
- Use the Naming.lookup() method to find the remote object in the RMI registry.
- Right-click the src folder -> New -> Java Class.
- Name the class RMIClient.
- Invoke methods on the remote object as if it were a local object.
- Handle RemoteException and other potential exceptions.
6. Testing and Deployment:
- Compile the server and client programs.
- Run the server to register the remote object.
- Run the client to invoke methods on the remote object and display results.
CODE:
RMI CLIENT:
package client;

import rpc.RMIServerInterface;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

public class RMIClient {
    public static void main(String[] args) {
        try {
            // Locate the registry where the remote object is bound
            Registry registry = LocateRegistry.getRegistry("localhost", 1099);
            // Look up the remote object by name
            RMIServerInterface stub = (RMIServerInterface) registry.lookup("HelloServer");
            // Call the remote method and print the result
            String response = stub.sayHello();
            System.out.println("Name: Abhay Sharma\nRoll no.: 10420802721");
            System.out.println("Response from server: " + response);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

RMI SERVER IMPL:


package server;

import rpc.RMIServerInterface;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

public class RMIServerImpl extends UnicastRemoteObject implements RMIServerInterface {

    // Constructor must handle RemoteException
    public RMIServerImpl() throws RemoteException {
        super();
    }

    // Implement the sayHello() method
    @Override
    public String sayHello() throws RemoteException {
        return "Hello from the RMI server!";
    }
}

RMI SERVER:
import server.RMIServerImpl;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

public class RMIServer {
    public static void main(String[] args) {
        try {
            // Create an instance of the remote object
            RMIServerImpl server = new RMIServerImpl();
            // Create an RMI registry on port 1099
            Registry registry = LocateRegistry.createRegistry(1099);
            // Bind the remote object to the RMI registry with a name
            registry.rebind("HelloServer", server);
            System.out.println("RMI Server is running...");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

RMI INTERFACE:
package rpc;

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface RMIServerInterface extends Remote {
    // Method to be invoked remotely
    String sayHello() throws RemoteException;
}

OUTPUTS:
Practical – 3

AIM: To implement Lamport’s Logical Clock in Java for event ordering in distributed systems.

Course Outcome: CO3

Software Used: Java SDK

Theory:

In distributed systems, it is often necessary to order events in a consistent manner across multiple processes,
even in the absence of a global clock. Lamport’s Logical Clock is a simple algorithm that provides a mechanism
for this by assigning a numerical timestamp to each event. The logical clock algorithm works as follows:

1. Initialization: Each process maintains a logical clock (counter) initialized to zero.
2. Event Occurrence: Before executing an event (including sending messages), the process increments its clock.
3. Message Sending: When a process sends a message, it includes its current logical clock value.
4. Message Receiving: Upon receiving a message, the process sets its clock to the maximum of its own clock and the received clock, then increments it.

This algorithm ensures that if an event a causally affects event b, then the logical clock value of a is less than that of b.
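The update rules above can be applied to a short two-process trace on concrete numbers. The sketch below is illustrative only (the class and method names are not part of the practical's code):

```java
// Minimal sketch of Lamport's two clock-update rules on a concrete trace.
public class ClockRuleDemo {
    // Rules 2-3: a local or send event increments the clock by one.
    static int tick(int clock) {
        return clock + 1;
    }

    // Rule 4: a receive sets the clock to max(own clock, received timestamp) + 1.
    static int receive(int clock, int receivedTs) {
        return Math.max(clock, receivedTs) + 1;
    }

    public static void main(String[] args) {
        int p1 = 0, p2 = 0;
        p1 = tick(p1);        // P1: local event -> clock 1
        p1 = tick(p1);        // P1: sends message m carrying timestamp 2
        p2 = receive(p2, p1); // P2: receives m  -> max(0, 2) + 1 = 3
        System.out.println("P1=" + p1 + " P2=" + p2); // prints P1=2 P2=3
    }
}
```

Note that P2's clock jumps straight to 3 even though it has executed only one event, which is exactly what keeps the send timestamp (2) strictly below the receive timestamp (3).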
ALGORITHM:

1. Setup Java Environment:


- Install JDK and set up an IDE.
- Create a new Java project.
2. Define Classes for Process and Event:
- Create a class Process with an integer field for the logical clock.
- Define a class Message with fields for the sender, receiver, and logical clock value.
3. Simulate Events:
- Implement a method sendEvent() to simulate sending a message from one process to another. The method
increments the sender's clock, packages the message with the clock value, and sends it.
- Implement a method receiveEvent() to simulate receiving a message. The method updates the receiver's
clock according to the logical clock algorithm.
4. Event Handling:
- Implement logic to handle local events and message events, updating the logical clocks accordingly.
- Maintain a log of events and their logical timestamps.
5. Testing:
- Simulate a series of events and message exchanges between processes.
- Output the logical clock values and verify the correct ordering of events.

CODE:
import java.util.Scanner;

class LamportsClock {
    int logicalClock;

    // Constructor to initialize the clock
    public LamportsClock() {
        logicalClock = 0;
    }

    // Function to send an event (increments the clock)
    public void sendEvent() {
        logicalClock++;
        System.out.println("Send event occurred, updated logical clock: " + logicalClock);
    }

    // Function to receive an event (updates the clock based on received timestamp)
    public void receiveEvent(int receivedTimestamp) {
        logicalClock = Math.max(logicalClock, receivedTimestamp) + 1;
        System.out.println("Receive event occurred, updated logical clock: " + logicalClock);
    }

    // Function to display the logical clock
    public void displayClock() {
        System.out.println("Current logical clock value: " + logicalClock);
    }
}

public class LamportsLogicalClock {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        // Creating two processes with Lamport clocks
        LamportsClock process1 = new LamportsClock();
        LamportsClock process2 = new LamportsClock();
        System.out.println("Name: Abhay Sharma");
        System.out.println("Roll no.: 10420802721");
        boolean running = true;
        while (running) {
            System.out.println("\nChoose an option:");
            System.out.println("1. Process 1 sends event");
            System.out.println("2. Process 2 sends event");
            System.out.println("3. Process 1 receives event");
            System.out.println("4. Process 2 receives event");
            System.out.println("5. Display clocks");
            System.out.println("6. Exit");
            int choice = sc.nextInt();
            switch (choice) {
                case 1:
                    process1.sendEvent();
                    break;
                case 2:
                    process2.sendEvent();
                    break;
                case 3:
                    System.out.print("Enter the timestamp received by Process 1: ");
                    int receivedTimestamp1 = sc.nextInt();
                    process1.receiveEvent(receivedTimestamp1);
                    break;
                case 4:
                    System.out.print("Enter the timestamp received by Process 2: ");
                    int receivedTimestamp2 = sc.nextInt();
                    process2.receiveEvent(receivedTimestamp2);
                    break;
                case 5:
                    System.out.println("Process 1 Clock: ");
                    process1.displayClock();
                    System.out.println("Process 2 Clock: ");
                    process2.displayClock();
                    break;
                case 6:
                    running = false;
                    break;
                default:
                    System.out.println("Invalid choice, please try again.");
            }
        }
        sc.close();
    }
}

OUTPUT:

Expected Outcome attained: Yes


Practical – 4

AIM: To implement a mutual exclusion service using Lamport’s Mutual Exclusion Algorithm in a distributed
system.

Course Outcome: CO2

Software Used: Java SDK

Theory:

Mutual exclusion in distributed systems ensures that multiple processes do not enter a critical section
simultaneously, which is crucial for maintaining data consistency and integrity. Lamport’s Mutual Exclusion
Algorithm is a distributed solution that uses logical clocks to manage access to the critical section. The
algorithm involves the following steps:

1. Requesting the Critical Section: A process sends a request message to all other processes, including its
current logical clock value.

2. Receiving Requests: Upon receiving a request, a process replies immediately if it is not in the critical section
and not waiting for the critical section with a higher priority. Otherwise, it defers the reply.

3. Entering the Critical Section: A process enters the critical section when it has received replies from all other
processes.

4. Releasing the Critical Section: After exiting the critical section, the process sends release messages to all
processes that it had deferred replies to.
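The "higher priority" test in step 2 is a total order over requests: the request with the smaller timestamp wins, and process IDs break ties. A small sketch of that comparison (the class and method names here are illustrative, not part of the practical's code):

```java
// Illustrative sketch of the (timestamp, pid) total order used to rank
// competing critical-section requests.
public class RequestOrder {
    // Returns true if request 1 has priority over request 2:
    // smaller timestamp wins; on equal timestamps, the smaller pid wins.
    static boolean hasPriority(int ts1, int pid1, int ts2, int pid2) {
        return ts1 < ts2 || (ts1 == ts2 && pid1 < pid2);
    }

    public static void main(String[] args) {
        // P2's request at time 3 beats P1's request at time 5
        System.out.println(hasPriority(3, 2, 5, 1)); // true
        // Equal timestamps: the lower pid wins
        System.out.println(hasPriority(4, 1, 4, 3)); // true
    }
}
```

Because every pair of requests is comparable under this order, all processes agree on which request should enter the critical section first, even without a global clock.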

ALGORITHM:
1. Setup Java Environment:
- Install JDK and set up an IDE.
- Create a new Java project.
2. Define Process Class:
- Implement a class Process with fields for logical clock, process ID, and state (e.g., REQUESTING,
EXECUTING, RELEASED).
- Define methods for sending request, reply, and release messages.
3. Request Handling:
- Implement logic for handling incoming request messages. If the process is in a lower priority state, send a
reply immediately; otherwise, defer the reply.
- Maintain a queue of deferred requests.
4. Critical Section Management:
- Implement a method enterCriticalSection() to enter the critical section after receiving all necessary
replies.
- Implement a method exitCriticalSection() to send release messages and handle deferred requests.
5. Testing:
- Simulate multiple processes requesting access to a shared resource.
CODE:

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.Queue;
import java.util.Scanner;

class Process {
    enum State { REQUESTING, EXECUTING, RELEASED }

    int pid, logicalClock, requestTimestamp, replyCount;
    State state;
    Queue<Request> deferredRequests;

    public Process(int pid) {
        this.pid = pid;
        this.logicalClock = 0;
        this.state = State.RELEASED;
        this.replyCount = 0;
        this.deferredRequests = new LinkedList<>();
    }

    // Update logical clock
    public void updateClock(int timestamp) {
        logicalClock = Math.max(logicalClock, timestamp) + 1;
    }

    // Send a request for the critical section
    public void sendRequest(ArrayList<Process> processes) {
        state = State.REQUESTING;
        logicalClock++;
        requestTimestamp = logicalClock; // remember our own request's timestamp
        replyCount = 0;
        System.out.println("Process " + pid + " is requesting CS at time " + logicalClock);
        for (Process p : processes) {
            if (p.pid != this.pid) {
                p.receiveRequest(new Request(this.pid, requestTimestamp), this);
            }
        }
    }

    // Receive a request message from another process
    public void receiveRequest(Request req, Process requester) {
        updateClock(req.timestamp);
        System.out.println("Process " + pid + " received request from Process " + req.pid
                + " with timestamp " + req.timestamp);
        // Defer the reply if we are in the critical section, or if our own
        // pending request has priority (smaller timestamp; pid breaks ties).
        // Note: the comparison uses the saved request timestamp, not the
        // current logical clock, which has already advanced past req.timestamp.
        if (state == State.EXECUTING
                || (state == State.REQUESTING
                    && new Request(pid, requestTimestamp).compareTo(req) < 0)) {
            deferredRequests.add(req);
            System.out.println("Process " + pid + " defers reply to Process " + req.pid);
        } else {
            // Reply immediately
            sendReply(requester);
        }
    }

    // Send a reply to a requesting process; the requester counts the replies
    public void sendReply(Process target) {
        System.out.println("Process " + pid + " sends reply to Process " + target.pid);
        target.replyCount++;
    }

    // Enter the critical section if all replies are received
    public void enterCriticalSection(int numProcesses) {
        if (state == State.REQUESTING && replyCount == numProcesses - 1) {
            state = State.EXECUTING;
            System.out.println("Process " + pid + " enters the critical section");
        }
    }

    // Exit the critical section and reply to the deferred requests
    public void exitCriticalSection(ArrayList<Process> processes) {
        System.out.println("Process " + pid + " leaves the critical section");
        state = State.RELEASED;
        // Send the deferred replies (pids are 1-based, list indices 0-based)
        for (Request r : deferredRequests) {
            sendReply(processes.get(r.pid - 1));
        }
        deferredRequests.clear();
    }
}

class Request implements Comparable<Request> {
    int pid, timestamp;

    public Request(int pid, int timestamp) {
        this.pid = pid;
        this.timestamp = timestamp;
    }

    @Override
    public int compareTo(Request other) {
        if (this.timestamp == other.timestamp) {
            return Integer.compare(this.pid, other.pid);
        }
        return Integer.compare(this.timestamp, other.timestamp);
    }
}
public class LamportsMutualExclusion {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        ArrayList<Process> processes = new ArrayList<>();
        System.out.println("Name: Abhay Sharma");
        System.out.println("Roll no.: 10420802721");
        // Create 3 processes
        for (int i = 0; i < 3; i++) {
            processes.add(new Process(i + 1));
        }
        boolean running = true;
        while (running) {
            System.out.println("Choose an option:");
            System.out.println("1. Process 1 request CS\n2. Process 2 request CS\n3. Process 3 request CS\n"
                    + "4. Process 1 exit CS\n5. Process 2 exit CS\n6. Process 3 exit CS\n7. Exit");
            int choice = sc.nextInt();
            switch (choice) {
                case 1 -> processes.get(0).sendRequest(processes);
                case 2 -> processes.get(1).sendRequest(processes);
                case 3 -> processes.get(2).sendRequest(processes);
                case 4 -> processes.get(0).exitCriticalSection(processes);
                case 5 -> processes.get(1).exitCriticalSection(processes);
                case 6 -> processes.get(2).exitCriticalSection(processes);
                case 7 -> running = false;
                default -> System.out.println("Invalid choice.");
            }
            // Try to enter the critical section for each process
            for (Process p : processes) {
                p.enterCriticalSection(processes.size());
            }
        }
        sc.close();
    }
}

OUTPUT:

Expected Outcome attained: Yes


Practical – 5

AIM: To install and configure Hadoop on a Windows operating system for big data processing.

Course Outcome: CO3

Software Used: Hadoop distribution, Java SDK.

Theory:

Hadoop is an open-source framework that enables distributed storage and processing of large datasets across
clusters of computers using simple programming models. It consists of two main components:

1. Hadoop Distributed File System (HDFS): A distributed file system that provides high throughput access to
application data.

2. YARN (Yet Another Resource Negotiator): A resource management platform responsible for managing
compute resources in clusters and scheduling users' applications.

Hadoop can be installed on a variety of platforms, including Windows, although it is traditionally used on Linux
systems. The installation on Windows involves setting up Java, configuring environment variables, and ensuring
that the necessary components are correctly installed and configured.

Flowchart/Algorithm:
1. Install Java SDK:
- Download the latest version of JDK from Oracle’s website.
- Install JDK and set the JAVA_HOME environment variable in the system properties.
2. Download Hadoop:
- Visit the Apache Hadoop website and download the appropriate version of Hadoop.
- Extract the downloaded archive to a suitable directory on your system.
3. Set Environment Variables:
- Set HADOOP_HOME to the Hadoop installation directory.
- Add %HADOOP_HOME%\bin to the system PATH variable.
4. Configure Hadoop:
- Navigate to the etc/hadoop directory and configure the following files:
- core-site.xml: Set the default file system and path.
- hdfs-site.xml: Configure the replication factor and data node directories.
- mapred-site.xml: Define the job tracker and task tracker.
- yarn-site.xml: Set resource manager and node manager settings.
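As an illustration, minimal single-node versions of the first two configuration files might look like the following (the port number and local data directories are assumptions; adjust them to your own installation):

```xml
<!-- core-site.xml: default file system for a local single-node setup -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: replication factor of 1 on a single node,
     with assumed local directories for NameNode and DataNode data -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///C:/hadoop/data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///C:/hadoop/data/datanode</value>
  </property>
</configuration>
```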
5. Format HDFS:
- Use the command hdfs namenode -format to format the Hadoop filesystem.
6. Start Hadoop Services:
- Start the NameNode and DataNode using start-dfs.cmd.
- Start the ResourceManager and NodeManager using start-yarn.cmd.
7. Verification:
- Access the Hadoop web interfaces for HDFS and YARN to verify the installation and configuration.

Results: Hadoop successfully installed and configured on Windows, with all components operational.

Installation and Configuration of Java SDK:

Installation and Configuration of Hadoop:

Expected Outcome Obtained: Yes
PRACTICAL - 6
Aim: Run a simple application on single node Hadoop Cluster.
Course Outcome: CO3
Software Used: Hadoop single node cluster, JAVA SDK.

Theory:
MapReduce is a programming model for processing large data sets with a distributed algorithm on a Hadoop
cluster. It consists of two main functions:
1. Map: Processes input data into key-value pairs.
2. Reduce: Aggregates the intermediate key-value pairs generated by the map function.

In a single-node Hadoop cluster, all Hadoop services run on a single machine, making it ideal for
development and testing. This setup helps developers test their code in a Hadoop environment before
deploying it to a multi-node cluster.

Flowchart/Algorithm/Code:
1. Setup Hadoop Cluster:
- Ensure that Hadoop is correctly installed and configured on the single-node cluster.
2. Write MapReduce Application:
- Implement the Mapper and Reducer classes. For example, a WordCount application would count
the occurrences of each word in a text file.
- The Mapper class processes input lines into words and outputs key-value pairs where the key is
the word, and the value is the count (initially 1).
- The Reducer class aggregates the counts for each word.
3. Prepare Input Data:
- Create an input file (e.g., a text file) containing the data to be processed.
- Use the hdfs dfs -put command to upload the input file to HDFS.
4. Configure Job:
- Define the job configuration in a Driver class, specifying the input and output paths, the Mapper
and Reducer classes, and other necessary configurations.
5. Run MapReduce Job:
- Use the hadoop jar command to run the job, specifying the JAR file containing the compiled
classes and the job configuration.
- Monitor the job progress via the Hadoop web interface or console output.
6. Retrieve Output:
- Once the job is complete, use the hdfs dfs -get command to retrieve the output from HDFS.
- Analyze the output data to verify the correctness of the results.

Code:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
                ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context
                ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
CMD Code:

Change directory:
cd C:\Users\vibhutigupta\Desktop\DSCC\Program_6

Compile the application:
javac -classpath "%HADOOP_HOME%\share\hadoop\common\hadoop-common-*.jar;%HADOOP_HOME%\share\hadoop\mapreduce\hadoop-mapreduce-client-core-*.jar" -d . WordCount.java

Create a jar file out of all the .class files:
jar -cvf wordcount.jar -C . .

Putting input.txt in HDFS:
hdfs namenode -format
start-dfs.cmd
start-yarn.cmd
hadoop fs -mkdir /input
hadoop fs -put input.txt /input

Running the application:
hadoop jar wordcount.jar WordCount /input/input.txt /output

Fetching the output:
hadoop fs -get /output C:\Users\yatin\Desktop\DSCC\Program_6

Results:
This simple WordCount application demonstrates the basic functionality of a Hadoop cluster
INPUT:

OUTPUT:

Expected Outcome Attained: YES


PRACTICAL - 7
Aim: Install Google App Engine and develop a simple web application.
Course Outcome: CO3
Software Used: Google App Engine SDK, Java SDK.

Theory:
Google App Engine (GAE) is a fully managed Platform as a Service (PaaS) that allows developers to build and
deploy web applications on Google's infrastructure. It supports various programming languages and
frameworks, offering services like automatic scaling, load balancing, and security. GAE abstracts
infrastructure management, allowing developers to focus on code.

A simple web application typically consists of a front-end (user interface) and a back-end (server-side logic).
GAE provides tools to develop, test, and deploy these applications, with support for integrated services such
as databases, caching, and authentication.

Flowchart/Algorithm/Code:
1. Create a Google Cloud Account:
- Sign up for Google Cloud Platform (GCP) and create a new project.
2. Install Google Cloud SDK:
- Download and install the Google Cloud SDK, which includes the Google App Engine SDK.
- Initialize the SDK with your GCP project using gcloud init.
3. Develop the Web Application:
- Choose a programming language (e.g., Java, Python, Go) supported by GAE.
- Create a simple web application, such as a 'Hello, World!' app, using the chosen language.
- Structure the application with appropriate folders and files for front-end and back-end code.
4. Define Application Configuration:
- Create an app.yaml configuration file that specifies the runtime environment, handlers, and
other settings.
- Define routing rules, static file serving, and environment variables.
5. Local Testing:
- Use the dev_appserver.py command to run the application locally and test its functionality.
- Debug and fix any issues encountered during local testing.
6. Deployment:
- Deploy the application to GAE using the gcloud app deploy command.
- Specify the configuration file and monitor the deployment process.
7. Access and Testing:
- Access the deployed application via the provided URL.
- Test the application in the live environment to ensure it behaves as expected.
8. Monitoring and Maintenance:
- Use the Google Cloud Console to monitor the application's performance, view logs, and
manage resources.
- Update the application as needed, deploying new versions seamlessly.

Code:
CMD Code:
gcloud init
gcloud components install app-engine-java
mkdir my-java-web-app
cd my-java-web-app
In the directory create two files index.jsp and app.yaml

index.jsp
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>My Java Web App</title>
</head>
<body>
<h1>Welcome to My Java Web App on Google App Engine!</h1>
<p>This is a static website served using Google App Engine with Java.</p>
</body>
</html>

app.yaml
runtime: java11 # Specify the Java runtime
handlers:
- url: /
  static_files: index.jsp
  upload: index.jsp
- url: /(.*)
  static_files: \1
  upload: (.*)

CMD Code:
gcloud config set project my_program-110232
gcloud app browse

Results:
The web application is successfully developed, tested locally, and deployed on Google App Engine, making it
accessible via the cloud.

Expected Outcome Attained: YES


PRACTICAL - 8
Aim: Launch Web application using Google App Engine.
Course Outcome: CO4
Software Used: Google App Engine SDK, Java SDK.

Theory:
Launching a web application involves deploying it to a server or cloud platform where it can be accessed by
users over the internet. Google App Engine (GAE) simplifies this process by providing a managed
environment with automatic scaling, built-in security, and integrated services.

A successful launch involves ensuring that the application is properly configured, tested, and deployed, with
considerations for monitoring, performance optimization, and user experience. GAE's tools and services
facilitate these processes, offering a robust platform for web applications.

Flowchart/Algorithm/Code:
1. Develop or Use an Existing Web Application:
- Develop a web application or use an existing one.
- Ensure the application includes a front-end (UI) and back-end (server logic).

2. Configure the Application:


- Update the app.yaml file with necessary settings, including the runtime environment, handlers, and
environment variables.
- Define routing rules for various application paths.

3. Local Testing and Debugging:


- Use the Google Cloud SDK to test the application locally.
- Address any issues or bugs found during testing.

4. Deploy the Application:


- Deploy the application to GAE using the gcloud app deploy command.
- Monitor the deployment process for any errors or issues.

5. Access the Application:


- Once deployed, access the application using the URL provided by GAE.
- Test all features and functionalities to ensure they work as expected.

6. Monitoring and Performance Optimization:


- Use the Google Cloud Console to monitor the application's performance, view logs, and track
usage metrics.
- Optimize the application for better performance and user experience, making use of GAE's built-in tools and
services.

7. Maintenance and Updates:


- Regularly update the application with new features, security patches, and improvements.
- Use GAE's versioning system to manage different versions of the application.

Code:
CMD Code:
gcloud init
gcloud app create --region=asia-south1
mkdir my-java-web-app
cd my-java-web-app

index.jsp
<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Hello from JSP</title>
</head>
<body>
<h1>Hello, World from Google App Engine!</h1>
<p>This is a simple JSP web application.</p>
</body>
</html>

app.yaml
runtime: java11 # Specify Java runtime
entrypoint: java -jar target/my-java-web-app-1.0.jar # Specify entry point if using a JAR file
handlers:
- url: /.*
static_files: index.jsp
upload: index.jsp

CMD Code:
gcloud app deploy
gcloud app browse

Results:
The web application is successfully launched on Google App Engine and is accessible online, with all
features functioning correctly.

Expected Outcome Attained: YES


PRACTICAL - 9
Aim: Install Virtualbox / VMware Workstation with different flavours of linux on windows.
Course Outcome: CO4
Software Used: VirtualBox/VMware Workstation, Linux ISO images.

Theory:
Virtualization technology allows multiple operating systems to run on a single physical machine by creating
virtual machines (VMs). This is especially useful for testing, development, and educational purposes.
VirtualBox and VMware Workstation are popular virtualization platforms that enable users to create and
manage VMs with different operating systems, including various Linux distributions.

This setup allows users to experiment with different OS configurations, test software in different
environments, and isolate projects for security and stability.

Flowchart/Algorithm/Code:
1. Download VirtualBox/VMware Workstation:
- Visit the official websites and download the installers for VirtualBox or VMware Workstation.

2. Install Virtualization Software:


- Run the installer and follow the on-screen instructions to install the chosen virtualization software
on your Windows host.
- Ensure that hardware virtualization is enabled in the BIOS settings of your computer.

3. Download Linux ISO Images:


- Download the ISO images for the desired Linux distributions from their official websites (e.g., Ubuntu,
CentOS, Fedora).

4. Create Virtual Machines:


- Open VirtualBox or VMware Workstation and create a new virtual machine for each Linux distribution.
- Allocate appropriate resources such as CPU, RAM, and disk space based on the requirements of the Linux
distribution.

5. Install Linux Operating Systems:


- Boot each VM with the corresponding Linux ISO image.
- Follow the installation instructions for the Linux distribution, including setting up partitions, creating
user accounts, and configuring system settings.

6. Post-Installation Configuration:
- Install additional software and tools as needed.
- Configure network settings to enable internet access and communication with the host machine.
- Set up shared folders for easy file transfer between the host and VMs.

7. Testing and Usage:


- Test the VMs to ensure they are functioning correctly.
- Use the VMs to explore different Linux environments, develop software, or test applications.
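
The creation and configuration steps above (steps 2-6) can also be performed from the command line with VirtualBox's VBoxManage tool instead of the GUI. A minimal sketch follows; the VM name, ISO filename, disk size, and shared-folder path are illustrative assumptions, not values prescribed by this practical:

```shell
# Create and register a new 64-bit Ubuntu VM
VBoxManage createvm --name "ubuntu-vm" --ostype Ubuntu_64 --register

# Allocate resources: 2 GB RAM, 2 CPUs, NAT networking
VBoxManage modifyvm "ubuntu-vm" --memory 2048 --cpus 2 --nic1 nat

# Create a 20 GB virtual disk and attach it on a SATA controller
VBoxManage createmedium disk --filename ubuntu-vm.vdi --size 20480
VBoxManage storagectl "ubuntu-vm" --name "SATA" --add sata
VBoxManage storageattach "ubuntu-vm" --storagectl "SATA" --port 0 --device 0 \
    --type hdd --medium ubuntu-vm.vdi

# Attach the Linux installer ISO (filename is an assumption) and boot
VBoxManage storageattach "ubuntu-vm" --storagectl "SATA" --port 1 --device 0 \
    --type dvddrive --medium ubuntu-desktop-amd64.iso
VBoxManage startvm "ubuntu-vm"

# After installation: add a shared folder for host-VM file transfer
VBoxManage sharedfolder add "ubuntu-vm" --name shared --hostpath C:\shared --automount
```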

Results:
Multiple Linux distributions successfully installed and running as virtual machines on a Windows host,
providing a versatile environment for testing and development.
Expected Outcome Attained: YES
PRACTICAL - 10

Aim: To simulate a cloud computing scenario using CloudSim and implement a scheduling algorithm.
Course Outcome: CO2
Software Used: Java SDK, CloudSim library.

Theory:
CloudSim is a simulation toolkit that allows modeling and simulation of cloud computing
environments. It provides support for modeling data centers, hosts, virtual machines (VMs),
cloudlets (tasks), and resource provisioning policies. CloudSim enables the testing and evaluation of
various cloud scenarios without the need for a physical cloud infrastructure.

Scheduling algorithms in cloud computing determine how tasks (cloudlets) are assigned to VMs.
These algorithms aim to optimize resource utilization, reduce response time, and balance the load
across available resources. Common scheduling algorithms include First-Come-First-Served (FCFS),
Round Robin (RR), and others.
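
As a standalone illustration of the Round Robin idea (independent of the CloudSim API), the sketch below cycles through VMs in order when assigning cloudlets. The class and method names are illustrative, not part of CloudSim; CloudSim's DatacenterBroker applies a comparable policy internally when binding cloudlets to VMs:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal Round Robin assignment sketch: cloudlet c goes to VM (c mod numVms).
public class RoundRobinScheduler {

    // Returns a map of vmId -> list of cloudlet IDs assigned to that VM.
    public static Map<Integer, List<Integer>> assign(int numVms, int numCloudlets) {
        Map<Integer, List<Integer>> plan = new HashMap<>();
        for (int v = 0; v < numVms; v++) {
            plan.put(v, new ArrayList<>());
        }
        for (int c = 0; c < numCloudlets; c++) {
            int vm = c % numVms; // cycle through VMs in fixed order
            plan.get(vm).add(c);
        }
        return plan;
    }

    public static void main(String[] args) {
        // 10 VMs and 40 cloudlets, matching the simulation in this practical
        Map<Integer, List<Integer>> plan = assign(10, 40);
        System.out.println("VM 0 gets cloudlets: " + plan.get(0)); // [0, 10, 20, 30]
    }
}
```

With 10 VMs and 40 cloudlets each VM receives exactly 4 cloudlets, which is the load-balancing property Round Robin provides when tasks are of similar length.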

Flowchart/Algorithm/Code:
1. Setup CloudSim Environment:
- Include the CloudSim library in a Java project.
- Set up the IDE with necessary configurations for Java development.

2. Define Data Center and Host Configurations:


- Create a data center by defining a list of hosts. Each host should have specifications
such as CPU, RAM, storage, and bandwidth.
- Define the characteristics of each host, including processing power and storage capacity.

3. Define Virtual Machines (VMs):


- Create a list of VMs with specifications such as number of CPUs, RAM, and storage.
Each VM represents a virtualized resource in the cloud.
- Set up the VM scheduler and resource allocation policy.

4. Create Cloudlets (Tasks):


- Define cloudlets representing the tasks to be executed. Specify the length, file size, and
output size for each cloudlet.
- Assign cloudlets to VMs based on the chosen scheduling algorithm.

5. Implement Scheduling Algorithm:


- Implement a scheduling policy (e.g., FCFS, RR) that determines the order in which
cloudlets are processed.
- Set up the allocation of cloudlets to VMs and manage the execution order.

6. Simulate Cloud Scenario:


- Run the CloudSim simulation, which will simulate the execution of cloudlets on the defined
VMs.
- Collect performance metrics such as execution time, waiting time, and resource utilization.

7. Result Analysis:
- Analyze the simulation results to evaluate the efficiency of the scheduling algorithm.
- Compare different scheduling algorithms to determine the most efficient one for the given
scenario.
Code:
CloudSimExample1.java
package org.cloudbus.cloudsim.examples;

import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class CloudSimExample1 {

    private static List<Cloudlet> cloudletList;
    private static List<Vm> vmlist;

    public static void main(String[] args) {
        Log.printLine("Starting CloudSimExample1...");

        try {
            // Initialise the CloudSim library: one user (broker), no event tracing
            int num_user = 1;
            Calendar calendar = Calendar.getInstance();
            boolean trace_flag = false;
            CloudSim.init(num_user, calendar, trace_flag);

            Datacenter datacenter0 = createDatacenter("Datacenter_0");
            DatacenterBroker broker = createBroker();
            int brokerId = broker.getId();

            // Create one VM with the given MIPS rating, image size, RAM and bandwidth
            vmlist = new ArrayList<Vm>();
            int vmid = 0;
            int mips = 1000;
            long size = 10000; // image size (MB)
            int ram = 512;     // VM memory (MB)
            long bw = 1000;
            int pesNumber = 1; // number of CPUs
            String vmm = "Xen";

            Vm vm = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm,
                    new CloudletSchedulerTimeShared());
            vmlist.add(vm);
            broker.submitVmList(vmlist);

            // Create one cloudlet (task) and bind it to the VM
            cloudletList = new ArrayList<Cloudlet>();
            int id = 0;
            long length = 400000;
            long fileSize = 300;
            long outputSize = 300;
            UtilizationModel utilizationModel = new UtilizationModelFull();

            Cloudlet cloudlet = new Cloudlet(id, length, pesNumber, fileSize, outputSize,
                    utilizationModel, utilizationModel, utilizationModel);
            cloudlet.setUserId(brokerId);
            cloudlet.setVmId(vmid);
            cloudletList.add(cloudlet);
            broker.submitCloudletList(cloudletList);

            CloudSim.startSimulation();
            CloudSim.stopSimulation();

            List<Cloudlet> newList = broker.getCloudletReceivedList();
            printCloudletList(newList);

            Log.printLine("CloudSimExample1 finished!");
        } catch (Exception e) {
            e.printStackTrace();
            Log.printLine("Unwanted errors happen");
        }
    }

    private static Datacenter createDatacenter(String name) {
        List<Host> hostList = new ArrayList<Host>();
        List<Pe> peList = new ArrayList<Pe>();

        int mips = 1000;
        peList.add(new Pe(0, new PeProvisionerSimple(mips)));

        int hostId = 0;
        int ram = 2048;
        long storage = 1000000;
        int bw = 10000;

        hostList.add(new Host(hostId, new RamProvisionerSimple(ram),
                new BwProvisionerSimple(bw), storage, peList,
                new VmSchedulerTimeShared(peList)));

        String arch = "x86";
        String os = "Linux";
        String vmm = "Xen";
        double time_zone = 10.0;
        double cost = 3.0;
        double costPerMem = 0.05;
        double costPerStorage = 0.001;
        double costPerBw = 0.0;
        LinkedList<Storage> storageList = new LinkedList<Storage>();

        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                arch, os, vmm, hostList, time_zone, cost, costPerMem, costPerStorage, costPerBw);

        Datacenter datacenter = null;
        try {
            datacenter = new Datacenter(name, characteristics,
                    new VmAllocationPolicySimple(hostList), storageList, 0);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return datacenter;
    }

    private static DatacenterBroker createBroker() {
        DatacenterBroker broker = null;
        try {
            broker = new DatacenterBroker("Broker");
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
        return broker;
    }

    private static void printCloudletList(List<Cloudlet> list) {
        int size = list.size();
        Cloudlet cloudlet;
        String indent = "    ";

        Log.printLine();
        Log.printLine("========== OUTPUT ==========");
        Log.printLine("Cloudlet ID" + indent + "STATUS" + indent + "Data center ID"
                + indent + "VM ID" + indent + "Time" + indent + "Start Time"
                + indent + "Finish Time");

        DecimalFormat d = new DecimalFormat("###.##");
        for (int i = 0; i < size; i++) {
            cloudlet = list.get(i);
            Log.print(indent + cloudlet.getCloudletId() + indent + indent);

            if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
                Log.print("SUCCESS");
                Log.printLine(indent + indent + cloudlet.getResourceId()
                        + indent + indent + indent + cloudlet.getVmId()
                        + indent + indent + d.format(cloudlet.getActualCPUTime())
                        + indent + indent + d.format(cloudlet.getExecStartTime())
                        + indent + indent + d.format(cloudlet.getFinishTime()));
            }
        }
    }
}

Simulation.java
package examples.org.cloudbus.cloudsim.examples;

import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import java.util.Random;

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerSpaceShared;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class Simulation {

    private static List<Cloudlet> cloudletList;
    private static List<Vm> vmlist;

    // Create the requested number of identical VMs for the given broker (user)
    private static List<Vm> createVM(int userId, int vms) {
        LinkedList<Vm> list = new LinkedList<Vm>();
        long size = 10000;
        int ram = 512;
        int mips = 1000;
        long bw = 1000;
        int pesNumber = 1;
        String vmm = "Xen";

        Vm[] vm = new Vm[vms];
        for (int i = 0; i < vms; i++) {
            vm[i] = new Vm(i, userId, mips, pesNumber, ram, bw, size, vmm,
                    new CloudletSchedulerSpaceShared());
            list.add(vm[i]);
        }
        return list;
    }

    // Create cloudlets with randomised lengths so execution times differ
    private static List<Cloudlet> createCloudlet(int userId, int cloudlets) {
        LinkedList<Cloudlet> list = new LinkedList<Cloudlet>();
        long length = 1000;
        long fileSize = 300;
        long outputSize = 300;
        int pesNumber = 1;
        UtilizationModel utilizationModel = new UtilizationModelFull();

        Cloudlet[] cloudlet = new Cloudlet[cloudlets];
        for (int i = 0; i < cloudlets; i++) {
            Random r = new Random();
            cloudlet[i] = new Cloudlet(i, length + r.nextInt(2000), pesNumber, fileSize,
                    outputSize, utilizationModel, utilizationModel, utilizationModel);
            cloudlet[i].setUserId(userId);
            list.add(cloudlet[i]);
        }
        return list;
    }

    public static void main(String[] args) {
        Log.printLine("Starting CloudSimExample6...");
        try {
            int num_user = 3;
            Calendar calendar = Calendar.getInstance();
            boolean trace_flag = false;
            CloudSim.init(num_user, calendar, trace_flag);

            Datacenter datacenter0 = createDatacenter("Datacenter_0");
            Datacenter datacenter1 = createDatacenter("Datacenter_1");

            DatacenterBroker broker = createBroker();
            int brokerId = broker.getId();

            vmlist = createVM(brokerId, 10);             // 10 VMs
            cloudletList = createCloudlet(brokerId, 40); // 40 cloudlets
            broker.submitVmList(vmlist);
            broker.submitCloudletList(cloudletList);

            CloudSim.startSimulation();
            List<Cloudlet> newList = broker.getCloudletReceivedList();
            CloudSim.stopSimulation();
            printCloudletList(newList);

            Log.printLine("Simulation finished!");
        } catch (Exception e) {
            e.printStackTrace();
            Log.printLine("The simulation has been terminated due to an unexpected error");
        }
    }

    private static Datacenter createDatacenter(String name) {
        List<Host> hostList = new ArrayList<Host>();

        // First host: four processing elements (quad-core)
        List<Pe> peList1 = new ArrayList<Pe>();
        int mips = 1000;
        peList1.add(new Pe(0, new PeProvisionerSimple(mips)));
        peList1.add(new Pe(1, new PeProvisionerSimple(mips)));
        peList1.add(new Pe(2, new PeProvisionerSimple(mips)));
        peList1.add(new Pe(3, new PeProvisionerSimple(mips)));

        // Second host: two processing elements (dual-core)
        List<Pe> peList2 = new ArrayList<Pe>();
        peList2.add(new Pe(0, new PeProvisionerSimple(mips)));
        peList2.add(new Pe(1, new PeProvisionerSimple(mips)));

        int hostId = 0;
        int ram = 2048;
        long storage = 1000000;
        int bw = 10000;

        hostList.add(new Host(hostId, new RamProvisionerSimple(ram),
                new BwProvisionerSimple(bw), storage, peList1,
                new VmSchedulerTimeShared(peList1)));
        hostId++;
        hostList.add(new Host(hostId, new RamProvisionerSimple(ram),
                new BwProvisionerSimple(bw), storage, peList2,
                new VmSchedulerTimeShared(peList2)));

        String arch = "x86";
        String os = "Linux";
        String vmm = "Xen";
        double time_zone = 10.0;
        double cost = 3.0;
        double costPerMem = 0.05;
        double costPerStorage = 0.1;
        double costPerBw = 0.1;
        LinkedList<Storage> storageList = new LinkedList<Storage>();

        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                arch, os, vmm, hostList, time_zone, cost, costPerMem, costPerStorage, costPerBw);

        Datacenter datacenter = null;
        try {
            datacenter = new Datacenter(name, characteristics,
                    new VmAllocationPolicySimple(hostList), storageList, 0);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return datacenter;
    }

    private static DatacenterBroker createBroker() {
        DatacenterBroker broker = null;
        try {
            broker = new DatacenterBroker("Broker");
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
        return broker;
    }

    private static void printCloudletList(List<Cloudlet> list) {
        int size = list.size();
        Cloudlet cloudlet;
        String indent = "    ";

        Log.printLine();
        Log.printLine("========== OUTPUT ==========");
        Log.printLine("Cloudlet ID" + indent + "STATUS" + indent + "Data center ID"
                + indent + "VM ID" + indent + indent + "Time" + indent + "Start Time"
                + indent + "Finish Time" + indent + "User ID");

        DecimalFormat d = new DecimalFormat("###.##");
        for (int i = 0; i < size; i++) {
            cloudlet = list.get(i);
            Log.print(indent + cloudlet.getCloudletId() + indent + indent);
            if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
                Log.print("SUCCESS");
                Log.printLine(indent + indent + cloudlet.getResourceId()
                        + indent + indent + indent + cloudlet.getVmId()
                        + indent + indent + indent + d.format(cloudlet.getActualCPUTime())
                        + indent + indent + d.format(cloudlet.getExecStartTime())
                        + indent + indent + indent + d.format(cloudlet.getFinishTime())
                        + indent + cloudlet.getUserId());
            }
        }
    }
}
Results:
Successful simulation of a cloud scenario with cloudlets scheduled on VMs, demonstrating the
efficiency of the implemented scheduling algorithm.
