DSCC File
PRACTICAL - 1
AIM: To write a program in Java that demonstrates the implementation of Remote Procedure Call (RPC).
Theory:
Remote Procedure Call (RPC) is a communication paradigm that lets a program invoke a procedure on a remote machine as if it were a local call. The client marshals (serializes) the procedure name and parameters into a request message and sends it over the network; the server unmarshals the request, executes the procedure, and returns the result, which the client then unmarshals and displays. This hides the details of socket communication from the caller. In this practical, RPC is implemented manually over Java sockets, using object streams to carry the method name, parameters, and result.
Algorithm:
1. Set Up the Java Project Environment:
- Install the JDK, set up an IDE, and create a new Java project with three packages:
• rpc
• client
• server
2. Define the Remote Interface:
- In the rpc package, create a new Java class and name it RPCInterface.
- Define an interface that specifies the remote procedures. This interface will be implemented by the server.
3. Implement the Server:
- Implement the server class that provides the actual implementation of the remote procedures defined in the interface.
- Use Java sockets to listen for incoming client connections on a specific port.
4. Implement the Client:
- Create a client program that establishes a connection to the server using sockets.
- Implement methods to marshal (serialize) the procedure parameters and send them to the server.
- Receive the response from the server, unmarshal (deserialize) the data, and display the results.
5. Testing:
- Ensure that both client and server agree on the data format for communication.
- Handle network exceptions and ensure that the connection is properly closed after communication.
- Test various remote procedure calls by invoking methods from the client and observing the server's response.
CODE:
1. rpc -> RPCInterface
package rpc;
import java.io.Serializable;
public interface RPCInterface extends Serializable {
    // A remote method to add two numbers
    int add(int a, int b);
}
2. server -> RPCServer
package server;
import rpc.RPCInterface;
import java.io.*;
import java.net.*;
// Server class that implements the RPCInterface
public class RPCServer implements RPCInterface {
    private ServerSocket serverSocket;

    // Implement the add method
    @Override
    public int add(int a, int b) {
        return a + b;
    }

    // Method to start the server
    public void start(int port) {
        try {
            serverSocket = new ServerSocket(port);
            System.out.println("Server started on port " + port);
            while (true) {
                new ClientHandler(serverSocket.accept(), this).start();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Inner class to handle client requests
    private static class ClientHandler extends Thread {
        private Socket clientSocket;
        private RPCServer server;

        public ClientHandler(Socket socket, RPCServer server) {
            this.clientSocket = socket;
            this.server = server;
        }

        public void run() {
            try (ObjectInputStream input = new ObjectInputStream(clientSocket.getInputStream());
                 ObjectOutputStream output = new ObjectOutputStream(clientSocket.getOutputStream())) {
                // Read method name and parameters from the client
                String methodName = (String) input.readObject();
                int a = input.readInt();
                int b = input.readInt();
                // Call the add method if requested
                if (methodName.equals("add")) {
                    int result = server.add(a, b);
                    output.writeInt(result); // Send result back to the client
                    output.flush();
                }
            } catch (IOException | ClassNotFoundException e) {
                e.printStackTrace();
            } finally {
                try {
                    clientSocket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    // Main method to start the server
    public static void main(String[] args) {
        RPCServer server = new RPCServer();
        server.start(8080); // Server will listen on port 8080
    }
}
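3. client -> RPCClient
The client program described in the algorithm is not reproduced above; the following is a minimal sketch, assuming the wire format used by RPCServer (a String method name followed by two int parameters, answered by a single int).
package client;
import java.io.*;
import java.net.*;
public class RPCClient {
    public static void main(String[] args) {
        // Host and port match RPCServer's defaults
        try (Socket socket = new Socket("localhost", 8080);
             // Open the output stream first: the server constructs its
             // ObjectInputStream before its ObjectOutputStream
             ObjectOutputStream output = new ObjectOutputStream(socket.getOutputStream());
             ObjectInputStream input = new ObjectInputStream(socket.getInputStream())) {
            // Marshal the method name and parameters
            output.writeObject("add");
            output.writeInt(5);
            output.writeInt(7);
            output.flush();
            // Unmarshal and display the result
            System.out.println("Result of add(5, 7) = " + input.readInt());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}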
PRACTICAL - 2
AIM: To write a program in Java that demonstrates the implementation of Remote Method Invocation (RMI).
Theory:
Remote Method Invocation (RMI) is a Java-specific API that allows an object running in one Java Virtual Machine (JVM) to
invoke methods on an object running in another JVM. RMI simplifies the development of distributed applications by
allowing remote communication between objects with automatic handling of network communication, data serialization,
and object lifecycle management. RMI involves the following key components:
1. Remote Interface: Defines the methods that can be called remotely. It extends java.rmi.Remote.
2. Remote Object: Implements the remote interface and provides the actual method bodies; it typically extends java.rmi.server.UnicastRemoteObject.
3. RMI Registry: A simple naming service that allows clients to obtain references to remote objects.
4. Stub and Skeleton: Stub acts as a proxy on the client side, forwarding method invocations to the remote
object. The skeleton on the server side receives the invocations and forwards them to the actual remote
object.
Algorithm:
1. Set Up the Java RMI Environment:
- Install the JDK and set up an IDE.
- Ensure the system network settings are configured for RMI communication.
2. Define the Remote Interface:
- Right-click on the src folder -> New -> Java Class -> Interface.
- Name the interface RMIServerInterface.
- Create an interface that extends java.rmi.Remote.
- Declare the methods that will be remotely invoked. Each method should throw
java.rmi.RemoteException.
3. Implement the Remote Object:
- Right-click the src folder -> New -> Java Class.
- Name the class RMIServerImpl.
- Right-click the src folder -> New -> Java Class.
- Name the class RMIServer.
- Implement the remote interface in a class that extends java.rmi.server.UnicastRemoteObject.
- Define the business logic inside the methods.
- Provide a constructor that handles the RemoteException.
4. Set Up the RMI Registry:
- Start the RMI registry using the command rmiregistry in the terminal.
- In the server code, bind the remote object to the RMI registry with a unique name using Naming.rebind().
5. Client Implementation:
- Use the Naming.lookup() method to find the remote object in the RMI registry.
- Right-click the src folder -> New -> Java Class.
- Name the class RMIClient.
- Invoke methods on the remote object as if it were a local object.
- Handle RemoteException and other potential exceptions.
6. Testing and Deployment:
- Compile the server and client programs.
- Run the server to register the remote object.
- Run the client to invoke methods on the remote object and display results.
CODE:
RMI CLIENT:
package client;
import rpc.RMIServerInterface;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
public class RMIClient {
    public static void main(String[] args) {
        try {
            // Locate the registry where the remote object is bound
            Registry registry = LocateRegistry.getRegistry("localhost", 1099);
            // The binding name "RMIServer" and the add(...) method are
            // assumptions; the original listing breaks off at this point
            RMIServerInterface server = (RMIServerInterface) registry.lookup("RMIServer");
            System.out.println("Result of add(3, 5): " + server.add(3, 5));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
RMI SERVER IMPLEMENTATION:
package server;
import rpc.RMIServerInterface;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;
public class RMIServerImpl extends UnicastRemoteObject implements RMIServerInterface {
    // Constructor must declare RemoteException
    public RMIServerImpl() throws RemoteException { super(); }
    public int add(int a, int b) throws RemoteException { return a + b; }
}
RMI SERVER:
import server.RMIServerImpl;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
public class RMIServer {
    public static void main(String[] args) {
        try {
            // Create an instance of the remote object
            RMIServerImpl server = new RMIServerImpl();
            // Create the registry and bind the remote object under an assumed name
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("RMIServer", server);
            System.out.println("RMI Server is running on port 1099...");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
RMI INTERFACE:
package rpc;
import java.rmi.Remote;
import java.rmi.RemoteException;
public interface RMIServerInterface extends Remote {
    // add(...) is assumed for illustration; the original fragment
    // does not show the method declarations
    int add(int a, int b) throws RemoteException;
}
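For reference, a typical compile-and-run sequence (the bin output directory is an assumption; since the server creates its own registry with LocateRegistry.createRegistry, a separate rmiregistry process is not required):
javac -d bin src/rpc/RMIServerInterface.java src/server/RMIServerImpl.java src/server/RMIServer.java src/client/RMIClient.java
java -cp bin RMIServer
java -cp bin client.RMIClient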
OUTPUTS:
PRACTICAL - 3
AIM: To implement Lamport’s Logical Clock in Java for event ordering in distributed systems.
Theory:
In distributed systems, it is often necessary to order events in a consistent manner across multiple processes,
even in the absence of a global clock. Lamport’s Logical Clock is a simple algorithm that provides a mechanism
for this by assigning a numerical timestamp to each event. The logical clock algorithm works as follows:
1. Each process increments its logical clock before every local or send event.
2. Every message carries the sender's clock value as its timestamp.
3. On receiving a message, a process sets its clock to max(local clock, received timestamp) + 1.
For example, if a process with clock 1 receives a message stamped 5, its clock becomes max(1, 5) + 1 = 6.
CODE:
import java.util.Scanner;
class LamportsClock {
    int logicalClock;

    // Constructor to initialize the clock
    public LamportsClock() {
        logicalClock = 0;
    }

    // Function to send an event (increments the clock)
    public void sendEvent() {
        logicalClock++;
        System.out.println("Send event occurred, updated logical clock: " + logicalClock);
    }

    // Function to receive an event (updates the clock based on received timestamp)
    public void receiveEvent(int receivedTimestamp) {
        logicalClock = Math.max(logicalClock, receivedTimestamp) + 1;
        System.out.println("Receive event occurred, updated logical clock: " + logicalClock);
    }

    // Function to display the logical clock
    public void displayClock() {
        System.out.println("Current logical clock value: " + logicalClock);
    }
}

public class LamportsLogicalClock {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        // Creating two processes with Lamport clocks
        LamportsClock process1 = new LamportsClock();
        LamportsClock process2 = new LamportsClock();
        System.out.println("Name: Abhay Sharma");
        System.out.println("Roll no.: 10420802721");
        boolean running = true;
        while (running) {
            System.out.println("\nChoose an option:");
            System.out.println("1. Process 1 sends event");
            System.out.println("2. Process 2 sends event");
            System.out.println("3. Process 1 receives event");
            System.out.println("4. Process 2 receives event");
            System.out.println("5. Display clocks");
            System.out.println("6. Exit");
            int choice = sc.nextInt();
            switch (choice) {
                case 1:
                    process1.sendEvent();
                    break;
                case 2:
                    process2.sendEvent();
                    break;
                case 3:
                    System.out.print("Enter the timestamp received by Process 1: ");
                    int receivedTimestamp1 = sc.nextInt();
                    process1.receiveEvent(receivedTimestamp1);
                    break;
                case 4:
                    System.out.print("Enter the timestamp received by Process 2: ");
                    int receivedTimestamp2 = sc.nextInt();
                    process2.receiveEvent(receivedTimestamp2);
                    break;
                case 5:
                    System.out.println("Process 1 Clock: ");
                    process1.displayClock();
                    System.out.println("Process 2 Clock: ");
                    process2.displayClock();
                    break;
                case 6:
                    running = false;
                    break;
                default:
                    System.out.println("Invalid choice, please try again.");
            }
        }
        sc.close();
    }
}
PRACTICAL - 4
AIM: To implement a mutual exclusion service using Lamport’s Mutual Exclusion Algorithm in a distributed
system.
Theory:
Mutual exclusion in distributed systems ensures that multiple processes do not enter a critical section
simultaneously, which is crucial for maintaining data consistency and integrity. Lamport’s Mutual Exclusion
Algorithm is a distributed solution that uses logical clocks to manage access to the critical section. The
algorithm involves the following steps:
1. Requesting the Critical Section: A process sends a request message to all other processes, including its
current logical clock value.
2. Receiving Requests: Upon receiving a request, a process replies immediately if it is neither in the critical section nor waiting for it with a higher-priority (earlier-timestamped) request. Otherwise, it defers the reply.
3. Entering the Critical Section: A process enters the critical section when it has received replies from all other
processes.
4. Releasing the Critical Section: After exiting the critical section, the process sends release messages to all
processes that it had deferred replies to.
ALGORITHM:
1. Setup Java Environment:
- Install JDK and set up an IDE.
- Create a new Java project.
2. Define Process Class:
- Implement a class Process with fields for logical clock, process ID, and state (e.g., REQUESTING,
EXECUTING, RELEASED).
- Define methods for sending request, reply, and release messages.
3. Request Handling:
- Implement logic for handling incoming request messages. If the process is in a lower priority state, send a
reply immediately; otherwise, defer the reply.
- Maintain a queue of deferred requests.
4. Critical Section Management:
- Implement a method enterCriticalSection() to enter the critical section after receiving all necessary
replies.
- Implement a method exitCriticalSection() to send release messages and handle deferred requests.
5. Testing:
- Simulate multiple processes requesting access to a shared resource.
CODE:
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.Queue;
import java.util.Scanner;
class Process {
    enum State { REQUESTING, EXECUTING, RELEASED }
    int pid, logicalClock, requestTimestamp, replyCount;
    State state;
    Queue<Request> deferredRequests;

    public Process(int pid) {
        this.pid = pid;
        this.logicalClock = 0;
        this.state = State.RELEASED;
        this.replyCount = 0;
        this.deferredRequests = new LinkedList<>();
    }

    // Update logical clock on message receipt
    public void updateClock(int timestamp) {
        logicalClock = Math.max(logicalClock, timestamp) + 1;
    }

    // Send a request for the critical section to all other processes
    public void sendRequest(ArrayList<Process> processes) {
        state = State.REQUESTING;
        logicalClock++;
        requestTimestamp = logicalClock; // remember this request's timestamp
        replyCount = 0;
        System.out.println("Process " + pid + " is requesting CS at time " + logicalClock);
        for (Process p : processes) {
            if (p.pid != this.pid) {
                p.receiveRequest(new Request(this.pid, this.requestTimestamp), this);
            }
        }
    }

    // Receive a request message from another process
    public void receiveRequest(Request req, Process requester) {
        updateClock(req.timestamp);
        System.out.println("Process " + pid + " received request from Process " + req.pid
                + " with timestamp " + req.timestamp);
        // Defer the reply if this process is executing, or is requesting with an
        // older (higher-priority) request; compare against the stored request
        // timestamp, not the logical clock, which updateClock has already advanced
        if (state == State.EXECUTING || (state == State.REQUESTING
                && (requestTimestamp < req.timestamp
                || (requestTimestamp == req.timestamp && pid < req.pid)))) {
            deferredRequests.add(req);
            System.out.println("Process " + pid + " defers reply to Process " + req.pid);
        } else {
            // Reply immediately
            sendReply(requester);
        }
    }

    // Send a reply to a requesting process (the requester counts the reply)
    public void sendReply(Process target) {
        System.out.println("Process " + pid + " sends reply to Process " + target.pid);
        target.replyCount++;
    }

    // Enter the critical section once replies from all other processes arrive
    public void enterCriticalSection(int numProcesses) {
        if (state == State.REQUESTING && replyCount == numProcesses - 1) {
            state = State.EXECUTING;
            System.out.println("Process " + pid + " enters the critical section");
        }
    }

    // Exit the critical section and reply to all deferred requests
    public void exitCriticalSection(ArrayList<Process> processes) {
        System.out.println("Process " + pid + " leaves the critical section");
        state = State.RELEASED;
        // Send deferred replies (acting as release messages)
        for (Request r : deferredRequests) {
            for (Process p : processes) {
                if (p.pid == r.pid) {
                    sendReply(p);
                }
            }
        }
        deferredRequests.clear();
    }
}
class Request implements Comparable<Request> {
    int pid, timestamp;

    public Request(int pid, int timestamp) {
        this.pid = pid;
        this.timestamp = timestamp;
    }

    @Override
    public int compareTo(Request other) {
        if (this.timestamp == other.timestamp) {
            return Integer.compare(this.pid, other.pid);
        }
        return Integer.compare(this.timestamp, other.timestamp);
    }
}
public class LamportsMutualExclusion {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        ArrayList<Process> processes = new ArrayList<>();
        System.out.println("Name: Abhay Sharma");
        System.out.println("Roll no.: 10420802721");
        // Create 3 processes
        for (int i = 0; i < 3; i++) {
            processes.add(new Process(i + 1));
        }
        boolean running = true;
        while (running) {
            System.out.println("Choose an option:");
            System.out.println("1. Process 1 request CS\n2. Process 2 request CS\n3. Process 3 request CS\n"
                    + "4. Process 1 exit CS\n5. Process 2 exit CS\n6. Process 3 exit CS\n7. Exit");
            int choice = sc.nextInt();
            switch (choice) {
                case 1 -> processes.get(0).sendRequest(processes);
                case 2 -> processes.get(1).sendRequest(processes);
                case 3 -> processes.get(2).sendRequest(processes);
                case 4 -> processes.get(0).exitCriticalSection(processes);
                case 5 -> processes.get(1).exitCriticalSection(processes);
                case 6 -> processes.get(2).exitCriticalSection(processes);
                case 7 -> running = false;
                default -> System.out.println("Invalid choice.");
            }
            // Try to enter the critical section for each process
            for (Process p : processes) {
                p.enterCriticalSection(processes.size());
            }
        }
        sc.close();
    }
}
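As a sample interaction: with all three processes initially RELEASED, choosing option 1 makes Process 1 broadcast its request (timestamp 1); Processes 2 and 3 update their clocks to 2 and reply immediately, so Process 1 collects both replies and enters the critical section on the same loop iteration.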
PRACTICAL - 5
AIM: To install and configure Hadoop on a Windows operating system for big data processing.
Theory:
Hadoop is an open-source framework that enables distributed storage and processing of large datasets across
clusters of computers using simple programming models. It consists of two main components:
1. Hadoop Distributed File System (HDFS): A distributed file system that provides high throughput access to
application data.
2. YARN (Yet Another Resource Negotiator): A resource management platform responsible for managing
compute resources in clusters and scheduling users' applications.
Hadoop can be installed on a variety of platforms, including Windows, although it is traditionally used on Linux
systems. The installation on Windows involves setting up Java, configuring environment variables, and ensuring
that the necessary components are correctly installed and configured.
Flowchart/Algorithm:
1. Install Java SDK:
- Download the latest version of JDK from Oracle’s website.
- Install JDK and set the JAVA_HOME environment variable in the system properties.
2. Download Hadoop:
- Visit the Apache Hadoop website and download the appropriate version of Hadoop.
- Extract the downloaded archive to a suitable directory on your system.
3. Set Environment Variables:
- Set HADOOP_HOME to the Hadoop installation directory.
- Add %HADOOP_HOME%\bin to the system PATH variable.
4. Configure Hadoop:
- Navigate to the etc/hadoop directory and configure the following files (a minimal sketch of the first two follows after this list):
- core-site.xml: Set the default file system and path.
- hdfs-site.xml: Configure the replication factor and data node directories.
- mapred-site.xml: Set mapreduce.framework.name to yarn so that jobs run on YARN.
- yarn-site.xml: Set resource manager and node manager settings.
5. Format HDFS:
- Use the command hdfs namenode -format to format the Hadoop filesystem.
6. Start Hadoop Services:
- Start the NameNode and DataNode using start-dfs.cmd.
- Start the ResourceManager and NodeManager using start-yarn.cmd.
7. Verification:
- Access the Hadoop web interfaces for HDFS (http://localhost:9870 on Hadoop 3.x) and YARN (http://localhost:8088) to verify the installation and configuration.
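A minimal sketch of the two core configuration files for a single-node setup (the port and the data directories are placeholder assumptions; adjust them to your installation):
core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///C:/hadoop/data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///C:/hadoop/data/datanode</value>
  </property>
</configuration>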
Results: Hadoop successfully installed and configured on
Windows, with all components operational.
Installation and Configuration of Java SDK:
Installation and Configuration of Hadoop:
Expected Outcome Obtained: Yes
PRACTICAL - 6
Aim: Run a simple application on single node Hadoop Cluster.
Course Outcome: CO3
Software Used: Hadoop single node cluster, JAVA SDK.
Theory:
MapReduce is a programming model for processing large data sets with a distributed algorithm on a Hadoop
cluster. It consists of two main functions:
1. Map: Processes input data into key-value pairs.
2. Reduce: Aggregates the intermediate key-value pairs generated by the map function.
In a single-node Hadoop cluster, all Hadoop services run on a single machine, making it ideal for
development and testing. This setup helps developers test their code in a Hadoop environment before
deploying it to a multi-node cluster.
Flowchart/Algorithm/Code:
1. Setup Hadoop Cluster:
- Ensure that Hadoop is correctly installed and configured on the single-node cluster.
2. Write MapReduce Application:
- Implement the Mapper and Reducer classes. For example, a WordCount application would count
the occurrences of each word in a text file.
- The Mapper class processes input lines into words and outputs key-value pairs where the key is
the word, and the value is the count (initially 1).
- The Reducer class aggregates the counts for each word.
3. Prepare Input Data:
- Create an input file (e.g., a text file) containing the data to be processed.
- Use the hdfs dfs -put command to upload the input file to HDFS.
4. Configure Job:
- Define the job configuration in a Driver class, specifying the input and output paths, the Mapper
and Reducer classes, and other necessary configurations.
5. Run MapReduce Job:
- Use the hadoop jar command to run the job, specifying the JAR file containing the compiled
classes and the job configuration.
- Monitor the job progress via the Hadoop web interface or console output.
6. Retrieve Output:
- Once the job is complete, use the hdfs dfs -get command to retrieve the output from HDFS.
- Analyze the output data to verify the correctness of the results.
Code:
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: splits each input line into words and emits (word, 1) pairs
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sums the counts for each word
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
CMD Code:
Change directory:-
cd C:\Users\vibhutigupta\Desktop\DSCC\Program_6
Compile the application:-
javac -classpath "%HADOOP_HOME%\share\hadoop\common\hadoop-common-*.jar;%HADOOP_HOME%\share\hadoop\mapreduce\hadoop-mapreduce-client-core-*.jar" -d . WordCount.java
Create a jar file out of all the .class files:-
jar -cvf wordcount.jar -C . .
Putting input.txt in HDFS:-
hdfs namenode -format
start-dfs.cmd
start-yarn.cmd
hadoop fs -mkdir /input
hadoop fs -put input.txt /input
Running the application:-
hadoop jar wordcount.jar WordCount /input/input.txt /output
Fetching the output:-
hadoop fs -get /output C:\Users\yatin\Desktop\DSCC\Program_6
Results:
This simple WordCount application demonstrates the basic functionality of a Hadoop cluster
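For illustration (the actual files used are shown below), an input file containing the line "hello world hello" would produce the output "hello 2" and "world 1".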
INPUT:
OUTPUT:
PRACTICAL - 7
Aim: To develop a simple web application and deploy it on Google App Engine.
Theory:
Google App Engine (GAE) is a fully managed Platform as a Service (PaaS) that allows developers to build and
deploy web applications on Google's infrastructure. It supports various programming languages and
frameworks, offering services like automatic scaling, load balancing, and security. GAE abstracts
infrastructure management, allowing developers to focus on code.
A simple web application typically consists of a front-end (user interface) and a back-end (server-side logic).
GAE provides tools to develop, test, and deploy these applications, with support for integrated services such
as databases, caching, and authentication.
Flowchart/Algorithm/Code:
1. Create a Google Cloud Account:
- Sign up for Google Cloud Platform (GCP) and create a new project.
2. Install Google Cloud SDK:
- Download and install the Google Cloud SDK, which includes the Google App Engine SDK.
- Initialize the SDK with your GCP project using gcloud init.
3. Develop the Web Application:
- Choose a programming language (e.g., Java, Python, Go) supported by GAE.
- Create a simple web application, such as a 'Hello, World!' app, using the chosen language.
- Structure the application with appropriate folders and files for front-end and back-end code.
4. Define Application Configuration:
- Create an app.yaml configuration file that specifies the runtime environment, handlers, and
other settings.
- Define routing rules, static file serving, and environment variables.
5. Local Testing:
- Use the dev_appserver.py command to run the application locally and test its functionality.
- Debug and fix any issues encountered during local testing.
6. Deployment:
- Deploy the application to GAE using the gcloud app deploy command.
- Specify the configuration file and monitor the deployment process.
7. Access and Testing:
- Access the deployed application via the provided URL.
- Test the application in the live environment to ensure it behaves as expected.
8. Monitoring and Maintenance:
- Use the Google Cloud Console to monitor the application's performance, view logs, and
manage resources.
- Update the application as needed, deploying new versions seamlessly.
Code:
CMD Code:
gcloud init
gcloud components install app-engine-java
mkdir my-java-web-app
cd my-java-web-app
In the directory create two files, index.html and app.yaml (app.yaml below serves the page as a static file, so plain HTML is used).
index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>My Java Web App</title>
</head>
<body>
<h1>Welcome to My Java Web App on Google App Engine!</h1>
<p>This is a static website served using Google App Engine with Java.</p>
</body>
</html>
app.yaml
runtime: java11 # Specify the Java runtime
handlers:
- url: /
  static_files: index.html
  upload: index.html
- url: /(.*)
  static_files: \1
  upload: (.*)
CMD Code:
gcloud config set project my_program-110232
gcloud app deploy
gcloud app browse
Results:
The web application is successfully developed, tested locally, and deployed on Google App Engine, making it
accessible via the cloud.
PRACTICAL - 8
Aim: To launch a web application on Google App Engine and verify that it is accessible online.
Theory:
Launching a web application involves deploying it to a server or cloud platform where it can be accessed by
users over the internet. Google App Engine (GAE) simplifies this process by providing a managed
environment with automatic scaling, built-in security, and integrated services.
A successful launch involves ensuring that the application is properly configured, tested, and deployed, with
considerations for monitoring, performance optimization, and user experience. GAE's tools and services
facilitate these processes, offering a robust platform for web applications.
Flowchart/Algorithm/Code:
1. Develop or Use an Existing Web Application:
- Develop a web application or use an existing one.
- Ensure the application includes a front-end (UI) and back-end (server logic).
Code:
CMD Code:
gcloud init
gcloud app create --region=asia-south1
mkdir my-java-web-app
cd my-java-web-app
index.jsp
<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Hello from JSP</title>
</head>
<body>
<h1>Hello, World from Google App Engine!</h1>
<p>This is a simple JSP web application.</p>
</body>
</html>
app.yaml
runtime: java11 # Specify Java runtime
entrypoint: java -jar target/my-java-web-app-1.0.jar # Specify entry point if using a JAR file handlers:
- url: /.*
static_files: index.jsp
upload: index.jsp
CMD Code:
gcloud app deploy
gcloud app browse
Results:
The web application is successfully launched on Google App Engine and is accessible online, with all
features functioning correctly.
PRACTICAL - 9
Aim: To install multiple Linux distributions as virtual machines on a Windows host using VirtualBox/VMware Workstation.
Theory:
Virtualization technology allows multiple operating systems to run on a single physical machine by creating
virtual machines (VMs). This is especially useful for testing, development, and educational purposes.
VirtualBox and VMware Workstation are popular virtualization platforms that enable users to create and
manage VMs with different operating systems, including various Linux distributions.
This setup allows users to experiment with different OS configurations, test software in different
environments, and isolate projects for security and stability.
Flowchart/Algorithm/Code:
1. Download VirtualBox/VMware Workstation:
- Visit the official websites and download the installers for VirtualBox or VMware Workstation.
2. Install the Virtualization Software:
- Run the installer and follow the prompts, accepting the default settings.
3. Download Linux ISO Images:
- Download ISO images of the desired Linux distributions (e.g., Ubuntu, Fedora) from their official websites.
4. Create Virtual Machines:
- Create a new VM for each distribution, allocating CPU cores, memory, and disk space.
5. Install the Guest Operating Systems:
- Mount the ISO image, boot the VM, and follow the distribution's installer.
6. Post-Installation Configuration:
- Install additional software and tools as needed.
- Configure network settings to enable internet access and communication with the host machine.
- Set up shared folders for easy file transfer between the host and VMs.
Results:
Multiple Linux distributions successfully installed and running as virtual machines on a Windows host,
providing a versatile environment for testing and development.
Expected Outcome Attained: YES
PRACTICAL - 10
Aim: To simulate a cloud computing scenario using CloudSim and implement a scheduling algorithm.
Course Outcome: CO2
Software Used: Java SDK, CloudSim library.
Theory:
CloudSim is a simulation toolkit that allows modeling and simulation of cloud computing
environments. It provides support for modeling data centers, hosts, virtual machines (VMs),
cloudlets (tasks), and resource provisioning policies. CloudSim enables the testing and evaluation of
various cloud scenarios without the need for a physical cloud infrastructure.
Scheduling algorithms in cloud computing determine how tasks (cloudlets) are assigned to VMs.
These algorithms aim to optimize resource utilization, reduce response time, and balance the load
across available resources. Common scheduling algorithms include First-Come-First-Served (FCFS),
Round Robin (RR), and others.
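For instance, Round Robin can be expressed in CloudSim by binding cloudlets to VMs in rotation before the simulation starts; a minimal sketch, assuming a cloudletList and a vmList have already been created and submitted to the broker as in the code below:
// Round Robin sketch: cloudlet i is bound to VM (i mod number of VMs)
for (int i = 0; i < cloudletList.size(); i++) {
    Vm vm = vmList.get(i % vmList.size());
    broker.bindCloudletToVm(cloudletList.get(i).getCloudletId(), vm.getId());
}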
Flowchart/Algorithm/Code:
1. Setup CloudSim Environment:
- Include the CloudSim library in a Java project.
- Set up the IDE with necessary configurations for Java development.
7. Result Analysis:
- Analyze the simulation results to evaluate the efficiency of the scheduling algorithm.
- Compare different scheduling algorithms to determine the most efficient one for the given
scenario.
Code:
CloudSimExample1.java
package org.cloudbus.cloudsim.examples;
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

// One datacenter with one host, one VM, and one cloudlet; the sections elided
// in the original listing are restored from the standard CloudSimExample1
// structure that the fragments follow.
public class CloudSimExample1 {
    private static List<Cloudlet> cloudletList;
    private static List<Vm> vmlist;

    public static void main(String[] args) {
        Log.printLine("Starting CloudSimExample1...");
        try {
            int num_user = 1; // number of cloud users
            Calendar calendar = Calendar.getInstance();
            boolean trace_flag = false; // do not trace events
            CloudSim.init(num_user, calendar, trace_flag);

            Datacenter datacenter0 = createDatacenter("Datacenter_0");
            DatacenterBroker broker = createBroker();
            int brokerId = broker.getId();

            // Create one VM
            vmlist = new ArrayList<Vm>();
            int vmid = 0;
            int mips = 1000;
            long size = 10000; // image size (MB)
            int ram = 512; // VM memory (MB)
            long bw = 1000;
            int pesNumber = 1;
            String vmm = "Xen";
            Vm vm = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm,
                    new CloudletSchedulerTimeShared());
            vmlist.add(vm);
            broker.submitVmList(vmlist);

            // Create one cloudlet
            cloudletList = new ArrayList<Cloudlet>();
            int id = 0;
            long length = 400000;
            long fileSize = 300;
            long outputSize = 300;
            UtilizationModel utilizationModel = new UtilizationModelFull();
            Cloudlet cloudlet = new Cloudlet(id, length, pesNumber, fileSize, outputSize,
                    utilizationModel, utilizationModel, utilizationModel);
            cloudlet.setUserId(brokerId);
            cloudlet.setVmId(vmid);
            cloudletList.add(cloudlet);
            broker.submitCloudletList(cloudletList);

            CloudSim.startSimulation();
            CloudSim.stopSimulation();

            List<Cloudlet> newList = broker.getCloudletReceivedList();
            printCloudletList(newList);
            Log.printLine("CloudSimExample1 finished!");
        } catch (Exception e) {
            e.printStackTrace();
            Log.printLine("Unwanted errors happen");
        }
    }

    private static Datacenter createDatacenter(String name) {
        // One host with one processing element
        List<Host> hostList = new ArrayList<Host>();
        List<Pe> peList = new ArrayList<Pe>();
        int mips = 1000;
        peList.add(new Pe(0, new PeProvisionerSimple(mips)));
        int hostId = 0;
        int ram = 2048;
        long storage = 1000000;
        int bw = 10000;
        hostList.add(new Host(hostId, new RamProvisionerSimple(ram), new BwProvisionerSimple(bw),
                storage, peList, new VmSchedulerTimeShared(peList)));
        // Datacenter characteristics and cost model
        String arch = "x86";
        String os = "Linux";
        String vmm = "Xen";
        double time_zone = 10.0;
        double cost = 3.0;
        double costPerMem = 0.05;
        double costPerStorage = 0.001;
        double costPerBw = 0.0;
        LinkedList<Storage> storageList = new LinkedList<Storage>();
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                arch, os, vmm, hostList, time_zone, cost, costPerMem, costPerStorage, costPerBw);
        Datacenter datacenter = null;
        try {
            datacenter = new Datacenter(name, characteristics,
                    new VmAllocationPolicySimple(hostList), storageList, 0);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return datacenter;
    }

    private static DatacenterBroker createBroker() {
        DatacenterBroker broker = null;
        try {
            broker = new DatacenterBroker("Broker");
        } catch (Exception e) {
            e.printStackTrace();
        }
        return broker;
    }

    private static void printCloudletList(List<Cloudlet> list) {
        int size = list.size();
        Cloudlet cloudlet;
        String indent = "    ";
        Log.printLine();
        Log.printLine("========== OUTPUT ==========");
        Log.printLine("Cloudlet ID" + indent + "STATUS" + indent + "Data center ID" + indent
                + "VM ID" + indent + "Time" + indent + "Start Time" + indent + "Finish Time");
        DecimalFormat d = new DecimalFormat("###.##");
        for (int i = 0; i < size; i++) {
            cloudlet = list.get(i);
            Log.print(indent + cloudlet.getCloudletId() + indent + indent);
            if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
                Log.print("SUCCESS");
                Log.printLine(indent + indent + cloudlet.getResourceId() + indent + indent + indent
                        + cloudlet.getVmId() + indent + indent + d.format(cloudlet.getActualCPUTime())
                        + indent + indent + d.format(cloudlet.getExecStartTime()) + indent + indent
                        + d.format(cloudlet.getFinishTime()));
            }
        }
    }
}
Simulation.java
package examples.org.cloudbus.cloudsim.examples;
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import java.util.Random;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerSpaceShared;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

// Excerpt from main(): create the broker, run the simulation, and print results
DatacenterBroker broker = createBroker();
int brokerId = broker.getId();
CloudSim.startSimulation();
List<Cloudlet> newList = broker.getCloudletReceivedList();
CloudSim.stopSimulation();
printCloudletList(newList);
Log.printLine("Simulation finished!");
} catch (Exception e) {
    e.printStackTrace();
    Log.printLine("The simulation has been terminated due to an unexpected error");
}
}

// Excerpt from createDatacenter(): host configuration
int hostId = 0;
int ram = 2048;
long storage = 1000000;
int bw = 10000;

// Excerpt from printCloudletList(): format and print the received cloudlets
DecimalFormat d = new DecimalFormat("###.##");
for (int i = 0; i < size; i++) {
    cloudlet = list.get(i);
    Log.print(indent + cloudlet.getCloudletId() + indent + indent);
    if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
        Log.print("SUCCESS");
        Log.printLine(indent + indent + cloudlet.getResourceId() + indent + indent + indent
                + cloudlet.getVmId() + indent + indent + indent
                + d.format(cloudlet.getActualCPUTime()) + indent + indent
                + d.format(cloudlet.getExecStartTime()) + indent + indent + indent
                + d.format(cloudlet.getFinishTime()) + indent + cloudlet.getUserId());
    }
}
}
}
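To compile and run the examples, the CloudSim jar must be on the classpath; a minimal Windows sketch, assuming CloudSim 3.0.3 (the jar name and directory layout are assumptions):
javac -cp cloudsim-3.0.3.jar org\cloudbus\cloudsim\examples\CloudSimExample1.java
java -cp .;cloudsim-3.0.3.jar org.cloudbus.cloudsim.examples.CloudSimExample1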
Results:
Successful simulation of a cloud scenario with cloudlets scheduled on VMs, demonstrating the
efficiency of the implemented scheduling algorithm.