Deep Learning Lab

The document discusses installing and using Python and R on Windows systems. It provides step-by-step instructions on downloading and installing Python and verifying the installation. It also discusses downloading and installing Rtools to compile R packages from source code and verifying its installation. The document also provides overviews of Python and R for data analysis, machine learning, and statistical modeling.

Uploaded by

DSEC-MCA

EX NO :1 Study and usage of Python and Rtools

Python is a widely used high-level programming language. To write and execute Python code,
we first need to install Python on our system.

Installing Python on Windows takes a few easy steps.

Step 1 − Select Version of Python to Install

Python is available in multiple versions, with differences in syntax and behaviour between
them. We need to choose the version we want to use. Both Python 2 and Python 3 have
several releases, and new projects should generally use Python 3.

Step 2 − Download Python Executable Installer

In a web browser, open the official Python site (www.python.org) and move to the Download
for Windows section.

All available versions of Python will be listed. Select the version you require and
click Download. Suppose we choose the Python 3.9.1 version.

On clicking Download, the available executable installers for different operating system
configurations are shown. Choose the installer that matches your operating system and
download it. Suppose we select the Windows installer (64-bit).

The download size is less than 30 MB.


Step 3 − Run Executable Installer

We downloaded the Python 3.9.1 Windows 64-bit installer.

Run the installer. Make sure to select both checkboxes at the bottom (including "Add
Python to PATH") and then click Install Now.

On clicking Install Now, the installation process starts.


The installation process takes a few minutes to complete; once it finishes successfully,
a confirmation screen is displayed.

Step 4 − Verify Python is installed on Windows

To verify that Python was successfully installed on your system, follow these steps −
 Open the command prompt.
 Type ‘python’ and press Enter.
 If Python is installed correctly, the installed version will be displayed.

Step 5 − Verify Pip was installed

Pip is a powerful package management system for Python software packages, so make sure
it is installed.

To verify if pip was installed, follow the given steps −

 Open the command prompt.


 Enter pip -V to check whether pip is installed.
 If pip is installed successfully, its version and location are displayed.

We have now successfully installed Python and pip on our Windows system.
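The same verification can be scripted. As a small sketch using only the standard library (it checks whichever interpreter runs it, not any particular installation path), Python can report its own version and whether pip is importable:

```python
import sys
import importlib.util

# The running interpreter knows its own version; this mirrors
# typing "python" at the command prompt.
version = "{}.{}.{}".format(*sys.version_info[:3])
print("Python", version)

# pip ships with recent Python installers; find_spec returns None
# if the module cannot be imported.
pip_installed = importlib.util.find_spec("pip") is not None
print("pip installed:", pip_installed)
```

Running this from the command prompt should echo the same version number that `python` and `pip -V` report.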

Here are some key points about the study and usage of Python:


 Python is a high-level programming language known for its simplicity and readability.
 Python is beginner-friendly and has a straightforward syntax, making it an excellent
choice for those new to programming.
 Python is a versatile language used for a wide range of applications, including web
development, data analysis, scientific computing, artificial intelligence, and more.
 Python is an interpreted language, meaning you can write and run code line by line,
making it easy for testing and debugging.
 Python has a large and active community, and there are many libraries and
frameworks available, which means you can leverage existing code for your projects.
 Python is open source, which means it's free to use, and you can access its source
code.
 Python is the go-to language for data science and machine learning, with libraries like
NumPy, pandas, scikit-learn, and TensorFlow.
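As a small illustration of the readability these points describe, a few lines of standard-library Python already handle basic descriptive statistics (the sales figures below are made up for illustration):

```python
import statistics

# Monthly sales figures (illustrative numbers only)
sales = [120, 135, 150, 110, 160, 145]

print("mean:", statistics.mean(sales))      # arithmetic mean
print("median:", statistics.median(sales))  # middle value of the sorted data
print("stdev:", round(statistics.stdev(sales), 2))  # sample standard deviation
```

Libraries such as NumPy and pandas extend the same idea to whole tables of data.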

Using Rtools on Windows


For R versions 4.0.0 to 4.1.3, R for Windows uses a toolchain bundle called Rtools
4.0. This version of Rtools is based on msys2, which makes it easier to build and maintain R
itself, as well as the system libraries needed by R packages on Windows. The latest builds of
Rtools 4.0 contain three toolchains:

 C:\rtools40\mingw32: the 32-bit gcc-8.3.0 toolchain for R 4.0 - 4.1

 C:\rtools40\mingw64: the 64-bit gcc-8.3.0 toolchain for R 4.0 - 4.1

 C:\rtools40\ucrt64: a 64-bit gcc-10.3.0 UCRT toolchain (note: the officially supported
toolchain for R >= 4.2.0 is available as RTools 4.2)

The msys2 documentation gives an overview of the supported environments in msys2 and a
comparison of MSVCRT and UCRT. The main difference between upstream msys2 and
rtools4 is that our toolchains and libraries are configured for static linking, whereas upstream
msys2 prefers dynamic linking. The references at the bottom of this document contain more
information.

Rtools 4.0 has been maintained by Jeroen Ooms. Older editions were put together by
Prof. Brian Ripley and Duncan Murdoch.

Installing Rtools

Note that Rtools is only needed to build R packages with C/C++/Fortran code from source. By
default, R for Windows installs the precompiled “binary packages” from CRAN, for which
you do not need Rtools.

To use rtools, download the installer from CRAN:

 On Windows 64-bit: rtools40-x86_64.exe (includes both i386 and x64 compilers).
Permanent url: rtools40-x86_64.exe.

 On Windows 32-bit: rtools40-i686.exe (i386 compilers only).
Permanent url: rtools40-i686.exe.

Note for RStudio users: you need at least RStudio version 1.2.5042 to work with rtools4.

Putting Rtools on the PATH

After installation is complete, one more step is needed before you can compile R
packages: putting the location of the Rtools make utilities (bash, make, etc.) on the PATH. The
easiest way to do so is to create a text file .Renviron in your Documents folder which
contains the following line:

PATH="${RTOOLS40_HOME}\usr\bin;${PATH}"

You can do this with a text editor, or from R like so (note that in R code you need to escape
backslashes):

write('PATH="${RTOOLS40_HOME}\\usr\\bin;${PATH}"', file = "~/.Renviron", append = TRUE)

Restart R, and verify that make can be found, which should show the path to your Rtools
installation.

Sys.which("make")
## "C:\\rtools40\\usr\\bin\\make.exe"

Now try to install an R package from source:


install.packages("jsonlite", type = "source")

If this succeeds, you’re good to go! See the links below to learn more about rtools4 and the
Windows build infrastructure.

Here are some key points about the study and usage of R, commonly referred to
as the R language or R tool:
 Statistical and Data Analysis: R is a programming language and environment for
statistical computing and data analysis.
 Open Source: R is open source and freely available, which means you can download
and use it without cost.
 Rich Ecosystem: R has a rich ecosystem of packages and libraries for various data
analysis and visualization tasks.
 Data Visualization: R is known for its powerful data visualization capabilities, with
packages like ggplot2 for creating highly customizable graphs and charts.
 Data Manipulation: R provides tools for data manipulation and transformation,
making it suitable for cleaning and preparing data.
 Statistics and Machine Learning: R is widely used for statistical analysis and machine
learning tasks, with packages like stats and caret.
 Statistical Modeling: R supports various statistical modeling techniques, including
linear and nonlinear modeling, time-series analysis, and more.
 R Markdown: R Markdown allows you to create dynamic documents that combine R
code, visualizations, and narrative text.
 Community Support: R has an active and supportive user community, with forums and
resources for learning and problem-solving.
 Cross-Platform Compatibility: R runs on multiple platforms, including Windows,
macOS, and Linux.
EX NO :2 Implement a classifier for the sales data.

AIM:

To implement a classifier for the sales data. Read the training data from a .CSV file.

PROCEDURE:

Step 1: Download R and Rtools from CRAN and follow the installation instructions. Rtools

is typically needed only for building and installing R packages that include compiled

code.

Step 2: Set up your working directory: create a folder where you will store your R scripts

and data files.

Step 3: Write your R script in a new script file.

Step 4: Run your script from the R console, using the source() function.

Step 5: Review the output in the R console or RStudio console.
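The program below computes a correlation matrix with R's cor(). For intuition, here is a hedged pure-Python sketch of the same Pearson correlation coefficient (standard library only; the temperature and sales numbers are illustrative, not from the dataset):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative data: sales rise with temperature, so r should be near +1
temperature = [18, 20, 23, 25, 30]
sales = [200, 240, 300, 330, 420]
print(round(pearson_r(temperature, sales), 3))
```

R's cor() applies the same formula to every pair of numeric columns at once.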
PROGRAM:

# Load the data


icecream <- read.csv("http://bit.ly/2OMHgFi")

# Attach objects in the database; can be accessed by simply giving their names
attach(icecream)

# View the data and see the structure of this dataframe


icecream
str(icecream)

# Checking for null values per each variable


sum(is.na(icecream$icecream_sales))
sum(is.na(icecream$income))
sum(is.na(icecream$price))
sum(is.na(icecream$temperature))
sum(is.na(icecream$country))
sum(is.na(icecream$season))

# All packages that are used on this study have been installed; eg install.packages("dplyr") for
dplyr package to investigate data
# Load packages dplyr and ggplot2 for investigating and visualising the data respectively
library(dplyr)
library(ggplot2)

# Just glimpse of how the data looks like and check the classes of the variables
glimpse(icecream)

class(icecream$icecream_sales)
class(icecream$price)
class(icecream$income)
class(icecream$temperature)
class(icecream$season)
class(icecream$country)

# Summarise variables in this dataset


summary(icecream)

# Summary of descriptive statistics of sales by country


icecream %>%
group_by(country) %>%
summarise(count=n(),
mu = mean(icecream_sales), pop_med = median(icecream_sales),
sigma = sd(icecream_sales), pop_iqr = IQR(icecream_sales),
pop_min = min(icecream_sales), pop_max = max(icecream_sales),
pop_q1 = quantile(icecream_sales, 0.25), # first quartile, 25th percentile
pop_q3 = quantile(icecream_sales, 0.75)) # third quartile, 75th percentile

#Summary of descriptive statistics of sales grouped by season


icecream %>%
group_by(season) %>%
summarise(count=n(),
mu = mean(icecream_sales), pop_med = median(icecream_sales),
sigma = sd(icecream_sales), pop_iqr = IQR(icecream_sales),
pop_min = min(icecream_sales), pop_max = max(icecream_sales),
pop_q1 = quantile(icecream_sales, 0.25), # first quartile, 25th percentile
pop_q3 = quantile(icecream_sales, 0.75)) # third quartile, 75th percentile

# Scatter plots for all combinations of variables


pairs(icecream)

#Analysis for the sales; Density plot of sales, box plot of sales per country, box plot of sales
per season
ggplot(icecream, aes(x=icecream_sales)) + geom_density() + xlab("Ice cream Sales") +
ylab("Density") + ggtitle("Density plot of Ice cream Sales")
ggplot(data = icecream, aes(x = country, y = icecream_sales)) + geom_boxplot() +
xlab("Country") + ylab("Ice cream sales") + ggtitle("Box plot of ice cream sales")
ggplot(data = icecream, aes(x = season, y = icecream_sales)) + geom_boxplot() +
xlab("Season") + ylab("Ice cream sales") + ggtitle("Box plot of ice cream sales")

#A histogram of the ice cream sales instead of density plot of Sales


hist(icecream$icecream_sales)

#Analysis for the Income; Density plot of Income, box plot of Income per country, box plot
of Income per season
ggplot(icecream, aes(x=income)) + geom_density() + xlab("Income") + ylab("Density") +
ggtitle("Density plot of Income")
ggplot(data = icecream, aes(x = country, y = income)) + geom_boxplot() + xlab("Season") +
ylab("Income") + ggtitle("Box plot of Income per Season")
ggplot(data = icecream, aes(x = season, y = income)) + geom_boxplot() + xlab("Season") +
ylab("Income") + ggtitle("Box plot of Income per Season")

#Analysis for the Price; Density plot of Price, box plot of Price per country
ggplot(icecream, aes(x=price)) + geom_density() + xlab("Price") + ylab("Density") +
ggtitle("Density plot of Price")
ggplot(data = icecream, aes(x = country, y = price)) + geom_boxplot() + xlab("Country") +
ylab("Price") + ggtitle("Box plot of Price")

#Analysis of Temperature; box plot of temperature per country


ggplot(data = icecream, aes(x = country, y = temperature)) + geom_boxplot() +
xlab("Country") + ylab("Temperature") + ggtitle("Box plot of Temperature")

#Size of observations in each category of country variable


by(icecream$icecream_sales, icecream$country, length)
#Hypothesis test
#The box plots show how the medians of the two distributions compare,
#but we can also compare the means of the distributions, which the last line of the
#code below does by adding the means to the boxplot using stat_summary().
#We see some difference in means and test whether this difference is statistically significant.

ggplot(data = na.omit(icecream),
       aes(x = country, y = icecream_sales, colour = country)) +
  geom_boxplot() + xlab("Country") +
  ylab("Ice cream sales") +
  ggtitle("Box plot of ice cream sales") +
  stat_summary(fun = mean, colour = "darkred", geom = "point", shape = 1, size = 3)

#Collection of a simple random sample of size 100 from the icecream dataset, which is
assigned to samp1
sales <- icecream$icecream_sales
samp1 <- sample(sales, 100)
mean(samp1) # mean of the sample distribution for sales
glimpse(samp1)
hist(samp1)

# For use of inference(), I have installed "statsr" package with the following command;
install.packages("statsr")
library(statsr)
inference(y= icecream_sales, x = country, data = icecream,
statistic = c("mean"),
type = c("ht"),
null = 0,
alternative = c("twosided"), method = c("theoretical"), conf_level = 0.95,
order = c("A","B"))

#Computation of the correlation coefficient between pairs of our numerical variables, this
returns the correlation matrix
icecream %>%
select(icecream_sales, income, price, temperature) %>%
cor() %>%
knitr::kable(
digits = 3,
caption = "Correlations between icecream_sales, income, price and temperature", booktabs
= TRUE
)

#Visualize the association between the outcome variable with each of the explanatory
variables
library(ggplot2)
icecream <- read.csv("http://bit.ly/2OMHgFi")
p1 <- ggplot(icecream, aes(x = income, y = icecream_sales)) +
geom_point() +
labs(x = "Income (in £)", y = "Ice cream sales (in £)", title = "Relationship between ice
cream sales and income") +
geom_smooth(method = "lm", se = FALSE)
p2 <- ggplot(icecream, aes(x = price, y = icecream_sales)) +
geom_point() +
labs(x = "Price (in £)", y = "Ice cream sales (in £)", title = "Relationship between ice cream
sales and price") +
geom_smooth(method = "lm", se = FALSE)
p3 <- ggplot(icecream, aes(x = temperature, y = icecream_sales)) +
geom_point() +
labs(x = "Temperature (in Celsius °C)", y = "Ice cream sales (in £)", title = "Relationship
between ice cream sales and temperature") +
geom_smooth(method = "lm", se = FALSE)
library(gridExtra)
grid.arrange(p1, p2, p3)

# Scatter plot; relationship between temperature and ice cream sales by country, adding x, y
axis labels, title and a different regression line for each country
ggplot(data = icecream, aes(x = temperature, y = icecream_sales, colour = country)) +
geom_point() +
xlab("Temperature (in °C)") + ylab("Sales of ice cream (in £)") +
ggtitle("Ice cream Sales vs Temperature by Country") +
geom_smooth(method = "lm", se = FALSE)

# Scatter plot; relationship between income and ice cream sales by country, adding x, y axis
labels, title and a different regression line for each country
ggplot(data = icecream, aes(x = income, y = icecream_sales, colour = country)) +
geom_point() +
xlab("Income (in £)") + ylab("Sales of ice cream (£)") +
ggtitle("Ice cream Sales vs Income by Country") +
geom_smooth(method = "lm", se = FALSE)

# Scatter plot; relationship between price and ice cream sales by country, adding x, y axis
labels, title and a different regression line for each country
ggplot(data = icecream, aes(x = price, y = icecream_sales, colour = country)) +
  geom_point() +
  xlab("Price (in £)") + ylab("Sales of ice cream (in £)") +
  ggtitle("Ice cream Sales vs Price by Country") +
  geom_smooth(method = "lm", se = FALSE)

#Fit the model; multiple regression model


icecream <- read.csv("http://bit.ly/2OMHgFi")
Sales_model <- lm(icecream_sales ~ income + price + temperature + country + season, data
= icecream)
summary(Sales_model)

#Computation of the t-critical value


qt(0.025, df=992)

#Confidence intervals of coefficients on explanatory variables at a 90% confidence level


confint(Sales_model, level = 0.90)
# Model understanding
country_a_20 <- data.frame(income = 20000, price = 3, temperature = 20, country = "A",
season = "Winter")
predict(Sales_model, country_a_20, interval = "prediction", level = 0.95)
country_b_30 <- data.frame(income = 30000, price = 3, temperature = 20, country = "B",
season = "Winter")
predict(Sales_model, country_b_30, interval = "prediction", level = 0.95)

country_ice_temp1 <- data.frame(income = 30000, price = 3, temperature = 20, country = "B", season = "Winter")
predict(Sales_model, country_ice_temp1, interval = "prediction", level = 0.95)
country_ice_temp2 <- data.frame(income = 30000, price = 3.5, temperature = 22, country =
"B", season = "Winter")
predict(Sales_model, country_ice_temp2, interval = "prediction", level = 0.95)

#Testing conditions, first condition is linearity


par(mfrow=c(1,2))
plot(Sales_model$residuals ~ icecream$income)
plot(Sales_model$residuals ~ icecream$price)
plot(Sales_model$residuals ~ icecream$temperature)

#Second condition; nearly normally distributed error terms


hist(Sales_model$residuals)
qqnorm(Sales_model$residuals)
qqline(Sales_model$residuals)

#Constant variability of residuals


plot(Sales_model$residuals ~ Sales_model$fitted)

#Independent residuals
plot(Sales_model$residuals)

# Prediction
pred <- data.frame(income = 30000, price = 3, temperature = 23, country = "A", season =
"Spring")
predict(Sales_model, pred, interval = "prediction", level = 0.95)
OUTPUT:

RESULT:
Thus, the program to implement a classifier for the sales data was successfully
executed and the output was verified.
EX NO :3 Develop a predictive model for predicting house prices.

AIM:

To develop a predictive model for predicting house prices

PROCEDURE:

Step 1: Download R and Rtools from CRAN and follow the installation instructions. Rtools

is typically needed only for building and installing R packages that include compiled

code.

Step 2: Set up your working directory: create a folder where you will store your R scripts

and data files.

Step 3: Write your R script in a new script file.

Step 4: Run your script from the R console, using the source() function.

Step 5: Review the output in the R console or RStudio console.
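The program below fits the model with R's lm(). As a minimal illustration of what least-squares fitting does, here is a hedged pure-Python sketch of one-variable linear regression using the closed-form slope and intercept (the house sizes and prices are invented for illustration):

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Closed-form solution: slope = cov(x, y) / var(x)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return slope, intercept

# Illustrative "house" data: size (sq m) vs price (in thousands)
sizes = [50, 70, 90, 110, 130]
prices = [150, 200, 260, 310, 370]
a, b = fit_line(sizes, prices)
print("slope:", round(a, 3), "intercept:", round(b, 3))
print("predicted price for 100 sq m:", round(a * 100 + b, 1))
```

lm() generalizes this to several explanatory variables at once, which is what the R program relies on.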
PROGRAM:

# Load necessary libraries


library(ggplot2)
library(dplyr)
# Load the dataset (e.g., mtcars dataset)
data("mtcars")
house_data <- as.data.frame(mtcars)
# Build a linear regression model
model <- lm(mpg ~ hp + wt + qsec, data = house_data)
# Make predictions on the dataset
predictions <- predict(model, newdata = house_data)
# Define price ranges (customize as needed)
# Note: breaks must be strictly increasing; the upper bound is open-ended
price_ranges <- cut(predictions, breaks = c(0, 15, 25, 35, Inf), labels = c("Very Low", "Low",
"Medium", "High"))

# Count the number of houses in each price range


price_counts <- table(price_ranges)

# Create a pie chart


pie(price_counts, main = "House Price Distribution", labels = c("Very Low", "Low",
"Medium", "High"))
OUTPUT:

RESULT:
Thus, the program to develop a predictive model for predicting house prices was
successfully executed and the output was verified.
EX NO :4 Implement the FIND-S algorithm. Verify that it successfully
produces the trace for the EnjoySport example.

AIM:
Implement and demonstrate the FIND-S algorithm for finding the most specific
hypothesis based on a given set of training data samples. Read the training data from
a .CSV file.

PROCEDURE:

Step 1: Install Jupyter Notebook by running one of the following commands in
your command prompt or terminal:

 Using pip: pip install jupyter

 Using conda (if you use Anaconda): conda install jupyter

Step 2: Launch Jupyter Notebook by running the command: jupyter notebook

Step 3: Your web browser will open, displaying the Jupyter Notebook dashboard.

Step 4: In the Jupyter Notebook dashboard, click on the folder
containing your .ipynb file.

Step 5: Click on the .ipynb file you want to run. It will open in a new
tab.

Step 6: Notebooks consist of cells. Click on a code cell to select it,
then press Shift + Enter on your keyboard or click the "Run" button in the toolbar.

Step 7: When you're done, close the Jupyter Notebook tab
in your web browser.

Step 8: To shut down the Jupyter Notebook server, go back to the command prompt or
terminal where you started Jupyter Notebook and press Ctrl + C.
PROGRAM:

FindS.ipynb

import csv

hypo = []
data = []
with open('SP.csv') as csv_file:
    fd = csv.reader(csv_file)
    print("\nThe given training examples are:")
    for line in fd:
        print(line)
        if line[-1] == "Yes":
            data.append(line)

print("\nThe positive examples are: playing sports")
for x in data:
    print(x)

row = len(data)
col = len(data[0])
for j in range(col):
    hypo.append(data[0][j])

for i in range(row):
    for j in range(col):
        if hypo[j] != data[i][j]:
            hypo[j] = '?'

print("\nThe maximally specific Find-S hypothesis for the given training examples is")
print(hypo)
OUTPUT:
The given training examples are:
['Sky', 'Airtemp', 'Humidity', 'Wind', 'Water', 'Forecast', 'WaterSport']
['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same', 'Yes']
['Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same', 'Yes']
['Cloudy', 'Cold', 'High', 'Strong', 'Warm', 'Change', 'No']
['Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change', 'Yes']

The positive examples are: playing sports


['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same', 'Yes']
['Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same', 'Yes']
['Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change', 'Yes']

The maximally specific Find-s hypothesis for the given training examples is
['Sunny', 'Warm', '?', 'Strong', '?', '?', 'Yes']

RESULT:
Thus, the program to implement and demonstrate the FIND-S algorithm for finding
the most specific hypothesis was successfully executed and the output was verified.

EX NO :5 Implement a decision tree algorithm for sales
prediction/classification in the retail sector

AIM:

To implement a decision tree algorithm for sales prediction/classification in the retail sector.

PROCEDURE:

Step 1: Download R and Rtools from CRAN and follow the installation instructions. Rtools

is typically needed only for building and installing R packages that include compiled

code.

Step 2: Set up your working directory: create a folder where you will store your R scripts

and data files.

Step 3: Write your R script in a new script file.

Step 4: Run your script from the R console, using the source() function.

Step 5: Review the output in the R console or RStudio console.
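The program below builds the tree with rpart. To show the core idea of a regression tree, here is a hedged pure-Python sketch of a single split (a "decision stump") that picks the threshold minimizing squared error; the horsepower and mpg values are illustrative, not taken from mtcars:

```python
def best_split(xs, ys):
    """Find the threshold on xs that minimizes total squared error
    when ys is predicted by the mean on each side of the split."""
    def sse(vals):
        # Sum of squared errors around the mean of vals
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best_err, best_t = float("inf"), None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        err = sse(left) + sse(right)
        if err < best_err:
            best_err, best_t = err, t
    return best_t

# Illustrative: low horsepower -> high mpg, high horsepower -> low mpg
hp = [60, 70, 80, 150, 160, 170]
mpg = [32, 30, 31, 16, 15, 14]
print("split at hp <=", best_split(hp, mpg))
```

rpart applies this kind of search recursively, over every feature, to grow the full tree.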
PROGRAM:

# Install and load necessary libraries


install.packages("rpart")

install.packages("rpart.plot")
library(rpart)
library(rpart.plot)

# Load the mtcars dataset


data(mtcars)

# Assume 'mpg' as the target variable (sales prediction) and 'cyl', 'hp', and 'wt' as features
# You should replace these variables with your actual dataset columns
target_variable <- "mpg"
feature_variables <- c("cyl", "hp", "wt")

# Create the decision tree model


tree_model <- rpart(formula(paste(target_variable, "~", paste(feature_variables, collapse = " + "))), data = mtcars)

# Visualize the decision tree


rpart.plot(tree_model, type = 2, extra = 1, under = TRUE, cex = 0.8)

# Make predictions (you can replace newdata with your test data)
# Note: type = "class" applies only to classification trees; mpg is numeric,
# so this regression tree returns numeric predictions
predictions <- predict(tree_model, newdata = mtcars)

# Display the predictions


cat("Predictions:\n", predictions)
# Display the decision tree rules (text representation)
summary(tree_model)

OUTPUT:

RESULT:
Thus, the program to implement a decision tree algorithm for sales prediction/
classification in the retail sector was successfully executed and the output was verified.
EX NO :6 Implement back propagation algorithm for stock prices
prediction

AIM:

To implement the back propagation algorithm for stock price prediction.

PROCEDURE:

Step 1: Download R and Rtools from CRAN and follow the installation instructions. Rtools

is typically needed only for building and installing R packages that include compiled

code.

Step 2: Set up your working directory: create a folder where you will store your R scripts

and data files.

Step 3: Write your R script in a new script file.

Step 4: Run your script from the R console, using the source() function.

Step 5: Review the output in the R console or RStudio console.
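The program below delegates backpropagation to the neuralnet package. As a minimal, hedged illustration of the algorithm itself, here is a pure-Python sketch that trains one linear neuron by gradient descent on squared error (the data, learning rate, and iteration count are arbitrary choices, not taken from the experiment):

```python
# One "neuron" y_hat = w*x + b, trained by gradient descent on mean squared error.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]   # normalized "dates" (illustrative)
ys = [0.1, 0.3, 0.5, 0.7, 0.9]     # normalized "prices" lying on y = 0.8x + 0.1

w, b = 0.0, 0.0
lr = 0.5                            # learning rate (arbitrary choice)
for _ in range(2000):
    # Forward pass per point, then backward pass:
    # gradients of mean squared error with respect to w and b
    grad_w = grad_b = 0.0
    for x, y in zip(xs, ys):
        err = (w * x + b) - y
        grad_w += 2 * err * x / len(xs)
        grad_b += 2 * err / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print("learned w, b:", round(w, 3), round(b, 3))
```

A multi-layer network repeats the same error-gradient step, propagating gradients backwards through each layer; that is what neuralnet automates.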
PROGRAM:

# Install and load required packages


install.packages("quantmod")
install.packages("neuralnet")
library(quantmod)
library(neuralnet)

# Define the stock symbol and start/end dates


stock_symbol <- "AAPL" # Apple Inc. stock symbol
start_date <- "2020-01-01"
end_date <- "2023-01-01"

# Fetch stock price data from Yahoo Finance


getSymbols(stock_symbol, from = start_date, to = end_date)

# Extract the adjusted closing prices


stock_data <- Ad(get(stock_symbol))

# Create a data frame with Date and Price columns


data <- data.frame(Date = index(stock_data), Price = as.numeric(stock_data))

# Save the original scale, then normalize the stock prices to the range [0, 1]
price_min <- min(data$Price)
price_max <- max(data$Price)
data$Price <- (data$Price - price_min) / (price_max - price_min)

# neuralnet cannot use a Date column directly, so add a normalized numeric time index
data$Time <- as.numeric(data$Date)
data$Time <- (data$Time - min(data$Time)) / (max(data$Time) - min(data$Time))

# Define the neural network model
model <- neuralnet(Price ~ Time, data = data, hidden = c(5, 5, 5), linear.output = TRUE)

# Generate predictions
predictions <- predict(model, data)

# Denormalize the actual and predicted prices back to the original scale
actual <- data$Price * (price_max - price_min) + price_min
predictions <- predictions * (price_max - price_min) + price_min

# Plot the actual and predicted stock prices
plot(data$Date, actual, type = "l", col = "blue", xlab = "Date", ylab = "Price")
lines(data$Date, predictions, col = "red")
legend("topright", legend = c("Actual", "Predicted"), col = c("blue", "red"), lty = 1)
OUTPUT:

RESULT:
Thus, the program to implement the back propagation algorithm for stock price
prediction was successfully executed and the output was verified.

EX.NO: 7 Implement clustering algorithm for Insurance fraud

AIM:

To implement a clustering algorithm for insurance fraud.

PROCEDURE:

Step 1: Install Jupyter Notebook by running one of the following commands in
your command prompt or terminal:

 Using pip: pip install jupyter

 Using conda (if you use Anaconda): conda install jupyter

Step 2: Launch Jupyter Notebook by running the command: jupyter notebook

Step 3: Your web browser will open, displaying the Jupyter Notebook dashboard.

Step 4: In the Jupyter Notebook dashboard, click on the folder
containing your .ipynb file.

Step 5: Click on the .ipynb file you want to run. It will open in a new
tab.

Step 6: Notebooks consist of cells. Click on a code cell to select it,
then press Shift + Enter on your keyboard or click the "Run" button in the toolbar.

Step 7: When you're done, close the Jupyter Notebook tab
in your web browser.

Step 8: To shut down the Jupyter Notebook server, go back to the command prompt or
terminal where you started Jupyter Notebook and press Ctrl + C.
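The program below uses scikit-learn's KMeans. The underlying loop (assign each point to its nearest centroid, then recompute each centroid as the mean of its cluster) can be sketched in a few lines of pure Python; this one-dimensional version with fixed starting centroids is purely illustrative:

```python
def kmeans_1d(points, centroids, iters=10):
    """Tiny 1-D k-means: alternate assignment and centroid-update steps."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # Assignment step: index of the nearest centroid
            idx = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Update step: centroid = mean of its cluster (keep old value if empty)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two obvious groups, around 1 and around 10
data = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
centers = kmeans_1d(data, centroids=[0.0, 5.0])
print(centers)
```

scikit-learn adds smarter initialization (k-means++), multiple restarts, and support for many dimensions, but the iteration is the same.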
PROGRAM:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer

# Load the dataset


data = load_breast_cancer()
X = data.data
feature_names = data.feature_names
df = pd.DataFrame(X, columns=feature_names)

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

n_clusters = 2

kmeans = KMeans(n_clusters=n_clusters, random_state=42)


df['Cluster'] = kmeans.fit_predict(X_scaled)

from sklearn.decomposition import PCA

pca = PCA(n_components=2)
X_pca = pca.fit_transform(X_scaled)
df['PCA1'] = X_pca[:, 0]
df['PCA2'] = X_pca[:, 1]

plt.scatter(df[df['Cluster'] == 0]['PCA1'], df[df['Cluster'] == 0]['PCA2'], label='Cluster 0')


plt.scatter(df[df['Cluster'] == 1]['PCA1'], df[df['Cluster'] == 1]['PCA2'], label='Cluster 1')
plt.legend()
plt.xlabel('PCA1')
plt.ylabel('PCA2')
plt.title('K-Means Clustering')
plt.show()
OUTPUT:

RESULT:
Thus, the program to implement a clustering algorithm for insurance fraud was
successfully executed and the output was verified.
EX.NO:8 Implement clustering algorithm for identifying cancerous data

AIM:

To implement a clustering algorithm for identifying cancerous data. Read the training data.

PROCEDURE:

Step 1: Install Jupyter Notebook by running one of the following commands in
your command prompt or terminal:

 Using pip: pip install jupyter

 Using conda (if you use Anaconda): conda install jupyter

Step 2: Launch Jupyter Notebook by running the command: jupyter notebook

Step 3: Your web browser will open, displaying the Jupyter Notebook dashboard.

Step 4: In the Jupyter Notebook dashboard, click on the folder
containing your .ipynb file.

Step 5: Click on the .ipynb file you want to run. It will open in a new
tab.

Step 6: Notebooks consist of cells. Click on a code cell to select it,
then press Shift + Enter on your keyboard or click the "Run" button in the toolbar.

Step 7: When you're done, close the Jupyter Notebook tab
in your web browser.

Step 8: To shut down the Jupyter Notebook server, go back to the command prompt or
terminal where you started Jupyter Notebook and press Ctrl + C.
PROGRAM:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer

# Load the dataset


data = load_breast_cancer()
X = data.data
feature_names = data.feature_names
df = pd.DataFrame(X, columns=feature_names)

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

n_clusters = 2

kmeans = KMeans(n_clusters=n_clusters, random_state=42)


df['Cluster'] = kmeans.fit_predict(X_scaled)

from sklearn.decomposition import PCA

pca = PCA(n_components=2)
X_pca = pca.fit_transform(X_scaled)
df['PCA1'] = X_pca[:, 0]
df['PCA2'] = X_pca[:, 1]
plt.scatter(df[df['Cluster'] == 0]['PCA1'], df[df['Cluster'] == 0]['PCA2'], label='Cluster 0')
plt.scatter(df[df['Cluster'] == 1]['PCA1'], df[df['Cluster'] == 1]['PCA2'], label='Cluster 1')
plt.legend()
plt.xlabel('PCA1')
plt.ylabel('PCA2')
plt.title('K-Means Clustering')
plt.show()
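Since load_breast_cancer also ships the ground-truth benign/malignant labels, the clustering above can be sanity-checked against them. The snippet below is an optional addition, not part of the exercise as written: it recomputes the two clusters and reports the adjusted Rand index against the true diagnosis labels plus the silhouette score.

```python
# Optional check: compare the two K-Means clusters against the true
# benign/malignant labels and measure cluster cohesion.
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import adjusted_rand_score, silhouette_score
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_scaled = StandardScaler().fit_transform(data.data)

labels = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(X_scaled)

# Adjusted Rand index: 1.0 is a perfect match with the true labels,
# 0.0 is what random cluster assignments would score on average.
ari = adjusted_rand_score(data.target, labels)
# Silhouette score: higher means tighter, better-separated clusters.
sil = silhouette_score(X_scaled, labels)
print(f"Adjusted Rand index: {ari:.3f}")
print(f"Silhouette score: {sil:.3f}")
```

Agreement with the diagnosis labels is only a sanity check here; K-Means never sees the labels during fitting.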
OUTPUT:

RESULT:
Thus, the program to implement a clustering algorithm for identifying cancerous
cells was successfully executed and the output was verified.
EX.NO: 9 Apply reinforcement learning and develop a game of your own

AIM:
To apply reinforcement learning and develop a game of your own

PROCEDURE:

Step 1: Install Jupyter Notebook by running one of the following commands in your
command prompt or terminal:

 Using pip: pip install jupyter

 Using conda (if you use Anaconda): conda install jupyter

Step 2: Launch Jupyter Notebook by running the command: jupyter notebook

Step 3: Your web browser will open, displaying the Jupyter Notebook dashboard.

Step 4: In the Jupyter Notebook dashboard, open the folder containing your
.ipynb file.

Step 5: Click on the .ipynb file you want to run. It will open in a new tab.

Step 6: Notebooks consist of cells. Click on a code cell to select it, then press
Shift + Enter or click the "Run" button in the toolbar to run it.

Step 7: When you're done, close the Jupyter Notebook tab in your web browser.

Step 8: Go back to the command prompt or terminal where you started Jupyter
Notebook and press Ctrl + C to shut down the Jupyter Notebook server.
PROGRAM:

import pygame
import random
import numpy as np

# Constants
WIDTH, HEIGHT = 600, 400
FPS = 30

# Colors
WHITE = (255, 255, 255)
RED = (255, 0, 0)
GREEN = (0, 255, 0)

# Define the GridWorld


class GridWorld:
    def __init__(self, rows, cols, start):
        self.rows = rows
        self.cols = cols
        self.grid = np.zeros((rows, cols))
        self.start = start
        self.state = start
        self.all_states = [(i, j) for i in range(rows) for j in range(cols)]

    def get_possible_actions(self):
        row, col = self.state
        actions = ['up', 'down', 'left', 'right']
        if row == 0:
            actions.remove('up')
        if row == self.rows - 1:
            actions.remove('down')
        if col == 0:
            actions.remove('left')
        if col == self.cols - 1:
            actions.remove('right')
        return actions

    def step(self, action):
        row, col = self.state
        if action == 'up':
            row -= 1
        elif action == 'down':
            row += 1
        elif action == 'left':
            col -= 1
        elif action == 'right':
            col += 1
        # Ensure the agent stays within the grid
        row = max(0, min(row, self.rows - 1))
        col = max(0, min(col, self.cols - 1))

        self.state = (row, col)

        reward = 0
        if self.state == (2, 2):  # Goal state
            reward = 1
        elif self.state in [(0, 2), (2, 0)]:  # Penalty states
            # The penalties sit in the far corners. Placing them at
            # (1, 2) and (2, 1) would wall in the goal: every nonzero
            # reward ends the episode, so (2, 2) could never be reached.
            reward = -1

        done = reward != 0
        return self.state, reward, done

    def reset(self):
        self.state = self.start
        return self.state

# Define the Q-learning agent


class QLearningAgent:
    def __init__(self, actions):
        self.actions = actions
        self.q_values = {}

    def get_q_value(self, state, action):
        return self.q_values.get((state, action), 0.0)

    def choose_action(self, state, epsilon):
        # Epsilon-greedy: explore with probability epsilon, otherwise
        # pick randomly among the highest-valued actions.
        if random.uniform(0, 1) < epsilon:
            return random.choice(self.actions)
        else:
            q_values = [self.get_q_value(state, a) for a in self.actions]
            max_q = max(q_values)
            return random.choice([a for a, q in zip(self.actions, q_values) if q == max_q])

    def update_q_value(self, state, action, value):
        self.q_values[(state, action)] = value

# Pygame setup
pygame.init()
screen = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Reinforcement Learning Game")
clock = pygame.time.Clock()
# Initialize the environment and agent
environment = GridWorld(rows=3, cols=3, start=(0, 0))
agent = QLearningAgent(actions=['up', 'down', 'left', 'right'])

# Training parameters
num_episodes = 1000
alpha = 0.1
gamma = 0.9
epsilon = 0.1

# Main game loop
running = True
for episode in range(num_episodes):
    if not running:
        break
    state = environment.reset()
    done = False

    while not done and running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        action = agent.choose_action(state, epsilon)
        next_state, reward, done = environment.step(action)

        # Q-learning update: blend the old estimate with the observed
        # reward plus the discounted best value of the next state.
        best_next_action = max(agent.get_q_value(next_state, a) for a in agent.actions)
        new_q_value = (1 - alpha) * agent.get_q_value(state, action) + \
            alpha * (reward + gamma * best_next_action)
        agent.update_q_value(state, action, new_q_value)

        state = next_state

        # Draw the grid
        screen.fill(WHITE)
        cell_w = WIDTH / environment.cols
        cell_h = HEIGHT / environment.rows
        for i in range(environment.rows):
            for j in range(environment.cols):
                pygame.draw.rect(screen, GREEN,
                                 (j * cell_w, i * cell_h, cell_w, cell_h), 1)
                if (i, j) == environment.state:
                    pygame.draw.rect(screen, RED,
                                     (j * cell_w, i * cell_h, cell_w, cell_h))

        pygame.display.flip()
        clock.tick(FPS)

# Quit Pygame
pygame.quit()
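The learning loop can be verified without opening a pygame window. The sketch below re-implements the same grid and the same Q-learning update headlessly; as an assumption of this sketch, the terminal penalty cells are placed in the corners (0, 2) and (2, 0) so that the goal at (2, 2) stays reachable, and the learned greedy policy is then walked from the start cell.

```python
import random

# Headless check of the tabular Q-learning update, without pygame.
ACTIONS = ['up', 'down', 'left', 'right']
MOVES = {'up': (-1, 0), 'down': (1, 0), 'left': (0, -1), 'right': (0, 1)}
GOAL, PENALTIES = (2, 2), {(0, 2), (2, 0)}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    dr, dc = MOVES[action]
    # Clamp the move to the 3x3 grid, as in the GridWorld class.
    nxt = (max(0, min(state[0] + dr, 2)), max(0, min(state[1] + dc, 2)))
    reward = 1 if nxt == GOAL else (-1 if nxt in PENALTIES else 0)
    return nxt, reward, reward != 0  # episode ends at goal or penalty

random.seed(0)
Q = {}
for _ in range(2000):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy with random tie-breaking, as in QLearningAgent.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            qs = [Q.get((state, a), 0.0) for a in ACTIONS]
            m = max(qs)
            action = random.choice([a for a, q in zip(ACTIONS, qs) if q == m])
        nxt, reward, done = step(state, action)
        best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
        Q[(state, action)] = ((1 - alpha) * Q.get((state, action), 0.0)
                              + alpha * (reward + gamma * best_next))
        state = nxt

# Walk the learned greedy policy from the start; once training has
# converged it reaches the goal along a short path.
state, steps = (0, 0), 0
while state != GOAL and steps < 20:
    state, _, _ = step(state, max(ACTIONS, key=lambda a: Q.get((state, a), 0.0)))
    steps += 1
print("greedy steps to goal:", steps)
```

Because gamma is below 1, longer routes to the goal earn a smaller discounted reward, so the greedy policy settles on a shortest path.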
OUTPUT:

RESULT:
Thus, the program to apply reinforcement learning and develop a game was
successfully executed and the output was verified.
EX.NO: 10 Develop a traffic signal control system using reinforcement
learning technique

AIM:

To develop a traffic signal control system using reinforcement learning technique

PROCEDURE:

Step 1: Download RTools from the RTools website and follow the installation
instructions. RTools is typically used for building and installing R packages that
include compiled code.

Step 2: Set up your working directory by creating a folder where you will store
your R scripts and data files.

Step 3: Write your R script in a new script file.

Step 4: Run your script from the R console using the source() function.

Step 5: Review the output in the R console or RStudio console.
PROGRAM:

library(ggplot2)
# Define a simple traffic simulation environment
# In a real-world scenario, you would need a more complex environment.
simulate_traffic <- function(num_time_steps) {
  traffic_data <- data.frame(
    time_step = 1:num_time_steps,
    road_1_traffic = sample(0:30, num_time_steps, replace = TRUE),
    road_2_traffic = sample(0:40, num_time_steps, replace = TRUE)
  )
  return(traffic_data)
}

# A simple rule-based signal controller; a full RL agent would learn this
# policy from rewards rather than using fixed traffic thresholds.
control_traffic <- function(traffic_data) {
  traffic_data$signal_state <- rep("Red", nrow(traffic_data))

  for (i in 1:nrow(traffic_data)) {
    if (traffic_data$road_1_traffic[i] < 20 && traffic_data$road_2_traffic[i] < 30) {
      traffic_data$signal_state[i] <- "Green"
    }
  }

  return(traffic_data)
}

# Simulate and control traffic


num_time_steps <- 100
traffic_data <- simulate_traffic(num_time_steps)
controlled_traffic_data <- control_traffic(traffic_data)

# Visualize traffic data


ggplot(controlled_traffic_data, aes(x = time_step, y = road_1_traffic, color = signal_state)) +
  geom_line() +
  geom_line(aes(y = road_2_traffic), linetype = "dashed") +
  scale_color_manual(values = c("Red" = "red", "Green" = "green")) +
  labs(title = "Traffic Signal Control Simulation", x = "Time Step", y = "Traffic Volume") +
  theme_minimal()
OUTPUT:

RESULT:
Thus, the program to develop a traffic signal control system using a reinforcement
learning technique was successfully executed and the output was verified.
