Unit 4

The document outlines various Python programs utilizing deep learning techniques for tasks such as sentiment analysis, automated essay scoring, predictive text generation, medical diagnosis assistance, time series forecasting, and music generation. Each program includes a clear aim, algorithmic steps, and code implementation. The results indicate successful execution and verification of the outputs for each program.

EX.NO:
SENTIMENT ANALYSIS FROM TEXT
DATE:

AIM:

To write a Python program that uses a deep learning model to analyze and predict sentiment from text data.

ALGORITHM:

Step 1: Start

Step 2: Load the IMDb dataset with a vocabulary size of 20,000 words.

Step 3: Preprocess the data by padding sequences to a fixed length of 200 and converting the labels to categorical format.

Step 4: Define a Sequential model with an Embedding layer, SpatialDropout1D, LSTM layer,
and Dense output layer.

Step 5: Compile the model using categorical cross-entropy loss, Adam optimizer, and accuracy
as the evaluation metric.

Step 6: Train the model on the training dataset for 5 epochs with a batch size of 64.

Step 7: Evaluate the model on the test dataset and print the accuracy.

Step 8: Define a function to predict sentiment for a given review and display the output. Stop
the program.

PROGRAM:

import numpy as np

import pandas as pd

from keras.datasets import imdb

from keras.models import Sequential

from keras.layers import Dense, LSTM, Embedding, SpatialDropout1D

from keras.preprocessing.sequence import pad_sequences

from keras.utils import to_categorical

max_features = 20000 # Number of unique words to consider

max_length = 200 # Maximum length of each review

embedding_dim = 128 # Dimension of the embedding layer

(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=max_features)

X_train = pad_sequences(X_train, maxlen=max_length)

X_test = pad_sequences(X_test, maxlen=max_length)

y_train = to_categorical(y_train)

y_test = to_categorical(y_test)

model = Sequential()

model.add(Embedding(input_dim=max_features, output_dim=embedding_dim, input_length=max_length))

model.add(SpatialDropout1D(0.2))

model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))

model.add(Dense(2, activation='softmax')) # 2 classes: positive and negative

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

batch_size = 64

epochs = 5

model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size, verbose=1)

loss, accuracy = model.evaluate(X_test, y_test, verbose=1)

print(f"Test Accuracy: {accuracy:.4f}")

def predict_sentiment(review):
    word_index = imdb.get_word_index()  # Get the word index
    # Offset indices by 3 (0, 1 and 2 are reserved) and keep only words inside the vocabulary
    review = [word_index[word] + 3 for word in review.lower().split()
              if word in word_index and word_index[word] + 3 < max_features]
    review = pad_sequences([review], maxlen=max_length)
    prediction = model.predict(review)
    sentiment = np.argmax(prediction, axis=1)
    return "Positive" if sentiment[0] == 1 else "Negative"

sample_review = "The movie was fantastic! I loved it."

print(f"Review: {sample_review}")

print(f"Predicted Sentiment: {predict_sentiment(sample_review)}")

OUTPUT:
Test Accuracy: 0.8421
Review: The movie was fantastic! I loved it.
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb_word_index.json
1641221/1641221 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 674ms/step
Predicted Sentiment: Positive

RESULT:

Thus, the Python program using a deep learning model to analyze and predict sentiment from text data was executed successfully and the output was verified.

EX.NO:
AUTOMATED ESSAY SCORING SYSTEM
DATE:

AIM:

To write a Python program that uses gradient-based learning and backpropagation to score essays.

ALGORITHM:

Step 1: Start

Step 2: Create a dataset with essays and their corresponding scores.

Step 3: Split the dataset into training and testing sets.

Step 4: Convert the text data into numerical features using TF-IDF vectorization.

Step 5: Define and compile a neural network model for regression.

Step 6: Train the model on the training data and evaluate it on the test data.

Step 7: Define a function to predict the score of a given essay and display the result. Stop the
program.

PROGRAM:

import numpy as np

import pandas as pd

from sklearn.model_selection import train_test_split

from sklearn.feature_extraction.text import TfidfVectorizer

from keras.models import Sequential

from keras.layers import Dense

from keras.optimizers import Adam

data = {

'essay': [

"This is a great essay. It has a clear structure and good arguments.",

"The essay is poorly written and lacks coherence.",

"An average essay with some good points but also many flaws.",

"Excellent work! Very insightful and well-organized.",

"This essay is not good. It needs a lot of improvement."

],

'score': [4, 1, 3, 5, 2]  # Scores from 1 to 5
}

df = pd.DataFrame(data)

X_train, X_test, y_train, y_test = train_test_split(df['essay'], df['score'], test_size=0.2, random_state=42)

vectorizer = TfidfVectorizer(max_features=1000)

X_train_tfidf = vectorizer.fit_transform(X_train).toarray()

X_test_tfidf = vectorizer.transform(X_test).toarray()

model = Sequential([

Dense(64, input_dim=X_train_tfidf.shape[1], activation='relu'),

Dense(32, activation='relu'),

Dense(1, activation='linear') # Regression output

])

model.compile(loss='mean_squared_error', optimizer=Adam(learning_rate=0.001))

model.fit(X_train_tfidf, y_train, epochs=100, batch_size=5, verbose=1)

loss = model.evaluate(X_test_tfidf, y_test, verbose=1)

print(f"Test Loss: {loss:.4f}")

def predict_score(essay):
    essay_tfidf = vectorizer.transform([essay]).toarray()
    return model.predict(essay_tfidf)[0][0]

sample_essay = "This essay is well-structured and presents strong arguments."

predicted_score = predict_score(sample_essay)

print(f"Predicted Score: {predicted_score:.2f}")

OUTPUT:
Epoch 100/100
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 79ms/step - loss: 0.0012
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 155ms/step - loss: 1.6250
Test Loss: 1.6250
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 86ms/step
Predicted Score: 3.56

RESULT:

Thus, the Python program using gradient-based learning and backpropagation to score essays was executed successfully and the output was verified.

EX.NO:
PREDICTIVE TEXT GENERATION
DATE:

AIM:

To write a Python program that builds a system for generating predictive text using deep feedforward networks.

ALGORITHM:

Step 1: Start

Step 2: Tokenize the input text data and create a word index.

Step 3: Generate input sequences for training by padding them to the same length.

Step 4: Split the sequences into input (X) and output (y) values and convert y to categorical
format.

Step 5: Define, compile, and train an LSTM-based neural network model.

Step 6: Create a function to predict the next word given a seed text.

Step 7: Use the trained model to predict and display the next word for a given input. Stop the
program.

PROGRAM:
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Dense, LSTM
data = ["hello world", "hello there", "how are you", "hello how are you"]
tokenizer = Tokenizer()
tokenizer.fit_on_texts(data)

total_words = len(tokenizer.word_index) + 1
input_sequences = []
for line in data:
    token_list = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(token_list)):
        input_sequences.append(token_list[:i+1])
max_sequence_length = max(len(seq) for seq in input_sequences)
input_sequences = pad_sequences(input_sequences, maxlen=max_sequence_length, padding='pre')
X, y = input_sequences[:, :-1], input_sequences[:, -1]
y = tf.keras.utils.to_categorical(y, num_classes=total_words)

model = Sequential([
Embedding(total_words, 10, input_length=max_sequence_length-1),
LSTM(100),
Dense(total_words, activation='softmax')
])

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])


model.fit(X, y, epochs=100, verbose=1)

def predict_next_word(seed_text, model, tokenizer, max_sequence_length):
    token_list = tokenizer.texts_to_sequences([seed_text])[0]
    token_list = pad_sequences([token_list], maxlen=max_sequence_length-1, padding='pre')
    predicted_probs = model.predict(token_list, verbose=0)
    predicted_index = np.argmax(predicted_probs)
    for word, index in tokenizer.word_index.items():
        if index == predicted_index:
            return word
    return ""
seed_text = "hello"
predicted_word = predict_next_word(seed_text, model, tokenizer, max_sequence_length)
print(f"Predicted next word: {predicted_word}")

OUTPUT:
Predicted next word: how

RESULT:

Thus, the Python program that generates predictive text using deep feedforward networks was executed successfully and the output was verified.

EX.NO:
MEDICAL DIAGNOSIS ASSISTANT
DATE:

AIM:

To write a Python program that creates a tool to assist in medical diagnosis using deep learning techniques.

ALGORITHM:

Step 1: Start

Step 2: Load the diabetes dataset and assign column names.

Step 3: Preprocess the data by scaling the input features.

Step 4: Split the data into training and testing sets.

Step 5: Define, compile, and train a neural network model for binary classification.

Step 6: Evaluate the model on the test data and display the accuracy.

Step 7: Define a function to predict diabetes for a new patient and display the result. Stop the
program.

PROGRAM:

import numpy as np

import pandas as pd

from tensorflow.keras.models import Sequential

from tensorflow.keras.layers import Dense

from tensorflow.keras.optimizers import Adam

from sklearn.model_selection import train_test_split

from sklearn.preprocessing import StandardScaler

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"

columns = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI',
'DiabetesPedigreeFunction', 'Age', 'Outcome']

data = pd.read_csv(url, header=None, names=columns)

X, y = data.drop('Outcome', axis=1), data['Outcome']

X_scaled = StandardScaler().fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42)

model = Sequential([

Dense(32, input_dim=X_train.shape[1], activation='relu'),

Dense(16, activation='relu'),

Dense(1, activation='sigmoid')

])

model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])

model.fit(X_train, y_train, epochs=100, batch_size=10, validation_data=(X_test, y_test), verbose=0)

loss, accuracy = model.evaluate(X_test, y_test, verbose=0)

print(f"Model accuracy: {accuracy*100:.2f}%")

def predict_diabetes(patient_data):
    patient_data_scaled = StandardScaler().fit(X).transform([patient_data])
    return "Diabetic" if model.predict(patient_data_scaled)[0][0] > 0.5 else "Non-Diabetic"

new_patient = [2, 120, 70, 35, 0, 25, 0.4, 40]

print(predict_diabetes(new_patient))
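
predict_diabetes() refits a StandardScaler on every call; the following is a minimal sketch (an alternative, not the original author's code) that fits the scaler once and reuses it for new patients:

scaler = StandardScaler().fit(X)  # fit once on the full feature matrix

def predict_diabetes_reusing_scaler(patient_data):
    scaled = scaler.transform([patient_data])
    return "Diabetic" if model.predict(scaled, verbose=0)[0][0] > 0.5 else "Non-Diabetic"

print(predict_diabetes_reusing_scaler(new_patient))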

OUTPUT:

Model accuracy: 73.38%


1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 72ms/step
Non-Diabetic

RESULT:

Thus, the Python program that creates a tool to assist in medical diagnosis using deep learning techniques was executed successfully and the output was verified.

EX.NO: TIME SERIES FORECASTING USING RNN
DATE:

AIM:

To write a Python program that uses RNNs for forecasting weather, stock prices, or other time-series data.

ALGORITHM:

Step 1: Start the program.

Step 2: Generate a sine wave dataset with 100 data points using np.linspace() and
np.sin().

Step 3: Define a sequence length (window size) of 10 for input sequences.

Step 4: Prepare training data by splitting the dataset into input sequences (X) and
corresponding target values (Y).

Step 5: Reshape X into a format suitable for an RNN model using .reshape(-1,
seq_length, 1).

Step 6: Build a SimpleRNN model with one recurrent layer of 10 units and a dense output
layer.

Step 7: Compile the model using the Adam optimizer and mean squared error (MSE) loss
function.

Step 8: Train the model for 100 epochs using .fit() without displaying progress.

Step 9: Predict values using the trained model, plot actual vs. predicted data using
matplotlib, and display the graph.

Step 10: Stop the program.

PROGRAM:

import numpy as np

import tensorflow as tf

import matplotlib.pyplot as plt

def generate_data(seq_length=100):
    x = np.linspace(0, 50, seq_length)
    y = np.sin(x)
    return y

data = generate_data()

seq_length = 10 # Window size

X, Y = [], []

for i in range(len(data) - seq_length):
    X.append(data[i:i + seq_length])
    Y.append(data[i + seq_length])

X = np.array(X).reshape(-1, seq_length, 1) # Reshape for RNN

Y = np.array(Y)

model = tf.keras.Sequential([

tf.keras.layers.SimpleRNN(10, activation='relu', return_sequences=False),

tf.keras.layers.Dense(1)

])

model.compile(optimizer='adam', loss='mse')

model.fit(X, Y, epochs=100, verbose=0)

predicted = model.predict(X)

plt.figure(figsize=(10, 5))

plt.plot(data, label="Actual Data")

plt.plot(range(seq_length, len(predicted) + seq_length), predicted, label="Predicted Data")

plt.legend()

plt.show()
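
The program predicts only on windows taken from the training data. A minimal sketch (an extension, not part of the original program) of recursive multi-step forecasting, where each prediction is fed back as the next input:

window = list(data[-seq_length:])   # last known window of the series
future = []
for _ in range(20):                 # forecast 20 steps ahead
    x_in = np.array(window[-seq_length:]).reshape(1, seq_length, 1)
    next_val = float(model.predict(x_in, verbose=0)[0, 0])
    future.append(next_val)
    window.append(next_val)
print(future[:5])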

OUTPUT:

(Plot comparing the actual sine-wave data with the model's predicted values.)
RESULT:

Thus, the Python program using RNNs for forecasting weather, stock prices, or other time-series data was executed successfully and the output was verified.

EX.NO: MUSIC GENERATION USING RNN
DATE:

AIM:

To write a Python program that uses an RNN to generate original music compositions.

ALGORITHM:

Step 1: Start the program.

Step 2: Generate a random melody sequence using predefined MIDI notes from the C Major
scale.

Step 3: Convert the melody into a numerical format using a mapping of notes to indexes.

Step 4: Prepare input-output training sequences for the RNN model with a sequence length of
5.

Step 5: Define and train a SimpleRNN model to predict the next note based on previous notes.

Step 6: Generate a new melody by predicting notes iteratively and mapping indexes back to
MIDI notes.

Step 7: Convert the generated melody into a MIDI file and save it as "generated_music.mid".

Step 8: Stop the program.

PROGRAM:

import numpy as np

import tensorflow as tf

from music21 import stream, note

def generate_melody_data(seq_length=50):
    notes = [60, 62, 64, 65, 67, 69, 71, 72]  # C Major Scale MIDI notes
    data = np.random.choice(notes, seq_length)
    return data

data = generate_melody_data()

seq_length = 5 # Window size

X, Y = [], []

notes = [60, 62, 64, 65, 67, 69, 71, 72]

note_to_index = {note: i for i, note in enumerate(notes)}

index_to_note = {i: note for note, i in note_to_index.items()}

for i in range(len(data) - seq_length):
    X.append([note_to_index[note] for note in data[i:i + seq_length]])
    Y.append(note_to_index[data[i + seq_length]])

X = np.array(X).reshape(-1, seq_length, 1) # Reshape for RNN

Y = np.array(Y)

model = tf.keras.Sequential([

tf.keras.layers.SimpleRNN(64, activation='relu', return_sequences=False),

tf.keras.layers.Dense(8, activation='softmax') # 8 possible notes

])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

model.fit(X, Y, epochs=2, verbose=0)

def generate_melody(seed, length=50):
    result = list(seed)  # sequence of note indexes (0-7), matching the model's input
    for _ in range(length):
        prediction = model.predict(np.array(result[-seq_length:]).reshape(1, seq_length, 1), verbose=0)
        next_note_index = int(np.argmax(prediction))
        result.append(next_note_index)
    # Map indexes back to MIDI notes only at the end
    return [index_to_note[i] for i in result]

seed_sequence = [note_to_index[n] for n in data[:seq_length]]

melody = generate_melody(seed_sequence)

stream_obj = stream.Stream()

for midi_note in melody:

stream_obj.append(note.Note(midi_note))

stream_obj.write('midi', fp="generated_music.mid")

print("Music generated: 'generated_music.mid'")

OUTPUT:

RESULT:

Thus, the Python program using an RNN to generate original music compositions was executed successfully and the output was verified.

EX.NO: CHATBOT FOR CUSTOMER SERVICE USING RNN
DATE:

AIM:

To write a Python program that builds a chatbot using deep recurrent networks for handling customer queries.

ALGORITHM:

Step 1: Start the program.

Step 2: Define sample queries and responses for the chatbot.

Step 3: Tokenize the queries and convert them into padded sequences.

Step 4: Build and train an LSTM-based neural network model to classify responses.

Step 5: Create a function to process user input and predict the most suitable response.

Step 6: Continuously take user input, predict a response, and display it.

Step 7: Stop the program when the user enters "exit" or "quit".

PROGRAM:

import numpy as np

import tensorflow as tf

from tensorflow.keras.preprocessing.text import Tokenizer

from tensorflow.keras.preprocessing.sequence import pad_sequences

queries = ["hi", "hello", "hey", "how are you", "what's up", "bye", "goodbye", "see you"]

responses = ["Hello!", "Hi there!", "Hey!", "I'm good!", "Not much, you?", "Goodbye!", "Take care!",
"See you soon!"]

tokenizer = Tokenizer()

tokenizer.fit_on_texts(queries)

X = pad_sequences(tokenizer.texts_to_sequences(queries), padding='post')

Y = np.arange(len(responses))

model = tf.keras.Sequential([

tf.keras.layers.Embedding(input_dim=len(tokenizer.word_index) + 1, output_dim=8),

tf.keras.layers.LSTM(16),

tf.keras.layers.Dense(len(responses), activation='softmax')

])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

model.fit(X, Y, epochs=100, verbose=0) # Reduced epochs for better generalization

def chatbot_response(user_input):
    tokens = pad_sequences(tokenizer.texts_to_sequences([user_input]), maxlen=X.shape[1])
    prediction = np.argmax(model.predict(tokens))
    return responses[prediction]

while True:
    user = input("You: ")
    if user.lower() in ["exit", "quit"]:
        print("Chatbot: Goodbye!")
        break
    print("Chatbot:", chatbot_response(user))

OUTPUT:

RESULT:

Thus, the Python program that builds a chatbot using deep recurrent networks for handling customer queries was executed successfully and the output was verified.

EX.NO: LANGUAGE PROCESSING FOR SOCIAL MEDIA
USING RNN
DATE:

AIM:

To write a Python program that builds a tool to process and analyze natural language from social media posts using RNNs and autoencoders.

ALGORITHM:

Step 1: Start the program.

Step 2: Define sample social media posts and label them as positive or negative.

Step 3: Tokenize the posts and convert them into padded sequences.

Step 4: Build and train an LSTM-based neural network model for sentiment classification.

Step 5: Create a function to process new text input and predict its sentiment.

Step 6: Continuously take user input, analyze sentiment, and display the result.

Step 7: Stop the program when the user enters "exit" or "quit".

PROGRAM:

import numpy as np

import tensorflow as tf

from tensorflow.keras.preprocessing.text import Tokenizer

from tensorflow.keras.preprocessing.sequence import pad_sequences

posts = ["I love this!", "Worst experience ever!", "Feeling happy", "I hate waiting", "Great service!",
"Too expensive"]

labels = [1, 0, 1, 0, 1, 0]

tokenizer = Tokenizer()

tokenizer.fit_on_texts(posts)

X = pad_sequences(tokenizer.texts_to_sequences(posts), padding='post')

model = tf.keras.Sequential([

tf.keras.layers.Embedding(len(tokenizer.word_index)+1, 8, input_length=X.shape[1]),

tf.keras.layers.LSTM(16),

tf.keras.layers.Dense(1, activation='sigmoid')

])

model.compile(optimizer='adam', loss='binary_crossentropy')

model.fit(X, np.array(labels), epochs=100, verbose=0)

def analyze_post(text):
    tokens = pad_sequences(tokenizer.texts_to_sequences([text]), maxlen=X.shape[1])
    return "Positive" if model.predict(tokens, verbose=0)[0][0] > 0.5 else "Negative"

while True:
    user = input("Enter post: ")
    if user.lower() in ["exit", "quit"]:
        break
    print("Sentiment:", analyze_post(user))

OUTPUT:

RESULT:
Thus, the Python program that processes and analyzes natural language from social media posts using RNNs and autoencoders was executed successfully and the output was verified.

