Practical
Code:-
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

# Build and train a simple autoencoder (the 784-dim input, 32-dim code,
# and random placeholder data are illustrative choices)
encoding_dim = 32
input_layer = Input(shape=(784,))
encoded = Dense(encoding_dim, activation='relu')(input_layer)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input_layer, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
data = np.random.rand(1000, 784)  # placeholder dataset
autoencoder.fit(data, data, epochs=10, batch_size=256, verbose=0)

# Once trained, you can use the encoder part to get the encoded representation of the input data
encoder = Model(input_layer, encoded)
encoded_data = encoder.predict(data)

# You can also use the decoder part to reconstruct the input data from the encoded representation
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))
reconstructed_data = decoder.predict(encoded_data)
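To see the encode/decode round trip without a Keras installation, here is a minimal NumPy-only sketch of the same idea: a linear autoencoder trained by gradient descent on a toy dataset that lies on a 2-D subspace. All sizes, the learning rate, and the data are illustrative assumptions, not part of the practical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 samples in 8 dimensions that really live on a 2-D subspace
basis = rng.normal(size=(2, 8))
data = rng.normal(size=(200, 2)) @ basis

# Linear autoencoder: encoder W_e (8 -> 2), decoder W_d (2 -> 8)
W_e = rng.normal(scale=0.1, size=(8, 2))
W_d = rng.normal(scale=0.1, size=(2, 8))

lr = 0.02
for _ in range(5000):
    code = data @ W_e      # encode: project to the 2-D code
    recon = code @ W_d     # decode: map the code back to 8 dimensions
    err = recon - data     # reconstruction error
    # Gradients of the mean squared reconstruction error
    grad_W_d = code.T @ err / len(data)
    grad_W_e = data.T @ (err @ W_d.T) / len(data)
    W_d -= lr * grad_W_d
    W_e -= lr * grad_W_e

mse = np.mean((data @ W_e @ W_d - data) ** 2)
print(f"final reconstruction MSE: {mse:.4f}")
```

Because the data truly has only two degrees of freedom, a 2-D code is enough and the reconstruction error drops close to zero; with a code smaller than the data's intrinsic dimension it could not.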
output:-
Practical - 6
Write a program to implement a basic reinforcement learning algorithm that teaches a bot to reach its destination.
Code:-
import numpy as np

# Grid dimensions (the bot moves on a GRID_SIZE x GRID_SIZE board;
# 5 is an assumed value, the practical does not fix it)
GRID_SIZE = 5

# Define actions
ACTIONS = ['UP', 'DOWN', 'LEFT', 'RIGHT']

# Define rewards
REWARDS = {
    (3, 4): 100,  # Reward for reaching the destination
    (1, 1): -10,  # Penalty for entering a specific state
    (2, 2): -5    # Penalty for entering a specific state
}

# Initialize Q-values
Q_values = np.zeros((GRID_SIZE, GRID_SIZE, len(ACTIONS)))

# Define parameters
LEARNING_RATE = 0.1
DISCOUNT_FACTOR = 0.9
EPISODES = 1000
EPSILON = 0.1

def step(state, action):
    """Apply an action, staying inside the grid, and return (next_state, reward)."""
    if action == 'UP':
        next_state = (max(state[0] - 1, 0), state[1])
    elif action == 'DOWN':
        next_state = (min(state[0] + 1, GRID_SIZE - 1), state[1])
    elif action == 'LEFT':
        next_state = (state[0], max(state[1] - 1, 0))
    elif action == 'RIGHT':
        next_state = (state[0], min(state[1] + 1, GRID_SIZE - 1))
    reward = 0
    if next_state in REWARDS:
        reward = REWARDS[next_state]
    return next_state, reward
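The pieces above still need a training loop. A self-contained epsilon-greedy Q-learning sketch might look like the following; GRID_SIZE = 5, the start at (0, 0), and the destination at (3, 4) are assumptions, since the practical does not fix them.

```python
import numpy as np

GRID_SIZE = 5
ACTIONS = ['UP', 'DOWN', 'LEFT', 'RIGHT']
GOAL = (3, 4)
REWARDS = {GOAL: 100, (1, 1): -10, (2, 2): -5}
LEARNING_RATE, DISCOUNT_FACTOR, EPISODES, EPSILON = 0.1, 0.9, 1000, 0.1

rng = np.random.default_rng(0)
Q_values = np.zeros((GRID_SIZE, GRID_SIZE, len(ACTIONS)))

def step(state, action):
    """Apply an action, staying inside the grid, and return (next_state, reward)."""
    if action == 'UP':
        next_state = (max(state[0] - 1, 0), state[1])
    elif action == 'DOWN':
        next_state = (min(state[0] + 1, GRID_SIZE - 1), state[1])
    elif action == 'LEFT':
        next_state = (state[0], max(state[1] - 1, 0))
    else:  # RIGHT
        next_state = (state[0], min(state[1] + 1, GRID_SIZE - 1))
    return next_state, REWARDS.get(next_state, 0)

for _ in range(EPISODES):
    state = (0, 0)
    while state != GOAL:
        # Epsilon-greedy action selection
        if rng.random() < EPSILON:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q_values[state[0], state[1]]))
        next_state, reward = step(state, ACTIONS[a])
        # Q-learning update rule
        best_next = np.max(Q_values[next_state[0], next_state[1]])
        Q_values[state[0], state[1], a] += LEARNING_RATE * (
            reward + DISCOUNT_FACTOR * best_next - Q_values[state[0], state[1], a]
        )
        state = next_state

# Follow the purely greedy policy from the start and count the steps taken
state, steps = (0, 0), 0
while state != GOAL and steps < 50:
    a = int(np.argmax(Q_values[state[0], state[1]]))
    state, _ = step(state, ACTIONS[a])
    steps += 1
print(f"greedy policy reaches {state} in {steps} steps")
```

After 1000 episodes the Q-values along the start-to-goal path have been updated many times, so the greedy policy reaches the destination in roughly the shortest-path number of steps while steering around the penalty cells.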
# Example usage (fragment from a sequence-prediction practical: data is assumed
# to be a time series, model a trained sequence model, and prepare_data a helper
# that splits the series into input windows of n_steps values)
input_size = 3
hidden_size = 4
output_size = 2
n_steps = 3
X, y = prepare_data(data, n_steps)
# Make predictions
predictions = model.predict(X, verbose=0)
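The fragment above calls prepare_data without defining it. A common sliding-window implementation matching that call signature (hypothetical, since the original helper is not shown) is:

```python
import numpy as np

def prepare_data(series, n_steps):
    """Split a 1-D series into overlapping windows of n_steps inputs,
    with the value that follows each window as its target."""
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i:i + n_steps])
        y.append(series[i + n_steps])
    return np.array(X), np.array(y)

data = [10, 20, 30, 40, 50, 60]
X, y = prepare_data(data, n_steps=3)
print(X.shape, y.shape)  # (3, 3) (3,)
```

Each row of X is three consecutive values and the matching entry of y is the fourth, which is the usual supervised framing for one-step-ahead time-series prediction.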