Que 1-: Write a program for digit recognition using TensorFlow.
Ans-: Here's a simple program for digit recognition using TensorFlow:
import tensorflow as tf
from tensorflow import keras
import numpy as np

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Normalize pixel values to the range [0, 1]
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255

# Add a channel dimension: each image becomes 28x28x1
x_train = x_train.reshape((-1, 28, 28, 1))
x_test = x_test.reshape((-1, 28, 28, 1))

# Build the model architecture
model = keras.Sequential([
    keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax")
])

# Compile the model
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train the model
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

# Evaluate the model on the test set
test_loss, test_acc = model.evaluate(x_test, y_test)
print("Test accuracy:", test_acc)

# Make a prediction on a single test image
image = x_test[0]
prediction = np.argmax(model.predict(image.reshape((-1, 28, 28, 1))))
print("Predicted digit:", prediction)

This program loads the MNIST dataset of handwritten digits, normalizes the pixel values, and builds a convolutional neural network (CNN) to recognize the digits. The architecture consists of a convolutional layer, a max-pooling layer, a flattening layer, and a fully connected output layer with softmax activation. The model is trained for 5 epochs, evaluated on the test set, and finally used to predict the digit in a single image from the test set.

Que 2-: What are Activation Functions? Give Examples and Explain.

Ans-: Activation functions are mathematical functions used in artificial neural networks to introduce non-linearity into the output of a neuron. Each one is applied to the weighted sum of a neuron's inputs and bias, known as the activation, to determine whether the neuron fires and what value it passes to the next layer. There are several types of activation functions, each with its own properties and use cases. Here are some common examples (a short code sketch follows the list):

Sigmoid function: The sigmoid function maps any input value to a value between 0 and 1. It is often used in binary classification problems, where the network's output is interpreted as the probability of the positive class. However, it can suffer from the vanishing gradient problem, which makes deeper networks difficult to train.

ReLU function: The rectified linear unit (ReLU) returns the input value if it is positive and 0 otherwise. It is a commonly used activation function in deep learning because it is cheap to compute and largely avoids the vanishing gradient problem. However, it can suffer from the dying ReLU problem, where neurons become permanently inactive during training.

Tanh function: The hyperbolic tangent (tanh) maps any input value to a value between -1 and 1. It is similar to the sigmoid but has a steeper gradient, which can make it more effective for training deeper networks. Like the sigmoid, it can suffer from the vanishing gradient problem.

Softmax function: The softmax function takes a vector of inputs and normalizes it into a probability distribution, where each element of the vector represents the probability of a particular class. It is commonly used as the activation function in the output layer of a neural network for multi-class classification problems.
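To make the behaviour of these four functions concrete, here is a minimal NumPy sketch of each one (the input vector is arbitrary, chosen only for illustration):

import numpy as np

def sigmoid(x):
    # Squashes any real input into the range (0, 1)
    return 1 / (1 + np.exp(-x))

def relu(x):
    # Passes positive inputs through unchanged, zeroes out negatives
    return np.maximum(0, x)

def tanh(x):
    # Squashes any real input into the range (-1, 1)
    return np.tanh(x)

def softmax(x):
    # Normalizes a vector into a probability distribution that sums to 1
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print("sigmoid:", sigmoid(x))
print("relu:   ", relu(x))
print("tanh:   ", tanh(x))
print("softmax:", softmax(x))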
Activation functions are a crucial component of neural networks: they introduce non-linearity into the network's output and enable it to model complex relationships between inputs and outputs. The choice of activation function can have a significant impact on a network's performance, so it is important to select an appropriate function for the task at hand.

Que 3-: Explain Higher-order Tensor with the help of an example.

Ans-: A tensor is a mathematical object that represents a collection of numbers arranged along a set of coordinate axes. A higher-order tensor is a tensor with more than two axes. A vector is a tensor of order 1, a matrix is a tensor of order 2, and a tensor of order 3 can be visualized as a cube with a number assigned to each point.

To understand the concept, consider a color image. A color image can be represented as a tensor of order 3, where the first two dimensions are the spatial dimensions of the image (height and width) and the third dimension is the color channel (red, green, and blue). An image of dimensions 100 x 100 x 3 has 100 pixels in height, 100 pixels in width, and three color channels; it is a 3D tensor of shape (100, 100, 3), where each element represents the intensity of one color channel at one pixel location.

Higher orders follow the same pattern: a video can be represented as a tensor of order 4, a sequence of frames in which each frame is an order-3 image tensor and the additional dimension represents time.

In summary, a higher-order tensor is a tensor with more than two axes, and it can be used to represent complex data structures such as images, videos, and audio signals. The short sketch below illustrates these orders with array shapes.
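As a small illustration, the following NumPy sketch builds tensors of orders 1 through 4 and prints their shapes (the sizes are arbitrary, chosen only for illustration):

import numpy as np

vector = np.zeros(3)                  # order 1: shape (3,)
matrix = np.zeros((4, 5))             # order 2: shape (4, 5)
image = np.zeros((100, 100, 3))       # order 3: height x width x color channels
video = np.zeros((60, 100, 100, 3))   # order 4: 60 frames, each a 100x100 RGB image

for name, t in [("vector", vector), ("matrix", matrix),
                ("image", image), ("video", video)]:
    print(name, "-> order:", t.ndim, ", shape:", t.shape)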