CSE3008: Exam Papers
VIT-AP UNIVERSITY
"Apply Knowledge. Improve Life!"

QUESTION PAPER
Name of the Examination: WINTER 2022-2023 - FAT
Course Code: CSE3008        Course Title: Introduction to Machine Learning
Set number: 1               Date of Exam: 19/05/2023 (FN)
Duration: 120 Min           Total Marks: 60

Instructions:
1. Assume data wherever necessary.
2. Any assumptions made should be clearly stated.

Q1. Consider the following Artificial Neural Network. The input values are x1 and x2; the randomly assigned weights are w1, w2, w3, w4, w5, w6, w7 and w8. The target values are o1 = 0.05 and o2 = 0.95, and the bias values are b1 and b2. Use the sigmoid activation function with learning rate a = 0.5. Calculate the error at nodes w5 and w1, and find the updated weights for both w5 and w1. (10M)

Q2. Consider the following diagram. "Rainy" and "Sunny" are hidden states; "walk", "shop" and "clean" are observations. Find the most likely path of hidden states {Rainy, Sunny} using the Viterbi algorithm. Show the best score for each Viterbi path for the given observation sequence "Walk-Shop-Clean". (15M)

Q3. Mr. X is happy on some days and angry on other days. We can only observe when he smiles, frowns, laughs, or yells, but not his actual emotional state. We start on day 1 in the happy state. There can be only one state transition per day, either to the happy state or to the angry state. The HMM is shown below. (10M)

[Figure: HMM state-transition and emission diagram; values illegible in scan]

Assume that qt is the state on day t and ot is the observation on day t. Answer the following questions:
(a) What is P(q2 = Happy)?
(b) What is P(o2 = frown)?
(c) What is P(q2 = Happy | o2 = frown)?
(d) What is P(o1 = frown, o2 = frown, o3 = frown, o4 = frown, o5 = frown, q1 = Happy, q2 = Angry, q3 = Angry, q4 = Angry, q5 = Angry) if π = [0.7, 0.3]?

Q4. Consider the image below, represented as a 6x6 matrix, and one 3x3 convolution filter. Perform the linear operation to extract features from the input image (feature map with pooling) and find the output volume's size. (15M)

[Figure: 6x6 binary input image and 3x3 convolution filter; values illegible in scan]

Q5. Suppose we have the weight, height and T-shirt size of some customers, and we need to predict the T-shirt size of a new customer given only the height and weight. The information is given below. (10M)

[Table: height (cm), weight (kg) and T-shirt size (M/L) of the customers; values largely illegible in scan]

Given a new customer of height 161 cm and weight 61 kg, find the T-shirt size using the kNN algorithm with k = 5, using Euclidean distance.

[QP mapping table illegible in scan]

VIT-AP UNIVERSITY
"Apply Knowledge. Improve Life!"

QUESTION PAPER
Name of the Examination: WINTER 2022-2023 - FAT
Course Code: CSE3008        Course Title: Introduction to Machine Learning
Set number: 9               Date of Exam: 16/05/2023 (AN)
Duration: 120 Mins          Total Marks: 60

Instructions:
1. Assume data wherever necessary.
2. Any assumptions made should be clearly stated.

Q1. Explain the difference between the Viterbi approach and the forward propagation approach. (5M)

Q2. Given inputs 0.05 and 0.10, the neural network must give outputs 0.01 and 0.99 respectively.
A. Apply forward propagation to predict the values using the logistic function.
B. Apply squared error to calculate the loss.
C. Apply backward propagation to update the parameters for a single iteration.
D. Calculate the error at w8. (15M)

Q3. What is a CNN? Why should we use a CNN? Explain the five different layers in a CNN in detail, along with suitable diagrams. What are the generalized dimensions used in a CNN? Also explain various applications of CNNs. (15M)

Q4. The diagram below represents a Markov chain with three states representing the weather of the day (cloudy, rainy, and sunny), and transition probabilities representing the weather of the next day given the weather of the current day.

[Figure: three-state weather Markov chain]
There are three different states: cloudy, rainy, and sunny. The following are the transition probabilities based on the diagram above:

* If sunny today, then tomorrow:
  * 50% probability of sunny
  * 10% probability of rainy
  * 40% probability of cloudy
* If rainy today, then tomorrow:
  * 10% probability of sunny
  * 60% probability of rainy
  * 30% probability of cloudy
* If cloudy today, then tomorrow:
  * 40% probability of sunny
  * 50% probability of rainy
  * 10% probability of cloudy

Using this Markov chain, what is the probability that Wednesday will be cloudy if today (Monday) is sunny? Consider the different transition paths that can result in a cloudy Wednesday given that Monday is sunny. (15M)

Q5. Build an ID3 decision tree to classify the data given below. (10M)

Example | A1   | A2     | Class/Label
1       | Hot  | High   | No
2       | Hot  | High   | No
3       | Hot  | High   | Yes
4       | Cool | Normal | Yes
5       | Cool | Normal | Yes
6       | Cool | High   | No
7       | Hot  | High   | No
8       | Hot  | Normal | Yes
9       | Cool | Normal | Yes
10      | Cool | High   | Yes

QP MAPPING

Q.No. | Module Number | CO Mapped | PO Mapped | PEO Mapped | PSO Mapped | Marks
Q1    | 5             | 1         | 1         | 2          | 1          | 5
Q2    | 5             | 1         | 1         | 2          | 1          | 15
Q3    | 6             | 1         | 1         | 2          | 1          | 15
Q4    | 6             | 2         | 4         | -          | 1          | 15
Q5    | 2             | 2         | 4         | -          | 1          | 10

VIT-AP UNIVERSITY
"Apply Knowledge. Improve Life!"

QUESTION PAPER
Name of the Examination: WINTER 2022-2023 - FAT
Course Code: CSE3008        Course Title: Introduction to Machine Learning
Set number: 3               Date of Exam: [illegible]/05/2023
Duration: 120 min           Total Marks: 60

Instructions: Answer all the questions.

Q1. Find the weights required to perform the following classification using a perceptron network. The vectors (1, 1, -1, -1) and (1, 1, 1, -1) belong to the class having target value +1, and the vectors (-1, -1, -1, 1) and (-1, -1, 1, 1) belong to the class having target value -1. Assume learning rate 1 and initial weights 0. (10M)

Q2. Draw the backpropagation network using the given initial weights between the input and hidden layer as well as between the hidden and output layer.
Find the error at each of the hidden units of this BPN if it is presented with the input pattern [-1, 1] and the target output is +1. Use a learning rate a = 0.25 and the bipolar sigmoidal activation function. The initial weights between the input and hidden layer are [v11 v21 v01] = [0.6 -0.1 0.3] and [v12 v22 v02] = [-0.3 0.4 0.5]. The initial weights between the hidden and output layer are [w1 w2 w0] = [0.4 0.1 -0.2]. (Note: vij represents the weight from the i-th input neuron to the j-th hidden neuron, and wk represents the weight from the k-th hidden neuron to the output neuron.) (15M)

Q3. Consider the figure below:

[Tables: (a) initial probabilities, (b) transition probabilities, (c) emission probabilities for states A and B with observations 0 and 1; values illegible in scan]

Suppose that we have binary states (labeled A and B) and binary observations (labeled 0 and 1), and the initial, transition, and emission probabilities as in the given tables. Using the forward algorithm, compute the probability that we observe the sequence o1 = 0, o2 = 1 and o3 = 0. (15M)

Q4. For the figure in Q3, use the Viterbi algorithm to compute the most likely sequence of states for the observed sequence o1 = 0, o2 = 1, o3 = 0. (10M)

Q5. Given the training data in the table below, predict the class of the given sample using Naive Bayes classification: X: age <= 30, income = medium, student = yes, credit-rating = fair. (10M)

RID | age    | income | student | credit-rating | Class: buys_computer
1   | <=30   | high   | no      | fair          | no
2   | <=30   | high   | no      | excellent     | no
3   | 31..40 | high   | no      | fair          | yes
4   | >40    | medium | no      | fair          | yes
5   | >40    | low    | yes     | fair          | yes
6   | >40    | low    | yes     | excellent     | no
7   | 31..40 | low    | yes     | excellent     | yes
8   | <=30   | medium | no      | fair          | no
9   | <=30   | low    | yes     | fair          | yes
10  | >40    | medium | yes     | fair          | yes
11  | <=30   | medium | yes     | excellent     | yes
12  | 31..40 | medium | no      | excellent     | yes
13  | 31..40 | high   | yes     | fair          | yes
14  | >40    | medium | no      | excellent     | no

QP MAPPING

Q.No. | Module | CO Mapped | PEO Mapped | PO Mapped          | PSO Mapped | Marks
Q1    | 5      | CO5       | 1,2,3      | 1,2,3,4,9,10,11,12 | 1,2,3      | 10
Q2    | 5      | CO5       | 1,2,3      | 1,2,3,4,9,10,11,12 | 1,2,3      | 15
Q3    | 6      | CO6       | 1,2,3      | 1,2,3,4,9,10,11,12 | 1,2,3      | 15
Q4    | 6      | CO6       | 1,2,3      | 1,2,3,4,9,10,11,12 | 1,2,3      | 10
Q5    | 4      | CO4       | 1,2,3      | 1,2,3,4,9,10,11,12 | 1,2,3      | 10

VIT-AP UNIVERSITY
"Apply Knowledge. Improve Life!"

QUESTION PAPER
Name of the Examination: WINTER 2022-2023 - FAT
Course Code: CSE3008        Course Title: Introduction to Machine Learning
Set number: 2               Date of Exam: 14/05/23
Duration: 120 Minutes       Total Marks: 60

Instructions:
1. Assume data wherever necessary.
2. Any assumptions made should be clearly stated.

Q1. An IPL team management wants to predict the current match result based on the performance of the opening batsman and his recent records. The player has now scored 40 runs from 32 balls. Predict the match result using the kNN algorithm with the dataset given in the table. (10M)

[Table: runs, balls and match result (Win/Loss) for ten recent innings; several values illegible in scan]

Q2. Consider two hidden neurons with three inputs {x1 = 0, x2 = 1, x3 = 1} and the expected output as given in the figure. The initial weights are given in the figure, with bias values b1 = 0.3 and b2 = 0.2. Assume that the neurons have a sigmoid activation function, and perform a forward pass and a backward pass on the network with a learning rate of 0.5 (1 epoch). (15M)

Q3. Assume that you train a multilayer feed-forward network on a GPU. The input image is of size 6x6x1, represented as pixel values with black corresponding to '1' and white corresponding to '0'. Obtain the feature map at the output of the convolutional layer (conv) using the 3x3 feature-detector filter shown below. Also, write the size of the output feature map of layer 1 (conv) with a stride value of 1. (15M)

[Figure: binary input image and 3x3 feature detector; values illegible in scan]

Q4.
A. Identify the basic components of the Hidden Markov Model from the image below. (5M)

[Figure: HMM diagram]

B. Explain the working principle of the Viterbi algorithm with a suitable example. (5M)

Q5. Once a day (e.g., at noon), the weather is observed as one of three states: state 1: rainy, state 2: cloudy, state 3: sunny. The state transition probabilities are:
A = | 0.4  0.3  0.3 |
    | 0.2  0.6  0.2 |
    | 0.1  0.1  0.8 |

Given that the weather on day 1 (t = 1) is sunny (state 3), what is the probability that the weather for the next 7 days will be "sunny - sunny - rain - rain - sunny - cloudy - sunny"? (20M)

[QP mapping table illegible in scan]
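Set 1 Q1, Set 2 Q2 and Set 4 Q2 all ask for one forward/backward pass of a small sigmoid network under squared-error loss. A minimal sketch of the computation for a 2-2-2 network; the initial weights below are illustrative (the papers' weight figures are not legible in the scan), not the papers' values.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative initial weights and biases (assumed, not from the paper).
x = [0.05, 0.10]; target = [0.01, 0.99]
w = {"w1": 0.15, "w2": 0.20, "w3": 0.25, "w4": 0.30,
     "w5": 0.40, "w6": 0.45, "w7": 0.50, "w8": 0.55}
b1, b2, lr = 0.35, 0.60, 0.5

# Forward pass through the hidden and output layers.
h = [sigmoid(w["w1"] * x[0] + w["w2"] * x[1] + b1),
     sigmoid(w["w3"] * x[0] + w["w4"] * x[1] + b1)]
o = [sigmoid(w["w5"] * h[0] + w["w6"] * h[1] + b2),
     sigmoid(w["w7"] * h[0] + w["w8"] * h[1] + b2)]
loss = sum(0.5 * (t - y) ** 2 for t, y in zip(target, o))

# Backward pass for one output weight (w5): dE/dw5 = delta_o1 * h1,
# with delta_o1 = (o1 - t1) * o1 * (1 - o1) for squared error + sigmoid.
delta_o1 = (o[0] - target[0]) * o[0] * (1 - o[0])
w5_new = w["w5"] - lr * delta_o1 * h[0]
print(round(loss, 6), round(w5_new, 6))  # ≈ 0.298371 0.358916
```

The same delta/chain-rule pattern extends to the remaining seven weights and the hidden layer.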
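Set 1 Q2 and Set 3 Q4 both ask for Viterbi decoding: keep, per state and time step, the best score of any path ending there, then backtrack. A minimal sketch with hypothetical Rainy/Sunny probabilities (the papers' diagrams and tables are illegible, so every number here is an assumption), decoding the observation sequence walk-shop-clean.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (best score of a path ending in s at time t, best predecessor).
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            score, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (score, prev)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path)), V[-1][last][0]

# Hypothetical probabilities (assumed; the paper's figures are not legible).
states = ("Rainy", "Sunny")
start = {"Rainy": 0.6, "Sunny": 0.4}
trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
path, score = viterbi(("walk", "shop", "clean"), states, start, trans, emit)
print(path, round(score, 6))  # most likely path: Sunny, Rainy, Rainy
```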
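Set 3 Q3 asks for the forward algorithm, which is the same recursion as Viterbi with max replaced by a sum over predecessors. A minimal sketch for binary states A/B and observations 0/1; the probability values are hypothetical stand-ins, since the paper's tables are illegible in the scan.

```python
def forward(obs, states, start_p, trans_p, emit_p):
    # alpha[s] = P(o_1..o_t, state_t = s); sum (not max) over predecessors.
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans_p[p][s] for p in states) * emit_p[s][o]
                 for s in states}
    return sum(alpha.values())  # total probability of the observation sequence

# Hypothetical numbers standing in for the illegible tables of Set 3 Q3.
states = ("A", "B")
start = {"A": 0.5, "B": 0.5}
trans = {"A": {"A": 0.9, "B": 0.1}, "B": {"A": 0.1, "B": 0.9}}
emit = {"A": {0: 0.9, 1: 0.1}, "B": {0: 0.1, 1: 0.9}}
p_obs = forward((0, 1, 0), states, start, trans, emit)
print(p_obs)
```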
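Set 2 Q4 and Set 4 Q5 are both chain-rule computations over a transition matrix: the first sums over the intermediate day's state, the second multiplies transition probabilities along a fixed sequence. A sketch using Set 2's stated probabilities and, for Set 4, the transition matrix as reconstructed above (the scan of the matrix is partly illegible, so treat those three rows as an assumption).

```python
# Set 2 Q4: transition probabilities P2[today][tomorrow] as stated in the paper.
P2 = {"sunny":  {"sunny": 0.5, "rainy": 0.1, "cloudy": 0.4},
      "rainy":  {"sunny": 0.1, "rainy": 0.6, "cloudy": 0.3},
      "cloudy": {"sunny": 0.4, "rainy": 0.5, "cloudy": 0.1}}

# P(cloudy on Wednesday | sunny on Monday): sum over Tuesday's state.
p_wed_cloudy = sum(P2["sunny"][mid] * P2[mid]["cloudy"] for mid in P2)
print(p_wed_cloudy)  # 0.5*0.4 + 0.1*0.3 + 0.4*0.1 = 0.27

# Set 4 Q5: probability of one fixed 8-day state sequence starting from sunny.
A = {"rain":   {"rain": 0.4, "cloudy": 0.3, "sunny": 0.3},
     "cloudy": {"rain": 0.2, "cloudy": 0.6, "sunny": 0.2},
     "sunny":  {"rain": 0.1, "cloudy": 0.1, "sunny": 0.8}}
seq = ["sunny", "sunny", "sunny", "rain", "rain", "sunny", "cloudy", "sunny"]
p_seq = 1.0
for today, tomorrow in zip(seq, seq[1:]):
    p_seq *= A[today][tomorrow]
print(p_seq)  # 0.8*0.8*0.1*0.4*0.3*0.1*0.2 = 1.536e-4
```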
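For Set 1 Q4 and Set 4 Q3, the size of a valid (unpadded) convolution output is (N - F) / S + 1, so a 6x6 input with a 3x3 filter at stride 1 yields a 4x4 feature map. A minimal sketch; the image and filter values below are made up, since the papers' figures are illegible in the scan.

```python
def conv2d(image, kernel, stride=1):
    # Valid convolution without kernel flipping (cross-correlation, the usual
    # CNN convention for a feature detector).
    n, f = len(image), len(kernel)
    out = (n - f) // stride + 1
    return [[sum(image[i * stride + a][j * stride + b] * kernel[a][b]
                 for a in range(f) for b in range(f))
             for j in range(out)] for i in range(out)]

# Illustrative 6x6 binary image and 3x3 filter (assumed values).
img = [[0, 1, 1, 0, 0, 0],
       [0, 1, 1, 0, 0, 0],
       [0, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 0],
       [0, 0, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0]]
ker = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
fmap = conv2d(img, ker)
print(len(fmap), len(fmap[0]))  # (6 - 3)/1 + 1 = 4  →  4 x 4 feature map
```

A subsequent 2x2 max-pool with stride 2 would then reduce the 4x4 map to 2x2.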
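Set 1 Q5 and Set 4 Q1 are both kNN majority votes under Euclidean distance. A minimal sketch predicting the T-shirt size of a 161 cm, 61 kg customer; the training rows below are illustrative only, since the papers' tables are largely illegible in the scan.

```python
import math
from collections import Counter

def knn_predict(train, query, k):
    """train: list of (feature_tuple, label). Returns the majority label
    among the k training points nearest to query by Euclidean distance."""
    ranked = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Illustrative (height cm, weight kg) -> size data; NOT the paper's table.
train = [((158, 58), "M"), ((160, 59), "M"), ((163, 61), "M"),
         ((165, 62), "L"), ((168, 65), "L"), ((170, 68), "L")]
pred = knn_predict(train, (161, 61), k=5)
print(pred)  # 3 of the 5 nearest neighbours are "M"
```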
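Set 3 Q1 is fully specified (bipolar vectors, learning rate 1, zero initial weights), so the perceptron learning rule can be checked directly: on a wrong or zero response, w <- w + lr*t*x and b <- b + lr*t. A minimal sketch over the four training vectors given in that question.

```python
def perceptron_train(samples, lr=1.0, epochs=20):
    # Bipolar perceptron: output 1 if net > 0, -1 if net < 0, else 0.
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        changed = False
        for x, t in samples:
            net = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1 if net > 0 else (-1 if net < 0 else 0)
            if y != t:  # update only on a wrong (or zero) response
                w = [wi + lr * t * xi for wi, xi in zip(w, x)]
                b += lr * t
                changed = True
        if not changed:  # converged: a full pass with no updates
            break
    return w, b

samples = [((1, 1, -1, -1), 1), ((1, 1, 1, -1), 1),
           ((-1, -1, -1, 1), -1), ((-1, -1, 1, 1), -1)]
w, b = perceptron_train(samples)
print(w, b)  # converges after one update: [1.0, 1.0, -1.0, -1.0], bias 1.0
```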
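Set 2 Q5's ID3 split is chosen by information gain, Gain(S, A) = H(S) - sum over values v of |S_v|/|S| * H(S_v). A minimal sketch computing the gain of A1 and A2 over the ten rows of that question; A2 wins, so the root of the ID3 tree tests A2.

```python
import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def info_gain(rows, attr_index, labels):
    # Partition the labels by the attribute's value, then apply the gain formula.
    groups = {}
    for row, lab in zip(rows, labels):
        groups.setdefault(row[attr_index], []).append(lab)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

rows = [("Hot", "High"), ("Hot", "High"), ("Hot", "High"), ("Cool", "Normal"),
        ("Cool", "Normal"), ("Cool", "High"), ("Hot", "High"),
        ("Hot", "Normal"), ("Cool", "Normal"), ("Cool", "High")]
labels = ["No", "No", "Yes", "Yes", "Yes", "No", "No", "Yes", "Yes", "Yes"]
g1 = info_gain(rows, 0, labels)  # gain of A1 ≈ 0.1245
g2 = info_gain(rows, 1, labels)  # gain of A2 ≈ 0.4200 → split on A2 first
print(round(g1, 4), round(g2, 4))
```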
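Set 3 Q5 appears to use the classic buys_computer training set; a minimal Naive Bayes sketch over the standard version of that data (the scan only partially preserves the table, so the rows below are a reconstruction), using raw frequency estimates with no smoothing: P(c | x) is proportional to P(c) times the product of P(x_i | c).

```python
def naive_bayes(rows, labels, query):
    # Score each class by prior * product of per-attribute likelihoods.
    scores = {}
    for c in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == c]
        score = len(idx) / len(labels)  # prior P(c)
        for j, v in enumerate(query):
            score *= sum(1 for i in idx if rows[i][j] == v) / len(idx)
        scores[c] = score
    return max(scores, key=scores.get), scores

# buys_computer data (standard version; reconstructed from a partial scan).
rows = [("<=30", "high", "no", "fair"), ("<=30", "high", "no", "excellent"),
        ("31..40", "high", "no", "fair"), (">40", "medium", "no", "fair"),
        (">40", "low", "yes", "fair"), (">40", "low", "yes", "excellent"),
        ("31..40", "low", "yes", "excellent"), ("<=30", "medium", "no", "fair"),
        ("<=30", "low", "yes", "fair"), (">40", "medium", "yes", "fair"),
        ("<=30", "medium", "yes", "excellent"),
        ("31..40", "medium", "no", "excellent"),
        ("31..40", "high", "yes", "fair"), (">40", "medium", "no", "excellent")]
labels = ["no", "no", "yes", "yes", "yes", "no", "yes",
          "no", "yes", "yes", "yes", "yes", "yes", "no"]
pred, scores = naive_bayes(rows, labels, ("<=30", "medium", "yes", "fair"))
print(pred)  # "yes": its score exceeds the "no" score for this query
```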
