CSE3008
(Tail of the preceding set: the transition probability and emission probability tables over hidden states ω0, ω1, ω2, ω3 and visible symbols v0, v1, v2, v3, followed by that set's QP Mapping table.)

VIT-AP UNIVERSITY - "Apply Knowledge. Improve Life!"
QUESTION PAPER
Name of the Examination: WINTER 2022-2023 - FAT
Course Code: CSE 3008    Course Title: Introduction to Machine Learning
Set number:              Date of Exam: 18/05/2023 (FN)
Duration: 120 min        Total Marks: 60
Instructions:
1. Assume data wherever necessary.
2. Any assumptions made should be clearly stated.

Q1. Write the Candidate Elimination algorithm. Apply the algorithm to the following data to learn the concept and to find the version space. (5M)

Example | Size  | Colour | Shape    | Class/Label
1       | Big   | Red    | Circle   | Yes
2       | Small | Red    | Triangle | No
3       | Small | Red    | Circle   | Yes
4       | Big   | Blue   | Circle   | No
5       | Small | Blue   | Circle   | Yes

Q2. Consider a dataset that contains two variables: height (cm) and weight (kg). Each point is classified as Normal or Underweight. (10M)

Weight (x2) | Height (y2) | Class
51   | 167  | Underweight
62   | 182  | Normal
69   | 176  | Normal
64   | 173  | Normal
(x1) | (y1) | ?
65   | 172  | Normal
56   | 174  | Underweight
58   | 169  | Normal
57   | 173  | Normal
55   | 170  | Normal

Based on the above data, classify the new data point (x1, y1) as Normal or Underweight using the KNN algorithm (k = 3 and 5).

Q3. Consider the given Artificial Neural Network with the input, weight and bias values shown in the figure. Actual output Y = 0.03 and learning rate = 0.02. (15M)
a. Apply the forward pass to predict the value of Y.
b. Apply the squared error function to compute the loss.
c. Apply the backward pass to update the parameters for a single iteration.
d. Write the updated W1 and W2 parameters.
(Figure: the ANN with its input, weight and bias values.)

Q4. Consider the image below, represented as a 6x6 matrix, and two 3x3 convolution filters. Perform the linear operation to extract features from the input image (feature map) and find the output volume's size. (18M)
(Figure: the 6x6 input image and the two 3x3 filters, labelled Vertical and Horizontal.)

Q5. Consider the climate conditions data given below. Initial probabilities: P('High') = 0.6, P('Low') = 0.4.

Transition Probability
     | Low | High
Low  | 0.3 | 0.7
High | 0.2 | 0.8

Observation Probability
     | Rain | Dry
Low  | 0.6  | 0.4
High | 0.4  | 0.6

a. Draw the HMM model in graphical format. (6M)
b. Find the probability of a hidden state sequence for the observations {'Dry', 'Rain'}. (6M)

QP MAPPING

Q.No | Module Number | CO Mapped | PO Mapped | PEO Mapped | PSO Mapped | Marks
Q1   | 1             | CO1       | PO1       | PEO1       | PSO1       | 5
Q2   | 2             | CO5       | PO3, PO5  | PEO1       | PSO2       | 10
Q3   | 3             | CO4       | PO5       | PEO3       | PSO1       | 15
Q4   | 5             | CO6       | PO5       | PEO3       | PSO1       | 18
Q5   | 6             | CO6       | PO3, PO5  | PEO1       | PSO1       | 12

VIT-AP UNIVERSITY - "Apply Knowledge. Improve Life!"
QUESTION PAPER
Name of the Examination: WINTER 2022-2023 - FAT
Course Code: CSE 3008    Course Title: Introduction to Machine Learning
Set number: 1            Date of Exam: 18/05/2023 (FN)
Duration: 120 min        Total Marks: 60

Q1. Apply the KNN algorithm to classify the test data based on the training dataset given below, with two features, petal length and sepal length. (10M)

petal length (x1) | sepal length (x2) | Class
?   | 2  | Yes
2   | 5  | No
7   | 2  | Yes
1   | 12 | No
3.8 | 3  | Yes
2   | 9  | No
1.5 | 9  | Yes

Apply k = 3, 5, 7 and classify the test data (petal length = 4.5 and sepal length = 5.7).
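For reference, a minimal NumPy sketch of the kind of k-NN classification these questions ask for: Euclidean distance plus a majority vote over the k nearest neighbours. The training rows below are the weight/height table from Q2 of the first set; the query point (57, 170) is a hypothetical stand-in, since the exact new data point (x1, y1) is not legible in this copy.

```python
# Minimal k-NN sketch (assumption: plain NumPy, Euclidean distance, majority vote).
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, query, k):
    """Return the majority label among the k nearest training points."""
    dists = np.linalg.norm(X_train - query, axis=1)   # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]                    # indices of the k closest points
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# Training data taken from the weight/height table in Q2 above.
X_train = np.array([[51, 167], [62, 182], [69, 176], [64, 173],
                    [65, 172], [56, 174], [58, 169], [57, 173], [55, 170]], dtype=float)
y_train = np.array(["Underweight", "Normal", "Normal", "Normal",
                    "Normal", "Underweight", "Normal", "Normal", "Normal"])

query = np.array([57.0, 170.0])                        # hypothetical new point (x1, y1)
for k in (3, 5):
    print(f"k={k}: {knn_classify(X_train, y_train, query, k)}")
```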
Q2. Consider the Artificial Neural Network with the following values: x1 = 0.15, x2 = 0.20, w11 = 0.15, w12 = 0.20, w21 = 0.25, w22 = 0.30, w4 = 0.40, w5 = 0.45, bias b1 = 0.35 to h1 and h2, and bias b2 = 0.60 to o1. Assume the actual output = 0.01. Find the predicted output using the sigmoid activation function. Find the error and, through backpropagation, find the updated weights of w4 and w5, given learning rate = 0.5. (20M)

Q3. Illustrate the convolutional neural network with three hidden layers. The number of inputs is 5, each hidden layer contains 4 neurons, and the output layer contains 2 neurons. Find the parameters between the hidden layers and the output layer, and the total number of parameters of this entire network. (10M)

Q4. The tea shop owner wants to predict the maximum-probability state sequence for 3 days using the Viterbi algorithm. The tea sales probabilities and states are given below. The observation values for the three days are 2, 3, 1. (10M)
(Figure: the state transition and tea-sales emission probabilities; legible entries include P(1 Tea | Sunny) = 0.8 and P(1 Tea | Rainy) = 0.1.)

Q5. State the properties of a Markov chain process. Explain the HMM working principle, with a neat diagram, for 5 states and corresponding observed values. (10M)

QP Mapping

Q.No | Module Number | CO Mapped | PO Mapped | PEO Mapped | PSO Mapped | Marks
Q1   | 2             | 1         | 1,2,3,12  | 2          | 1          | 10
Q2   | 5             | 4         | 1,2,3,12  | 2          | 1          | 20
Q3   | 3             | 4         | 1,2,3,12  | 2          | 1          | 10
Q4   | 6             | 6         | 1,2,3,12  | 4          | 1          | 10
Q5   | 6             | 6         | 1,2,3,12  | 4          | 1          | 10

VIT-AP UNIVERSITY - "Apply Knowledge. Improve Life!"
QUESTION PAPER
Name of the Examination: WINTER 2022-2023 - FAT
Course Code: CSE 3008    Course Title: Introduction to Machine Learning
Set number: 2            Date of Exam: 19/05/2023 (AN)
Duration: 120 mins       Total Marks: 60
Instructions:
1. Assume data wherever necessary.
2. Any assumptions made should be clearly stated.

Q1. Consider the given artificial neural network with weights and bias values. The input and target values for this problem are x1 = 1, x2 = 4, x3 = 5, and t1 = 0.1 and t2 = 0.05. The learning rate is α = 0.01, and the activation function for the given problem is the sigmoid activation function. Here, w1 = 0.1, w2 = 0.2, w3 = 0.3, w4 = 0.4, w5 = 0.5, w6 = 0.6, w7 = 0.7, w8 = 0.8, w9 = 0.9, w10 = 0.1 and b1 = b2 = 0.5. (15M)
i. Apply forward propagation to predict the output values O1 and O2.
ii. Apply the Mean Squared Error to calculate the loss.
iii. Apply backward propagation to update the parameters w1 and w7.
(Figure: the network diagram.)

Q2. Assume a person is locked inside a room, and on the day they were locked in, the weather was sunny. The caretaker carried an umbrella into the room the next day. The person would like to know the weather outside the room on this second day. The probabilities of carrying an umbrella are 0.2 on a sunny day, 0.6 on a rainy day and 0.4 on a foggy day. (10M)

Q3. Implement the Perceptron Rule to design a two-input binary XOR logic gate. It is assumed that all the initial weights are 1, the learning rate is 1.5 and the threshold is 3. (10M)

Q4. Calculate the following probabilities for the Bayesian Belief Network. (10M)
(Figure: the belief network over Exam Level, IQ Level, Aptitude Score, Marks and Admission, with its conditional probability tables.)
a) Calculate the probability that, in spite of the exam level being difficult, the student having a low IQ level and a low Aptitude Score manages to pass the exam and secure admission to the university.
b) Calculate the probability that the student has a high IQ level and Aptitude Score, the exam being easy, yet fails to pass and does not secure admission to the university.
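Both parts of Q4 are products of entries read off the network's conditional probability tables. The CPT values in the figure are not legible in this copy, so the sketch below uses made-up placeholder numbers and assumes the commonly used structure Exam Level and IQ Level feeding Marks, IQ Level feeding Aptitude Score, and Marks feeding Admission; it only illustrates how such a joint probability is assembled.

```python
# Joint probability in a small Bayesian belief network (sketch).
# Assumption: structure Exam level (e) -> Marks (m) <- IQ level (i),
#             IQ level -> Aptitude score (s), Marks -> Admission (a).
# ALL numeric CPT values below are placeholders, not the ones printed in the paper.

P_e = {"difficult": 0.7, "easy": 0.3}          # placeholder prior on exam level
P_i = {"high": 0.8, "low": 0.2}                # placeholder prior on IQ level
P_s_given_i = {("high", "i_high"): 0.75, ("low", "i_high"): 0.25,
               ("high", "i_low"): 0.4,   ("low", "i_low"): 0.6}    # P(aptitude | IQ)
P_m_given_ie = {("pass", "i_high", "difficult"): 0.6, ("pass", "i_high", "easy"): 0.9,
                ("pass", "i_low",  "difficult"): 0.3, ("pass", "i_low",  "easy"): 0.5}  # P(pass | IQ, exam)
P_a_given_m = {("admit", "pass"): 0.6, ("admit", "fail"): 0.1}     # P(admission | marks)

def joint(e, i, s, m, a):
    """P(e, i, s, m, a) as the product of the CPT entries along the network."""
    p_m = P_m_given_ie[("pass", f"i_{i}", e)]
    p_m = p_m if m == "pass" else 1.0 - p_m
    p_a = P_a_given_m[("admit", m)]
    p_a = p_a if a == "admit" else 1.0 - p_a
    return P_e[e] * P_i[i] * P_s_given_i[(s, f"i_{i}")] * p_m * p_a

# Shape of part (a): difficult exam, low IQ, low aptitude, passes, gets admission.
print(joint("difficult", "low", "low", "pass", "admit"))
```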
Q5. A patient visits three days in a row, and the doctor discovers that the patient feels normal on the first day, cold on the second day, and dizzy on the third day. The doctor has a question, to be answered with the Viterbi algorithm: what is the most likely sequence of health conditions of the patient that would explain these observations? (15M)

QP Mapping

Q.No | Module Number | CO Mapped | PEO Mapped | PSO Mapped | Marks
Q1   | 5             | 4         | 2          | 1          | 15
Q2   | 6             | 6         | 2          | 1          | 10
Q3   | 5             | 4         | 2          | 1          | 10
Q4   | 4             | 5         | 2          | 1          | 10
Q5   | 6             | 6         | 2          | 1          | 15

VIT-AP UNIVERSITY - "Apply Knowledge. Improve Life!"
QUESTION PAPER
Name of the Examination: WINTER 2022-2023 - FAT
Course Code: CSE 3008    Course Title: Introduction to Machine Learning
Set number: 2            Date of Exam: 15/05/2023 (FN)
Duration: 120 minutes    Total Marks: 60
Instructions:
1. Assume data wherever necessary.
2. Any assumptions made should be clearly stated.

Q1. Explain briefly and formalize the logistic regression cost function as convex, and provide an approach that can solve the problem of fitting the parameter θ using Gradient Descent, where θ is the input to the cost function. (10M)

Q2. a) Explain the domain of problems for which an artificial neural network is the most appropriate approach. Illustrate with two examples to support the answer.
b) Is zero initialization of weights in an artificial neural network a good initialization technique? Provide a proper explanation and reasons to support the assertions. (5+5)M

Q3. For the given multilayer neural network as shown in Figure 1, find the new weights when the input to the backpropagation network is [1, 1, 1] and the target output is [1, 1, 1]. Use learning rate 0.1, activation function f(x) = 1/(1 + e^(-x)), and bias set to 1. Draw the multilayer neural network after updating the weights. (15M)
Figure 1: Neural network (diagram given in the paper).

Q4. Write short notes on:
a) The evaluation problem in the Hidden Markov Model.
b) The problem of finding the state sequence in the Hidden Markov Model. (5+5)M

Q5. Suppose we have a Hidden Markov Model with ω = {ω0, ω1, ω2, ω3} as hidden states and visible symbols/states v = {v0, v1, v2, v3}, with the corresponding transition and emission probabilities as shown in Figure 2, where ω0 is the final or accepting state and v0 is the symbol/state emitted only by ω0. If we have the four-symbol sequence V4 = {v2 v1 v0}, what is the probability that the hidden Markov model will generate it, starting in hidden state ω1, using the forward recursive algorithm? (15M)
Figure 2: Hidden Markov Model (transition and emission probability tables given in the paper).

QP MAPPING

Q.No | Module Number | CO Mapped | PO Mapped      | PSO Mapped     | Marks
Q1   | 2             | 4         | 1,2,4          | 1,2,4          | 10
Q2   | 5             | 4         | 1,2,4          | 1,2,4          | 10
Q3   | 5             | 4         | 2,3,4,5,6,11   | 2,3,4,5,6,11   | 15
Q4   | 6             | 6         | 1,2,4          | 1,2,4          | 10
Q5   | 6             | 6         | 3,4,5,6,9      | 3,4,5,6,9      | 15

VIT-AP UNIVERSITY - "Apply Knowledge. Improve Life!"
QUESTION PAPER
Name of the Examination: WINTER 2022-2023 - FAT
Course Code: CSE 3008    Course Title: Introduction to Machine Learning
Set number:              Date of Exam: 13/06/2023
Duration: 120 min        Total Marks: 60
Instructions: Answer all the questions.

Q1.
A. Consider the input feature map and convolution filter and find the output feature map.
(Figure: the input feature map and the 3x3 convolutional filter.)
B. Apply max pooling with pool size (filter size) 2x2 and stride 2 on the image below and show the output.
(Figure: the 4x4 image; legible rows include 31 15 28 184 and 0 100 70 38.)
C. Find the outputs y1, y2 and y3 obtained after applying the softmax function to the inputs z1 = 4, z2 = -7 and z3 = 1.
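The three parts of Q1 are all small array computations. Below is a NumPy sketch with a made-up 5x5 feature map and filter standing in for the printed ones (which are not legible here); only the softmax inputs z1 = 4, z2 = -7, z3 = 1 are taken from the question.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid 2-D cross-correlation (the 'convolution' used in CNN exercises)."""
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            patch = image[r * stride:r * stride + kh, c * stride:c * stride + kw]
            out[r, c] = np.sum(patch * kernel)   # elementwise product, then sum
    return out

def max_pool(image, size=2, stride=2):
    """Max pooling with a square window."""
    out_h = (image.shape[0] - size) // stride + 1
    out_w = (image.shape[1] - size) // stride + 1
    return np.array([[image[r * stride:r * stride + size, c * stride:c * stride + size].max()
                      for c in range(out_w)] for r in range(out_h)])

def softmax(z):
    """Numerically stable softmax."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

# Placeholder 5x5 feature map and 3x3 filter (the printed values are not reproduced here).
feature_map = np.arange(25).reshape(5, 5).astype(float)
kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)

print(conv2d(feature_map, kernel))            # part A: output feature map (placeholder input)
print(max_pool(np.arange(16).reshape(4, 4)))  # part B: 2x2 max pooling, stride 2 (placeholder input)
print(softmax([4, -7, 1]))                    # part C: y1, y2, y3 from the question
```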
Q2. Consider the following Artificial Neural Network. Input values x1 and x2; randomly assigned weights w1, w2, w3, w4, w5, w6, w7 and w8; target values o1 = 0.05 and o2 = 0.95; bias values b1 and b2. Use the sigmoid activation function. Learning rate α = 0.5. Calculate the error at nodes w5 and w1, and find the updated weights for both w5 and w1.

Q3. Find the transition matrix from the diagram below. Assume the initial probabilities for sleeping, eating and playing are π = {0.25, 0.25, 0.50}.
a. Find the probability of the series: baby is sleeping, sleeping, eating, playing, playing, sleeping.
b. Find the probability of the baby playing given the baby is eating.
c. Draw the HMM model.

         | Eating | Sleeping | Playing
Eating   | 0.3    | 0.4      | 0.3
Sleeping | 0.2    | 0.3      | 0.5
Playing  | 0.1    | 0.3      | 0.6

Q4. It is well known that a DNA sequence is a series of components from {A, C, G, T}. Now let us assume there is one hidden variable S that controls the generation of the DNA sequence. S takes 2 possible states {S1, S2}. Assume the following transition probabilities for the HMM M: P(S1|S1) = 0.8, ...; the emission probabilities: P(A|S1) = 0.4, P(C|S1) = 0.1, ..., P(A|S2) = 0.1, P(C|S2) = 0.4, ...; and the given initial state probabilities P(S1) = ..., P(S2) = ....
Assume the observed sequence x = C G T. Find the most likely path of hidden states using the Viterbi algorithm. Show each step.

Q5. A) Write a short note on ensemble learning.
B) What is Expectation Maximization? Explain.

QP Mapping

Q.No | Module Number | CO Mapped | PO Mapped | PEO Mapped         | PSO Mapped | Marks
Q1   | 1,3           | CO3       | 1,2,3     | 1,2,3,4,9,10,11,12 | 1,2,3      | 10
Q2   | 4,5           | CO5       | 1,2,3     | 1,2,3,4,9,10,11,12 | 1,2,3      | 15
Q3   | 6             | CO6       | 1,2,3     | 1,2,3,4,9,10,11,12 | 1,2,3      | 10
Q4   | 6             | CO6       | 1,2,3     | 1,2,3,4,9,10,11,12 | 1,2,3      | 15
Q5   | 2             | CO2, CO4  | 1,2,3     | 1,2,3,4,9,10,11,12 | 1,2,3      | 10

VIT-AP UNIVERSITY - "Apply Knowledge. Improve Life!"
QUESTION PAPER
Name of the Examination: WINTER 2022-2023 - FAT
Course Code: CSE 3008    Course Title: Introduction to Machine Learning
Set number:              Date of Exam: 15/05/2023 (AN)
Duration: 120 mins       Total Marks: 60
Instructions: ...

Q1. a. Given below are an original image of size 7x7 and a filter of size 3x3. What will be the output feature map after applying the convolution operation with stride 1? (10M)
(Figure: the 7x7 binary original image and the 3x3 filter.)
b. Apply average pooling with pool size (filter size) 2x2 and stride 2 on the image below and show the output. (5M)
(Figure: the 4x4 image; legible rows include 31 15 28 184 and 0 100 70 38.)

Q2. Consider the following neural network with the input, output and weight parameter values shown in the diagram. The activation value in each neuron is calculated using the sigmoid activation function. Now answer the following:
a. For the given inputs i1 and i2 as shown in the diagram, compute the outputs of the hidden layer and output layer neurons. (5M)
b. Compute the error in the network with the initialized weight parameters shown in the diagram. (5M)
c. Update the weight parameters for w7 and w8 using the backpropagation algorithm in the first iteration. (5M)
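Q2 here, like the similar backpropagation questions in the earlier sets, follows one recipe: a sigmoid forward pass, a squared-error loss, then a gradient step on the output-layer weights. The sketch below uses small made-up numbers for a 2-2-2 network, since the values in the diagram are not reproduced here; it shows only the mechanics of the forward pass and the update of the output-layer weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Placeholder values (assumption) for a 2-2-2 network: inputs, weights, biases, targets.
x  = np.array([0.05, 0.10])
W1 = np.array([[0.15, 0.20],      # weights into h1
               [0.25, 0.30]])     # weights into h2
W2 = np.array([[0.40, 0.45],      # weights into o1 (from h1, h2)
               [0.50, 0.55]])     # weights into o2
b1, b2 = 0.35, 0.60
target = np.array([0.01, 0.99])
lr = 0.5

# Forward pass.
h = sigmoid(W1 @ x + b1)          # hidden activations
o = sigmoid(W2 @ h + b2)          # output activations

# Squared-error loss (the 1/2 factor keeps the gradient tidy).
E = 0.5 * np.sum((target - o) ** 2)

# Backward pass for the output-layer weights:
# dE/dW2[i, j] = (o_i - t_i) * o_i * (1 - o_i) * h_j
delta_o = (o - target) * o * (1 - o)
W2_new = W2 - lr * np.outer(delta_o, h)

print("outputs:", o, "error:", E)
print("updated output-layer weights:\n", W2_new)
```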
Q3. Consider the following Hidden Markov Model (HMM) having three different states of the share market, considering the trend weekly. The state transition probabilities and the emission probabilities corresponding to the different states are shown in the diagram. Here, profit and loss are the observable variables. (15M)
(Figure: the HMM over the states Share Market up, Share Market down and Share Market sideways; legible entries include P(profit | Share Market down) = 0.05, P(loss | Share Market sideways) = 0.5, P(profit | Share Market sideways) = ?, P(loss | Share Market down) = ?.)
Answer the following:
a. From the observation of profit and loss for 5 weeks by one person, find the likelihood that the share market status for these 5 weeks is q1 = Share Market down, q2 = Share Market sideways, q3 = Share Market sideways, q4 = Share Market up, q5 = Share Market down.
b. Suppose that when the person invested his money the share market was going up, but the next week he suffered a loss. What was the status of the market in that second week?

Q4. A crew consisting of only one scientist was sent to a newly discovered planet having an ecosystem similar to Earth. Unfortunately, contact was lost with the crew for a few days. When the connection was re-established, the ground crew was shocked to receive 3 encoded messages in a sequence which says [x, y, z]. Using the Viterbi algorithm, compute the most probable sequence of incidents that happened to the crew from the sequence of messages. Here, the observation x means fever, y means paralyzed, and z means likely to die. Parasite infection and Higher life form attack are the two possible states that happened to the scientist. Show each step of the solution using the Viterbi algorithm. (10M)
(Figure: the HMM with the states Parasite infection and Higher life form attack and their transition and emission probabilities for fever, paralyzed and likely to die.)

Q5. Consider the following belief network and answer the following. (5M)
(Figure: the belief network with nodes NW = Tom is not well, P(NW) = 0.21; R = It is raining, P(R) = 0.76; SH = Tom is staying at home; S = Tom is sleeping; with the conditional probability tables for SH and S.)
What is the probability that Tom is not sleeping although it is raining heavily and he is not well? He was upset that he could not join the sleepover at his friend's home, and his mom forced him to stay at home.

QP MAPPING

Q.No | Module Number | CO Mapped | PO Mapped | PEO Mapped | PSO Mapped | Marks
Q1   | 5             | 6         | 3         | 2          | 1          | 15
Q2   | 5             | 6         | 4         | 2          | 1          | 15
Q3   | 6             | 6         | 5         | 2          | 1          | 15
Q4   | 6             | 6         | 6         | 2          | 1          | 10
Q5   | 4             | 5         | 4         | 2          | 1          | 5

VIT-AP UNIVERSITY - "Apply Knowledge. Improve Life!"
QUESTION PAPER
Name of the Examination: WINTER 2022-2023 - FAT
Course Code: CSE 3008    Course Title: Introduction to Machine Learning
Set number:              Date of Exam: 18/05/2023 (FN)
Duration: 90 minutes     Total Marks: 60
Instructions:
1. Assume data wherever necessary.
2. Any assumptions made should be clearly stated.

Q1. Derive the gradient descent rule for locally weighted linear regression approximated to the target function. (10M)

Q2. With reference to the Convolutional Neural Network architecture, answer the questions below. (15M)
(Figure: a CNN pipeline of input, convolution + ReLU, pooling, convolution + ReLU, pooling, flatten, fully connected and softmax layers, split into feature learning and classification stages.)
(a) How does a CNN handle text and audio data?
(b) How does a CNN differ from a fully connected neural network?
(c) What are some best practices that need to be followed when designing a CNN?
(d) Suggest techniques to handle overfitting in a CNN.
(e) Explain the function of the convolution layer in a CNN. For an input image of size 6x6, generate the feature map using two 3x3 filters with stride 2.
(Figure: the 6x6 binary image and the 3x3 filters.)
b. Explain how the non-linearity condition is represented in Exclusive OR using a perceptron. (5M)
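A short sketch of the perceptron rule applied to the two-input XOR truth table, using the initial weights of 1, learning rate of 1.5 and threshold of 3 from the XOR perceptron question in Set 2. Because XOR is not linearly separable, the updates cycle and training never converges, which is exactly the non-linearity point asked about in part (b); the epoch limit of 10 is an arbitrary choice for the demonstration.

```python
# Perceptron rule on two-input XOR (sketch).
# Initial weights = 1, learning rate = 1.5, threshold = 3, as in the XOR perceptron question.

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR truth table
w = [1.0, 1.0]
lr, theta = 1.5, 3.0

for epoch in range(10):
    errors = 0
    for (x1, x2), t in data:
        y = 1 if w[0] * x1 + w[1] * x2 >= theta else 0        # hard-threshold activation
        if y != t:
            w[0] += lr * (t - y) * x1                          # perceptron weight update
            w[1] += lr * (t - y) * x2
            errors += 1
    print(f"epoch {epoch + 1}: weights = {w}, misclassified = {errors}")
    if errors == 0:
        break   # would only happen for a linearly separable target, never for XOR
```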
Q3. For an Artificial Neural Network architecture with three input nodes, two hidden nodes, one output node and a learning rate α = 0.9, apply backpropagation and calculate the error at each node for one iteration. (10M)
(Table: the input values x1, x2, x3 and the weight and bias values, as given in the paper.)

Q4. Once a day, baby Bob is observed to have one of the moods H: Happy, C: Calm, S: Sad. (10M)
The state transition probabilities are

A = | 0.4  0.3  0.3 |
    | 0.2  0.6  0.2 |
    | 0.1  0.1  0.8 |

1. Given that the mood on day 1 (t = 1) is Sad (state 3), what is the probability that the mood of baby Bob for the next 7 days will be "Sad-Sad-Happy-Happy-Sad-Calm-Sad"?
2. Given that the model is in state i, what is the probability that it stays in state i for exactly d days?

Q5. Apply the Viterbi algorithm to compute the maximum likelihood state sequence for the given pattern. (10M)
Assume a student is taking the GRE competitive exam and wants to know the grade he would secure before the answer key is released. So, based on the answer options, he knows whether he would score medium or high marks in the exam. Compute the maximum likelihood sequence for S = GGCACTGAA. It indicates the order of options selected for the questions in the exam.
(Figure: the start probabilities for the states Medium and High and their emission probabilities for A, C, G and T.)

QP MAPPING

Q.No | Module Number | CO Mapped | PO Mapped | PEO Mapped | PSO Mapped | Marks
Q1   | 2             | 2         | -         | 1          | -          | 10
Q2   | 5             | 4         | 1         | 3          | 3          | 20
Q3   | 5             | 4         | 1         | 3          | 3          | 10
Q4   | 6             | 4         | 4         | 4          | 2          | 10
Q5   | 6             | 6         | 4         | 4          | 2          | 10
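Several questions across these sets (the tea shop, DNA, patient, scientist and GRE problems) ask for the Viterbi algorithm. A generic dynamic-programming sketch is below, run on the two-state DNA HMM with observed sequence C G T. Only P(S1|S1) = 0.8, P(A|S1) = 0.4, P(C|S1) = 0.1, P(A|S2) = 0.1 and P(C|S2) = 0.4 come from that question; the remaining transition, emission and start probabilities are filled in as assumptions so the example runs.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence (raw probabilities, small HMMs)."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]   # delta values at t = 0
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Best predecessor for state s at time t.
            prob, prev = max((V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p) for p in states)
            V[t][s], back[t][s] = prob, prev
    # Trace back from the best final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path, V[-1][last]

states = ["S1", "S2"]
start_p = {"S1": 0.5, "S2": 0.5}                               # assumed
trans_p = {"S1": {"S1": 0.8, "S2": 0.2},                       # P(S1|S1)=0.8 from the question
           "S2": {"S1": 0.2, "S2": 0.8}}                       # S2 row assumed
emit_p = {"S1": {"A": 0.4, "C": 0.1, "G": 0.4, "T": 0.1},      # A, C from the question; G, T assumed
          "S2": {"A": 0.1, "C": 0.4, "G": 0.1, "T": 0.4}}      # A, C from the question; G, T assumed

print(viterbi(list("CGT"), states, start_p, trans_p, emit_p))
```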