R LAB ASSIGNMENT

SRUTHI.A.M
23MSP1014
> m=50
> m
[1] 50
> ps=0.6
> ps
[1] 0.6
> #exactly no errors
> lambda=m*ps
> lambda
[1] 30
> p1=dpois(0,lambda)
> p1
[1] 9.357623e-14
> round(1000*p1)
[1] 0
> #exactly one error
> p2=dpois(1,lambda)
> p2
[1] 2.807287e-12
> round(1000*p2)
[1] 0
> #exactly 2 errors
> p3=dpois(2,lambda)
> p3
[1] 4.21093e-11
> round(1000*p3)
[1] 0
> #plotting the distribution
> x1=0:m
> x1
[1] 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28
[30] 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50
> px1=dpois(x1,lambda)
> px1
[1] 9.357623e-14 2.807287e-12 4.210930e-11 4.210930e-10 3.158198e-09 1.894919e-08
[7] 9.474593e-08 4.060540e-07 1.522702e-06 5.075675e-06 1.522702e-05 4.152825e-05
[13] 1.038206e-04 2.395861e-04 5.133987e-04 1.026797e-03 1.925245e-03 3.397491e-03
[19] 5.662486e-03 8.940767e-03 1.341115e-02 1.915879e-02 2.612562e-02 3.407689e-02
[25] 4.259611e-02 5.111534e-02 5.897924e-02 6.553248e-02 7.021338e-02 7.263453e-02
[31] 7.263453e-02 7.029148e-02 6.589826e-02 5.990751e-02 5.285957e-02 4.530820e-02
[37] 3.775683e-02 3.061365e-02 2.416867e-02 1.859128e-02 1.394346e-02 1.020253e-02
[43] 7.287524e-03 5.084319e-03 3.466581e-03 2.311054e-03 1.507209e-03 9.620485e-04
[49] 6.012803e-04 3.681308e-04 2.208785e-04
> plot(x1,px1,xlab = "value of x",ylab = "probability distribution of x")
> ex=weighted.mean(x1,px1)
> ex
[1] 29.99337
> varx1=weighted.mean(x1*x1,px1)-(weighted.mean(x1,px1))^2
> varx1
[1] 29.86076
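
For a Poisson variable the mean and the variance both equal lambda = 30; the weighted estimates above land slightly below 30 only because the support was cut off at x = 50. A minimal check (the wider range 0:150 is an arbitrary choice; any range that makes the truncated tail negligible would do):

# widen the support so the truncated tail is negligible
x_wide = 0:150
px_wide = dpois(x_wide, lambda)
weighted.mean(x_wide, px_wide)                                            # ~30 = lambda
weighted.mean(x_wide*x_wide, px_wide) - weighted.mean(x_wide, px_wide)^2  # ~30 = lambda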
> sample1=c(23,20,19,21,18,17,23,16,19)
> sample1
[1] 23 20 19 21 18 17 23 16 19
> sample2=c(24,19,22,18,20,22,20,23,20,18)
> sample2
[1] 24 19 22 18 20 22 20 23 20 18
> sample1=c(23,20,19,21,18,20,17,23,16,19)
> sample1
[1] 23 20 19 21 18 20 17 23 16 19
> t=t.test(sample1,sample2)
> t

Welch Two Sample t-test

data: sample1 and sample2
t = -1.0183, df = 17.764, p-value = 0.3222
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-3.065199 1.065199
sample estimates:
mean of x mean of y
19.6 20.6

> alpha=0.05
> alpha
[1] 0.05
> tv=t$p.value
> tv
[1] 0.32222
> if(tv>alpha){print("accept hypothesis")}else print("reject hypothesis")
[1] "accept hypothesis"
> mean=52
> mean
[1] 52
> sd=10
> sd
[1] 10
> mean_p=50
> mean_p
[1] 50
> sd_p=8
> sd_p
[1] 8
> mean_c=48
> mean_c
[1] 48
> sd_c=6
> sd_c
[1] 6
> #180 or above
> p1=1-pnorm(180,mean+mean_p+mean_c,sd+sd_p+sd_c)
> p1
[1] 0.1056498
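> #135 or above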
> p2=1-pnorm(135,mean+mean_p+mean_c,sd+sd_p+sd_c)
> p2
[1] 0.7340145
> marks=seq(0,200,by=5)
> marks
[1] 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100
[22] 105 110 115 120 125 130 135 140 145 150 155 160 165 170 175 180 185 190 195 200
> p=pnorm(mean+mean_p+mean_c,sd+sd_p+sd_c)
> p
[1] 1
> distribution_table=data.frame(marks,p)
> distribution_table
   marks p
1      0 1
2      5 1
3     10 1
[rows 4 to 40 omitted; p = 1 for every value of marks]
41   200 1
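
The column of 1s appears because the marks vector was never passed to pnorm(): the call pnorm(mean+mean_p+mean_c, sd+sd_p+sd_c) evaluates pnorm(150, mean = 24, sd = 1), which is effectively 1. A sketch of what was presumably intended, keeping the script's combined mean of 150 and combined standard deviation of 24 (for independent subject scores the combined standard deviation would instead be sqrt(10^2 + 8^2 + 6^2), roughly 14.1):

# cumulative probability of scoring at most each value in marks (assumed intent)
p_marks = pnorm(marks, mean + mean_p + mean_c, sd + sd_p + sd_c)
distribution_table = data.frame(marks, p = round(p_marks, 4))
distribution_table    # rises from about 0 at 0 marks to about 0.98 at 200 marks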

> data=matrix(c(25,13,9,47,62,53,49,164,12,34,43,89,99,100,10,300),ncol=4,byrow=T)
> data
[,1] [,2] [,3] [,4]
[1,] 25 13 9 47
[2,] 62 53 49 164
[3,] 12 34 43 89
[4,] 99 100 10 300
> totalrow=rowSums(data)
> totalrow
[1] 94 328 178 509

> totalcolumn=colSums(data)
> totalcolumn
[1] 198 200 111 600
> total=sum(data)
> total
[1] 1109
> exp=c(totalrow,totalcolumn)/(total)
> exp
[1] 0.08476105 0.29576195 0.16050496 0.45897205 0.17853922 0.18034265 0.10009017 0.54102795
> sum(exp)
[1] 2
> cv=chisq.test(data)
> cv

Pearson's Chi-squared test

data: data
X-squared = 100.23, df = 9, p-value < 2.2e-16

> cv=cv$p.value
> cv
[1] 1.415387e-17
> alpha=0.05
> alpha
[1] 0.05
> if(cv>alpha){print("accept the hypothesis")}else{print("reject the hypothesis")}
[1] "reject the hypothesis"


> x=c(0,1,2,3,4,5)
> x
[1] 0 1 2 3 4 5
> f=c(142,156,69,27,5,1)
> f
[1] 142 156 69 27 5 1
> u=sum(f)
> u
[1] 400
> data=data.frame(x=rep(x,f),freq=rep(1,u))
> data
    x freq
1   0    1
2   0    1
3   0    1
[rows 4 to 385 omitted: x = 0 through row 142, x = 1 for rows 143-298, x = 2 for rows 299-367, x = 3 from row 368; freq = 1 in every row]
386 3    1
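
The capture breaks off at row 386 of the 400-row expanded data frame. Assuming the exercise is heading toward fitting a Poisson distribution to these observed frequencies (the usual continuation for data in this form), the summary statistics can also be taken straight from the frequency table without expanding it; a sketch:

# mean number of occurrences per unit, straight from the frequency table
lambda_hat = sum(x*f)/sum(f)         # 400/400 = 1 for these data
lambda_hat
mean(rep(x, f))                      # same value from the expanded observations
round(sum(f)*dpois(x, lambda_hat))   # expected Poisson counts, to compare with f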
