ECE286 Final Exam Aid Sheet


Winter 2024 Final Exam

ECE286H1 S
Duration: 150 Minutes

Distributions (PMF/PDF, expectation, variance)

Binomial: b(x; n, p) = C(n, x) p^x (1 − p)^(n−x); μ = np, σ² = np(1 − p)
Multinomial: f(x_1, …, x_m; p_1, …, p_m, n) = [n!/(x_1!⋯x_m!)] p_1^(x_1) ⋯ p_m^(x_m); μ_i = np_i, σ_i² = np_i(1 − p_i)
Hypergeometric: h(x; N, n, k) = C(k, x) C(N − k, n − x)/C(N, n); μ = nk/N, σ² = [nk(N − n)/(N(N − 1))](1 − k/N)
Negative Binomial: b*(x; k, p) = C(x − 1, k − 1) p^k q^(x−k), x ≥ k; μ = k/p, σ² = k(1 − p)/p²
Geometric: g(x; p) = p(1 − p)^(x−1), x ≥ 1; μ = 1/p, σ² = (1 − p)/p²
Poisson: p(x; λt) = e^(−λt) (λt)^x / x!; μ = λt, σ² = λt
Uniform: f(x; A, B) = 1/(B − A), A ≤ x ≤ B; μ = (A + B)/2, σ² = (B − A)²/12
Normal: n(x; μ, σ) = [1/(√(2π) σ)] e^(−(x−μ)²/(2σ²)), −∞ < x < ∞; mean μ, variance σ²
Standard Normal: CDF Φ(x) = (1/√(2π)) ∫_(−∞)^x e^(−t²/2) dt; μ = 0, σ² = 1; P(A ≤ X ≤ B) = Φ(B) − Φ(A)
Gamma: f(x; α, β) = [1/(β^α Γ(α))] x^(α−1) e^(−x/β), x > 0; μ = αβ, σ² = αβ²
Exponential: f(x; α = 1, β) = f(x; β) = (1/β) e^(−x/β), x ≥ 0, β = 1/λ; μ = β, σ² = β²
Chi-Squared: f(x; α = ν/2, β = 2) = f(x; ν) = [1/(2^(ν/2) Γ(ν/2))] x^(ν/2 − 1) e^(−x/2), x > 0; μ = ν, σ² = 2ν
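A quick stdlib-only numerical check of the binomial row (not part of the original sheet; n and p are illustrative): summing x·b(x; n, p) and (x − μ)²·b(x; n, p) over the support reproduces μ = np and σ² = np(1 − p).

```python
from math import comb

def binom_pmf(x, n, p):
    # b(x; n, p) = C(n, x) p^x (1 - p)^(n - x)
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 12, 0.3  # illustrative values
pmf = [binom_pmf(x, n, p) for x in range(n + 1)]
mean = sum(x * f for x, f in enumerate(pmf))
var = sum((x - mean)**2 * f for x, f in enumerate(pmf))

assert abs(sum(pmf) - 1) < 1e-12            # normalization
assert abs(mean - n * p) < 1e-12            # mu = np
assert abs(var - n * p * (1 - p)) < 1e-12   # sigma^2 = np(1 - p)
```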

Operations with Sets
A ∩ ∅ = ∅, A ∪ ∅ = A, A ∩ A′ = ∅, A ∪ A′ = S, S′ = ∅, ∅′ = S, (A′)′ = A, (A ∩ B)′ = A′ ∪ B′, (A ∪ B)′ = A′ ∩ B′

CDF Properties
P(X ≤ x) = F(x) = ∑_(t≤x) f(t) = ∫_(−∞)^x f(t) dt, x ∈ ℝ
P(a < X < b) = F(b) − F(a)

Permutations and Combinations
nPr = n!/(n − r)!, nCr = n!/[r!(n − r)!], C(n; n_1, …, n_m) = n!/(n_1! n_2! ⋯ n_m!)

Additive and Product Rules
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
P(A ∩ B) = P(A)P(B|A), P(A) > 0

Independence, Conditional Probability, and Bayes' Rule
Independence: P(A|B) = P(A) or P(B|A) = P(B) or P(A ∩ B) = P(A)P(B)
P(B|A) = P(A ∩ B)/P(A), P(B|A) = P(A|B)P(B)/P(A), P(A) > 0

Total Probability Theorem and Corresponding Bayes' Rule
P(A) = ∑_(i=1)^k P(A ∩ B_i) for partition B_1, …, B_k
P(B|A) = P(B)P(A|B) / ∑_(i=1)^k P(C_i)P(A|C_i) for partition C_1, …, C_k

Joint Distribution Properties and Marginals
f(x, y) ≥ 0 ∀(x, y) ∈ S
∑_x ∑_y f(x, y) = ∫_(−∞)^∞ ∫_(−∞)^∞ f(x, y) dx dy = 1
P((X, Y) ∈ A) = ∑_((x,y)∈A) f(x, y) = ∫_((x,y)∈A) f(x, y) dx dy
g(x) = ∑_y f(x, y) = ∫_(−∞)^∞ f(x, y) dy (marginal of X)
h(y) = ∑_x f(x, y) = ∫_(−∞)^∞ f(x, y) dx (marginal of Y)

Conditional Distributions and Independence of RVs
f(x|y) = f(x, y)/h(y), f(y|x) = f(x, y)/g(x)
P(a ≤ X ≤ b | Y = c) = ∑_(a≤x≤b) f(x|Y = c) = ∫_a^b f(x|Y = c) dx
f(x, y) = g(x)h(y) (independence)
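A small numerical illustration of the total probability theorem and Bayes' rule over a three-set partition (the probabilities are hypothetical, chosen only for illustration):

```python
# Partition C1, C2, C3 with priors P(Ci) and likelihoods P(A|Ci).
prior = [0.5, 0.3, 0.2]          # P(Ci); must sum to 1
likelihood = [0.02, 0.05, 0.10]  # P(A | Ci)

# Total probability: P(A) = sum_i P(Ci) P(A|Ci)
p_A = sum(p * l for p, l in zip(prior, likelihood))

# Bayes' rule: P(Ci | A) = P(Ci) P(A|Ci) / P(A)
posterior = [p * l / p_A for p, l in zip(prior, likelihood)]

assert abs(p_A - 0.045) < 1e-12
assert abs(sum(posterior) - 1) < 1e-12  # posteriors over a partition sum to 1
```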
PMF and PDF Properties
f(x) ≥ 0 for each outcome/value X = x
∑_x f(x) = ∫_(−∞)^∞ f(x) dx = 1 (normalization)
P(X = x) = f(x) (discrete), P(a < X < b) = ∫_a^b f(x) dx (continuous)

Expectations of Functions of RVs (One and Two)
E[z(X)] = ∑_x z(x)f(x) = ∫_(−∞)^∞ z(x)f(x) dx
E[z(X, Y)] = ∑_x ∑_y z(x, y)f(x, y) = ∫_(−∞)^∞ ∫_(−∞)^∞ z(x, y)f(x, y) dx dy
E[aX + bY] = aE[X] + bE[Y] (any case), E[XY] = E[X]E[Y] (X and Y independent only)
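A stdlib sketch (illustrative PMFs, not from the sheet) building a joint PMF from independent marginals and confirming E[XY] = E[X]E[Y] under independence:

```python
from itertools import product

# Independent discrete RVs: f(x, y) = g(x) h(y)
fX = {0: 0.4, 1: 0.6}
fY = {1: 0.5, 2: 0.3, 3: 0.2}
joint = {(x, y): fX[x] * fY[y] for x, y in product(fX, fY)}

EX = sum(x * p for x, p in fX.items())
EY = sum(y * p for y, p in fY.items())
EXY = sum(x * y * p for (x, y), p in joint.items())

assert abs(sum(joint.values()) - 1) < 1e-12   # joint PMF normalizes
assert abs(EXY - EX * EY) < 1e-12             # holds because X, Y independent
```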

Variance Properties
σ² = Var(X) = E[(X − μ)²] = E[X²] − μ²
σ² = ∑_x (x − μ)² f(x) = ∫_(−∞)^∞ (x − μ)² f(x) dx
σ²_(aX+bY+c) = a²σ_X² + b²σ_Y² + 2ab σ_XY (any case)

Covariance Properties
σ_XY = Cov(X, Y) = E[(X − μ_X)(Y − μ_Y)] = E[XY] − μ_X μ_Y
σ_XY = ∑_x ∑_y (x − μ_X)(y − μ_Y) f(x, y) (discrete)
σ_XY = ∫_(−∞)^∞ ∫_(−∞)^∞ (x − μ_X)(y − μ_Y) f(x, y) dx dy (continuous)
σ_XY = E[XY] − E[X]E[Y] ∴ σ_XY = 0 for X and Y independent

Correlation Coefficient
ρ_XY = σ_XY/(σ_X σ_Y), −1 ≤ ρ_XY ≤ 1

Moments and Moment-Generating Functions
μ′_r = E[X^r] = d^r M_X(t)/dt^r |_(t=0), r-th moment about the origin of X
μ′_r = ∑_x x^r f(x) = ∫_(−∞)^∞ x^r f(x) dx
μ = μ′_1, σ² = μ′_2 − μ²
M_X(t) = E[e^(tX)] = ∑_x e^(tx) f(x) = ∫_(−∞)^∞ e^(tx) f(x) dx (MGF)
M_X(t) = e^(μt + σ²t²/2) (Normal distribution)

Linear Combinations of RVs
For Y = aX given distribution f(x): h(y) = (1/|a|) f(y/a)
M_(aX+bY+c)(t) = M_X(at) M_Y(bt) e^(ct) (X and Y independent)
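A direct check of the linear-combination identity Var(aX + bY) = a²σ_X² + b²σ_Y² + 2ab σ_XY on a small joint PMF (the table values and a, b are illustrative):

```python
# Joint PMF over {0,1} x {0,1}; probabilities chosen for illustration.
joint = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.2}

def E(fn):
    # expectation of fn(X, Y) under the joint PMF
    return sum(fn(x, y) * p for (x, y), p in joint.items())

mx, my = E(lambda x, y: x), E(lambda x, y: y)
vx = E(lambda x, y: (x - mx)**2)
vy = E(lambda x, y: (y - my)**2)
cov = E(lambda x, y: x * y) - mx * my       # sigma_XY = E[XY] - E[X]E[Y]

a, b = 2.0, -3.0
mz = E(lambda x, y: a * x + b * y)
vz = E(lambda x, y: (a * x + b * y - mz)**2)  # Var(aX + bY) from scratch

assert abs(vz - (a * a * vx + b * b * vy + 2 * a * b * cov)) < 1e-12
```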
For Z = X + Y and distributions f(x), g(y), X and Y independent, X = W, Y = Z − W:
h(z) = ∑_(w=−∞)^∞ f(w)g(z − w) = ∫_(−∞)^∞ f(w)g(z − w) dw

Poisson Approximation for Binomial
b(x; n, p) → p(x; λt) as n → ∞, p → 0, np constant

Chebyshev's Theorem (Discrete and Continuous RV)
P(μ − kσ < X < μ + kσ) ≥ 1 − 1/k²

Standardized Variable
Z = (X − μ)/σ
P(Z ≤ (x − μ)/σ) = ∫_(−∞)^((x−μ)/σ) n(s; 0, 1) ds
P((A − μ)/σ ≤ Z ≤ (B − μ)/σ) = Φ((B − μ)/σ) − Φ((A − μ)/σ)
n(x; μ, σ) = n((x − μ)/σ; 0, 1)/σ

Normal Approx. of Binomial Distribution
P(X ≤ x) ≈ P(Z ≤ (x + 0.5 − np)/√(np(1 − p))), valid for np ≥ 5, n(1 − p) ≥ 5

Gamma Function
Γ(α) = ∫_0^∞ x^(α−1) e^(−x) dx, α > 0
Γ(1/2) = √π, Γ(n) = (n − 1)!, n ∈ ℕ

Sample Random Variables (Statistics)
Sample data x_1, x_2, …, x_n, each an independent measurement of RV X_i
x̄ = (1/n) ∑_(i=1)^n x_i (empirical value of mean)
X̄ = (1/n) ∑_(i=1)^n X_i (sample mean RV)
Data in increasing order x_(1), …, x_(n)
Median = [x_(n/2) + x_(n/2+1)]/2 (n even); Median = x_((n+1)/2) (n odd)
s² = [1/(n − 1)] ∑_(i=1)^n (x_i − x̄)² (sample variance)
s = √(s²) (sample standard deviation)
Range = max(x_i) − min(x_i)
E[X̄] = μ and E[S²] = σ² (unbiased)
S² = [n ∑_(i=1)^n X_i² − (∑_(i=1)^n X_i)²] / [n(n − 1)]
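A sanity check of the continuity-corrected normal approximation against the exact binomial CDF, with Φ built from `math.erf`; n, p, and x here are illustrative values satisfying np ≥ 5 and n(1 − p) ≥ 5:

```python
from math import comb, erf, sqrt

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

n, p = 40, 0.4            # np = 16, n(1 - p) = 24, both >= 5
x = 18
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))
approx = Phi((x + 0.5 - n * p) / sqrt(n * p * (1 - p)))  # continuity correction

assert abs(exact - approx) < 0.01  # close for these n, p, but not exact
```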
Poisson and Exponential Distributions
(d/dx) P(X ≤ x) = λe^(−λx), λ = 1/β

Transformations of Random Variables
Y = u(X) ∴ u⁻¹(Y) exists if bijective
Given distribution f(x):
g(y) = f(u⁻¹(y)) (discrete)
g(y) = f(u⁻¹(y)) · |d u⁻¹(y)/dy| (continuous)
Y_1 = u_1(X_1, X_2), Y_2 = u_2(X_1, X_2) ∴ X_1 = u_1⁻¹(Y_1, Y_2), X_2 = u_2⁻¹(Y_1, Y_2)
g(y_1, y_2) = f(u_1⁻¹(y_1, y_2), u_2⁻¹(y_1, y_2)) (discrete)
g(y_1, y_2) = f(u_1⁻¹(y_1, y_2), u_2⁻¹(y_1, y_2)) |J| (continuous)
|J| = |det [∂x_1/∂y_1, ∂x_1/∂y_2; ∂x_2/∂y_1, ∂x_2/∂y_2]|
g(y) = ∑_(i=1)^k f[w_i(y)] |J_i| (non-bijective)

Box-and-Whisker and Quantile Plots
Q_i at position (n + 1)·i/4, i = 1, 2, 3 (quartiles)
IQR = Q_3 − Q_1, Q_2 is the median
Lower whisker (minimum): Q_1 − 1.5(IQR)
Upper whisker (maximum): Q_3 + 1.5(IQR)
Quantile plot: points (f_i, x_(i)) with f_i = (i − 3/8)/(n + 1/4)
Normal Q–Q plot: points (q_(0,1)(f_i), x_(i)), where q_(μ,σ)(f) = μ + σ{4.91[f^0.14 − (1 − f)^0.14]} and q_(0,1)(f) = 4.91[f^0.14 − (1 − f)^0.14]
For CDF F(x), q(f) = F⁻¹(f)

CLT and T-Distribution (Mean), Chi-Squared (Variance)
Z_n = (X̄_n − μ)/(σ/√n); as n → ∞, Z_n → n(z; 0, 1), n ≥ 30
T = (X̄ − μ)/(S/√n), n < 30, ν = n − 1
χ² = (n − 1)S²/σ²
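A sketch of the non-bijective discrete transformation rule g(y) = ∑_i f[w_i(y)]: for Y = X², the masses of both preimages ±√y accumulate (the PMF below is illustrative):

```python
# Discrete RV X with PMF f; Y = X^2 is not one-to-one on this support.
f = {-2: 0.1, -1: 0.2, 0: 0.3, 1: 0.25, 2: 0.15}

g = {}
for x, p in f.items():
    y = x * x
    g[y] = g.get(y, 0.0) + p   # sum f over all preimages w_i(y)

assert abs(sum(g.values()) - 1) < 1e-12
assert abs(g[4] - (0.1 + 0.15)) < 1e-12   # preimages x = -2 and x = +2
```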

Point Estimates
θ = μ, θ̂ = x̄, Θ̂ = X̄
E[Θ̂] = θ (unbiased estimator)

Confidence Intervals
P(Θ_L ≤ θ ≤ Θ_U) = 1 − α
P(X̄ − z_(α/2) σ/√n ≤ μ ≤ X̄ + z_(α/2) σ/√n) = 1 − α
P(X̄ − t_(α/2) s/√n ≤ μ ≤ X̄ + t_(α/2) s/√n) = 1 − α
e = σ/√n (standard), e = z_(α/2) σ/√n (margin)
P(μ ≤ X̄ + z_α σ/√n) = 1 − α (upper bound)
P(μ ≥ X̄ − z_α σ/√n) = 1 − α (lower bound)

Prediction Intervals
For next observation x_0:
P(x̄ − z_(α/2) σ√(1 + 1/n) ≤ x_0 ≤ x̄ + z_(α/2) σ√(1 + 1/n)) = 1 − α

Tolerance Limits
x̄ ± ks with confidence 1 − γ (that 1 − α of samples in range)

CI for Difference of Means with Known σ
P((x̄_1 − x̄_2) − z_(α/2)√(σ_1²/n_1 + σ_2²/n_2) ≤ μ_1 − μ_2 ≤ (x̄_1 − x̄_2) + z_(α/2)√(σ_1²/n_1 + σ_2²/n_2)) = 1 − α

Unknown and Equal Population Variance
S_p² = [(n_1 − 1)S_1² + (n_2 − 1)S_2²]/(n_1 + n_2 − 2), n_1, n_2 < 30, σ_1 = σ_2
P((x̄_1 − x̄_2) − t_(α/2) S_p√(1/n_1 + 1/n_2) ≤ μ_1 − μ_2 ≤ (x̄_1 − x̄_2) + t_(α/2) S_p√(1/n_1 + 1/n_2)) = 1 − α, ν = n_1 + n_2 − 2

Unknown and Unequal Population Variance
P((x̄_1 − x̄_2) − t_(α/2)√(S_1²/n_1 + S_2²/n_2) ≤ μ_1 − μ_2 ≤ (x̄_1 − x̄_2) + t_(α/2)√(S_1²/n_1 + S_2²/n_2)) = 1 − α,
ν = (S_1²/n_1 + S_2²/n_2)² / [(S_1²/n_1)²/(n_1 − 1) + (S_2²/n_2)²/(n_2 − 1)], rounded down

CI for Paired Observations
D_i = X_(1,i) − X_(2,i), μ_D = μ_1 − μ_2, d̄ = x̄_1 − x̄_2, ν = n − 1
var(D_i) = σ²_(X_(1,i)) + σ²_(X_(2,i)) − 2 cov(X_(1,i), X_(2,i))
P(d̄ − t_(α/2) s_d/√n ≤ μ_D ≤ d̄ + t_(α/2) s_d/√n) = 1 − α

CI for Estimating a Proportion (Single Sample)
P̂ = X/n, p̂ = x/n, p unknown (Binomial distribution)
P(p̂ − z_(α/2)√(p̂(1 − p̂)/n) ≤ p ≤ p̂ + z_(α/2)√(p̂(1 − p̂)/n)) = 1 − α
n = z_(α/2)² p̂(1 − p̂)/δ²
n ≥ z_(α/2)²/(4δ²), using max p̂(1 − p̂) = 0.25

CI for Variance
P((n − 1)S²/χ²_(α/2) ≤ σ² ≤ (n − 1)S²/χ²_(1−α/2)) = 1 − α, ν = n − 1

Maximum Likelihood Estimation and Log-Likelihood
Samples x_1, …, x_n, joint PDF f(x_1, …, x_n; θ)
L(x_1, …, x_n; θ) = ∏_(i=1)^n g(x_i; θ)
θ̂ = arg max L(x_1, …, x_n; θ) = θ s.t. dL/dθ = 0 and d²L/dθ² < 0
log L = log(∏_(i=1)^n g(x_i; θ)) = ∑_(i=1)^n log(g(x_i; θ))
θ̂ = arg max log L(x_1, …, x_n; θ) = θ s.t. d(log L)/dθ = 0
Normal: ∂/∂μ ln L(x_1, …, x_n; μ, σ) = 0 gives μ̂ = x̄ = (1/n)∑_(i=1)^n x_i
∂/∂σ ln L(x_1, …, x_n; μ, σ) = 0 gives σ̂² = [(n − 1)/n] s² = (1/n)∑_(i=1)^n (x_i − x̄)²

Hypothesis Testing (One and Two-Tailed)
Null: H_0: θ = θ_0
Alternate: H_1: θ ≠ θ_0 (two-sided), H_1: θ > θ_0 (upper bound), H_1: θ < θ_0 (lower bound)
Type I (False Positive) and Type II (False Negative) Errors
α = P(Type I) = P(reject H_0 | H_0 true) (level of significance)
β = P(Type II) = P(fail to reject H_0 | H_1 true)
Power = 1 − β

P-Value and Critical Region (Symmetric)
P ≤ α: reject H_0; P > α: fail to reject H_0
P = 2P(M > |m|) (two-sided, M = Z, T; m = z, t)
P = P(M > |m|) (one-sided, M = Z, T; m = z, t)
P(−m_(α/2) ≤ m ≤ m_(α/2)) = 1 − α; reject H_0 if |m| > m_(α/2), otherwise fail to reject H_0 (two-sided)
P(m ≤ m_α) = 1 − α or P(m ≥ −m_α) = 1 − α; reject H_0 if |m| > m_α, otherwise fail to reject H_0 (one-sided)

P-Value and Critical Region (Asymmetric)
P ≤ α: reject H_0; P > α: fail to reject H_0
P = 2 min{P(R > r), P(R < r)} (two-sided, R = χ², F; r = χ², f)
P = P(R > r) or P(R < r) (one-sided, R = χ², F; r = χ², f)
P(r_(1−α/2) ≤ r ≤ r_(α/2)) = 1 − α; reject H_0 if r < r_(1−α/2) or r > r_(α/2), otherwise fail to reject H_0 (two-sided)
P(r ≤ r_α) = 1 − α or P(r ≥ r_(1−α)) = 1 − α; reject H_0 if r > r_α or r < r_(1−α), otherwise fail to reject H_0 (one-sided)

Hypothesis Testing for the Mean (Known σ²)
H_0: μ = μ_0, H_1: μ ≠ μ_0 (two-sided), H_1: μ > μ_0 or H_1: μ < μ_0 (one-sided)
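A worked confidence-interval and sample-size computation (all numbers are illustrative; z_(α/2) = 1.96 for a 95% interval):

```python
from math import sqrt, ceil

z = 1.96                         # z_{alpha/2} for alpha = 0.05
xbar, sigma, n = 52.0, 4.0, 36   # illustrative sample summary
margin = z * sigma / sqrt(n)     # e = z_{a/2} sigma / sqrt(n)
lo, hi = xbar - margin, xbar + margin

# Sample size for a proportion: n = z^2 p(1-p) / delta^2, rounded up
p_hat, delta = 0.3, 0.02
n_prop = ceil(z**2 * p_hat * (1 - p_hat) / delta**2)

assert lo < xbar < hi
assert n_prop == 2017
```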
z = (X̄ − μ_0)/(σ/√n), n ≥ 30
P = 2P(Z > |z|) = 2(1 − Φ(|z|)) = 2Φ(−|z|) (two-sided)
P = P(Z > |z|) = 1 − Φ(|z|) (one-sided)

Hypothesis Testing for the Mean (Unknown σ²)
t = (X̄ − μ_0)/(s/√n), ν = n − 1, n ≤ 30

Hypothesis Testing for Two Means (Known σ²)
H_0: μ_1 − μ_2 = d_0, H_1: μ_1 − μ_2 ≠ d_0 (two-sided)
H_1: μ_1 − μ_2 > d_0 or H_1: μ_1 − μ_2 < d_0 (one-sided)
z = [(x̄_1 − x̄_2) − d_0] / √(σ_1²/n_1 + σ_2²/n_2) (CLT)

Hypothesis Testing for Two Means (Unknown σ_1² = σ_2²)
t = [(x̄_1 − x̄_2) − d_0] / [s_p√(1/n_1 + 1/n_2)], ν = n_1 + n_2 − 2, S_p² = [(n_1 − 1)S_1² + (n_2 − 1)S_2²]/(n_1 + n_2 − 2)

Hypothesis Testing for Two Means (Unknown σ_1² ≠ σ_2²)
t = [(x̄_1 − x̄_2) − d_0] / √(s_1²/n_1 + s_2²/n_2), ν = (S_1²/n_1 + S_2²/n_2)² / [(S_1²/n_1)²/(n_1 − 1) + (S_2²/n_2)²/(n_2 − 1)], rounded down

Hypothesis Testing for Paired Observations
H_0: μ_D = d_0, H_1: μ_D ≠ d_0 (two-sided)
H_1: μ_D > d_0 or H_1: μ_D < d_0 (one-sided)
t = (d̄ − d_0)/(s_d/√n), ν = n − 1

Least-Squares Estimates
SSE = ∑_(i=1)^n e_i², ∂(SSE)/∂b_0 = 0, ∂(SSE)/∂b_1 = 0
b_0 = ȳ − b_1 x̄ = (1/n)(∑_(i=1)^n y_i − b_1 ∑_(i=1)^n x_i)
b_1 = ∑_(i=1)^n (x_i − x̄)(y_i − ȳ) / ∑_(i=1)^n (x_i − x̄)² = [n∑x_i y_i − (∑x_i)(∑y_i)] / [n∑x_i² − (∑x_i)²]

Sums of Errors
S_xx = ∑_(i=1)^n (x_i − x̄)² = ∑x_i² − (1/n)(∑x_i)² = ∑x_i² − n x̄²
S_yy = ∑_(i=1)^n (y_i − ȳ)² = ∑y_i² − (1/n)(∑y_i)² = ∑y_i² − n ȳ²
S_xy = ∑_(i=1)^n (x_i − x̄)(y_i − ȳ) = ∑x_i y_i − (1/n)(∑x_i)(∑y_i) = ∑x_i y_i − n x̄ȳ
SSE = S_yy − b_1 S_xy = S_yy − (S_xy/S_xx)S_xy = S_yy − S_xy²/S_xx
s² = SSE/(n − 2) (estimator of σ²)

CI for Regression Parameters
t = (b_1 − β_1)/(s/√S_xx), t = (b_0 − β_0)/(s√(∑_(i=1)^n x_i²/(n S_xx)))
P(b_1 − t_(α/2) s/√S_xx ≤ β_1 ≤ b_1 + t_(α/2) s/√S_xx) = 1 − α, ν = n − 2
P(b_0 − t_(α/2) s√(∑x_i²/(n S_xx)) ≤ β_0 ≤ b_0 + t_(α/2) s√(∑x_i²/(n S_xx))) = 1 − α, ν = n − 2
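A worked pooled two-sample t statistic with the standard library (sample values are illustrative; `statistics.variance` is the n − 1 sample variance):

```python
from math import sqrt
from statistics import mean, variance

# Two small samples, sigma1 = sigma2 assumed unknown but equal.
x1 = [10.2, 9.8, 10.5, 10.1, 9.9]
x2 = [9.6, 9.4, 9.9, 9.5]

n1, n2 = len(x1), len(x2)
# Pooled variance: Sp^2 = [(n1-1)S1^2 + (n2-1)S2^2] / (n1 + n2 - 2)
sp2 = ((n1 - 1) * variance(x1) + (n2 - 1) * variance(x2)) / (n1 + n2 - 2)
d0 = 0.0
t = (mean(x1) - mean(x2) - d0) / (sqrt(sp2) * sqrt(1 / n1 + 1 / n2))
nu = n1 + n2 - 2   # degrees of freedom

assert nu == 7
assert t > 0       # sample 1 has the larger mean
```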
Hypothesis Testing for One Variance
H_0: σ² = σ_0², H_1: σ² ≠ σ_0² (two-sided)
H_1: σ² > σ_0² or H_1: σ² < σ_0² (one-sided)
χ² = (n − 1)s²/σ_0², ν = n − 1

Hypothesis Testing for Two Variances
H_0: σ_1² = σ_2², H_1: σ_1² ≠ σ_2² (two-sided)
H_1: σ_1² > σ_2² or H_1: σ_1² < σ_2² (one-sided)
f = s_1²/s_2², ν_1 = n_1 − 1, ν_2 = n_2 − 1
f_(1−α)(ν_1, ν_2) = 1/f_α(ν_2, ν_1)

Goodness of Fit
e_i = nP(i), n trials, k outcomes
χ² = ∑_(i=1)^k (O_i − e_i)²/e_i, ν = k − 1
Small χ² = good fit; reject if χ² > χ²_α

Linear Regression
Data {(x_i, y_i); i = 1, 2, …, n}
Real: y_i = β_0 + β_1 x_i + ε_i, E[ε] = 0, Var(ε) = σ²
Estimator: ŷ_i = b_0 + b_1 x_i
Residual: e_i = y_i − ŷ_i, i = 1, …, n
y_i = b_0 + b_1 x_i + e_i

Hypothesis Testing for Regression Parameters
H_0: β_1 = β_10, H_1: β_1 ≠ β_10 (two-sided); H_1: β_1 > β_10 or H_1: β_1 < β_10 (one-sided)
H_0: β_0 = β_00, H_1: β_0 ≠ β_00 (two-sided); H_1: β_0 > β_00 or H_1: β_0 < β_00 (one-sided)
t = (b_1 − β_10)/(s/√S_xx), t = (b_0 − β_00)/(s√(∑_(i=1)^n x_i²/(n S_xx)))

Coefficient of Determination R²
R² = 1 − SSE/SST = 1 − ∑_(i=1)^n (y_i − ŷ_i)² / ∑_(i=1)^n (y_i − ȳ)², 0 ≤ R² ≤ 1
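A least-squares fit computed through S_xx, S_xy, the SSE shortcut, and R² (the five data points are illustrative); the shortcut SSE = S_yy − b_1 S_xy is checked against the residual definition:

```python
from statistics import mean

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # illustrative data

xbar, ybar = mean(xs), mean(ys)
Sxx = sum((x - xbar)**2 for x in xs)
Syy = sum((y - ybar)**2 for y in ys)
Sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))

b1 = Sxy / Sxx                    # slope
b0 = ybar - b1 * xbar             # intercept
SSE = Syy - b1 * Sxy              # shortcut: Syy - Sxy^2 / Sxx
R2 = 1 - SSE / Syy

direct_SSE = sum((y - (b0 + b1 * x))**2 for x, y in zip(xs, ys))
assert abs(SSE - direct_SSE) < 1e-9   # shortcut agrees with the definition
assert 0 <= R2 <= 1
```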
