MLR4: Zero Conditional Mean, E(u|x1,..,xk) = 0 (the no-bias condition)

1) OLS estimators are chosen to minimize the sum of squared residuals and are unbiased when the classical linear regression model assumptions hold. 2) Violations of those assumptions, such as omitting important variables or perfect multicollinearity between regressors, can lead to biased estimates. 3) The variance of the OLS estimators shrinks as the sample size increases (SSTx grows with N); the central limit theorem is what makes the estimators approximately normal in large samples.


-----------------Chapter 2------------------- OLS Estimators

OLS chooses the estimates that minimize SSR (the sum of squared residuals).
Estimated slope (note denom is SSTx): b1.hat = sum[(xi - x.bar)(yi - y.bar)] / sum[(xi - x.bar)^2]
Algebraic properties of OLS residuals: 1) OLS estimates are chosen to make the residuals add up to zero 2) sample cov of regressor and OLS residual is zero 3) (x.bar, y.bar) is always on the OLS line.
Note: errors are never observable, so we have to use residuals when estimating sigma-squared.
SER = SD of the error term = [SSR/(N-K-1)]^.5
E(u|X)=0: exogenous. E(u|X)!=0: endogenous, bias.
4) MLR4: zero cond mean, E(u|x1,..,xk)=0. When it != 0: bias. Failures: excluding an important var (inc, inc^2), wrong level-log relationship, omitting a var that correlates with another independent var. (Omitting a var with 0 partial effect causes no bias.)
Simple and multiple regression slopes will be equal if corr(x1,x2)=0.
Perfect collinearity violations (Stata will drop one): expendA, expendB, totalexpend; inc(10s), inc(100s).
5) MLR5: homoskedasticity.
Unbiasedness (req: MLR1-4): the procedure by which OLS estimates are obtained is unbiased across all possible random samples.
Omitted-var example: drop x3. If x1 and x2 are correlated, then both are biased, even if x3 and x2 are uncorrelated. If x2 is uncorrelated with x3 (and with x1), then only x1 will be biased.
------Chapter 3------
Variance of Estimators (req: MLR5). Increase N and Var(b) -> 0.
When b2=0 (x2 is irrelevant), dropping x2 gives a smaller variance for the remaining estimates.
Partialling out: regress x1 on all the other independents (no y), save the residuals, then regress y on those residuals; the slope equals the multiple-regression coefficient on x1 (see the sketch below). In simple regression there is no partialling out of others, because there is only one explanatory var.
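
To see partialling out concretely, here is a minimal Stata sketch using the bundled auto dataset; the variables (price, weight, mpg) are illustrative stand-ins, not from the notes:

    sysuse auto, clear
    * Full multiple regression: note the coefficient on weight.
    regress price weight mpg
    * Step 1: regress x1 (weight) on the other independents (no y), save residuals.
    regress weight mpg
    predict weight_resid, residuals
    * Step 2: regress y on those residuals; the slope reproduces the
    * multiple-regression coefficient on weight from above.
    regress price weight_resid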

5) MLR5: homoskedasticity, i.e. Var(u|x) is constant. Condition on x and treat x as fixed (look pg 53). MLR1-4 give unbiasedness; adding MLR5 gives minimum variance (Gauss-Markov: OLS is BLUE).
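
A quick way to probe MLR5 in practice is a Breusch-Pagan test after the regression; a minimal Stata sketch on the bundled auto dataset (the variables are illustrative, not from the notes):

    sysuse auto, clear
    regress price weight mpg
    * H0: constant error variance (homoskedasticity);
    * a small p-value is evidence against MLR5.
    estat hettest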

Goodness of fit: R-squared never decreases when K increases; adding an explanatory var can only raise SSM (equivalently, dropping one can only raise SSR). See the sketch below.
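
The sketch: a minimal Stata check that R-squared only moves up as regressors are added (auto dataset; variable choices are illustrative):

    sysuse auto, clear
    quietly regress price weight
    display "R2, one regressor:  " e(r2)
    quietly regress price weight mpg
    * The second R2 is never smaller than the first.
    display "R2, two regressors: " e(r2)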

A low R-squared doesn't mean the regression is useless. Adjusted R-squared penalizes more K explanatory vars (it utilizes DF); a sketch follows the list below.
SLR assumptions:
1) SLR1: linear in parameters
2) SLR2: random sampling, so Cov(ui,uj)=0
3) SLR3: Var(x)>0
4) SLR4: E[u|x]=0
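
The sketch: adjusted R-squared, unlike plain R-squared, can fall when an irrelevant regressor is added. A minimal Stata illustration on the auto dataset; the generated noise variable is an irrelevant regressor by construction:

    sysuse auto, clear
    set seed 1
    generate noise = runiform()   // irrelevant regressor by construction
    quietly regress price weight mpg
    display "adj R2 without noise: " e(r2_a)
    quietly regress price weight mpg noise
    * Usually lower here: the DF penalty outweighs the tiny fit gain.
    display "adj R2 with noise:    " e(r2_a)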

Variance of the estimated coefficient (larger var = less precise est): Var(bj.hat) = sigma^2 / [SSTj(1 - Rj^2)]
1) Err var sigma^2: positive relationship.
2) Var(xj), i.e. SSTj: negative relationship (grows as you add to N; extreme case SSTx=0 violates MLR3).
3) Rj-squared (regress xj on the other x's): positive relationship; a high Rj-sq causes a larger Var(bj). With a high Rj you could drop that explanatory var, but doing so will result in bias for all explanatory vars; if the bias is small, drop it. (VIF sketch below this block.)
Properties of OLS residuals:
1) Sample average of residuals is zero (residuals.bar = 0).
2) Sample covar between each independent var and the OLS residual is zero (cov(xi, residual) = 0).
3) The point (x.bar, y.bar) will always be on the OLS line.
MLR assumptions:
1) MLR1: linear in parameters
2) MLR2: random sample
3) MLR3: no perfect collinearity. Independent variables cannot be perfectly correlated; they can still be correlated.
Overspecifying the model (irrelevant var): doesn't affect unbiasedness, but does affect the variance of the OLS estimators.
Underspecifying (omitting a var): E(B~) > B is upward bias; when E(B~) is closer to 0 than B, the bias is toward 0.
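
Stata reports Rj-squared indirectly through variance inflation factors, VIFj = 1/(1 - Rj^2). A minimal sketch (auto dataset, illustrative variables):

    sysuse auto, clear
    quietly regress price weight mpg headroom
    * High VIF = high Rj-squared = inflated Var(bj).
    estat vif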

--------------------------Chapter 4------------------------
----------Tables and other things----------
Type I error: rejecting the null when it is true.
Type II error: failing to reject the null when it is false.

----Study group suggestions----
E[x+y] = E[x] + E[y]
E[xy] != E[x]E[y] (unless corr(x,y) = 0)
E[aX] = a*E[x]
Cov(x,x) = Var(x)
Var(x+y) = Var(x) + Var(y) (independent x,y)
Var(x+y) = Var(x) + Var(y) + 2Cov(x,y) (dependent x,y)
Cov(x,y+z) = Cov(x,y) + Cov(x,z)
Var(aX+b) = a^2*Var(x)
Cov(aX,bY) = ab*Cov(x,y)
Correlation = Cov(x,y)/[sd(y)sd(x)]
Var(x) = E[x^2] - (E[x])^2
Cov(x,y) = E[xy] - E[x]E[y]

MLR6: normality of errors. Normality implies that the t and F tests are exact. In large samples, even without normality, the estimates are approximately normally distributed (CLT), but not so in small samples. (If u is a complicated function, the CLT doesn't apply.) Not all errors have a normal distribution: wage, for example.

-----Chapter 1-----
Cross sectional: sample of units (firms, cities) at a point in time (independence).
Time series: observations indexed over time.
Pooled cross sections: two years of cross section data.
Pooled time series: like observing 5 diff Econ 140 sections & observing what happens when admin criteria change.
Include info about random, fixed (Matt's stuff).

-----Problems: logs-----
Watch out for level-log. Ex: rdintens = .472 + .321*lsales. Level-log: .321/100 = .00321 unit change in rdintens per 1% change in sales. Say lsales rises 10%; then .00321*10 = .0321 units increase.
(Answer) The coefficient on log(pop) is an elasticity. A correct statement is that a 10% increase in population increases rent by .066(10) = .66%.

Hypothesis testing guidelines: 1) identify the form (log-level, level-log, ...) 2) state the null 3) state the alt 4) draw the distribution.
P-values: Stata gives you two-sided output.
Econ sig: the size & sign of the coefficient. Statistical sig: determined by the t-stat.
T-stat: t = (b.hat.j - b)/se(b.hat.j)
Root MSE = (SSR/(N-K-1))^.5 = sigma.hat; MSE = sigma^2.
Overall F: F-tests are only for nulls of equality. Only 1 restriction: H0: b1 + 3b2 + b4 = 0.
Critical values: 5% (1-tailed): 1.645; 5% (2-tailed): 1.96.
(A Stata sketch of these computations follows below.)
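
The sketch: computing the t-stat, critical value, and F-type tests by hand in Stata (auto dataset; the variables and the linear restriction are illustrative):

    sysuse auto, clear
    quietly regress price weight mpg
    * t-stat by hand for H0: b_weight = 0, same as the regression output.
    display "t = " _b[weight]/_se[weight]
    * 5% two-tailed critical value with the right degrees of freedom.
    display "crit = " invttail(e(df_r), .025)
    * Overall-style F test: joint null that both slopes are zero.
    test weight mpg
    * A single linear restriction, e.g. H0: b_weight + 3*b_mpg = 0.
    test weight + 3*mpg = 0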

Central Limit Theorem (CLT): A key result from probability theory that implies that the sum of independent random variables, or even weakly dependent random variables, when standardized by its standard deviation, has a distribution that tends to standard normal as the sample size grows.
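
A small simulation makes the CLT visible; this Stata sketch draws repeated samples from a uniform (clearly non-normal) distribution and plots the sample means, which come out approximately bell-shaped (the sample size and rep count are arbitrary choices):

    clear all
    set seed 12345
    program define meansim, rclass
        clear
        set obs 30
        generate x = runiform()      // non-normal draws
        summarize x, meanonly
        return scalar mu = r(mean)
    end
    * 2000 sample means of N=30 uniform draws.
    simulate mu = r(mu), reps(2000) nodots: meansim
    * Histogram with a normal overlay: close to bell-shaped.
    histogram mu, normal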
