\[
\frac{\partial k}{\partial \tau}
= \frac{1}{1-\alpha}\left[\frac{\alpha\beta(1-\tau)(1-\alpha)}{\alpha(1+\beta)+\tau(1-\alpha)}\right]^{\frac{\alpha}{1-\alpha}}
\left[\frac{\alpha\beta(1-\tau)(1-\alpha)}{\alpha(1+\beta)+\tau(1-\alpha)}\right]'
\]
where, writing $u = \alpha\beta(1-\tau)(1-\alpha)$ for the numerator and $v = \alpha(1+\beta)+\tau(1-\alpha)$ for the denominator,
\[
\left[\frac{\alpha\beta(1-\tau)(1-\alpha)}{\alpha(1+\beta)+\tau(1-\alpha)}\right]'
= \left(\frac{u}{v}\right)'
= \frac{-\alpha\beta(1-\alpha)\left[\alpha(1+\beta)+\tau(1-\alpha)\right]-\alpha\beta(1-\tau)(1-\alpha)^2}{\left[\alpha(1+\beta)+\tau(1-\alpha)\right]^2}
\qquad\left(=\frac{u'v-v'u}{v^2}\right)
\]
\[
= -\frac{\alpha^2\beta(1-\alpha)(1+\beta)+\alpha\beta\tau(1-\alpha)^2+\alpha\beta(1-\alpha)^2-\alpha\beta\tau(1-\alpha)^2}{\left[\alpha(1+\beta)+\tau(1-\alpha)\right]^2}
\]
\[
= -\frac{\alpha^2\beta(1-\alpha)(1+\beta)+\alpha\beta(1-\alpha)^2}{\left[\alpha(1+\beta)+\tau(1-\alpha)\right]^2}
\]
Since $1 > \alpha > 0$ and $\beta > 0$, we have $\alpha^2\beta(1-\alpha)(1+\beta) > 0$, $\alpha\beta(1-\alpha)^2 > 0$, and $\left[\alpha(1+\beta)+\tau(1-\alpha)\right]^2 > 0$ for all $0 < \tau < 1$,
\[
\Rightarrow \frac{\partial k}{\partial \tau} < 0.
\]
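As a quick sanity check (not part of the original derivation), the sign of $\partial k / \partial \tau$ can be verified symbolically, assuming the steady-state expression $k = \left[\alpha\beta(1-\tau)(1-\alpha)/(\alpha(1+\beta)+\tau(1-\alpha))\right]^{1/(1-\alpha)}$ that the chain-rule step above implies; the parameter values below are illustrative only.

```python
# Sanity check of dk/dtau < 0 for the assumed steady-state expression of k.
import sympy as sp

alpha, beta, tau = sp.symbols('alpha beta tau', positive=True)
k = (alpha * beta * (1 - tau) * (1 - alpha)
     / (alpha * (1 + beta) + tau * (1 - alpha))) ** (1 / (1 - alpha))

dk_dtau = sp.diff(k, tau)

# Evaluate at sample values with 0 < alpha, tau < 1 and beta > 0 (illustrative).
vals = {alpha: sp.Rational(1, 3), beta: sp.Rational(96, 100), tau: sp.Rational(1, 5)}
print(sp.N(dk_dtau.subs(vals)))   # negative, consistent with dk/dtau < 0
```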
When the optimization problem has a linear objective function, it is necessary to convexify the
function.
In the case of symmetric information, it is necessary to check the incentive condition so that the agent does not deviate from his choice. This check acts as a deterrent and ensures that the implemented effort level is indeed the optimal one.
When the effort level is not observable, the wage is a function of the outcome level.
Assuming that the IC is slack, the PC is binding by optimality. The resulting solution then needs to be checked against the IC to verify that the assumption in fact holds. If it does not, the assumption is not valid; in that case the IC must be binding, and from this the optimization problem can easily be solved (AGAIN!!).
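This guess-and-verify logic can be illustrated on a toy two-effort, two-outcome moral hazard problem. Everything in the sketch below is a hypothetical example, not the model from these notes: square-root utility, the probabilities, the effort cost, and the reservation utility are all assumptions chosen for illustration.

```python
# Illustrative sketch of the "assume IC slack, check, re-solve" procedure on a
# hypothetical two-effort / two-outcome moral hazard model (not from the notes).
# Assumed agent payoff: sqrt(wage) - effort cost; reservation utility U_bar.
import math

p_H, p_L = 0.8, 0.4          # Prob(high outcome | high effort / low effort)
cost_H, U_bar = 0.5, 1.0     # cost of high effort, reservation utility

# Step 1: assume the IC is slack, so only the PC binds. With a risk-averse agent
# the cheapest contract is then a flat wage w with sqrt(w) - cost_H = U_bar.
w_flat = (U_bar + cost_H) ** 2

# Step 2: check the IC under that contract. With a flat wage, shirking pays
# sqrt(w) - 0 > sqrt(w) - cost_H, so the IC fails and the assumption is invalid.
ic_holds = math.sqrt(w_flat) - cost_H >= math.sqrt(w_flat) - 0.0
print("IC holds under the flat wage:", ic_holds)   # False

# Step 3: re-solve with both PC and IC binding, in utility units v = sqrt(w):
#   IC: v_h - v_l = cost_H / (p_H - p_L)
#   PC: p_H * v_h + (1 - p_H) * v_l = U_bar + cost_H
spread = cost_H / (p_H - p_L)
v_l = U_bar + cost_H - p_H * spread
v_h = v_l + spread
print("Second-best wages (low, high outcome):", v_l ** 2, v_h ** 2)
```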
Always use the Standard Error that is in accordance with the null hypothesis. Precisely, if the null hypothesis is that the data are homoskedastic, the homoskedastic Standard Error should be used (not the robust one).
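As a minimal illustration (simulated data; the statsmodels package is assumed to be available), both kinds of standard errors can be obtained from the same OLS fit and compared:

```python
# Comparing classical (homoskedastic) and heteroskedasticity-robust standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 1.0 + 2.0 * x + rng.normal(size=500)   # homoskedastic errors by construction
X = sm.add_constant(x)

fit_classical = sm.OLS(y, X).fit()                 # classical standard errors
fit_robust = sm.OLS(y, X).fit(cov_type="HC1")      # robust (HC1) standard errors
print("classical SEs:", fit_classical.bse)
print("robust SEs:   ", fit_robust.bse)
```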
The third condition for the IV is that the IV does not have a direct effect on the outcome variable (the exclusion restriction).
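A small simulation (all numbers hypothetical) shows why this matters: when the instrument $z$ also affects $y$ directly, the simple IV estimate $\operatorname{cov}(z, y)/\operatorname{cov}(z, x)$ no longer recovers the true coefficient.

```python
# IV estimate with and without a direct effect of the instrument on the outcome.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
z = rng.normal(size=n)                      # instrument
u = rng.normal(size=n)                      # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)        # endogenous regressor
beta = 1.5                                  # true coefficient on x

for gamma in (0.0, 0.5):                    # gamma = direct effect of z on y
    y = beta * x + u + gamma * z + rng.normal(size=n)
    beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
    print(f"gamma={gamma}: IV estimate = {beta_iv:.3f}")   # ~1.5 only when gamma=0
```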
\[
\frac{\partial L}{\partial c_t} = \beta^t \,\partial_c u(c_t, 1 - l_t) - \mu_t = 0
\]
\[
\frac{\partial L}{\partial l_t} = -\beta^t \,\partial_l u(c_t, 1 - l_t) + \mu_t w_t = 0
\]
\[
\frac{\partial L}{\partial s_t} = -\mu_t + E_t\!\left(\mu_{t+1} R_{t+1}\right) = 0
\]
\[
\mu_t \left(w_t l_t + R_t s_{t-1} - c_t - s_t\right) = 0, \quad \forall t \geq 0
\]
\[
\mu_t = \beta^t \,\partial_c u(c_t, 1 - l_t) > 0
\]
Plugging this expression for $\mu_t$ into the second equation gives (135); into the third equation, (136); and into the fourth equation, (134).
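Concretely, and without asserting which of these corresponds to the numbered equations (134)–(136) referenced above, substituting $\mu_t = \beta^t \partial_c u(c_t, 1-l_t)$ into the remaining conditions yields the intratemporal condition, the Euler equation, and (since $\mu_t > 0$) the binding budget constraint:
\[
\partial_l u(c_t, 1 - l_t) = w_t \,\partial_c u(c_t, 1 - l_t),
\]
\[
\partial_c u(c_t, 1 - l_t) = \beta\, E_t\!\left[R_{t+1}\,\partial_c u(c_{t+1}, 1 - l_{t+1})\right],
\]
\[
c_t + s_t = w_t l_t + R_t s_{t-1}.
\]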
Multicollinearity doesn’t reduce the predictive power or reliability of the model as a whole, at least within the sample data set; it only affects calculations regarding individual predictors. That is, a multiple regression model with collinear predictors can indicate how well the entire bundle of predictors predicts the outcome variable, but it may not give valid results about any individual predictor, or about which predictors are redundant with respect to others.
For the OLS estimator of the multiple linear regression model, the variance of any coefficient estimate $\hat{\beta}_j$ is given by the formula
\[
\operatorname{Var}\!\left(\hat{\beta}_j \mid X_1, X_2, \ldots, X_k\right)
= \frac{\sigma_U^2}{\sum_{i=1}^{N}\left(X_{ji} - \bar{X}_j\right)^2 \left(1 - R_j^2\right)},
\quad \forall j = 1, \ldots, k
\]
where $R_j^2$ is the $R^2$ from regressing $X_j$ on all other regressors, i.e. the $R^2$ from $X_j = \alpha_0 + \alpha_1 X_1 + \cdots + \alpha_{j-1} X_{j-1} + \alpha_{j+1} X_{j+1} + \cdots + \alpha_k X_k + \varepsilon$. If $X_j$ is highly correlated with the other regressors, $R_j^2$ is large, which in turn reduces the denominator in the formula for $\operatorname{Var}(\hat{\beta}_j \mid X_1, X_2, \ldots, X_k)$ and results in a higher variance of the estimator. Therefore, when the regressors are highly correlated, the OLS estimator is less efficient.
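A small Monte Carlo sketch (hypothetical data-generating process, coefficients chosen only for illustration) makes the variance inflation visible: the sampling standard deviation of $\hat{\beta}_1$ grows sharply as the correlation between the two regressors rises.

```python
# Sampling variability of beta1_hat under uncorrelated vs. highly correlated regressors.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 200, 2000

def beta1_draws(rho):
    """Draw beta1_hat repeatedly when corr(x1, x2) = rho in the simulated data."""
    draws = []
    for _ in range(reps):
        x1 = rng.normal(size=n)
        x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
        y = 1.0 + 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x1, x2])
        draws.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
    return np.array(draws)

for rho in (0.0, 0.95):
    print(f"rho={rho}: std of beta1_hat = {beta1_draws(rho).std():.3f}")
```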