Draft


1 FMAK-P2e

\[
\frac{\partial k}{\partial \tau}
= \frac{1}{1-\alpha}
\left[\frac{\alpha\beta(1-\tau)(1-\alpha)}{\alpha(1+\beta)+\tau(1-\alpha)}\right]^{\frac{\alpha}{1-\alpha}}
\left[\frac{\alpha\beta(1-\tau)(1-\alpha)}{\alpha(1+\beta)+\tau(1-\alpha)}\right]'
\]

Where:

\[
\left[\frac{\alpha\beta(1-\tau)(1-\alpha)}{\alpha(1+\beta)+\tau(1-\alpha)}\right]'
= \left(\frac{u}{v}\right)'
= \frac{-\alpha\beta(1-\alpha)\big[\alpha(1+\beta)+\tau(1-\alpha)\big]-\alpha\beta(1-\tau)(1-\alpha)^2}{\big[\alpha(1+\beta)+\tau(1-\alpha)\big]^2}
\quad\left(=\frac{u'v-v'u}{v^2}\right)
\]
\[
= -\,\frac{\alpha^2\beta(1-\alpha)(1+\beta)+\alpha\beta\tau(1-\alpha)^2+\alpha\beta(1-\alpha)^2-\alpha\beta\tau(1-\alpha)^2}{\big[\alpha(1+\beta)+\tau(1-\alpha)\big]^2}
= -\,\frac{\alpha^2\beta(1-\alpha)(1+\beta)+\alpha\beta(1-\alpha)^2}{\big[\alpha(1+\beta)+\tau(1-\alpha)\big]^2}
\]

Since 1 > α > 0 and β > 0, we have α²β(1−α)(1+β) > 0, αβ(1−α)² > 0, and [α(1+β)+τ(1−α)]² > 0 for all 0 < τ < 1, so the bracketed derivative is negative; the power term is positive, since its base is positive on 0 < τ < 1. Therefore:
\[
\frac{\partial k}{\partial \tau} < 0
\]
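As a sanity check, the sign can be verified symbolically. The following is a minimal sketch in Python/SymPy, assuming the steady-state capital stock k = [αβ(1−τ)(1−α)/(α(1+β)+τ(1−α))]^{1/(1−α)} reconstructed from the expression differentiated above; the parameter values in the spot check are arbitrary admissible ones.

import sympy as sp

alpha, beta, tau = sp.symbols('alpha beta tau', positive=True)

# Steady-state capital stock (an assumption, reconstructed from the
# expression being differentiated above)
base = alpha*beta*(1 - tau)*(1 - alpha) / (alpha*(1 + beta) + tau*(1 - alpha))
k = base**(1/(1 - alpha))

dk_dtau = sp.diff(k, tau)

# Spot check at arbitrary admissible values: 0 < alpha, tau < 1, beta > 0
val = dk_dtau.subs({alpha: sp.Rational(1, 3),
                    beta: sp.Rational(9, 10),
                    tau: sp.Rational(1, 5)}).evalf()
print(val)  # negative, consistent with dk/dtau < 0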

2 FMAK (Jan 15, 2018)

Model                        Microeconomic    Unique BGP    Optimal BGP    Long-run
                             Foundation                                    Growth Rate
Solow Growth Model           no               yes           no             exogenous g_h
OLG Growth Model             yes              no            no             exogenous g_h
Neoclassical Growth Model    yes              yes           yes            exogenous g_h

3 FMIK (Jan 19, 2018)

When the optimization problem has a linear objective function, it is necessary to convexify the problem.

In the case of symmetric information, it is necessary to check the incentive condition so that the agent does not deviate from his choice. This check ensures that the implemented effort level is indeed optimal.

When the effort level is not observable, the wage must instead be written as a function of the observed outcome level.

Assuming that the IC is slack, the PC is binding by optimality. The resulting solution then needs to be checked against the IC as a sufficient condition. If the sufficient condition does not hold, the assumption is invalid; in that case the IC must be binding, and the optimization problem can easily be solved again with the IC imposed, as the sketch below illustrates.
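To make this two-step procedure concrete, here is a minimal numerical sketch of a textbook two-outcome, two-effort moral hazard problem with √w utility; the parameters pH, pL, c, and u_bar are hypothetical and not from the notes.

import numpy as np

pH, pL = 0.8, 0.4    # P(high output | high effort), P(high output | low effort)
c, u_bar = 0.5, 1.0  # cost of high effort; reservation utility

# Step 1: assume the IC is slack, so only the PC binds -> full insurance,
# i.e. a constant wage w with sqrt(w) - c = u_bar.
w_flat = (u_bar + c) ** 2

# Check the IC at this candidate: with a flat wage, shirking saves the
# effort cost and changes nothing else, so the IC fails whenever c > 0.
ic_ok = (np.sqrt(w_flat) - c) >= np.sqrt(w_flat)
print("IC holds under full insurance:", ic_ok)  # False

# Step 2: the assumption was invalid, so impose PC and IC as equalities.
# In v_i = sqrt(w_i) both constraints are linear:
#   PC: pH*vH + (1 - pH)*vL - c = u_bar
#   IC: (pH - pL)*(vH - vL)     = c
A = np.array([[pH, 1.0 - pH],
              [pH - pL, -(pH - pL)]])
b = np.array([u_bar + c, c])
vH, vL = np.linalg.solve(A, b)
print("second-best wages:", vH**2, vL**2)  # wage now varies with the outcome

In this example the first step's assumption fails (a flat wage can never deter shirking when effort is costly), so the wages come from the second step with both constraints binding.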

4 FECO (Jan 19, 2018)

Always use the standard error that is in accordance with the null hypothesis. Precisely, if the null hypothesis is that the data are homoskedastic, the homoskedastic standard error should be used (not the robust one).
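For illustration, the following sketch computes both types of standard error for the same regression with statsmodels (the data are simulated and the variable names arbitrary):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)         # homoskedastic errors by construction

X = sm.add_constant(x)
fit_homo = sm.OLS(y, X).fit()                  # classical (homoskedastic) SEs
fit_robust = sm.OLS(y, X).fit(cov_type="HC1")  # heteroskedasticity-robust SEs

print(fit_homo.bse)    # use under a null of homoskedasticity
print(fit_robust.bse)  # use when heteroskedasticity is suspected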

The third condition for an instrumental variable is the exclusion restriction: the instrument must not have a direct effect on the outcome variable; it may affect it only through the endogenous regressor.

5 FMAK, (Jan 22, 2018)

Interior solutions to the consumer problem (132):

Define the Lagrangian:

\[
\mathcal{L}\big((c_t, l_t, s_t)_{t\ge 0}, (\mu_t)_{t\ge 0}\big)
= E_0\left[\sum_{t=0}^{\infty}\beta^t u(c_t, 1-l_t) + \sum_{t=0}^{\infty}\mu_t\,(w_t l_t + R_t s_{t-1} - c_t - s_t)\right]
\]

FOCs: for each t ≥ 0, conditional on history θ^t ∈ Θ^t:

\[
\frac{\partial \mathcal{L}}{\partial c_t} = \beta^t\,\partial_c u(c_t, 1-l_t) - \mu_t = 0
\]
\[
\frac{\partial \mathcal{L}}{\partial l_t} = -\beta^t\,\partial_l u(c_t, 1-l_t) + \mu_t w_t = 0
\]
\[
\frac{\partial \mathcal{L}}{\partial s_t} = -\mu_t + E_t(\mu_{t+1} R_{t+1}) = 0
\]

Complementary Slackness Condition:

\[
\mu_t\,(w_t l_t + R_t s_{t-1} - c_t - s_t) = 0, \quad \forall t \ge 0
\]

From the first equation, we have:

\[
\mu_t = \beta^t\,\partial_c u(c_t, 1-l_t) > 0
\]

Plugging this into the second equation gives (135); into the third equation, (136); and into the fourth equation (the complementary slackness condition), (134).
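Since equations (132)–(136) belong to the course material and are not reproduced here, the conditions these substitutions presumably deliver are, in order:

\[
\frac{\partial_l u(c_t, 1-l_t)}{\partial_c u(c_t, 1-l_t)} = w_t
\qquad \text{(intratemporal labor--consumption condition, presumably (135))}
\]
\[
\partial_c u(c_t, 1-l_t) = \beta\,E_t\big[R_{t+1}\,\partial_c u(c_{t+1}, 1-l_{t+1})\big]
\qquad \text{(Euler equation, presumably (136))}
\]
\[
w_t l_t + R_t s_{t-1} - c_t - s_t = 0
\qquad \text{(binding budget constraint, since } \mu_t > 0\text{, presumably (134))}
\]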

Moreover, if F is homogeneous of degree 1, then ∂_l F(·) and ∂_k F(·) are homogeneous of degree zero.
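A one-line symbolic check of this fact, using a Cobb–Douglas production function as a stand-in (an assumption; any F that is homogeneous of degree 1 would do):

import sympy as sp

k, l, lam, alpha = sp.symbols('k l lam alpha', positive=True)
F = k**alpha * l**(1 - alpha)   # Cobb-Douglas: homogeneous of degree 1 in (k, l)
Fk = sp.diff(F, k)              # marginal product of capital

# Scaling both inputs by lam leaves the marginal product unchanged,
# i.e. the derivative is homogeneous of degree zero.
print(sp.simplify(Fk.subs({k: lam*k, l: lam*l}, simultaneous=True) - Fk))  # 0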

6 FECO, (Feb 4, 2018)

In statistics, multicollinearity is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy. In this situation, the coefficient estimates of the multiple regression may change erratically in response to small changes in the model or the data.

Multicollinearity doesn't reduce the predictive power or reliability of the model as a whole, at least within the sample data set; it only affects calculations regarding individual predictors. That is, a multiple regression model with collinear predictors can indicate how well the entire bundle of predictors predicts the outcome variable, but it may not give valid results about any individual predictor, or about which predictors are redundant with respect to others.

For the OLS estimator in the multiple linear regression, the variance of the coefficient estimate β̂_j is given by the formula

\[
\operatorname{Var}\!\big(\hat{\beta}_j \mid X_1, X_2, \ldots, X_k\big)
= \frac{\sigma_U^2}{\sum_{i=1}^{N}\big(X_{ji} - \bar{X}_j\big)^2\,\big(1 - R_j^2\big)},
\quad \forall j = 1, \ldots, k
\]

Where R_j² is the R² from regressing X_j on all the other regressors, i.e. the R² from X_j = α_0 + α_1 X_1 + ⋯ + α_{j−1} X_{j−1} + α_{j+1} X_{j+1} + ⋯ + α_k X_k + ε. If X_j is highly correlated with the other regressors, R_j² is large, which in turn reduces the denominator in the formula for Var(β̂_j | X_1, X_2, ..., X_k) and results in a higher variance of the estimator. Therefore, when the regressors are highly correlated, the OLS estimator is less efficient.
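A small simulation makes the mechanism visible; the data below are simulated, and the near-collinearity between the two regressors is constructed on purpose. The auxiliary R² also yields the familiar variance inflation factor 1/(1 − R_j²).

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)   # x2 nearly collinear with x1
y = 1.0 + x1 + x2 + rng.normal(size=n)

# R_j^2 from the auxiliary regression of x2 on the other regressor(s)
r2_aux = sm.OLS(x2, sm.add_constant(x1)).fit().rsquared
print("auxiliary R^2:", r2_aux, "VIF:", 1.0 / (1.0 - r2_aux))

# Standard errors on the collinear coefficients are correspondingly inflated
fit = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print(fit.bse)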

7 FMIK, (Feb 20, 2018)

Substitution effect and income effect

• Total Effect: ∆x^T = x(p′, m) − x(p, m)
• Substitution Effect: ∆x^S = h(p′, u) − x(p, m)
• Income Effect: ∆x^I = x(p′, m) − h(p′, u)

Where h is the Hicksian demand and u is the utility level attained at the original prices (p, m). By construction, ∆x^T = ∆x^S + ∆x^I.
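A worked numerical example, assuming Cobb–Douglas utility u(x, y) = √(xy) (a hypothetical choice, made because both the Marshallian and the Hicksian demands then have closed forms); it also confirms the identity ∆x^T = ∆x^S + ∆x^I.

import numpy as np

def x_marshall(px, py, m):   # Marshallian demand for x under u = sqrt(x*y)
    return m / (2.0 * px)

def x_hicks(px, py, u):      # Hicksian demand for x under u = sqrt(x*y)
    return u * np.sqrt(py / px)

def indirect_u(px, py, m):   # utility attained at the Marshallian optimum
    return m / (2.0 * np.sqrt(px * py))

px, py, m = 1.0, 1.0, 100.0  # hypothetical baseline prices and income
px_new = 2.0                 # the price of x doubles
u0 = indirect_u(px, py, m)   # utility is held at its ORIGINAL level

dx_T = x_marshall(px_new, py, m) - x_marshall(px, py, m)
dx_S = x_hicks(px_new, py, u0) - x_marshall(px, py, m)
dx_I = x_marshall(px_new, py, m) - x_hicks(px_new, py, u0)
print(dx_T, dx_S, dx_I, np.isclose(dx_T, dx_S + dx_I))  # identity holds: True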
