
Suppose we have a simple maximization problem as described in Equation 1.1. This leads us to the Lagrangian in Equation 1.3: $$\begin{align*}\mathcal{L} &= \sum_{t=1}^\infty \beta^{t-1}\left\{u(c_t) + \lambda_t \left[ f(k_t) + (1 - \delta)k_t - c_t - k_{t+1}\right]\right\} \\ &= \sum_{t=1}^\infty \left[\beta^{t-1} u(c_t) - \beta^{t-1}\lambda_t c_t + \beta^{t-1} \lambda_t f(\mathbf{k_t}) + \beta^{t-1}\lambda_t(1-\delta)\mathbf{k_t} - \beta^{t-1}\lambda_t \mathbf{k_{t+1}}\right] \end{align*} $$

We then derive the first order condition with respect to $k_{t+1}$: $$\frac{\partial \mathcal{L} (\cdot)}{\partial k_{t+1}} = 0 : \beta \lambda_{t+1} \frac{\partial f(k_{t+1})}{\partial k_{t+1}} + \beta \lambda_{t+1} (1 - \delta) -\lambda_t=0$$

Why do we use the subscript $t+1$ in $\lambda_{t+1}$, and why does $\beta^{t-1}$ become $\beta$? I cannot understand how the first two terms are combined with the last one ($-\lambda_t$).

The relevant terms (with $k$) of the Lagrangian in period $t+1$ are: $$ \beta^{(t+1)-1} \lambda_{t+1} f(k_{t+1}) + \beta^{(t+1)-1} \lambda_{t+1} k_{t+1} (1 - \delta) - \beta^{(t+1)-1} \lambda_{t+1} k_{(t+1)+1}$$ so when we take the derivative with respect to $k_{t+1}$, the last term does not matter for this part of the sum. The period-$(t+1)$ contribution is therefore $$\frac{\partial \mathcal{L}_{t+1}}{\partial k_{t+1}} = \beta^{t} \lambda_{t+1} \frac{\partial f(k_{t+1})}{\partial k_{t+1}} + \beta^{t} \lambda_{t+1} (1 - \delta)$$

The relevant terms (with $k$) of the Lagrangian in period $t$ are: $$\beta^{t-1} \lambda_t f({k_t}) + \beta^{t-1}\lambda_t(1-\delta){k_t} - \beta^{t-1}\lambda_t k_{t+1}$$ and only the last one contains $k_{t+1}$, so this period contributes $$\frac{\partial \mathcal{L}_t}{\partial k_{t+1}} = - \beta^{t-1}\lambda_t$$ Now the first order condition with respect to $k_{t+1}$ should be: $$\frac{\partial \mathcal{L}_{t+1}}{\partial k_{t+1}} + \frac{\partial \mathcal{L}_t}{\partial k_{t+1}} = \beta^{t} \lambda_{t+1} \frac{\partial f(k_{t+1})}{\partial k_{t+1}} + \beta^{t} \lambda_{t+1} (1 - \delta) - \beta^{t-1}\lambda_t = 0$$ right?
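For a mechanical check of this two-term derivative, here is a minimal SymPy sketch (not part of the original derivation; symbol names like `k_next` for $k_{t+1}$ are illustrative, and $f$ is left as a generic production function):

```python
# Differentiate the only two Lagrangian pieces that contain k_{t+1}.
import sympy as sp

t = sp.symbols('t', positive=True)
beta, delta = sp.symbols('beta delta', positive=True)
k_next = sp.symbols('k_next')                    # stands for k_{t+1}
lam_t, lam_next = sp.symbols('lam_t lam_next')   # lambda_t, lambda_{t+1}
f = sp.Function('f')

piece_t = -beta**(t - 1) * lam_t * k_next                           # from period t
piece_t1 = beta**t * lam_next * (f(k_next) + (1 - delta) * k_next)  # from period t+1

print(sp.diff(piece_t + piece_t1, k_next))
# -> beta**t*lam_next*(Derivative(f(k_next), k_next) + 1 - delta)
#    - beta**(t-1)*lam_t            (up to term ordering)
```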

Konstantinos

2 Answers


In an intertemporal maximization problem, we seek to find the optimal sequence of the control and the state variables. It is the recursive nature of the problem that permits us to consider a "typical" point in time and just one condition per variable.

For each such problem, we need to find out (carefully) in how many distinct periods a specific realization of a variable appears. To do this properly we should distinguish between the "absolute" index and a "running" index. In the formulation of the Lagrangean as it appears in the question, this is not done (and it is usual practice not to, but it may become confusing).

So I would use the $t$ symbol as the absolute index (to arrive at same-looking first-order conditions), and some other symbol for the running index, say

$$\mathcal{L_t} = \sum_{j=0}^\infty \beta^{j}\left\{u(c_{t+j}) + \lambda_{t+j} \left[ f(k_{t+j}) + (1 - \delta)k_{t+j} - c_{t+j} - k_{t+j+1}\right]\right\} $$

Note that $t$ no longer affects the discount factor $\beta$, and this is because the discount factor has to do with looking at the future, which is represented by the index $j$. Also, note that $j$ starts at zero, indicating that the first period is the $t$ period.
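For instance, setting $j=0$ in this sum gives the undiscounted period-$t$ term $$u(c_{t}) + \lambda_{t} \left[ f(k_{t}) + (1 - \delta)k_{t} - c_{t} - k_{t+1}\right]$$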

Written this way, the Lagrangean says "we are at some point in time indicated by $t$ (that can take the value zero or whatever positive value), and we are looking forward period by period counted by index $j$".

For any given $j$ we have

$$\mathcal{L}_t =...+ \beta^{j}\Big\{u(c_{t+j}) + \lambda_{t+j} \left[ f(k_{t+j}) + (1 - \delta)k_{t+j} - c_{t+j} - k_{t+j+1}\right]\Big\} + \beta^{j+1}\Big\{u(c_{t+j+1}) + \lambda_{t+j+1} \left[ f(k_{t+j+1}) + (1 - \delta)k_{t+j+1} - c_{t+j+1} - k_{t+j+2}\right]\Big\} + ...$$

Pondering this, we realize that the variable $k_{t+j+1}$ will appear in only two consecutive periods, and so the first order condition for a "typical" element of the sequence $\{k_{t+j}\}_{j=0}^{\infty}$ can be expressed by differentiating only these two periods with respect to $k_{t+j+1}$. Doing so we get

$$\frac {\partial \mathcal{L}_t}{\partial k_{t+j+1}} = -\beta^{j} \lambda_{t+j} + \beta^{j+1}\Big\{ \lambda_{t+j+1} \left[ f'(k_{t+j+1}) + (1 - \delta)\right]\Big\} $$

Take out the common factor $\beta^{j}$ (which simplifies the discount factor) and set the derivative equal to zero

$$\frac {\partial \mathcal{L}_t}{\partial k_{t+j+1}} = \beta^{j} \Big[-\lambda_{t+j} + \beta\Big\{ \lambda_{t+j+1} \left[ f'(k_{t+j+1}) + (1 - \delta)\right]\Big\}\Big] = 0$$

To lighten the indexing burden, we can express this for $j=0$, to obtain

$$\frac {\partial \mathcal{L}_t}{\partial k_{t+1}} = 0 \implies -\lambda_{t} + \beta\Big\{ \lambda_{t+1} \left[ f'(k_{t+1}) + (1 - \delta)\right]\Big\} = 0$$
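To verify mechanically that the common factor $\beta^{j}$ indeed drops out of the condition, here is a minimal SymPy sketch (symbol names are illustrative; `fk` stands in for $f'(k_{t+j+1})$):

```python
import sympy as sp

j = sp.symbols('j', nonnegative=True)
beta, delta = sp.symbols('beta delta', positive=True)
# lam_j, lam_j1, fk stand in for lambda_{t+j}, lambda_{t+j+1}, f'(k_{t+j+1})
lam_j, lam_j1, fk = sp.symbols('lam_j lam_j1 fk')

foc = -beta**j * lam_j + beta**(j + 1) * lam_j1 * (fk + 1 - delta)
# Dividing by the common factor beta**j removes j from the condition entirely
print(sp.simplify(foc / beta**j))
# -> -lam_j + beta*lam_j1*(fk - delta + 1)
```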

Alecos Papadopoulos
  • A comment: when we add uncertainty ($E_t$) we obviously do not change its subscript, because the subscript refers to the variable for which we take the expectation, right? I mean we do not write $E_{t+1}$ instead of $E_{t}$. – Konstantinos Mar 02 '15 at 17:02
  • 1
  • @pidosaurus Indeed, because $E_t$ applies to the whole objective function – which is an additional reason why the notation I used is "more" correct ("we stand at time $t$", and so expectations are with respect to the information set at $t$). In the alternative notation, invoke the Law of Iterated Expectations to "return back" to $E_{t}$. – Alecos Papadopoulos Mar 02 '15 at 18:45

(facepalm) Multiply both sides by $\beta^{1-t}$: $$ \beta^{1-t+t} \lambda_{t+1} \frac{\partial f(k_{t+1})}{\partial k_{t+1}} + \beta^{1-t+t} \lambda_{t+1} (1 - \delta) - \beta^{1-t+t-1}\lambda_t = 0$$
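Since $\beta^{1-t+t} = \beta$ and $\beta^{1-t+t-1} = \beta^{0} = 1$, this is exactly the first order condition from the question: $$\beta \lambda_{t+1} \frac{\partial f(k_{t+1})}{\partial k_{t+1}} + \beta \lambda_{t+1} (1 - \delta) - \lambda_t = 0$$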

Konstantinos