Time-inconsistent Stochastic Control
by Agatha Murgoci (Stockholm School of Economics, Sweden)
We study optimization problems of the type:
$$ \max_u E_t[F(X_T)]+G(E_t[X_T]) $$
where $X_t$ is some stochastic process. Problems of this type are time-inconsistent and cannot be solved by the traditional tools of dynamic programming. We therefore take a game-theoretic approach and look for strategies that form a subgame-perfect Nash equilibrium.
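A concrete instance (spelled out here for illustration; the choice of $F$ and $G$ is the standard mean-variance specification, with $\gamma$ a risk-aversion parameter) is multi-period mean-variance utility: taking
$$ F(x) = x - \frac{\gamma}{2}x^2, \qquad G(y) = \frac{\gamma}{2}y^2 $$
gives
$$ E_t[F(X_T)] + G(E_t[X_T]) = E_t[X_T] - \frac{\gamma}{2}\mathrm{Var}_t(X_T), $$
since $\mathrm{Var}_t(X_T) = E_t[X_T^2] - (E_t[X_T])^2$. The nonlinear dependence of $G(E_t[X_T])$ on the conditional expectation is exactly what breaks the law of iterated expectations, and with it the Bellman principle.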
We view the problem as a game with one separate player for each point in time $t$. Player $t$ chooses his/her strategy $u(t,X_t)$ taking as given the equilibrium strategies $\hat{u}(s,X_s)$, $s>t$, of the players that follow.
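In discrete time this equilibrium condition can be written compactly. Writing $J(t,x;u) = E_{t,x}[F(X_T^u)] + G(E_{t,x}[X_T^u])$, and letting $a \oplus \hat{u}$ denote the strategy that plays action $a$ at time $t$ and follows $\hat{u}$ from $t+1$ onwards (this notation is ours, introduced only for this sketch), the equilibrium strategy satisfies
$$ \hat{u}(t,x) \in \arg\max_{a} \; J\bigl(t,x;\, a \oplus \hat{u}\bigr). $$
Each player thus optimizes only over his/her own action, and the resulting strategy profile is time-consistent by construction.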
We prove that this equilibrium criterion leads to a system of PDEs similar to the classical Hamilton-Jacobi-Bellman equation, but with an embedded fixed-point problem. We derive this extended HJB system both in discrete and in continuous time.
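The discrete-time recursion behind this system can be sketched numerically by backward induction: player $t$ maximizes $E[f(t+1,X_{t+1})] + G(E[y(t+1,X_{t+1})])$, where $f$ and $y$ are the conditional expectations of $F(X_T)$ and $X_T$ under the later players' equilibrium play. The toy model below (a two-point return distribution, a grid of dollar-amount controls, and all names such as `step` and `U_GRID`) is our illustrative choice, not the paper's construction:

```python
import functools

# Toy mean-variance model: wealth X_{t+1} = X_t + u * Z_{t+1}, objective
# E_t[X_T] - (gamma/2) Var_t(X_T), i.e. F(x) = x - (gamma/2) x^2 and
# G(y) = (gamma/2) y^2 in the notation of the abstract.
mu, sigma, gamma = 0.08, 0.2, 2.0
T = 2                                     # number of periods
Z_VALS = (mu + sigma, mu - sigma)         # two-point return: mean mu, variance sigma^2
U_GRID = [i * 0.02 for i in range(201)]   # candidate dollar amounts in the risky asset

def F(x):
    return x - 0.5 * gamma * x * x

def G(y):
    return 0.5 * gamma * y * y

@functools.lru_cache(maxsize=None)
def step(t, x):
    """Equilibrium recursion at (t, x).

    Returns (u_hat, f, y), where u_hat is player t's equilibrium action,
    f = E_{t,x}[F(X_T)] and y = E_{t,x}[X_T] under equilibrium play.
    """
    if t == T:
        return None, F(x), x
    best = None
    for u in U_GRID:
        f_next = y_next = 0.0
        for z in Z_VALS:                  # expectation over the two-point return
            _, f1, y1 = step(t + 1, round(x + u * z, 10))
            f_next += 0.5 * f1
            y_next += 0.5 * y1
        # Player t maximizes E[f(t+1, X_{t+1})] + G(E[y(t+1, X_{t+1})]),
        # taking the later players' (already computed) strategies as given.
        val = f_next + G(y_next)
        if best is None or val > best[0]:
            best = (val, u, f_next, y_next)
    return best[1], best[2], best[3]
```

For these parameters, `step(0, 1.0)[0]` recovers a state-independent equilibrium amount close to $\mu/(\gamma\sigma^2) = 1.0$, which matches the closed-form candidate one obtains by hand for this toy model.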
Applications of the above framework include portfolio allocation under multi-period mean-variance preferences and various hedging problems. We solve several specific examples explicitly, such as mean-variance portfolio optimization when the underlying asset has jumps, and obtain analytical solutions.