I. Introduction
In Model Predictive Control (MPC), the main objective is to compute and apply control inputs such that the behavior of a system, quantified by means of a cost function, is optimized over a given control horizon. Modelling errors or disturbances affecting the system can be treated in a stochastic fashion, which leads to stochastic MPC (SMPC). The optimal solution, or more precisely the closed-loop optimal solution, is given by the Bellman equation, which unfortunately can only be evaluated in a few special cases, such as the linear quadratic Gaussian (LQG) control problem or the control of systems with a finite number of states and control inputs. This is mostly due to the curse of dimensionality and to the fact that the separation of estimation and control does not hold for more general system classes, as is the case for stochastic nonlinear systems [1], [2].
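For illustration, the Bellman equation referred to above can be sketched for a generic discrete-time stochastic system $x_{k+1} = f(x_k, u_k, w_k)$ with stage cost $\ell$ and full state information; the symbols $f$, $\ell$, $V_k$, and $w_k$ are generic placeholders introduced here and are not the notation used in the remainder of the paper:
\[
V_k(x_k) \;=\; \min_{u_k}\; \mathbb{E}\!\left[\, \ell(x_k, u_k) + V_{k+1}\big(f(x_k, u_k, w_k)\big) \;\middle|\; x_k \,\right],
\]
with a terminal condition $V_N(x_N) = \ell_N(x_N)$ and the expectation taken over the disturbance $w_k$. Solving this recursion exactly requires representing the value function $V_k$ over the entire state space, which is what becomes intractable outside the special cases mentioned above.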