Publisher Summary

This chapter considers stationary processes: if a solution exists, the optimal return for the horizon [n, ∞) depends only on x, since n can always be taken as the time origin. The optimal return can thus be written Ȓ(x). The corresponding control function is also stationary and can be written g(x). In some processes the final time is not specified in advance; instead, the process stops whenever certain conditions are fulfilled. Two iterative methods are described in this chapter. The first operates in return space, and its convergence depends on properties of the operator that are quite restrictive and difficult to establish. The second operates in policy space and converges, in the case of a search for a minimum, provided r ≥ 0 and the iteration is begun with an admissible policy. The chapter shows that the optimal solution, if it exists, is obtained from a stationary control function g(x); this corresponds to a closed-loop structure. The behavior of the free closed-loop system as n → ∞, that is, its stability, can therefore be investigated. The problems discussed in the chapter lead to an implicit functional equation for the optimal return. This equation is of a novel type, involving the optimization operator; in general a literal solution is impossible, and an iterative method (Picard's method) must be used.
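The two iterative methods can be sketched on a small example. The following is a minimal illustration, assuming a finite-state deterministic process with nonnegative stage costs and a stopping state of zero return; the states, costs, and function names here are invented for illustration and are not taken from the chapter. `value_iteration` is Picard's method in return space, repeatedly applying R_{k+1}(x) = min_u [r(x, u) + R_k(f(x, u))]; `policy_iteration` alternates evaluation of an admissible policy g(x) with a one-step improvement.

```python
INF = float("inf")

# Hypothetical process: admissible controls for each state, as
# (next_state, stage_cost) pairs. State 3 is the stopping state
# (return 0). All numbers are invented for illustration.
controls = {
    0: [(1, 4.0), (2, 1.0)],
    1: [(3, 1.0)],
    2: [(1, 1.0), (3, 6.0)],
}
STOP = 3

def value_iteration(max_sweeps=50):
    """Picard iteration in return space:
    R_{k+1}(x) = min_u [r(x,u) + R_k(f(x,u))]."""
    R = {x: INF for x in controls}
    R[STOP] = 0.0
    for _ in range(max_sweeps):
        R_new = {STOP: 0.0}
        for x, options in controls.items():
            R_new[x] = min(cost + R[nxt] for nxt, cost in options)
        if R_new == R:          # fixed point of the optimization operator
            break
        R = R_new
    return R

def policy_iteration(policy):
    """Iteration in policy space, started from an admissible policy
    (one that reaches the stopping state, with r >= 0)."""
    while True:
        # Policy evaluation: follow the current policy to the stopping
        # state, accumulating stage costs.
        R = {STOP: 0.0}
        def evaluate(x):
            if x not in R:
                nxt, cost = policy[x]
                R[x] = cost + evaluate(nxt)
            return R[x]
        for x in controls:
            evaluate(x)
        # Policy improvement: one minimization of the right-hand side.
        improved = {x: min(controls[x], key=lambda nc: nc[1] + R[nc[0]])
                    for x in controls}
        if improved == policy:  # stationary: optimal policy reached
            return R, policy
        policy = improved
```

Both routines converge to the same returns here: starting policy iteration from the admissible policy {0: (1, 4.0), 1: (3, 1.0), 2: (3, 6.0)} yields the same Ȓ as value iteration, with the optimal route from state 0 passing through states 2 and 1 before stopping.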