The dynamic programming approach is applied to constrained Markov process control problems, both fully and partially observed, with probabilistic and total cost criteria motivated by the optimal search problem. For the fully observed case, the optimal cost function of the finite horizon problem is shown to converge pointwise to that of the infinite horizon problem. For the partially observed case, a constrained finite horizon problem with both probabilistic and expected total cost criteria is formulated and shown to apply to the radar search problem. This formulation explicitly accommodates probability of detection and probability of false alarm criteria, and consequently allows control and detection objectives to be integrated. This is illustrated by formulating an optimal truncated sequential detection problem: minimizing the resources required to achieve specified levels of probability of detection and probability of false alarm. A simple example of optimal truncated sequential detection, representing the optimization of a radar detection process, is given.
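The pointwise convergence of the finite horizon optimal cost to the infinite horizon optimal cost can be illustrated by finite horizon value iteration on a small example. The sketch below uses a hypothetical two-state, two-action discounted MDP; the transition matrices, per-stage costs, and discount factor are illustrative assumptions and are not drawn from the paper's constrained formulation.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustrative values only).
# P[a][s, s'] = transition probability from state s to s' under action a.
P = {
    0: np.array([[0.9, 0.1], [0.2, 0.8]]),
    1: np.array([[0.5, 0.5], [0.4, 0.6]]),
}
# c[a][s] = nonnegative per-stage cost of taking action a in state s.
c = {
    0: np.array([1.0, 2.0]),
    1: np.array([0.5, 3.0]),
}
beta = 0.9  # discount factor, so the infinite horizon cost is finite


def finite_horizon_cost(N):
    """Optimal expected discounted cost over horizon N (zero terminal cost)."""
    J = np.zeros(2)
    for _ in range(N):
        # One step of the dynamic programming (Bellman) recursion.
        J = np.min([c[a] + beta * P[a] @ J for a in (0, 1)], axis=0)
    return J


# As the horizon N grows, the finite horizon optimal costs increase
# monotonically (costs are nonnegative) and approach the infinite
# horizon optimal cost, the fixed point of the Bellman operator.
J_short, J_long = finite_horizon_cost(20), finite_horizon_cost(300)
```

For nonnegative per-stage costs the finite horizon costs are monotone nondecreasing in the horizon, so their pointwise limit exists; with discounting, the recursion is a contraction and the limit is the unique infinite horizon optimal cost.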