NASA is moving into an era of increasing spacecraft autonomy. Before autonomy can be routinely employed, however, we must develop techniques for providing assurance that the system will perform correctly in flight. We describe why autonomous systems require advanced verification techniques and offer some management and technical techniques for addressing those differences. Autonomous goal-driven spacecraft require advances in verification because optimization algorithms (e.g., for planning and scheduling) are at the core of much of autonomy. It is the nature of such algorithms that over much of the input space an intuitively "small" change in the input produces a correspondingly "small" change in the output. This behavior typically leads one to conclude, quite reasonably, that if two responses are correct, the responses "between" them will probably also be correct. However, in certain regions of the input space a "small" change in the input yields a radically different output, and one is not so inclined to conclude that all responses in these transition zones are likely to be correct. We believe, for two reasons, that these transition zones are one place where autonomous systems are likely to fail. First, boundary conditions, often a rich source of faults, are heavily exercised in the transition zones, increasing the likelihood of faults. Second, within a transition zone the algorithm's outputs are likely to appear unusual; since those outputs become inputs to the remainder of the system, the whole system is probably pushed outside its nominal usage profile, historically another rich source of faults. We close with a discussion of risk management. Autonomous systems carry many well-known management risk factors. Risk management and quality concerns must be pervasive, extending to all team members and the whole life-cycle of the project.
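The transition-zone effect described above can be illustrated with a toy example (a hypothetical sketch, not from this paper): a minimal task-selection optimizer, where a change of a fraction of a percent in one task's resource cost crosses a capacity boundary and causes the plan to change radically rather than proportionally.

```python
from itertools import combinations

def plan(tasks, capacity):
    """Toy optimizer: pick the subset of (name, cost, value) tasks
    maximizing total value without exceeding the resource capacity.
    Exhaustive search, purely illustrative."""
    best_set, best_value = (), 0
    for r in range(len(tasks) + 1):
        for combo in combinations(tasks, r):
            cost = sum(c for _, c, _ in combo)
            value = sum(v for _, _, v in combo)
            if cost <= capacity and value > best_value:
                best_set, best_value = combo, value
    return [name for name, _, _ in best_set]

# Away from the capacity boundary, small input changes leave the plan intact.
plan_a = plan([("imaging", 5.0, 10), ("downlink", 5.0, 9)], 10.0)
# A 2% increase in one cost crosses the boundary: half the plan is dropped.
plan_b = plan([("imaging", 5.1, 10), ("downlink", 5.0, 9)], 10.0)
```

Here `plan_a` schedules both tasks while `plan_b` schedules only one: the output in this transition zone is discontinuous in the input, which is exactly where boundary-condition faults concentrate and where downstream components receive unusual-looking plans.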