Current high-accuracy speech understanding systems achieve their performance at the cost of highly constrained grammars over relatively small vocabularies. Less-constrained systems will need to compensate for the loss of top-down constraint by improving bottom-up performance. To do this, they will need to eliminate most words in their vocabularies from consideration at each point in the utterance, solely on the basis of acoustic information and the expected pronunciations of the words. Toward this goal, we present the design and performance of Noah, a bottom-up word hypothesizer capable of handling large vocabularies (more than 10,000 words). Noah takes machine-segmented and labeled speech as input and produces word hypotheses. The primary concern of this work is the problem of word hypothesizing from large vocabularies; particular attention has been paid to accuracy, knowledge representation, knowledge acquisition, and flexibility. In this paper we discuss the problems of word hypothesizing, describe how the design of Noah addresses them, and present the performance of Noah as a function of vocabulary size.
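The core task described above, proposing words from expected pronunciations and a labeled segment sequence, can be illustrated with a minimal sketch. This is an assumption-laden toy, not Noah's actual algorithm: the lexicon, the matching-by-position scheme, and the `min_match` threshold are all illustrative inventions.

```python
# Hypothetical sketch of bottom-up word hypothesizing: given a sequence of
# machine-produced segment labels, propose vocabulary words whose expected
# pronunciations match the labels at each position in the utterance.
# The data structures and scoring scheme are illustrative assumptions,
# not a description of Noah's implementation.
from typing import Dict, List, Tuple

def hypothesize_words(
    segment_labels: List[str],           # phone labels from the segmenter/labeler
    lexicon: Dict[str, List[str]],       # word -> expected pronunciation (phones)
    min_match: float = 0.75,             # fraction of phones that must match
) -> List[Tuple[int, str, float]]:
    """Return (start_index, word, score) triples for plausible hypotheses."""
    hypotheses = []
    for start in range(len(segment_labels)):
        for word, phones in lexicon.items():
            end = start + len(phones)
            if end > len(segment_labels):
                continue
            # Score = fraction of expected phones matching the observed labels;
            # words below the threshold are eliminated from consideration here.
            matches = sum(p == s for p, s in zip(phones, segment_labels[start:end]))
            score = matches / len(phones)
            if score >= min_match:
                hypotheses.append((start, word, score))
    return hypotheses

lexicon = {"cat": ["k", "ae", "t"], "bat": ["b", "ae", "t"]}
labels = ["k", "ae", "t", "b", "ae", "t"]
print(hypothesize_words(labels, lexicon))
# → [(0, 'cat', 1.0), (3, 'bat', 1.0)]
```

A real large-vocabulary hypothesizer cannot afford this exhaustive word-by-position scan; the point of the sketch is only the filtering idea, acoustics plus expected pronunciations eliminating most of the vocabulary at each position.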