Vision-based robot localization in outdoor environments is difficult because of changing illumination conditions. A further problem is the rough and cluttered environment, which makes it hard to use visual features that are not rotation invariant. A popular method that is rotation invariant and relatively robust to changing illumination is the Scale Invariant Feature Transform (SIFT). However, due to its computationally intensive feature extraction and image matching, localization using SIFT is slow. Techniques that use global image features, on the other hand, are in general less robust and less accurate than SIFT, but are often much faster because of their fast image matching. In this paper, we present a hybrid localization approach that switches between local and global image features. For most images, the hybrid approach uses fast global features; only in difficult situations, e.g., under strong illumination changes, does it switch to local features. To decide which features to use for an image, we analyze the particle cloud of the particle filter that we use for position estimation. Experiments on outdoor images taken under varying illumination conditions show that the position estimates of the hybrid approach are about as accurate as those of SIFT alone, while the average localization time of the hybrid approach is more than 3.5 times lower than with SIFT.
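The switching criterion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the spread measure (norm of the per-axis standard deviations of the particle positions) and the threshold value are assumptions chosen for the example, and `choose_features` is a hypothetical helper name.

```python
import numpy as np

def particle_spread(particles: np.ndarray) -> float:
    """Spread of the particle cloud: norm of the standard deviation of
    the particle (x, y) positions. A large spread indicates that the
    filter is uncertain about the robot's position."""
    return float(np.linalg.norm(np.std(particles[:, :2], axis=0)))

def choose_features(particles: np.ndarray, spread_threshold: float = 1.5) -> str:
    """Use fast global features while the particle cloud is concentrated
    (localization is confident); fall back to slower but more robust
    local SIFT features when the cloud spreads out, e.g., after strong
    illumination changes degrade the global-feature matches.
    The threshold value is an assumption for this sketch."""
    if particle_spread(particles) > spread_threshold:
        return "sift"     # difficult situation: pay for robust local features
    return "global"       # confident estimate: cheap global features suffice
```

A concentrated cloud thus keeps the fast path, and only a dispersed cloud triggers the expensive SIFT matching, which is what yields the reported speed-up on average.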