In this work we propose two techniques for indoor mobile robot localization. First, as a global method, we use images of markers attached to the ceiling at known positions to compute the robot's location and orientation. Second, as a local method, an RGB-D camera mounted on the robot acquires color and depth images of the environment; the relative robot motion is then computed by registering 3D point clouds and matching SURF features between consecutive image frames. Since the two techniques use uncorrelated sensing information, we combine them to increase the robustness and accuracy of localization. Experimental results and a performance analysis on real-world image sequences are presented.
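The relative-motion step described above relies on registering matched 3D points between consecutive frames. As an illustrative sketch only (not the authors' actual pipeline, whose details are not given here), the rigid motion between two already-matched point sets can be recovered in closed form with the Kabsch/SVD method; the function name and interface below are assumptions for the example:

```python
import numpy as np

def estimate_rigid_motion(P, Q):
    """Estimate rotation R and translation t such that Q ~= R @ P + t,
    given matched 3D point sets P and Q of shape (3, N) (Kabsch/SVD)."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    # 3x3 cross-covariance of the centered point sets
    H = (P - p_mean) @ (Q - q_mean).T
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation with det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Demo on synthetic data: apply a known motion, then recover it.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 50))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([[0.5], [-0.2], [1.0]])
Q = R_true @ P + t_true
R, t = estimate_rigid_motion(P, Q)
```

In a visual-odometry setting, the matched pairs would come from SURF correspondences between consecutive frames, back-projected to 3D with the RGB-D depth image; a robust loop such as RANSAC is typically wrapped around this closed-form step to reject mismatches.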