Robust perception is essential for robots to handle diverse objects skillfully. In this paper, we propose a novel approach to recognizing objects and estimating their 6-DOF poses using a 3D feature descriptor called the Geometric and Photometric Local Feature (GPLF). The proposed descriptor combines the geometric and photometric information of 3D point clouds from an RGB-D camera into a single efficient representation. GPLF shows robust discriminative performance in cluttered scenes regardless of object characteristics such as shape or appearance. The experimental results show how well the proposed approach classifies and identifies objects, and the pose estimation is robust and stable enough for a robot to manipulate objects. We also compare the proposed approach with previous approaches that use only partial information about objects, evaluating on a representative large-scale RGB-D object dataset.