Is it possible to train a learning model to separate tigers from elk when we have 1) labeled samples of leopards and zebras and 2) unlabeled samples of tigers and elk at hand? Cross-domain learning algorithms can be used to solve this problem. However, existing cross-domain algorithms cannot be applied to dimension reduction, which plays a key role in computer vision tasks, e.g., face recognition and web image annotation. This paper envisions cross-domain discriminative dimension reduction as an effective solution. In particular, we propose cross-domain discriminative Hessian Eigenmaps (CDHE). CDHE connects training and test samples by minimizing the quadratic distance between the distribution of the training set and that of the test set, so a common subspace for data representation can be learned. Furthermore, we expect the discriminative information used to separate leopards from zebras to be shared for separating tigers from elk, and thus we have a chance to address the above question. The margin-maximization principle is adopted in CDHE so that the discriminative information for separating different classes (e.g., leopards and zebras here) is well preserved. Finally, CDHE encodes the local geometry of each training class (e.g., leopards and zebras here) in the local tangent space, which is locally isometric to the data manifold, and thus CDHE preserves the intraclass local geometry. The objective function of CDHE is not convex, so a gradient descent strategy can only find a locally optimal solution. In this paper, we carefully design an evolutionary search strategy to find a better solution for CDHE. Experimental evidence on both synthetic and real-world image datasets demonstrates the effectiveness of CDHE for cross-domain web image annotation and face recognition.
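The core idea of connecting the two domains — minimizing a quadratic distance between the distributions of the training (source) and test (target) sets under a shared projection — can be illustrated with a minimal sketch. This is an assumption-laden toy: it uses the squared distance between the projected sample means as a simple stand-in for the distribution distance; the paper's exact criterion, data, and projection (`Xs`, `Xt`, `W`) are not specified here and the names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: labeled source samples (e.g., leopards/zebras)
# and unlabeled target samples (e.g., tigers/elk) in R^5, with a mean shift
# between the two domains.
Xs = rng.normal(loc=0.0, scale=1.0, size=(100, 5))
Xt = rng.normal(loc=0.5, scale=1.0, size=(80, 5))

def quadratic_distance(Xs, Xt, W):
    """Squared distance between the means of the projected source and
    target samples -- a simple proxy for the quadratic distribution
    distance that CDHE minimizes to obtain a common subspace."""
    d = Xs.dot(W).mean(axis=0) - Xt.dot(W).mean(axis=0)
    return float(d.dot(d))

# A random projection to a 2-D subspace; a cross-domain method would
# choose W to make this distance small while keeping class margins large.
W = rng.normal(size=(5, 2))
print(quadratic_distance(Xs, Xt, W))
```

In the full method this distance term would be one part of the objective, traded off against the margin-maximization and local-geometry terms described above.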