Multi-task learning refers to the problem of performing inference by jointly considering multiple related tasks. There have already been many research efforts on supervised multi-task learning. However, collecting sufficient labeled data for each task is usually time-consuming and expensive. In this paper, we consider the semi-supervised multi-task learning (SSMTL) problem, where we are given a small portion of labeled points together with a large pool of unlabeled data within each task. We assume that the different tasks can form task clusters and that the tasks in the same cluster share similar classifier parameters. The final learning problem is relaxed to a convex one, and an efficient gradient descent strategy is proposed. Finally, experimental results on both synthetic and real-world data sets are presented to show the effectiveness of our method.
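To make the task-clustering assumption concrete, the following is a minimal toy sketch (not the paper's actual SSMTL formulation, which involves unlabeled data and a convex relaxation): two related binary classification tasks are trained jointly by gradient descent, with a coupling penalty that encourages their linear classifier parameters to stay close, mimicking "tasks in the same cluster share similar classifier parameters". All names and hyperparameters (`lam`, `lr`, the number of steps) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true, n=50):
    """Sample a toy binary classification task with a known linear boundary."""
    X = rng.normal(size=(n, 2))
    y = np.sign(X @ w_true)
    return X, y

# Two tasks drawn around a shared ground-truth direction (one "task cluster").
w_star = np.array([1.0, -1.0])
tasks = [make_task(w_star + 0.1 * rng.normal(size=2)) for _ in range(2)]

def loss_and_grads(w, lam=1.0):
    """Squared-hinge loss per task plus a coupling penalty lam * ||w0 - w1||^2."""
    grads = [np.zeros(2), np.zeros(2)]
    total = lam * np.sum((w[0] - w[1]) ** 2)
    for t, (X, y) in enumerate(tasks):
        margin = 1.0 - y * (X @ w[t])           # hinge margins; >0 means a violation
        active = margin > 0
        total += np.sum(margin[active] ** 2)
        grads[t] += -2.0 * (active * margin * y) @ X
    grads[0] += 2.0 * lam * (w[0] - w[1])       # coupling pulls the two classifiers together
    grads[1] += -2.0 * lam * (w[0] - w[1])
    return total, grads

# Plain gradient descent on the joint objective.
w = [np.zeros(2), np.zeros(2)]
lr = 0.01
for _ in range(200):
    _, g = loss_and_grads(w)
    w = [w[t] - lr * g[t] for t in range(2)]

for t, (X, y) in enumerate(tasks):
    acc = np.mean(np.sign(X @ w[t]) == y)
    print(f"task {t} training accuracy: {acc:.2f}")
```

Because the two tasks come from the same cluster, the coupling term lets each classifier borrow statistical strength from the other, which is the intuition the paper exploits when labeled data per task is scarce.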