In this paper we present ATLAS, a new graphical tool for annotating multi-modal data streams. Although ATLAS has been developed for databases collected in human-computer interaction (HCI) scenarios, it is applicable to multimodal time series in general settings. In our HCI scenario, besides multi-channel audio and video input, various bio-physiological data have been recorded, e.g. complex multi-variate signals such as ECG, EEG, and EMG, as well as simple uni-variate signals such as skin conductivity, respiration, and blood volume pulse. All of these data types can be processed through ATLAS. In addition to raw data, intermediate processing results, such as extracted features, and even the (probabilistic or crisp) outputs of pre-trained classifier modules can be displayed. Furthermore, annotation and transcription tools have been implemented, and ATLAS's basic structure is briefly described. Beyond these basic annotation features, active learning (active data selection) approaches have been integrated into the overall system. Support Vector Machines (SVMs) with probabilistic outputs are currently used to select confidently classified data. Confident classification results from the SVM classifier help the human expert investigate unlabeled parts of the data.
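The confidence-based selection described above can be sketched as follows. This is an illustrative sketch, not the ATLAS implementation: it assumes scikit-learn's `SVC` with Platt-scaled probabilistic outputs, and the data, variable names, and the 0.9 threshold are all hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy labeled data: two 2-D Gaussian clusters standing in for feature
# vectors extracted from the recorded signals (hypothetical example).
X_labeled = np.vstack([rng.normal(-2, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
y_labeled = np.array([0] * 30 + [1] * 30)
# Unlabeled pool the human expert has not yet annotated.
X_pool = rng.normal(0, 2, (100, 2))

# probability=True enables Platt scaling, giving a per-sample
# class-probability estimate that serves as a confidence score.
clf = SVC(kernel="rbf", probability=True, random_state=0)
clf.fit(X_labeled, y_labeled)
proba = clf.predict_proba(X_pool)
confidence = proba.max(axis=1)

# Samples the classifier labels confidently can be pre-annotated for the
# expert to verify; low-confidence samples are left for manual inspection.
threshold = 0.9  # hypothetical confidence cutoff
confident_idx = np.where(confidence >= threshold)[0]
uncertain_idx = np.where(confidence < threshold)[0]
print(f"{len(confident_idx)} confident, {len(uncertain_idx)} uncertain")
```

In an active-learning loop, the uncertain samples would be routed to the annotator first, and the classifier retrained as new labels arrive.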