We present an architecture that provides Semantic Web annotations of sound clips described by MPEG-7 audio descriptions. The great flexibility of the MPEG-7 standard makes it especially difficult to compare descriptions coming from heterogeneous sources. To cope with this, the architecture first obtains "normalized" versions of the audio descriptions using different adaptation techniques. Once in a "normalized" format, descriptions can then be projected into uniform, semantically relevant vector spaces, ready to be processed by a variety of well-known computational intelligence techniques. As higher-level semantic results become available, they can be exported as interoperable (RDF) annotations about the resource that was originally fed into the system. As a novel aspect, through the use and interchange of MPEG-7 descriptions, the framework allows building applications (e.g. classifiers) that can provide annotations on distributed audio resource sets.
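The pipeline described above (MPEG-7 description → normalized vector → classification → RDF annotation) can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the descriptor fragment is heavily simplified (real MPEG-7 audio descriptions use the full schema with namespaces), and the `LoudSound`/`QuietSound` classes, the `example.org` URIs, and the threshold classifier are all hypothetical placeholders for whatever computational intelligence technique is plugged in.

```python
import xml.etree.ElementTree as ET

# Toy MPEG-7-style fragment (simplified; a real AudioPower descriptor
# would carry full MPEG-7 namespaces and series attributes).
MPEG7_SNIPPET = """
<AudioDescriptor type="AudioPowerType">
  <SeriesOfScalar><Raw>0.12 0.80 0.95 0.33</Raw></SeriesOfScalar>
</AudioDescriptor>
"""

def to_vector(xml_text):
    """Project a scalar-series descriptor into a normalized feature vector."""
    root = ET.fromstring(xml_text)
    raw = root.find("./SeriesOfScalar/Raw").text
    values = [float(v) for v in raw.split()]
    peak = max(values) or 1.0
    return [v / peak for v in values]  # simple peak normalization

def annotate(resource_uri, vector, threshold=0.5):
    """Classify the vector and emit an RDF (Turtle) annotation triple."""
    mean = sum(vector) / len(vector)
    label = "LoudSound" if mean > threshold else "QuietSound"
    return f"<{resource_uri}> a <http://example.org/audio#{label}> ."

vec = to_vector(MPEG7_SNIPPET)
print(annotate("http://example.org/clips/1", vec))
```

The design point the sketch mirrors is the separation of stages: normalization makes heterogeneous descriptions comparable, the vector space makes them processable by generic classifiers, and the RDF output makes the result interoperable across distributed resource sets.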