
2009 First International Conference on Advances in Multimedia (MMEDIA '09)

Date: 20-25 July 2009


Displaying Results 1 - 25 of 46
  • [Front cover]

    Page(s): C1
  • [Title page i]

    Page(s): i
  • [Title page iii]

    Page(s): iii
  • [Copyright notice]

    Page(s): iv
  • Table of contents

    Page(s): v - viii
  • Preface

    Page(s): ix - x
  • Program Committee

    Page(s): xi - xiv
  • List of reviewers

  • Audio Compression Using a Munich and Cambridge Morlet Wavelet

    Page(s): 1 - 5

    Most psycho-acoustic models for coding applications use a uniform (equal-bandwidth) spectral decomposition as a first step to approximate the frequency selectivity of the human auditory system. However, the equal filter properties of the uniform sub-bands do not match the non-uniform characteristics of cochlear filters and reduce the precision of psycho-acoustic modelling. In this paper we present a new design of a psycho-acoustic model for audio coding, following the model used in the standard MPEG-1 audio layer 3. The architecture is based on an appropriate wavelet packet decomposition instead of a short-term Fourier transform. Its key characteristic is an analysis whose frequency bands come closer to the critical bands of the ear. This study shows that the Munich Morlet coder gives the best performance.

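    The band-splitting idea described in this abstract can be illustrated with a small sketch. This is only an assumption-laden illustration using PyWavelets: 'db8' stands in for the paper's Morlet-based packet filters (which PyWavelets' discrete transform does not provide), and the pruned tree below is just one coarse dyadic approximation of critical-band widths.

    ```python
    import numpy as np
    import pywt

    fs = 44100
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 440 * t)     # 1 s test tone as a stand-in for real audio

    # A full packet tree of depth 6 would give 64 equal-width bands (STFT-like).
    # Keeping deep nodes only toward low frequencies yields narrow low-frequency
    # bands and wide high-frequency ones, loosely mimicking critical bands.
    wp = pywt.WaveletPacket(data=x, wavelet='db8', mode='periodization', maxlevel=6)
    leaf_paths = ['aaaaaa', 'aaaaad', 'aaaad', 'aaad', 'aad', 'ad', 'd']
    bands = {p: wp[p].data for p in leaf_paths}
    for path, coeffs in bands.items():
        print(path, len(coeffs))        # longer path = narrower band = fewer coefficients
    ```
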
  • An Adaptive Mechanism for Multipath Video Streaming over Video Distribution Network (VDN)

    Page(s): 6 - 11

    With the objective of improving the video quality as perceived by end-users, multipath video streaming in a Video Distribution Network (VDN) is a promising solution. In this paper, we present a new adaptive mechanism to maximize the overall video quality at the client. Overlay path selection is performed dynamically based on available bandwidth estimation, while the Quality of Experience (QoE) is subjectively measured using the Pseudo-Subjective Quality Assessment (PSQA) tool. Simulation results show that our proposed method can automatically adapt to the load variation on the different Internet paths in a way that guarantees the best perceived quality for end-users.

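    As a rough illustration of bandwidth-driven path selection (not the paper's mechanism, whose details are in the full text), the sketch below splits the stream across overlay paths in proportion to their estimated available bandwidth; it would be re-run whenever new estimates arrive. Path names and rates are made up.

    ```python
    def split_rate(paths, video_rate_kbps):
        """paths: dict of overlay path id -> estimated available bandwidth (kbps)."""
        total = sum(paths.values())
        if total < video_rate_kbps:
            raise RuntimeError("aggregate bandwidth below video rate; adapt quality instead")
        # send over each path a share proportional to its estimated bandwidth
        return {p: video_rate_kbps * bw / total for p, bw in paths.items()}

    print(split_rate({"overlay-A": 800, "overlay-B": 400}, video_rate_kbps=900))
    # -> {'overlay-A': 600.0, 'overlay-B': 300.0}
    ```
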
  • Contextual Metadata in Practice

    Page(s): 12 - 17

    This article surveys the scene of contextual metadata annotation of digital content. This is done through a literature review of current research, looking more closely at formal context models, contextual metadata gathering, and contextual metadata formalisation respectively. The findings indicate that the reviewed context models have three elements of context in common. However, these formal models are inconsistent with the contextual metadata gathered in the approaches described in the reviewed literature, in which location metadata are the most common. Furthermore, the contextual metadata gathered in these initiatives end up in a variety of data structures. One reason for the dominance of location metadata might be the clear understanding of their ontological dimension. Therefore, the author introduces the notion of ontological dimensions in the area of contextual metadata. Conceiving contextual metadata as a set of ontological dimensions makes it possible to divide the endeavour of formalising elements of context into manageable information chunks. Through identifying and reaching a consensus on the constituent parts of these dimensions, it is the belief of the author that we are well on our way towards improved interoperability, metadata interpretability and digital content reusability facilitated by contextual metadata.

  • Integrating Lecture Recordings with Social Networks

    Page(s): 18 - 22

    This paper describes an approach to integrating lecture recordings, based on the virtPresenter framework, within social networks. Many social networks have recently begun to provide application programming interfaces (APIs) for external applications, allowing access to users' profile information and their friendship relations. The project "social virtPresenter" combines the resulting social graph with the existing lecture recordings. Thus, "social virtPresenter" provides a basis for a new collaborative multimedia learning process.

  • Video Sequence Deinterlacing Using Intensity Gradient Filter and Median Filter with Texture Detection

    Page(s): 23 - 28

    In this paper, we propose a new de-interlacing algorithm for video data using an intensity gradient filter and a median filter with texture detection in the image block. We first introduce the texture detection: based on its result, the current region is classified as either a smooth region or a texture region. A smooth region is interpolated by the median filter, while for a texture region the missing pixel value is calculated using the intensity gradient filter. We thus analyze the local region features using texture detection and classify each missing pixel into one of two categories; then, based on the classification result, a different deinterlacing algorithm is activated in order to obtain the best performance. Experimental results show that the proposed algorithm performs well on a variety of moving sequences compared with traditional intra-field methods in the literature.

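    The smooth/texture split can be sketched as follows. This is an illustrative reconstruction under assumed details (a variance-based texture test and a three-direction, ELA-style gradient search), not the authors' exact filters.

    ```python
    import numpy as np

    def interpolate_missing_pixel(field, y, x, var_threshold=100.0):
        """Estimate a pixel of a missing line at (y, x); rows y-1 and y+1 hold known field lines."""
        above, below = field[y - 1].astype(int), field[y + 1].astype(int)
        window = np.concatenate([above[max(x - 1, 0):x + 2], below[max(x - 1, 0):x + 2]])
        if window.var() < var_threshold:
            return float(np.median(window))            # smooth region -> median filter
        # texture region -> interpolate along the direction of smallest intensity gradient
        diffs = {d: abs(above[x - d] - below[x + d])
                 for d in (-1, 0, 1)
                 if 0 <= x - d < len(above) and 0 <= x + d < len(below)}
        d = min(diffs, key=diffs.get)
        return (above[x - d] + below[x + d]) / 2.0
    ```
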
  • Virtualization as a Strategy for Maintaining Future Access to Multimedia Content

    Page(s): 29 - 32

    Virtualization and emulation techniques are getting better and better, and it thus feels natural to apply them to the problem of data preservation for future content access. In this paper, we virtualized a number of existing servers and clients and ran the virtual machines under VMware's hypervisor. Not all of the investigated operating systems could be virtualized successfully, and even with successful conversions, settings often had to be reconfigured and support for particular devices had to be added manually. It is concluded that many operating systems are not flexible enough to allow for a problem-free conversion to a virtual machine.

  • Advanced Bilinear Image Interpolation Based on Edge Features

    Page(s): 33 - 36

    A variety of image interpolation methods have been used to obtain high-resolution (HR) images. In this paper, we propose an advanced bilinear interpolation algorithm which improves edge components. The conventional bilinear image interpolation method suffers from serious blurring artifacts. In the experimental results, our proposed algorithm outperforms bilinear image interpolation in both objective and subjective quality.

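    For reference, a plain bilinear kernel is sketched below; the edge-adaptive weighting that the paper adds on top of it is not spelled out in the abstract and is therefore not reproduced.

    ```python
    import numpy as np

    def bilinear(img, y, x):
        """Sample a grayscale image at fractional coordinates (y, x) by bilinear interpolation."""
        y0, x0 = int(np.floor(y)), int(np.floor(x))
        y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
        wy, wx = y - y0, x - x0
        top = (1 - wx) * img[y0, x0] + wx * img[y0, x1]
        bottom = (1 - wx) * img[y1, x0] + wx * img[y1, x1]
        return (1 - wy) * top + wy * bottom   # uniform averaging across edges is what blurs them
    ```
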
  • S3: A Spectral and Spatial Sharpness Measure

    Page(s): 37 - 43

    This paper presents a block-based algorithm designed to measure the local perceived sharpness in an image. Our method utilizes both spectral and spatial properties of the image: For each block, we measure the slope of the magnitude spectrum and the total spatial variation. These measures are then adjusted to account for visual perception, and then the adjusted measures are combined via a weighted geometric mean. The resulting measure, S3 (Spectral and Spatial Sharpness), yields a perceived sharpness map in which greater values denote perceptually sharper regions. This map can be collapsed into a single index which quantifies the overall perceived sharpness of the whole image. We demonstrate the utility of the S3 measure for within-image and across-image sharpness prediction, for global blur estimation, and for no-reference image quality assessment of blurred images.

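    A minimal per-block sketch of the two ingredients named above follows. The squashing function, constants, and normalisations are assumptions made for illustration; the perceptual adjustments of the actual S3 measure are defined in the paper.

    ```python
    import numpy as np

    def block_sharpness(block, alpha=0.5):
        """Combine a spectral and a spatial sharpness cue for one grayscale block."""
        # spectral cue: slope of the radially averaged magnitude spectrum (log-log fit)
        f = np.abs(np.fft.fftshift(np.fft.fft2(block)))
        cy, cx = np.array(f.shape) // 2
        ys, xs = np.indices(f.shape)
        r = np.hypot(ys - cy, xs - cx).astype(int)
        radial = np.bincount(r.ravel(), f.ravel()) / np.maximum(np.bincount(r.ravel()), 1)
        freqs = np.arange(1, len(radial))
        valid = radial[1:] > 0
        slope = -np.polyfit(np.log(freqs[valid]), np.log(radial[1:][valid]), 1)[0]
        s_spectral = 1.0 / (1.0 + np.exp(3.0 - slope))   # assumed squashing, not the paper's
        # spatial cue: total variation of the block, roughly normalised
        tv = np.abs(np.diff(block, axis=0)).sum() + np.abs(np.diff(block, axis=1)).sum()
        s_spatial = tv / (255.0 * block.size)
        # weighted geometric mean of the two cues, as the abstract describes
        return s_spectral ** alpha * s_spatial ** (1 - alpha)

    rng = np.random.default_rng(0)
    noisy = rng.integers(0, 256, (32, 32)).astype(float)        # heavily textured block
    ramp = np.tile(np.linspace(0.0, 255.0, 32), (32, 1))        # smooth gradient block
    print(block_sharpness(noisy) > block_sharpness(ramp))       # True: texture scores sharper
    ```
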
  • The Visualization and Animation of Algorithmically Generated 3D Models Using Open Source Software

    Page(s): 44 - 49

    Investigations into creating 3D models using the open source application Blender and its accompanying scripting language Python are documented. Firstly, the principles of how virtual models are stored are explained, leading to their creation using mathematical algorithms. A plane is considered and a method for automated face generation outlined. Iterative midpoint displacement methods are then discussed and fractal surfaces produced. Methods for mapping the plane to different coordinate systems are described, with particular emphasis on the spherical coordinate system. IFS algorithms are explored and fractal examples visualized. Techniques for animating models are considered. Lastly, L-systems are investigated and 3D models generated. It is shown that Blender is able to visualize these models within limits, and that Python provides an effective method for creating a uniform library of algorithms.

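    The iterative midpoint-displacement idea can be shown in isolation. The sketch below works on a plain 1-D height profile and does not touch the Blender API; the paper applies the same principle to mesh vertices through Blender's Python scripting.

    ```python
    import random

    def midpoint_displacement(levels, roughness=0.5, seed=0):
        """Subdivide a flat 2-point profile, displacing each new midpoint by a shrinking random offset."""
        random.seed(seed)
        heights = [0.0, 0.0]
        scale = 1.0
        for _ in range(levels):
            refined = []
            for a, b in zip(heights, heights[1:]):
                refined += [a, (a + b) / 2.0 + random.uniform(-scale, scale)]
            heights = refined + [heights[-1]]
            scale *= roughness              # smaller displacements at finer levels -> fractal look
        return heights

    print(len(midpoint_displacement(4)))    # 17 points after 4 subdivision passes
    ```
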
  • Collaborative Multimedia Content Caching Algorithms for Mobile Ad-Hoc Networks

    Page(s): 50 - 57

    In this paper, we address the problem of collaborative video caching in mobile ad-hoc networks. We consider a network comprising a static video server with a wired interface to a gateway node equipped with wireless interfaces, while the other nodes require access to the video streams stored at the video server. In order to reduce the average access latency as well as enhance video accessibility, efficient video caching placement and replacement strategies at some of the distributed intermediate nodes across the network are crucial. Virtual backbone caching nodes are elected by executing the caching placement algorithm after the routing protocol phase. The simulation results indicate that the proposed collaborative aggregate cache mechanism can significantly improve video QoS in terms of packet loss and average packet delay.

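    As a stand-in for the segment cache kept at an elected intermediate node, the sketch below uses a plain LRU policy; the paper's actual placement and replacement strategies, and the backbone-node election, are only summarised in the abstract and are not reproduced here.

    ```python
    from collections import OrderedDict

    class SegmentCache:
        """Per-node video segment cache with LRU eviction (placeholder policy only)."""
        def __init__(self, capacity_segments):
            self.capacity = capacity_segments
            self.store = OrderedDict()          # segment id -> segment bytes

        def get(self, seg_id):
            if seg_id in self.store:
                self.store.move_to_end(seg_id)  # mark as most recently used
                return self.store[seg_id]
            return None                         # miss: fetch from a neighbour cache or the server

        def put(self, seg_id, data):
            self.store[seg_id] = data
            self.store.move_to_end(seg_id)
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict the least recently used segment
    ```
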
  • A New Adaptive Linear Interpolation Algorithm Using Pattern Weight Based on Inverse Gradient

    Page(s): 58 - 61

    Most conventional interpolation methods do not sufficiently consider the pattern of neighboring pixels, which causes quality degradation. Kim et al. proposed the new adaptive linear (NAL) algorithm to consider patterns near the interpolated value. However, it has a critical defect: it does not reflect how strongly each neighboring pixel influences the interpolated pixel. To remove this defect, we propose a new image interpolation method using adaptive weights based on the inverse gradient. Experimental results show that the proposed algorithm exhibits better performance than conventional algorithms in both objective and subjective criteria on a variety of images. In addition, the proposed method adds only a small computational burden compared with other algorithms.

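    The inverse-gradient weighting can be illustrated with a toy one-dimensional version; the actual pattern weights and neighbourhoods of the proposed method are assumptions here.

    ```python
    import numpy as np

    def inverse_gradient_interpolate(neighbors, eps=1e-3):
        """neighbors: intensities of the pixels surrounding the missing sample."""
        neighbors = np.asarray(neighbors, dtype=float)
        deviation = np.abs(neighbors - neighbors.mean())
        weights = 1.0 / (deviation + eps)       # small local gradient -> large weight
        return float((weights * neighbors).sum() / weights.sum())

    # the outlier 160 is weighted down, so the estimate stays near the 100-102 cluster
    print(inverse_gradient_interpolate([100, 102, 101, 160]))   # ~107, vs. a plain mean of ~116
    ```
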
  • Requirements for an Adaptive Multimedia Presentation System with Contextual Supplemental Support Media

    Page(s): 62 - 67

    Investigations into the requirements for a practical adaptive multimedia presentation system have led the authors to propose the use of a video segmentation process that provides contextual supplementary updates produced by users. Supplements consisting of tailored segments are dynamically inserted into previously stored material in response to questions from users. A proposal for the use of this technique is presented in the context of personalisation within a Virtual Learning Environment. During the investigation, a brief survey of advanced adaptive approaches revealed that adaptation may be enhanced by the use of manually generated metadata, and by automated or semi-automated use of metadata from stored context-dependent ontology hierarchies that describe the semantics of the learning domain.

  • A Drift-Reduced Hierarchical Wavelet Coding Scheme for Scalable Video Transmissions

    Page(s): 68 - 73

    Scalable video coding allows a video bitstream to be (partially) decoded in the face of communication deficiencies such as low bandwidth or loss of data, resulting in lower video quality. As the encoding is usually based on perfectly reconstructed frames, such deficiencies result in frames at the decoder that differ from the ones used in the encoder, therefore leading to errors accumulating in the decoder. This is commonly referred to as drift error. Drift-free scalable video coding methods, on the other hand, suffer from low performance, as they do not combine the residue encoding scheme of current standards such as MPEG-4 and H.264 with scalability characteristics. We propose a scalable video coding method which is based on the motion compensation and residue encoding methods found in current video standards, combined with the scalability property of the discrete wavelet transform. Our proposed method aims to reduce the drift error while preserving compression efficiency. Our results show that the drift error is greatly reduced when a hierarchical structure for frame encoding is introduced.

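    How the discrete wavelet transform supplies the scalability referred to above can be shown in a toy example; drift handling, motion compensation, and the hierarchical frame structure of the proposed coder are not modelled here. PyWavelets is assumed.

    ```python
    import numpy as np
    import pywt

    frame = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
    coeffs = pywt.wavedec2(frame, 'haar', level=2)
    # base layer: keep the coarsest approximation, drop (zero out) every detail subband
    base = [coeffs[0]] + [tuple(np.zeros_like(d) for d in level) for level in coeffs[1:]]
    low_quality = pywt.waverec2(base, 'haar')
    # enhancement layers would progressively restore the detail subbands
    print(float(np.abs(frame - low_quality).mean()))   # residual error of the base layer alone
    ```
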
  • Generation and Maintenance of Semantic Metadata for Personal Multimedia Document Management

    Page(s): 74 - 79

    Personal multimedia document management benefits from semantic Web technologies. However, an ontology-based document management system has to meet a number of challenges regarding flexibility, soundness, and controllability of the semantic data model. This paper presents an integrated approach for ontology-based multimedia document management, which covers the process of automated modeling of semantic descriptions for multimedia objects and allows for the domain-specific customization of the used ontology. Furthermore, the proposed approach addresses the problems of data validation and consolidation to ensure semantic descriptions of proper quality. We demonstrate the practicability of our concept by a prototypical implementation of a service platform for personal information management applications.

  • Streaming Mobile Multimedia Optimization for Video-Conferencing Scenarios

    Page(s): 80 - 85

    Video conferencing in mobile environments involves streaming real-time information among mobile nodes connected within a wireless environment. This kind of framework is difficult to deal with because of its specific requirements. In this paper, we propose an architecture based on adaptive applications that change their behavior depending on the network information gathered by the operating system. We demonstrate that we can provide better QoS for video-conferencing scenarios at minimal cost while maintaining high portability.

  • Rapidly Building Multimedia Management Interfaces for Ubiquitous Computing Services

    Page(s): 86 - 91

    This paper presents a component framework for building and operating multimedia interfaces for context-aware services in smart spaces. By using a compound-document technology, it provides physical entities, places, and services in smart spaces with multimedia components to annotate and control them. It can automatically assemble multimedia components into a multimedia interface for monitoring and managing the spaces according to the spatial containment relationships between their targets in the physical world by using underlying location-sensing systems. End-users can manually customize smart spaces through user-friendly GUI-based manipulations for editing documents. This paper presents the design for this framework and describes its implementation and practical application.

  • Fast Mode Decision Algorithm Using Efficient Block Skip Techniques for H.264 P Slices

    Page(s): 92 - 97

    In this paper, we propose a fast algorithm that reduces the complexity of inter mode decision in the H.264 encoder by adaptively minimizing the large number of calculations in the inter mode decision process. The main idea is to skip unnecessary macroblock modes. The proposed algorithm focuses on two block size modes, the 16×16 and 8×8 modes. The percentage of 16×16 block size modes is the largest in most sequences, which means that many redundant mode calculations can be removed. The percentage of the 8×8 block size mode is small, but the time it consumes in the encoder's mode decision is considerable. Therefore, if the unnecessary 8×8 block size mode calculations can be identified well, a large amount of time can be saved in the total encoding process. The experimental results show that the proposed algorithm can achieve up to a 43% speed-up with little PSNR loss and a barely noticeable increase in total encoded bits.

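    A toy version of such a block-skip test is sketched below; the thresholds, the SAD criterion, and the mode sets are assumptions made for illustration, not the paper's actual decision rules.

    ```python
    import numpy as np

    def candidate_modes(cur_mb, ref_mb, t_skip=500, t_8x8=2000):
        """Choose which inter modes to evaluate for one 16x16 macroblock (illustrative only)."""
        sad16 = int(np.abs(cur_mb.astype(int) - ref_mb.astype(int)).sum())
        if sad16 < t_skip:
            return ["SKIP", "16x16"]            # nearly static block: larger partitions suffice
        modes = ["16x16", "16x8", "8x16"]
        if sad16 > t_8x8:
            modes.append("8x8")                 # enough residual detail to justify the 8x8 search
        return modes
    ```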