
IEEE MultiMedia

Issue 2 • April-June 2005

  • [Front cover]

    Page(s): c1
    PDF (676 KB)
    Freely Available from IEEE
  • Multimedia in High Gear

    Page(s): c2
    PDF (112 KB)
  • Table of contents

    Page(s): 2 - 3
    PDF (631 KB)
    Freely Available from IEEE
  • Video blogging: content to the max

    Page(s): 4 - 8
    PDF (264 KB)

    The lure of video blogging combines the ubiquitous, grassroots, Web-based journaling of blogging with the richness of expression available in multimedia. Some claim that video blogging is an important force in a future world of video journalism and a powerful technical adjunct to our existing televised news sources. Others point to the huge demands it imposes on networking resources, the lack of hard standards, and the poor usability of current video blogging systems as indicators that it's doomed to fail. Like any nascent technology, video blogging has many unsolved problems. The field, however, is vibrant, the goals are fairly clear, and the challenges they pose to multimedia researchers are exciting indeed. Developing the standards and technologies for video blogging requires a combination of approaches from various areas, including media representation, information retrieval, multimedia content analysis, and video summarization. Like the development of the Web and of text blogging before it, video blogging will only come about through open development and collaboration between engineers and researchers from diverse fields. Most strikingly, it is fueled by the passion and enthusiasm of those creating content - those who go to the trouble of recording their lives and opinions within the fledgling medium, shaping it as a lively and useful resource for generations of Internet users to come.

  • Novel infrastructures for supporting mixed-reality experiences

    Page(s): 12 - 19
    PDF (696 KB)

    This article focuses on the challenge of managing the large amount of heterogeneous spatiotemporal mixed-reality data (such as audio/video files, GPS logs, and text messages) generated in a distributed, asynchronous fashion that must be indexed, annotated, synchronized, and replayed in the postproduction phase. As mixed-reality technologies grow and mature, this process becomes increasingly difficult. We face a vast amount of mixed-reality data as well as a vast number and diversity of services and tools. Producing meaningful content requires a systematic and integrated approach to managing audio and video streams rather than sophisticated tools or large development teams. We need an infrastructure that supports the provenance and context of often-transient resources (for example, data, tools, and services that are changed and updated) and that enables richer user interaction with those resources. More specifically, we identify key grid and peer-to-peer capabilities that can bring us closer to such a large-scale, generic record-and-reuse infrastructure for mixed-reality experiences.

  • Guest Editors' Introduction: An Introduction to Interactive Sonification

    Page(s): 20 - 24
    PDF (304 KB)

    The research field of sonification, a subset of the topic of auditory display, has developed rapidly in recent decades. It brings together interests from the areas of data mining, exploratory data analysis, human-computer interfaces, and computer music. Sonification presents information by using sound (particularly nonspeech), so that the user of an auditory display obtains a deeper understanding of the data or processes under investigation by listening.

  • Interactive sonification of choropleth maps

    Page(s): 26 - 35
    PDF (1144 KB)

    Auditory information is an important channel for the visually impaired. Effective sonification (the use of non-speech audio to convey information) promotes equal working opportunities for people with vision impairments by helping them explore data collections for problem solving and decision making. Interactive sonification systems can make georeferenced data accessible to people with vision impairments. The authors compare methods for using sound to encode georeferenced data patterns and for navigating maps.

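    The short Python sketch below is an editorial illustration only, not code from the article: it shows one simple way georeferenced values could be mapped to pitch as a user steps from region to region. The region names, value range, and MIDI note range are assumptions chosen for the example.

    # Hypothetical per-region values (for example, population density on a 0-100 scale).
    REGION_VALUES = {
        "Region A": 12.0,
        "Region B": 55.5,
        "Region C": 97.3,
    }

    PITCH_LO, PITCH_HI = 48, 84   # assumed MIDI note range for the mapping (C3..C6)

    def value_to_pitch(value, vmin=0.0, vmax=100.0):
        """Linearly map a data value onto a MIDI note number."""
        frac = (value - vmin) / (vmax - vmin)
        frac = min(max(frac, 0.0), 1.0)           # clamp out-of-range values
        return round(PITCH_LO + frac * (PITCH_HI - PITCH_LO))

    def sonify_region(name):
        """Return the tone that stepping onto region `name` would trigger."""
        return {"region": name,
                "midi_note": value_to_pitch(REGION_VALUES[name]),
                "duration_ms": 250}

    if __name__ == "__main__":
        # Simulate a user navigating three adjacent regions with the keyboard.
        for region in ["Region A", "Region B", "Region C"]:
            print(sonify_region(region))
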
  • HCI design and interactive sonification for fingers and ears

    Page(s): 36 - 44
    PDF (1032 KB)

    We examine the use of auditory display for ubiquitous computing to extend the boundaries of human-computer interaction (HCI). Our design process is based on listening tests, gathering free-text identification responses from participants. The responses and their classifications indicate how accurately sounds are identified and help us identify possible metaphors and mappings of sound to human action and/or system status.

  • Sonification of probabilistic feedback through granular synthesis

    Page(s): 45 - 52
    PDF (1584 KB)

    We describe a method to improve user feedback, specifically the display of time-varying probabilistic information, through asynchronous granular synthesis. We have applied these techniques to challenging control problems as well as to the sonification of online probabilistic gesture recognition. We're using these displays in mobile, gestural interfaces where visual display is often impractical.

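    The sketch below is an editorial illustration of the general idea of asynchronous granular synthesis driven by a probability value: the grain cloud grows denser as the probability rises. The grain length, density ceiling, and carrier frequency are assumptions, not parameters from the article.

    import numpy as np

    SR = 44100                # sample rate (Hz)
    GRAIN_MS = 30             # assumed grain length
    MAX_GRAINS_PER_SEC = 80   # assumed density ceiling

    def grain(freq_hz, sr=SR, ms=GRAIN_MS):
        """A single Hann-windowed sine grain."""
        n = int(sr * ms / 1000)
        t = np.arange(n) / sr
        return np.hanning(n) * np.sin(2 * np.pi * freq_hz * t)

    def sonify_probability(prob_trace, freq_hz=440.0, sr=SR, rng=None):
        """Render a probability trace (one value per second, in [0, 1]) to audio.

        Higher probability -> denser cloud of grains, giving a fuller texture.
        """
        rng = rng or np.random.default_rng(0)
        out = np.zeros(int(sr * len(prob_trace)))
        for sec, p in enumerate(prob_trace):
            n_grains = int(p * MAX_GRAINS_PER_SEC)    # density follows probability
            g = grain(freq_hz)
            for _ in range(n_grains):
                start = int((sec + rng.random()) * sr) # asynchronous (random) onsets
                end = min(start + len(g), len(out))
                out[start:end] += g[: end - start]
        return out / max(1.0, np.max(np.abs(out)))     # normalize

    audio = sonify_probability([0.1, 0.5, 0.9])        # e.g., rising recognition confidence
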
  • Movement sonification: Effects on perception and action

    Page(s): 53 - 59
    PDF (168 KB)

    Research shows that presenting stimuli in two modalities simultaneously enhances users' perception (as it does in speech perception, for example). In our experiments, we use sonification to transform parameters of human movement patterns into sound to enhance perception accuracy. This article also presents further features of the human perceptual system, such as multisensory integration and perceptual stream dissociation, with regard to effective interactive sonification.

  • Continuous sonic feedback from a rolling ball

    Page(s): 60 - 69
    PDF (1912 KB)

    Balancing a ball along a tiltable track is a control metaphor for a variety of continuous control tasks. The authors designed the Ballancer experimental tangible interface to exploit this metaphor. Direct, model-based sonification of the rolling ball improves the experience and effectiveness of the interaction.

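    As an editorial illustration only, the sketch below renders a crude rolling-like sound whose loudness and brightness follow a ball's speed. It is a stand-in for the idea of continuous, model-based feedback, not the authors' physical model, and every constant in it is assumed.

    import numpy as np

    SR = 44100

    def rolling_sound(speed_trace, sr=SR, rng=None):
        """Render filtered noise whose loudness and brightness follow the ball's speed.

        `speed_trace` holds one speed sample (m/s) per 10 ms audio block.
        """
        rng = rng or np.random.default_rng(0)
        block = sr // 100                         # 10 ms blocks
        out = np.zeros(block * len(speed_trace))
        prev = 0.0
        for i, speed in enumerate(speed_trace):
            noise = rng.standard_normal(block)
            # Faster rolling -> brighter sound: lighter one-pole low-pass smoothing.
            alpha = min(0.95, 0.2 + 0.1 * speed)
            for n in range(block):
                prev = alpha * noise[n] + (1 - alpha) * prev
                out[i * block + n] = prev * speed # loudness scales with speed
        return out / max(1.0, np.max(np.abs(out)))

    audio = rolling_sound([0.0, 0.5, 1.0, 2.0, 1.0, 0.2])   # speeding up, then slowing
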
  • Parametric orchestral sonification of EEG in real time

    Page(s): 70 - 79
    PDF (328 KB)

    The authors introduce a device for the parametric sonification of electroencephalographic (EEG) data. The device allows auditory feedback of multiple EEG characteristics in real time. Six frequency bands are assigned to instruments on a MIDI device, and the time-dependent parameters modulate the timing, pitch, and volume of the instruments. Using this device, we studied subjects' ability to perform a discrimination task with parametric sonification in real time.

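    The Python sketch below is an editorial illustration of parameter mapping in this spirit: the six band names, the instrument assignments, and the scaling constants are all assumptions for demonstration, not the mapping used by the authors.

    # Hypothetical power estimates (arbitrary units) for six EEG frequency bands.
    BAND_POWER = {
        "delta": 0.8, "theta": 0.5, "alpha": 1.2,
        "beta1": 0.3, "beta2": 0.2, "gamma": 0.1,
    }

    # One (assumed) General MIDI program per band, i.e., one "instrument" each.
    BAND_INSTRUMENT = {
        "delta": 42, "theta": 33, "alpha": 0,
        "beta1": 40, "beta2": 56, "gamma": 73,
    }

    def band_to_event(band, power, base_note=60):
        """Turn one band's current power into a note event.

        Power modulates volume (velocity) and, mildly, pitch; higher power also
        shortens the inter-onset interval so busier bands play more often.
        """
        velocity = min(127, int(40 + 60 * power))
        pitch = base_note + round(4 * power)
        interval_ms = max(100, int(600 / (0.5 + power)))
        return {"program": BAND_INSTRUMENT[band], "note": pitch,
                "velocity": velocity, "next_onset_ms": interval_ms}

    if __name__ == "__main__":
        for band, power in BAND_POWER.items():
            print(band, band_to_event(band, power))
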
  • Navigation with auditory cues in a virtual environment

    Page(s): 80 - 86
    PDF (1152 KB)

    The authors use 3D sound to help navigate an immersive virtual environment and report results of user tests obtained with a game-like application. The results show that auditory cues help in navigation, and auditory navigation is possible even without any visual feedback. The best performance is obtained in audiovisual navigation where auditory cues indicate the approximate direction and visual cues help in the final approach.

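    As a rough editorial illustration of direction-dependent auditory cues, the sketch below derives constant-power stereo gains and a distance-based volume from a target's bearing relative to the listener. The geometry and gain laws are assumptions, not the authors' spatialization method.

    import math

    def navigation_cue(listener_xy, heading_deg, target_xy):
        """Return left/right gains and a distance-based volume for one audio beacon."""
        dx = target_xy[0] - listener_xy[0]
        dy = target_xy[1] - listener_xy[1]
        distance = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dx, dy)) - heading_deg   # 0 = straight ahead
        bearing = (bearing + 180) % 360 - 180                      # wrap to [-180, 180)

        pan = math.sin(math.radians(bearing))     # -1 = hard left, +1 = hard right
        left = math.sqrt((1 - pan) / 2)           # constant-power panning
        right = math.sqrt((1 + pan) / 2)
        volume = 1.0 / (1.0 + distance)           # closer targets sound louder
        return {"left": left, "right": right, "volume": volume, "bearing_deg": bearing}

    print(navigation_cue(listener_xy=(0, 0), heading_deg=0, target_xy=(3, 4)))
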
  • Upcoming Events

    Page(s): 87
    PDF (33 KB)
  • Interactive Television

    Page(s): 88 - 89
    PDF (112 KB)
  • User-controlled, multimedia-enhanced communication using prior knowledge and experience

    Page(s): 90 - 95
    PDF (112 KB)

    This article focuses on enriching the communication experience between human beings interacting through different kinds of devices, including mobile phones and PDAs. The key point is how to effectively exploit multimedia data to enhance the richness of communication without overloading the communication channel. To this end, the system tries to minimize the quantity of information transmitted over the networks and to maximize the usage of locally stored information. The proposed system sends several kinds of data, including continuous video, video clips, still images, and avatars, over the communication channel. The specific data that are sent are based on the parameters set by the transmitting and receiving terminals. Bandwidth-consuming data, such as continuous video, are transmitted only occasionally to the receiving terminal. The system has been partially implemented and incorporated in avatar-enabled cellular phones. The user may then easily create different avatars by combining components characterizing faces, hair, and so on, and associate them with different callers. When a call comes through, the avatar corresponding to the caller is retrieved and displayed on the screen, and the corresponding information is retrieved from the personal database.

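    The sketch below is an editorial illustration of the kind of sender-side decision described above, choosing the richest representation the channel and receiver settings allow. The bandwidth costs, media names, and the function `choose_media` are assumptions; the article does not give these parameters.

    # Rough, assumed bandwidth needed (kbit/s) for each media type, richest first.
    MEDIA_COST = [
        ("continuous_video", 384),
        ("video_clip", 128),
        ("still_image", 32),
        ("avatar", 8),          # avatar components are assumed to be cached locally
    ]

    def choose_media(available_kbps, receiver_allows_video=True):
        """Pick the richest representation the channel and receiver settings allow."""
        for media, cost in MEDIA_COST:
            if not receiver_allows_video and media in ("continuous_video", "video_clip"):
                continue
            if cost <= available_kbps:
                return media
        return "avatar"          # fall back to locally stored avatar data

    print(choose_media(available_kbps=64))                                 # -> "still_image"
    print(choose_media(available_kbps=500, receiver_allows_video=False))   # -> "still_image"
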
  • ISMA interoperability and conformance

    Page(s): 96 - 102
    PDF (94 KB)

    Ubiquitous streaming of rich media has long been one of the most difficult challenges, and at the same time it has inspired some of the most rewarding killer applications. With the increasing bandwidth available to users, the expanding pervasiveness of multimedia-ready devices, and the growth in rich media content, the dream of streaming rich media is coming closer to reality. However, interoperability remains one of the important challenges. The Internet Streaming Media Alliance (ISMA) is working toward the goal of interoperability of streaming rich media (video, audio, and data) over Internet protocol (IP) networks by developing open streaming standards. Some of ISMA's interoperability testing work takes the form of plugfests that provide intense interactions and exchanges of media streams among tools and systems. This article describes how ISMA addresses interoperability testing and conformance, working toward the vision of seamlessly interworking streaming media devices.

  • A true multimedia client

    PDF (192 KB)

    A true multimedia device is emerging that might become a powerful tool in the age of converged communication, computing, and content (CCC). Communication, computing, and content are converging, but in this convergence people tend to view the converged space from the viewpoint of their own area. Thus, it's not uncommon to see CCC dominated by one area. The most interesting example is that of phones. People started realizing that mobile phones are powerful and are likely to become a dominant CCC device. Computing people got into action, and now we see an increasing number of computer-like phones appearing. These devices often have full keyboards for interfacing with the Internet and email. What's equally interesting is that these devices are meant not only for email but also for browsing the Internet and even receiving documents.

  • New Products

    Page(s): c3
    PDF (328 KB)
    Freely Available from IEEE

Aims & Scope

The magazine contains technical information covering a broad range of issues in multimedia systems and applications.


Meet Our Editors

Editor-in-Chief
John R. Smith
IBM T.J. Watson Research Center