
Proceedings of the IEEE

Issue 5 • May 1998


Contents: 18 articles
  • Special Issue on Multimedia Signal Processing, Part I [Scanning the Issue]

    Publication Year: 1998, Page(s): 751-754
    Cited by: Papers (1)
    PDF (37 KB) | Freely available from IEEE
  • Television: Past, Present, and Future

    Publication Year: 1998, Page(s): 998-1004
    Cited by: Papers (2)
    PDF (96 KB)

  • Description of an Experimental Television System and the Kinescope

    Publication Year: 1998, Page(s): 1005-1012
    PDF (152 KB)

  • The Information Science and Industry Fifty Years Hence

    Publication Year: 1998, Page(s): 1013-1014
    PDF (18 KB)

  • Comments on "The Information Science and Industry Fifty Years Hence" by R.M. Bowie

    Publication Year: 1998, Page(s): 1015-1017
    PDF (24 KB)

  • R&D Project Selection [Book Reviews]

    Publication Year: 1998, Page(s): 1018-1019
    PDF (14 KB) | Freely available from IEEE
  • Charles L.G. Fortescue and the method of symmetrical components [Scanning the Past]

    Publication Year: 1998, Page(s): 1020-1021
    PDF (56 KB)

  • Toward multimodal human-computer interface

    Publication Year: 1998, Page(s): 853-869
    Cited by: Papers (71) | Patents (17)
    PDF (248 KB)

    Recent advances in various signal processing technologies, coupled with an explosion in the available computing power, have given rise to a number of novel human-computer interaction (HCI) modalities: speech, vision-based gesture recognition, eye tracking, electroencephalography, etc. Successful embodiment of these modalities into an interface has the potential of easing the HCI bottleneck that has become noticeable with the advances in computing and communication. It has also become increasingly evident that the difficulties encountered in the analysis and interpretation of individual sensing modalities may be overcome by integrating them into a multimodal human-computer interface. We examine several promising directions toward achieving multimodal HCI. We consider some of the emerging novel input modalities for HCI and the fundamental issues in integrating them at various levels, from early signal level to intermediate feature level to late decision level. We discuss the different computational approaches that may be applied at the different levels of modality integration. We also briefly review several demonstrated multimodal HCI systems and applications. Despite all the recent developments, it is clear that further research is needed for interpreting and fusing multiple sensing modalities in the context of HCI. This research can benefit from many disparate fields of study that increase our understanding of the different human communication modalities and their potential role in HCI.

  • VBR video: tradeoffs and potentials

    Publication Year: 1998, Page(s): 952-973
    Cited by: Papers (97) | Patents (14)
    PDF (260 KB)

    The authors examine the transport and storage of video compressed with a variable bit rate (VBR). They focus primarily on networked video, although they also briefly consider other applications of VBR video, including satellite transmission (channel sharing), playback of stored video, and wireless transport. Packet video research requires careful integration between the network and the video systems; however, a major stumbling block has resulted because commonly used terms are often interpreted differently by the video and networking communities. The paper, then, has two main goals: (i) to clarify the definitions of terms that are often used with different meanings by networking and video-coding researchers and (ii) to explore the tradeoffs entailed by each of the various modalities of VBR transmission (unconstrained, shaped, constrained, and feedback). In particular, they evaluate the tradeoffs among the advantages (better video quality, less delay, and more calls) that were identified by early proponents of VBR video transmission. An underlying theme of this paper is that increased interaction between the video and network design has the potential to improve overall decoded video quality without changing the network capacity.

  • Next-generation content representation, creation, and searching for new-media applications in education

    Publication Year: 1998, Page(s): 884-904
    Cited by: Papers (19) | Patents (11)
    PDF (468 KB)

    Content creation, editing, and searching are extremely time-consuming tasks that often require substantial training and experience, especially when high-quality audio and video are involved. New media represents a new paradigm for multimedia information representation and processing, in which the emphasis is placed on the actual content. It thus brings the tasks of content creation and searching much closer to actual users and enables them to be active producers of audio-visual information rather than passive recipients. We discuss the state of the art and present next-generation techniques for content representation, searching, creation, and editing. We discuss our experiences in developing a Web-based distributed compressed-video editing and searching system (WebClip), a media-representation language (Flavor) and an object-based video authoring system (Zest) based on it, and a large image/video search engine for the World Wide Web (WebSEEk). We also present a case study of new media applications based on specific planned multimedia education experiments with the above systems in several K-12 schools in Manhattan, NY.

  • Error control and concealment for video communication: a review

    Publication Year: 1998, Page(s): 974-997
    Cited by: Papers (641) | Patents (78)
    PDF (416 KB)

    The problem of error control and concealment in video communication is becoming increasingly important because of the growing interest in video delivery over unreliable channels such as wireless networks and the Internet. This paper reviews the techniques that have been developed for error control and concealment. These techniques are described in three categories according to the roles that the encoder and decoder play in the underlying approaches. Forward error concealment includes methods that add redundancy at the source end to enhance error resilience of the coded bit streams. Error concealment by postprocessing refers to operations at the decoder to recover the damaged areas based on characteristics of image and video signals. Last, interactive error concealment covers techniques that are dependent on a dialogue between the source and destination. Both current research activities and practice in international standards are covered.

  • Toward the creation of a new medium for the multimedia era

    Publication Year: 1998, Page(s): 825-836
    Cited by: Papers (7)
    PDF (464 KB)

    It is expected that various new types of telecommunications services will emerge based on multimedia technologies. A concept of hypercommunication is proposed that merges human communication in cyberspace involving various people in different places, different times, and even different cultures, as well as communication with human-like agents generated by computers. The technologies necessary to realize this new telecommunications concept are described, along with several examples of research under way at ATR Media Integration and Communications Research Laboratories.

  • On the applications of multimedia processing to communications

    Publication Year: 1998, Page(s): 755-824
    Cited by: Papers (18) | Patents (28)
    PDF (1320 KB)

    The challenge of multimedia processing is to provide services that seamlessly integrate text, sound, image, and video information and to do it in a way that preserves the ease of use and interactivity of conventional plain old telephone service (POTS) telephony. To achieve this goal, there are a number of technological problems that must be considered, including: compression and coding of multimedia signals, including algorithmic issues, standards issues, and transmission issues; synthesis and recognition of multimedia signals, including speech, images, handwriting, and text; organization, storage, and retrieval of multimedia signals, including the appropriate method and speed of delivery, resolution, and quality of service; access methods to the multimedia signal, including spoken natural language interfaces, agent interfaces, and media conversion tools; searching by text, speech, and image queries; browsing by accessing the text, by voice, or by indexed images. In each of these areas, a great deal of progress has been made in the past few years, driven in part by the relentless growth in multimedia personal computers and in part by the promise of broad-band access from the home and from wireless connections. Standards have also played a key role in driving new multimedia services, both on the POTS network and on the Internet. It is the purpose of this paper to review the status of the technology in each of the areas listed above and to illustrate current capabilities by describing several multimedia applications that have been implemented at AT&T Labs over the past several years.

  • Fundamental and technological limitations of immersive audio systems

    Publication Year: 1998, Page(s): 941-951
    Cited by: Papers (28) | Patents (10)
    PDF (164 KB)

    Numerous applications are currently envisioned for immersive audio systems. The principal function of such systems is to synthesize, manipulate, and render sound fields in real time. We examine several fundamental and technological limitations that impede the development of seamless immersive audio systems. Such limitations stem from signal processing requirements, acoustical considerations, human listening characteristics, and listener movement. We present a brief historical overview to outline the development of immersive audio technologies and discuss the performance and future research directions of immersive audio systems with respect to such limits. Last, we present a novel desktop audio system with integrated listener-tracking capability that circumvents several of the technological limitations faced by today's digital audio workstations.

  • Structured audio: creation, transmission, and rendering of parametric sound representations

    Publication Year: 1998, Page(s): 922-940
    Cited by: Papers (31) | Patents (3)
    PDF (560 KB)

    Structured audio representations are semantic and symbolic descriptions that are useful for ultralow-bit-rate transmission, flexible synthesis, and perceptually based manipulation and retrieval of sound. We present an overview of techniques for transmitting and synthesizing sound represented in structured format, and for creating structured representations from audio waveforms. We discuss applications for structured audio in virtual environments, music synthesis, gaming, content-based retrieval, interactive broadcast, and other multimedia contexts.

  • Video indexing based on mosaic representations

    Publication Year: 1998, Page(s): 905-921
    Cited by: Papers (101) | Patents (33)
    PDF (512 KB)

    Video is a rich source of information. It provides visual information about scenes. This information is implicitly buried inside the raw video data, however, and comes at the cost of very high temporal redundancy. While the standard sequential form of video storage is adequate for viewing in a movie mode, it fails to support the rapid access to information of interest that is required in many of the emerging applications of video. This paper presents an approach for efficient access, use, and manipulation of video data. The video data are first transformed from their sequential and redundant frame-based representation, in which the information about the scene is distributed over many frames, to an explicit and compact scene-based representation, to which each frame can be directly related. This compact reorganization of the video data supports nonlinear browsing and efficient indexing to provide rapid access directly to information of interest. The paper describes a new set of methods for indexing into the video sequence based on the scene-based representation. These indexing methods are based on geometric and dynamic information contained in the video. They complement the more traditional content-based indexing methods, which utilize image appearance information (namely, color and texture properties), while being considerably simpler to achieve and highly computationally efficient.

  • Face to virtual face

    Publication Year: 1998, Page(s): 870-883
    Cited by: Papers (14) | Patents (4)
    PDF (276 KB)

    The first virtual humans appeared in the early 1980s in such films as Dreamflight (1982) and The Juggler (1982). Pioneering work in the ensuing period focused on realistic appearance in the simulation of virtual humans. In the 1990s, the emphasis has shifted to real-time animation and interaction in virtual worlds. Virtual humans have begun to inhabit virtual worlds, and so have we. To prepare our place in the virtual world, we first develop techniques for the automatic representation of a human face capable of being animated in real time using both video and audio input. The objective is for one's representative to look, talk, and behave like oneself in the virtual world. Furthermore, the virtual inhabitants of this world should be able to see our avatars and to react to what we say and to the emotions we convey. We sketch an overview of the problems related to the analysis and synthesis of face-to-virtual-face communication in a virtual world. We describe the different components of our system for real-time interaction and communication between a cloned face representing a real person and an autonomous virtual face. The paper provides insight into the various problems and gives the particular solutions adopted in reconstructing a virtual clone capable of reproducing the shape and movements of the real person's face. It includes the analysis of the facial expression and speech of the cloned face, which can be used to elicit a response from the autonomous virtual human, with both verbal and nonverbal facial movements synchronized with the audio voice.

  • Audio-visual integration in multimodal communication

    Publication Year: 1998, Page(s): 837-852
    Cited by: Papers (76) | Patents (4)
    PDF (304 KB)

    We review recent research that examines audio-visual integration in multimodal communication. The topics include bimodality in human speech, human and automated lip reading, facial animation, lip synchronization, joint audio-video coding, and bimodal speaker verification. We also study the enabling technologies for these research topics, including automatic facial-feature tracking and audio-to-visual mapping. Recent progress in audio-visual research shows that joint processing of audio and video provides advantages that are not available when the audio and video are processed independently.

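Several of the abstracts above turn on one concrete idea. For instance, the error-control review (pp. 974-997) groups concealment methods by where the work happens; the simplest decoder-side ("postprocessing") approach replaces each damaged block with the co-located block from the previous frame. A minimal sketch in Python (the frame layout, function name, and mask format below are illustrative assumptions, not the paper's notation):

```python
def conceal_temporal(prev_frame, curr_frame, lost_mask, block=8):
    """Replace damaged blocks in curr_frame with the co-located
    blocks from prev_frame (zero-motion temporal replacement).

    Frames are 2-D lists of pixel values; lost_mask is a 2-D list
    of booleans, one entry per block, True where data was lost."""
    out = [row[:] for row in curr_frame]
    h, w = len(curr_frame), len(curr_frame[0])
    for y in range(0, h, block):
        for x in range(0, w, block):
            if lost_mask[y // block][x // block]:
                for r in range(y, min(y + block, h)):
                    out[r][x:x + block] = prev_frame[r][x:x + block]
    return out

# Toy example: 16x16 frames split into a 2x2 grid of 8x8 blocks;
# the top-right block of the current frame is marked lost.
prev = [[0] * 16 for _ in range(16)]
curr = [[200] * 16 for _ in range(16)]
lost = [[False, True], [False, False]]
fixed = conceal_temporal(prev, curr, lost)
```

Real decoders refine this with motion-compensated replacement and spatial interpolation from neighboring blocks; the sketch shows only the zero-motion special case.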

Aims & Scope

The most highly cited general-interest journal in electrical engineering and computer science, the Proceedings is the best way to stay informed about an exemplary range of topics.


Meet Our Editors

Editor-in-Chief
H. Joel Trussell
North Carolina State University