
Intelligent Systems and their Applications, IEEE

Issue 5 • Sept.-Oct. 1999


Displaying Results 1 - 11 of 11
  • Integrating and using large databases of text, images, video, and audio

    Publication Year: 1999, Page(s): 34-35
    Cited by:  Papers (1)  |  Patents (2)


  • The Intelligent Classroom

    Publication Year: 1999, Page(s): 2-5
    Cited by:  Papers (5)

    Computer software is being designed under the principle that the more features it has, the better it is. Consequently, most people find learning to use a new product overwhelming. What good is having several hundred commands in your word processor if you can't find the ones you want and aren't even certain what most of the others do? The difficulty lies in the way people are expected to interact with their computers. All the effort lies with the users, who must decide what they want to achieve and deduce how they can do it. Intelligent systems should not restrict themselves to following this user-interaction paradigm; they should infer what their users are trying to do. In our research lab, we are developing the Intelligent Classroom, an automated presentation facility that a lecturer can interact with and control. In the Intelligent Classroom, we are enabling new modes of user interaction through multiple sensing modalities and plan recognition. The Classroom uses cameras and microphones to determine what the speaker is trying to do and then takes the actions it deems appropriate. One of our goals is to let the speaker interact with the Classroom as she would with an audiovisual assistant: through commands (speech, gesture, or both) or by just making her presentation and trusting the Classroom to do what she wants. One way the Classroom assists speakers is by controlling AV components such as VCRs and slide projectors. Additionally, the Classroom lets speakers easily produce fair-quality lecture videos. Based on the speaker's actions, the video cameras pan, tilt, and zoom to best capture what is important. This will allow the presentation of interesting lectures on cable TV, the distribution of videos of entire classes, and the broadcasting of lectures to support distance learning, extending learning beyond the confines of a traditional classroom.

  • Retrieving related TV news reports and newspaper articles

    Publication Year: 1999, Page(s): 40-44

    Using TV newscasts and newspapers together can enable more effective communication. TV newscasts typically report events clearly and intuitively with speech and image information, but without much detail. In contrast, newspapers usually report the same events in greater detail but primarily use text information. However, using TV newscasts and newspapers together is difficult without aligning the newspaper articles with the TV news reports. To solve this problem, we propose an alignment method that extracts text from TV captions and newspaper articles. We also propose a method for extracting a newspaper article and its follow-up articles. With these methods, we've developed a system for browsing and retrieving newspaper articles and TV news reports.
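
    The abstract does not spell out the alignment algorithm itself. As a rough, purely illustrative sketch of the general idea, the Python below pairs a TV report's caption text with the most lexically similar newspaper article using a plain bag-of-words cosine similarity; the function names and the scoring scheme are assumptions, not the authors' method.

    import math
    from collections import Counter

    def bag_of_words(text):
        # Lower-cased word-count vector for a caption transcript or article body.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a if t in b)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def most_related_article(caption_text, articles):
        # articles: list of newspaper article bodies; returns (index, score).
        caption = bag_of_words(caption_text)
        scores = [cosine(caption, bag_of_words(body)) for body in articles]
        best = max(range(len(articles)), key=scores.__getitem__)
        return best, scores[best]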

  • Virtual spiders guide robotic control design

    Publication Year: 1999, Page(s): 77-84
    Cited by:  Papers (1)

    Traditionally, researchers have studied natural science phenomena by describing observations and by experimentally isolating and analyzing pertinent parameters and factors. As computational power has mushroomed in recent years, analysis of biological systems has increasingly involved formal models coupled with computer simulations. Lately, investigators have turned to individual-based models that mimic the complexity of biological processes by using repeated local interactions between units, described by a few simple behavior rules. By mimicking plant morphology or the problem-solving capabilities of “social” robots, computer scientists can use a comparable concept in simulation, applied chaos theory, artificial intelligence, and artificial life. In our system, called TheseusV, we used this approach to mimic the spatial orientation of spiders during web construction. As this article shows, behavioral principles of arthropods (spiders and insects) during orientation are interesting not only for research into the behavior of real animals, but also because their robustness and simplicity make them potentially quite useful for controlling autonomous agents such as insect robots. After all, most real bugs are notoriously good at coping with unpredictable environments, so lessons we learn from animal models can also enhance artificial-life models and offer new insight into spatial orientation to AI researchers.

  • Mixed-initiative interaction

    Publication Year: 1999, Page(s): 14-23
    Cited by:  Papers (42)  |  Patents (1)

    Presents three essays about the area of mixed-initiative interaction. The first essay introduces the area and creates a useful taxonomy of mixed-initiative dialog issues; its author summarizes several years' worth of research on mixed-initiative planning systems. The second essay describes the role of uncertainty in mixed-initiative interaction and describes two innovative systems for semi-automated assistance that make use of Bayesian reasoning. The final essay confronts the difficult task of evaluating such systems, including the creation of test sets and metrics for evaluating descriptive versus prescriptive dialogue models.

  • RiboWeb: an ontology-based system for collaborative molecular biology

    Publication Year: 1999, Page(s): 68-76
    Cited by:  Papers (15)

    RiboWeb is an online data resource for the ribosome, a vital cellular apparatus. It contains a large knowledge base of relevant published data and computational modules that can process this data to test hypotheses about the ribosome's structure.

  • Indexing flower patent images using domain knowledge

    Publication Year: 1999, Page(s): 24-33
    Cited by:  Papers (21)  |  Patents (1)

    The article explains how to query a database of flower patent images using both an example flower image and color names. This database consists of images that have been digitized from photographs submitted as part of applications for flower patents to the US Patent and Trademark Office. The database must support queries by both example images and color names, so that both those checking new patent applications and those buying patents for cultivation can use it.
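
    The indexing details are not given in this abstract. The sketch below is a hypothetical illustration of combining query-by-example with color names: each patent image is assumed to be pre-reduced to a normalized color histogram plus a set of dominant color names, and a query filters by color name before ranking by histogram distance. All identifiers (histogram_distance, search, the index layout) are invented for illustration.

    def histogram_distance(h1, h2):
        # L1 distance between two equal-length, normalized color histograms.
        return sum(abs(a - b) for a, b in zip(h1, h2))

    def search(query_hist, query_colors, index, top_k=10):
        # index: list of (patent_id, histogram, set_of_color_names).
        candidates = [
            (patent_id, histogram_distance(query_hist, hist))
            for patent_id, hist, names in index
            if not query_colors or (query_colors & names)   # color-name filter
        ]
        return sorted(candidates, key=lambda c: c[1])[:top_k]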

  • Part-whole reasoning: a case study in medical ontology engineering

    Publication Year: 1999, Page(s): 59-67
    Cited by:  Papers (8)  |  Patents (1)

    Clinical computing requires effective medical ontologies that can support large-scale formal reasoning. Our proposal lets the knowledge engineer, on demand, enable or disable transitivity of part-whole reasoning and part-whole induced concept specialization and role propagation, with respect to commonly shared medical conceptualizations.
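
    A minimal sketch of the toggle the abstract describes, not the authors' ontology formalism: direct part-of assertions are stored explicitly, and transitive part-whole reasoning can be switched on or off per query. The example anatomy facts and function names are illustrative only.

    # Direct (asserted) part-of relations.
    PART_OF = {
        "left ventricle": {"heart"},
        "heart": {"cardiovascular system"},
    }

    def is_part_of(part, whole, transitive=True, _seen=None):
        wholes = PART_OF.get(part, set())
        if whole in wholes:
            return True
        if not transitive:
            return False
        seen = (_seen or set()) | {part}          # guard against cycles
        return any(is_part_of(w, whole, True, seen) for w in wholes if w not in seen)

    # is_part_of("left ventricle", "cardiovascular system")                   -> True
    # is_part_of("left ventricle", "cardiovascular system", transitive=False) -> False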

  • Named Faces: putting names to faces

    Publication Year: 1999, Page(s): 45-50
    Cited by:  Papers (10)  |  Patents (2)

    To provide automatic labeling of faces in video, the author has developed Named Faces, a fully functional automated system that builds a large database of name-face association pairs from broadcast news. This article describes how the system detects and recognizes superimposed text in the video, then verifies or repairs the text by comparing it with a large list of automatically generated names found in news stories. Faces found in the video where superimposed names were recognized are tracked, extracted, and associated with the superimposed text. With Named Faces, users can submit queries to find names for faces in video images.
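
    The abstract gives the pipeline only in outline. Purely as an illustration of the final association step, the sketch below pairs each OCR'd name caption (verified against a list of known names) with face tracks whose time span overlaps it; the data layout and function names are assumptions, not the Named Faces implementation.

    def overlaps(a_start, a_end, b_start, b_end):
        return a_start <= b_end and b_start <= a_end

    def associate_names_with_faces(name_captions, face_tracks, known_names):
        # name_captions: [(name, start_time, end_time)] from superimposed-text OCR.
        # face_tracks:   [(track_id, start_time, end_time)] from face tracking.
        pairs = []
        for name, n_start, n_end in name_captions:
            if name not in known_names:          # drop unverified OCR output
                continue
            for track_id, t_start, t_end in face_tracks:
                if overlaps(n_start, n_end, t_start, t_end):
                    pairs.append((name, track_id))
        return pairs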

  • Image-retrieval agent: integrating image content and text

    Publication Year: 1999, Page(s): 36-39
    Cited by:  Papers (3)  |  Patents (7)

    The explosive growth of the Web and increased storage capacity have made multimedia ubiquitous, increasing the need to retrieve images, audio, and video. However, finding useful information by navigating the Web is difficult. Although search engines help users find Web information, most of these tools index only text and ignore image, audio, and video content. Partial solutions to this problem are using image caption text to characterize an image and using manually indexed keywords. Another solution is content-based image retrieval, which uses image-processing and computer-vision techniques to retrieve images from a repository using attributes such as color, texture, and form. We present an approach that combines a content-based image-retrieval technique using wavelets with a traditional text-retrieval algorithm to retrieve images from the Web. We incorporate these two methods in an agent called query by image content and its associated text (QBICAT) that creates a single index including features from the image and its textual description. After preprocessing each media type, the system creates a single index and uses a similarity metric for retrieval.
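
    The abstract states only that QBICAT folds image features and text features into a single index with a similarity metric; the feature extraction and weighting are not described. Under those stated assumptions, the sketch below treats both as plain numeric vectors, concatenates them with a tunable image weight, and ranks by cosine similarity. Everything here (combined_vector, the 0.5 weight, the index layout) is hypothetical.

    import math

    def combined_vector(image_features, text_features, image_weight=0.5):
        # Weighted concatenation of an image-feature vector and a text-feature vector.
        return [image_weight * v for v in image_features] + \
               [(1.0 - image_weight) * v for v in text_features]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def retrieve(query_vec, index, top_k=5):
        # index: list of (doc_id, combined_vector); highest-scoring matches first.
        scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in index]
        return sorted(scored, key=lambda s: -s[1])[:top_k]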

  • Learning to recognize speech by watching television

    Publication Year: 1999, Page(s): 51-58
    Cited by:  Papers (11)  |  Patents (3)

    Our proposed technique gathers large amounts of speech from open broadcast sources and combines it with automatically obtained text or closed captioning to identify suitable speech-training material. George Zavaliagkos and Thomas Colthurst worked on a related approach that uses confidence scoring on the acoustic data itself to improve performance in the absence of any transcribed data, but their approach yielded only marginal results. Our initial efforts also provided only limited success with small amounts of data. We describe our approach to collecting almost unlimited amounts of accurately transcribed speech data. This information serves as training data for the acoustic-model component of most high-accuracy speaker-independent speech-recognition systems. We align the error-ridden closed-caption text with the similarly error-ridden speech-recognizer output and assume that matching segments of sufficient length are reliable transcriptions of the corresponding speech. We then use these segments as the training data for an improved speech recognizer.
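
    The abstract describes the selection criterion (keep caption/recognizer matches of "sufficient length") but not the alignment procedure itself. Below is a minimal sketch under that assumption, using word-level matching from Python's standard library; the 10-word threshold and function name are illustrative, not the authors' parameters.

    from difflib import SequenceMatcher

    def reliable_segments(caption_text, recognizer_text, min_words=10):
        # Keep runs where the closed captions and the recognizer output agree
        # for at least min_words consecutive words; treat those runs as
        # presumed-reliable training transcripts.
        captions = caption_text.lower().split()
        recognized = recognizer_text.lower().split()
        matcher = SequenceMatcher(a=captions, b=recognized, autojunk=False)
        return [
            " ".join(captions[m.a:m.a + m.size])
            for m in matcher.get_matching_blocks()
            if m.size >= min_words
        ]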


Aims & Scope

This periodical ceased production in 2000. It continues under the title IEEE Intelligent Systems.
