
Proceedings of the 2001 International Conference on Information Technology: Coding and Computing

Date: 2-4 April 2001


Displaying Results 1 - 25 of 126
  • International Conference On Information Technology: Coding And Computing [front matter]

    Publication Year: 2001 , Page(s): i - xii
  • Special session on media streaming [session intro.]

    Publication Year: 2001 , Page(s): 1

    Summary form only given. This special session is devoted to addressing some of the main technological challenges for media streaming, including rate adaptation and scalable coding for heterogeneous networks, QoS (quality of service) and congestion control, streaming systems and architectures, and packet-loss-resilient coding and transport.

  • Author index

    Publication Year: 2001 , Page(s): 695 - 698
  • Constraint database query evaluation with approximation

    Publication Year: 2001 , Page(s): 634 - 638

    Considers the problem of solving a large number of simple systems of linear constraints. This problem occurs in the context of constraint databases. The developed methodology is based on a hierarchical evaluation of the constraints, which are first simplified and replaced by approximations. We focus on systems of linear constraints over the reals, which model spatial objects, and we consider both geometric and topological approximations, defined with very simple constraints. We show that these constraints can be used either to solve the initial systems, or at least to filter out unsatisfiable systems. The main contribution of the paper is the development of a set of rewriting rules that allow the transformation of spatial object queries into equivalent ones that make use of the approximation, reducing the query evaluation cost.

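    Illustrative sketch (not code from the paper): one common way to realize the filtering idea is to approximate each spatial object, given as a conjunction of linear constraints A x <= b, by an axis-aligned bounding box, and to run an exact feasibility test only when the boxes overlap. The helper names and the use of scipy's LP solver are assumptions made for illustration.

```python
# Sketch: filter unsatisfiable constraint systems with bounding-box
# approximations before running an exact feasibility test.
# Hypothetical helper names; not code from the paper.
import numpy as np
from scipy.optimize import linprog

def bounding_box(A, b, lo=-1e6, hi=1e6):
    """Approximate {x : A x <= b} by an axis-aligned box (one LP per axis)."""
    n = A.shape[1]
    box = []
    for i in range(n):
        c = np.zeros(n)
        c[i] = 1.0
        mn = linprog(c, A_ub=A, b_ub=b, bounds=[(lo, hi)] * n)
        mx = linprog(-c, A_ub=A, b_ub=b, bounds=[(lo, hi)] * n)
        if mn.status != 0 or mx.status != 0:      # empty object
            return None
        box.append((mn.fun, -mx.fun))
    return box

def boxes_intersect(b1, b2):
    return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(b1, b2))

def objects_intersect(A1, b1, A2, b2):
    """Exact test: is the combined system A1 x <= b1, A2 x <= b2 satisfiable?"""
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    r = linprog(np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                bounds=[(None, None)] * A.shape[1])
    return r.status == 0

# Two triangles in the plane; the cheap box test prunes the exact LP call.
tri1 = (np.array([[-1, 0], [0, -1], [1, 1]]), np.array([0, 0, 1]))
tri2 = (np.array([[-1, 0], [0, -1], [1, 1]]), np.array([-3, -3, 8]))
bb1, bb2 = bounding_box(*tri1), bounding_box(*tri2)
if bb1 is None or bb2 is None or not boxes_intersect(bb1, bb2):
    print("filtered out by approximation")
else:
    print("exact test:", objects_intersect(*tri1, *tri2))
```
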
  • An overview of security issues in streaming video

    Publication Year: 2001 , Page(s): 345 - 348
    Cited by:  Papers (1)

    The article describes some of the security issues in streaming video over the Internet. If high quality video sequences are to be delivered to computers and digital television systems over the Internet in our “digital future”, this material must be protected.

  • System fusion for improving performance in information retrieval systems

    Publication Year: 2001 , Page(s): 639 - 643
    Cited by:  Patents (2)

    The fusion of various retrieval strategies has long been suggested as a means of improving retrieval effectiveness. To date, testing of fusion has been done by combining result sets from widely disparate approaches that include an uncontrolled mixture of retrieval strategies and utilities. To isolate the effect of fusion on individual retrieval models, we have implemented probabilistic, vector-space and weighted Boolean models and tested the effect of fusion on these strategies in a systematic fashion. We also tested the effect of fusion on various query representations and have shown up to a 12% improvement in the average precision.

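    As a point of reference only, the sketch below shows a generic score-level fusion rule (CombSUM with min-max normalization) over placeholder result sets; it is not necessarily the combination rule used in the paper.

```python
# Sketch of score-level fusion (CombSUM with min-max normalization).
# The per-model result sets are placeholders, not data from the paper.
def normalize(scores):
    """Min-max normalize a {doc_id: score} run to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {d: (s - lo) / span for d, s in scores.items()}

def combsum(runs):
    """Sum normalized scores across retrieval strategies."""
    fused = {}
    for run in runs:
        for doc, score in normalize(run).items():
            fused[doc] = fused.get(doc, 0.0) + score
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

probabilistic = {"d1": 2.3, "d2": 1.1, "d3": 0.4}
vector_space  = {"d2": 0.80, "d1": 0.75, "d4": 0.20}
weighted_bool = {"d1": 5.0, "d4": 3.0}
print(combsum([probabilistic, vector_space, weighted_bool]))
```
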
  • Developing a standards agenda for safety critical multimedia systems

    Publication Year: 2001 , Page(s): 295 - 299
    Cited by:  Papers (1)

    There is an increasing demand for hypermedia delivery of technical documentation for safety-critical systems. In many ways this documentation is as safety-critical as the systems it documents, in that errors in the documentation can cause catastrophic maintenance failures. For this reason there need to be standards regulating the production of hypermedia documentation in these areas, but most of the work being done is directed towards data formats for storage and delivery rather than towards the quality of content. The paper discusses the types of standard that might be relevant to this activity, along with some examples, taken from the author's experience, of instances of each type of standard. This discussion is used as a basis for proposing a standards framework for hypermedia technical documentation systems and a research agenda, based on some of the author's work, for the development of such standards.

  • Local maximum likelihood multiuser detection for CDMA communications

    Publication Year: 2001 , Page(s): 307 - 311
    Cited by:  Papers (7)

    The optimum multiuser detector achieves global maximum likelihood and has a complexity growing exponentially with the number of users. We propose local maximum likelihood (LML) multiuser detectors with an arbitrary neighborhood size. As the neighborhood size is one, two, etc., up to the total number of users, the computational complexity of the LML detector is linear, quadratic, etc., up to exponential in the total number of users. Every LML detector is associated with a local minimum error probability defined with the corresponding neighborhood size. A family of local-maximum-likelihood likelihood-ascent-search (LMLAS) detectors is proposed, each of which is shown to be an LML detector. An LMLAS detector monotonically increases the likelihood step by step, and thus converges to an LML point in a finite number of search steps with probability one. Applied after any initial detector, an LMLAS detector reduces the error probability of the initial detector to a local minimum or, with probability one, leaves it unchanged when the initial detector is already an LML detector with the same or larger neighborhood size.

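    A minimal sketch of a likelihood-ascent search with neighborhood size one, under a generic synchronous CDMA model with unit-energy users; the notation and the choice of starting detector (matched filter) are assumptions for illustration, not details taken from the paper.

```python
# Sketch of a likelihood-ascent search with neighborhood size one:
# start from a conventional-detector decision and flip one bit at a time
# while the (synchronous CDMA) likelihood metric increases.
import numpy as np

def likelihood(b, y, R):
    """Objective equivalent to the log-likelihood for unit-energy users: 2 b'y - b'Rb."""
    return 2.0 * b @ y - b @ R @ b

def lmlas(y, R, b0):
    b = b0.copy()
    improved = True
    while improved:
        improved = False
        for k in range(len(b)):
            trial = b.copy()
            trial[k] = -trial[k]          # flip one user's bit
            if likelihood(trial, y, R) > likelihood(b, y, R):
                b, improved = trial, True
    return b

rng = np.random.default_rng(0)
K = 6                                                       # users
S = rng.choice([-1.0, 1.0], size=(32, K)) / np.sqrt(32)     # spreading codes
R = S.T @ S                                                 # correlation matrix
b_true = rng.choice([-1.0, 1.0], size=K)
r = S @ b_true + 0.3 * rng.standard_normal(32)              # received signal
y = S.T @ r                                                 # matched-filter outputs
b0 = np.sign(y)                                             # conventional detector as start
print("matched filter:", b0)
print("LMLAS:         ", lmlas(y, R, b0))
```
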
  • Maintenance of connected components in quadtree-based image representation

    Publication Year: 2001 , Page(s): 647 - 651
    Cited by:  Papers (1)

    In this paper, we consider the problem of maintaining connected components in the quadtree representation of binary images when a small portion of the image undergoes change. The batch approach of re-computing the connected-component information is very expensive. Our algorithms update the quadtree as well as the connected components' labeling when a homogeneous region in the quadtree changes. The updating algorithms visit fewer nodes than the batch approach.

  • Content-based retrieval and data mining of a skin cancer image database

    Publication Year: 2001 , Page(s): 611 - 615
    Cited by:  Papers (4)

    Skin cancer is the most common type of cancer in the USA. A large, shared skin cancer image database on the Internet would be quite valuable to both medical professionals and consumers. In this paper, such a database is created using a three-tier system: a client application implemented in Java applets, a Web server and a back-end database server. JDBC-ODBC is used for the Web server to communicate with the database server. Various browsing and content-based retrieval methods are supported for the skin cancer image database through Web-based graphical user interfaces. A data mining algorithm for finding association rules between different features of the skin cancer images is also implemented.

  • Adaptive thresholding of document images based on Laplacian sign

    Publication Year: 2001 , Page(s): 501 - 505
    Cited by:  Papers (5)

    We present a new technique for document image binarization that manages different situations in an image. The problems of image binarization caused by illumination, contrast, noise and source-type-related degradation are addressed. A new technique is applied to determine a local threshold for each pixel. The idea of our technique is to update the threshold value locally whenever the Laplacian sign of the input image changes along the raster-scanned line. The difference of Gaussians (DoG) is used to define the sign image. The proposed technique is tested on images containing different types of document components and degradations. The results were compared with a global thresholding technique. It is shown that the proposed technique performs well and is highly robust.

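    A hedged sketch of the ingredients described above: the difference-of-Gaussians sign image and a raster-scan threshold update. The specific update rule used here (adopt the gray level at each sign change as the new local threshold) is a guess for illustration, not the paper's exact rule.

```python
# Sketch: build a Difference-of-Gaussians sign image and use it to drive a
# raster-scan local threshold. The threshold-update rule is illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_sign(image, sigma1=1.0, sigma2=2.0):
    """Sign of the DoG response, a common approximation of the Laplacian sign."""
    dog = gaussian_filter(image, sigma1) - gaussian_filter(image, sigma2)
    return np.sign(dog)

def raster_binarize(image, sign_image, init_threshold=128):
    out = np.zeros_like(image, dtype=np.uint8)
    for i, (row, srow) in enumerate(zip(image, sign_image)):
        t = init_threshold
        prev = srow[0]
        for j, (g, s) in enumerate(zip(row, srow)):
            if s != prev and s != 0:      # Laplacian sign change: new local threshold
                t = g
                prev = s
            out[i, j] = 255 if g > t else 0
    return out

img = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(float)
binary = raster_binarize(img, dog_sign(img))
```
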
  • Spatial segmentation based on modified morphological tools

    Publication Year: 2001 , Page(s): 478 - 482
    Cited by:  Papers (3)

    The MPEG-4 visual standard supports the trend toward video object segmentation, and this paper presents a spatial segmentation process in line with MPEG-4. The spatial segmentation includes image simplification, a gradient operation and the watershed algorithm. The paper presents modified morphological tools for use in the segmentation process. First, morphological filters are used to simplify the image by an open-close operation by partial reconstruction while preserving the boundary information. Second, a multiscale gradient operator is used to reduce oversegmentation while preserving homogeneous region boundaries. This simplification and region labeling yield homogeneous regions and a small number of regions. Finally, the number of uncertain pixels is reduced before the watershed algorithm is applied.

  • Optimal maximal encoding different from Huffman encoding

    Publication Year: 2001 , Page(s): 493 - 497

    Novel maximal encoding, encoding, and maximal prefix encoding different from Huffman encoding are introduced. It is proven that for finite source alphabets all Huffman codes are optimal maximal codes, codes, and maximal prefix codes. Conversely, optimal codes of the above three types need not be Huffman codes. Completely analogously to Huffman codes, we prove that for every random variable with a countably infinite set of outcomes and with finite entropy there exists an optimal maximal code (code, maximal prefix code) which can be constructed from optimal maximal codes (codes, maximal prefix codes) for truncated versions of the random variable, and furthermore, that the average code word lengths of any sequence of optimal maximal codes (codes, maximal prefix codes) for the truncated versions converge to that of the optimal maximal code (code, maximal prefix code).

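    For reference, a minimal Huffman construction for a finite source alphabet, the baseline the paper compares against; the paper's alternative optimal maximal and maximal prefix constructions are not reproduced here.

```python
# Minimal Huffman code construction for a finite source alphabet.
import heapq
from collections import Counter

def huffman_code(freqs):
    """freqs: {symbol: weight} -> {symbol: bitstring}."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                            # single-symbol edge case
        return {s: "0" for s in freqs}
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)           # merge the two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, [w1 + w2, tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

freqs = Counter("this is an example of a huffman tree")
code = huffman_code(freqs)
avg_len = sum(freqs[s] * len(code[s]) for s in freqs) / sum(freqs.values())
print(code)
print("average code word length:", round(avg_len, 3))
```
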
  • Reconfigurable media processing

    Publication Year: 2001 , Page(s): 300 - 304
    Cited by:  Papers (1)

    Multimedia processing is becoming increasingly important for a range of applications. Existing approaches for processing multimedia data can be broadly classified into two categories, namely: (i) microprocessors with extended media processing capabilities; and (ii) dedicated implementations (ASICs). The complexity and variety of techniques and tools associated with multimedia processing point to the opportunities for reconfigurable computing devices that can adapt the underlying hardware dynamically in response to changes in the input data or processing environment. The paper proposes a novel approach to designing a dynamically reconfigurable processor by performing hardware/software co-design for a media processing application. As an example, the analysis of the shape coding module of MPEG-4 is chosen to demonstrate the potential for reconfigurability.

  • Data hiding techniques for printed binary images

    Publication Year: 2001 , Page(s): 55 - 59
    Cited by:  Papers (5)

    The objective of this research is to develop a method to hide information inside a binary image using digital halftoning techniques with certain modifications. Two modified digital halftoning techniques, modified ordered dithering and modified multiscale error diffusion, are used in this research. The data is encoded pixel by pixel in the halftone image, according to the position in the image and the sequence of binarization, respectively. An eye model and the mean square error are used to measure image quality. A computer vision method has been developed to recognize the printed binary image. The results show that thousands of binary images that look alike to human vision but are quite distinct from each other to computer vision can be generated. The eye model and computer vision are useful both for binary image quality measurement and for data recognition. These new techniques have great potential in printing security documents such as currency and ID cards, as well as confidential documents.

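    A toy sketch of the general idea of hiding bits while halftoning, here with plain ordered (Bayer) dithering and key-selected positions forced to the message bits; the paper's modified ordered dithering and modified multiscale error diffusion are more elaborate than this.

```python
# Toy illustration of hiding bits in a halftone image: ordered (Bayer)
# dithering, with the output dot forced to the message bit at key-selected
# positions. A simplification, not the paper's method.
import numpy as np

BAYER4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) / 16.0

def halftone_with_payload(gray, bits, seed=42):
    """gray: float image in [0, 1]; bits: iterable of 0/1 to embed."""
    h, w = gray.shape
    thresh = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    out = (gray > thresh).astype(np.uint8)
    rng = np.random.default_rng(seed)                 # shared key
    positions = rng.choice(h * w, size=len(bits), replace=False)
    for pos, bit in zip(positions, bits):
        out[pos // w, pos % w] = bit                  # force the dot to the bit
    return out

def extract_payload(halftone, n_bits, seed=42):
    h, w = halftone.shape
    rng = np.random.default_rng(seed)
    positions = rng.choice(h * w, size=n_bits, replace=False)
    return [int(halftone[pos // w, pos % w]) for pos in positions]

gray = np.linspace(0, 1, 64 * 64).reshape(64, 64)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
ht = halftone_with_payload(gray, bits)
assert extract_payload(ht, len(bits)) == bits
```
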
  • Practical capacity of digital watermark as constrained by reliability

    Publication Year: 2001 , Page(s): 85 - 89
    Cited by:  Papers (4)

    The paper presents a theoretical analysis of watermark capacity. First, a simplified watermark scheme is postulated. In this scheme, detection yields a multidimensional vector in which each dimension is assumed to be i.i.d. (independent and identically distributed) and to follow the Gaussian distribution. The major constraint on the capacity is detection reliability, which is one of the most important measures of the utility of watermarks. The problem is to determine the maximum amount of information payload with the reliability requirement still satisfied. The reliability is represented by three kinds of error rates: the false positive error rate, the false negative error rate, and the bit error rate. These error rates are formulated under certain assumptions, and the theoretical capacity can be determined by setting bounds on all of the error rates. Further, experiments were performed to verify the theoretical analysis, and it was shown that this approach yields a good estimate of the capacity of a watermark.

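    A hedged numerical sketch of the kind of payload-versus-reliability calculation described above, assuming antipodal spread-spectrum embedding in i.i.d. Gaussian noise; the model and parameters are illustrative, not the paper's.

```python
# How many payload bits fit in an n-sample i.i.d. Gaussian detection vector
# before the bit error rate exceeds a bound. Generic model, not the paper's.
from math import erfc, sqrt

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def bit_error_rate(n, m, wnr):
    """BER when m bits share n samples, each bit spread over n/m samples."""
    return q_function(sqrt((n / m) * wnr))

def max_payload(n, wnr, ber_bound=1e-6):
    """Largest m with BER still below the bound (linear scan for clarity)."""
    m = 0
    while bit_error_rate(n, m + 1, wnr) <= ber_bound:
        m += 1
    return m

n, wnr = 256 * 256, 0.01    # 256x256 detection vector, -20 dB watermark-to-noise ratio
print("payload bound:", max_payload(n, wnr), "bits at BER <= 1e-6")
```
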
  • Optimal algebraic coding of noisy and distorted images

    Publication Year: 2001 , Page(s): 537 - 541

    The traditional approach to coding of noisy and distorted images assumes preliminary enhancement and restoration of the images and only then their actual coding. A novel technique is proposed in which these procedures are combined into optimal algebraic coding. At the heart of the approach is the optimization of the coordinate basis of the image representation. It is shown that the eigenfunctions of Fisher's information operator, known from the statistical theory of estimation, have the required optimum properties. These functions, satisfying defined selection criteria, are called principal information components (PIC). A PIC-projection technique for improving an arbitrary image decomposition is proposed. The given technique allows the primary physical representation of an image to decrease the statistical error of its coding on noisy data.

  • Inverted-space storage organization for persistent data of very high dimensionality

    Publication Year: 2001 , Page(s): 616 - 621
    Cited by:  Patents (4)

    Contemporary database technology is severely limited in managing the high-dimensional data of many advanced applications, such as multimedia systems and data mining. The main concern of this paper is the well-known performance degradation of multi-dimensional access methods in spaces with many dimensions. The paper proposes an elaborate storage organization, called the inverted space, which can support efficient processing of data in spaces with very high dimensionality. The organization allows system administrators to control the size of spatial indexes and thereby to avoid the negative impact of extremely high data dimensionality on the retrieval performance. In addition, this paper introduces a new point access method designed to address numerous other problems that contemporary retrieval schemes experience in high-dimensional situations. This mechanism is envisioned to serve as the core indexing structure of inverted-space storage organizations.

  • Computation and performance trade-offs in motion estimation algorithms

    Publication Year: 2001 , Page(s): 263 - 267
    Cited by:  Papers (4)  |  Patents (1)

    Real-time video/visual communication applications require trade-offs in terms of processing speed, visual image quality and power consumption. Motion estimation is one of the tasks in video coding that requires a significant amount of computation. Block-matching motion estimation algorithms such as the three-step search and the diamond search are used in video coding schemes as alternatives to full search. Fast motion estimation algorithms reduce the computational complexity at the expense of reduced performance. Special-purpose fast processors can be employed as an alternative to meet the computational demand; however, the processing speed comes at the expense of higher power consumption. This paper investigates motion estimation algorithms and presents the computation and performance trade-offs involved in choosing a motion estimation algorithm for video coding applications. Fast motion estimation algorithms often assume a monotonic error surface in order to speed up the search. The argument against this assumption is that the search might be trapped in local minima and may result in a noisy motion field. Prediction methods have been suggested in the literature as a solution to avoid these local minima and the noisy motion field. The paper also investigates the effects of the monotonic-error-surface assumption as well as the appropriate choice of initial motion vectors that results in better performance of the motion estimation algorithms.

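    For illustration, a minimal three-step search (one of the fast block-matching algorithms discussed above) with a SAD cost on a smooth synthetic frame; this is a generic textbook version, not the paper's implementation.

```python
# Sketch of the classic three-step search for block-matching motion
# estimation (SAD cost, roughly +/-7 search range).
import numpy as np

def sad(cur_block, ref, x, y, bsize):
    h, w = ref.shape
    if x < 0 or y < 0 or y + bsize > h or x + bsize > w:
        return np.inf                       # candidate falls outside the frame
    return np.abs(cur_block - ref[y:y + bsize, x:x + bsize]).sum()

def three_step_search(cur, ref, bx, by, bsize=16):
    """Return the motion vector (dx, dy) for the block at (bx, by) in `cur`."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
    cx, cy, step = bx, by, 4
    while step >= 1:                        # three steps: step sizes 4, 2, 1
        candidates = [(cx + dx * step, cy + dy * step)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        cx, cy = min(candidates, key=lambda p: sad(block, ref, p[0], p[1], bsize))
        step //= 2
    return cx - bx, cy - by

yy, xx = np.mgrid[0:64, 0:64]
ref = (128 + 100 * np.sin(xx / 6.0) * np.cos(yy / 7.0)).astype(np.int32)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))   # frame shifted by a known amount
print(three_step_search(cur, ref, 16, 16))       # expect (3, -2) on this smooth frame
```
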
  • Off-line recognition of isolated Persian handwritten characters using multiple hidden Markov models

    Publication Year: 2001 , Page(s): 506 - 510
    Cited by:  Papers (4)

    In this paper a new method for off-line recognition of isolated handwritten Persian characters based on hidden Markov models (HMMs) is proposed. In the proposed system, document images are acquired at 300-dpi resolution. Multiple filters, such as median and morphological filters, are utilized for noise removal. The features used in this process are based on the regional projection contour transformation (RPCT). In this stage, two types of feature vectors based on this technique are extracted. The recognition system consists of two stages. For each character in the training phase, multiple HMMs corresponding to the different feature vectors are built. In the classification phase, the results of the individual classifiers are integrated to produce the final recognition.

  • A novel cellular search algorithm for block-matching motion estimation

    Publication Year: 2001 , Page(s): 629 - 633
    Cited by:  Papers (1)

    A novel cellular search (CS) algorithm for block-matching motion estimation is presented. Two different search patterns, namely the large CS pattern (LCSP) and the small CS pattern (SCSP), are employed to perform the search for the best-matching block. The LCSP assumes that the best-matching block can be located in any direction from the centre of the LCSP, and we show that the number of blocks examined via LCSP searching is smaller than with other algorithms. Following the LCSP search, the SCSP is used to search the blocks near the centre block. We show that the CS algorithm is computationally efficient; it requires less computation time than other algorithms, such as the three-step search of T. Koga et al. (1981), the new three-step search of R. Li et al. (1994) and the four-step search of L.M. Po et al. (1996).

  • Discovering authorities and hubs in different topological Web graph structures

    Publication Year: 2001 , Page(s): 594 - 598
    Cited by:  Patents (1)

    The research looks at a Web page as a graph structure, or a Web graph, and tries to classify different Web graphs in a new coordinate space: out-degree, in-degree. The out-degree coordinate is defined as the number of outgoing Web pages from a given Web page. The in-degree coordinate is the number of Web pages that point to a given Web page. J. Kleinberg's (1998) Web algorithm for discovering “hub Web pages” and “authority Web pages” is applied in this new coordinate space. Some very uncommon phenomena have been discovered and new interesting results interpreted. The author believes that understanding the underlying Web page as a graph will help design better Web algorithms, enhance retrieval and Web performance, and recommends using graphs as a visual aid for search engine designers.

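    A minimal sketch of the hubs-and-authorities iteration (HITS) on a toy directed graph; the Web-graph data and the out-degree/in-degree classification from the paper are not reproduced here.

```python
# Sketch of Kleinberg's hubs-and-authorities iteration (HITS) on a toy graph.
import numpy as np

def hits(adj, iters=50):
    """adj[i][j] = 1 if page i links to page j. Returns (hub, authority) scores."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    hub = np.ones(n)
    auth = np.ones(n)
    for _ in range(iters):
        auth = A.T @ hub                 # authorities are pointed to by good hubs
        hub = A @ auth                   # hubs point to good authorities
        auth /= np.linalg.norm(auth)
        hub /= np.linalg.norm(hub)
    return hub, auth

# Pages 0 and 1 both link to pages 2 and 3; page 4 links only to page 2.
adj = [[0, 0, 1, 1, 0],
       [0, 0, 1, 1, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 1, 0, 0]]
hub, auth = hits(adj)
print("hub scores:      ", hub.round(3))
print("authority scores:", auth.round(3))
```
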
  • Watermarking capacity of digital images based on domain-specific masking effects

    Publication Year: 2001 , Page(s): 90 - 94
    Cited by:  Papers (5)

    Our objective is to find a theoretical watermarking capacity bound for digital images based on domain-specific masking effects. We first show the capacity of private watermarking in which the power constraints are not uniform. Then, we apply several domain-specific human visual system approximation models to estimate the power constraints and show the theoretical watermarking capacity of an image in a general noisy environment. Note that we consider all pixels, watermarks and noises to be discrete values, as occurs in realistic cases.

  • New bounds on a hypercube coloring problem and linear codes

    Publication Year: 2001 , Page(s): 542 - 546

    In studying the scalability of optical networks, one problem that arises involves coloring the vertices of the n-dimensional hypercube with as few colors as possible such that any two vertices whose Hamming distance is at most k are colored differently. Determining the exact value of χ(n), the minimum number of colors needed, appears to be a difficult problem. We improve the known lower and upper bounds of χ(n) and indicate the connection of this coloring problem to linear codes.

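    To make the coloring-codes connection concrete, the sketch below colors each vertex of the 7-cube by its syndrome with respect to the [7,4,3] Hamming code, which yields a valid 8-coloring for k = 2; this is a standard textbook example, and the paper's new bounds are not reproduced here.

```python
# Color each vertex of the n-cube by its syndrome with respect to a linear
# code of minimum distance >= k+1; vertices at Hamming distance <= k then
# always receive different colors. Shown for the [7,4,3] Hamming code (k = 2).
from itertools import product

# Parity-check matrix of the [7,4,3] Hamming code (columns are 1..7 in binary).
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(v):
    """Color of a vertex = its syndrome, packed into an integer 0..7."""
    bits = [sum(h * x for h, x in zip(row, v)) % 2 for row in H]
    return bits[0] * 4 + bits[1] * 2 + bits[2]

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

vertices = list(product([0, 1], repeat=7))
colors = {v: syndrome(v) for v in vertices}
assert len(set(colors.values())) == 8
# Check: any two vertices within distance 2 are colored differently.
assert all(colors[u] != colors[v]
           for i, u in enumerate(vertices) for v in vertices[i + 1:]
           if hamming_distance(u, v) <= 2)
print("valid 8-coloring of the 7-cube for k = 2")
```
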
  • LIPT: a lossless text transform to improve compression

    Publication Year: 2001 , Page(s): 452 - 460
    Cited by:  Papers (6)

    We propose an approach to developing a dictionary-based reversible lossless text transform, called LIPT (length index preserving transform), which can be applied to a source text to improve existing algorithms' ability to compress it. In LIPT, the length of the input word and the offset of the word in the dictionary are denoted with letters. Our encoding scheme makes use of the recurrence of same-length words in the English language to create context in the transformed text that entropy coders can exploit. LIPT also achieves some compression at the preprocessing stage and retains enough context and redundancy for the compression algorithms to give better results. Bzip2 with LIPT gives a 5.24% improvement in average BPC over Bzip2 without LIPT, and PPMD with LIPT gives a 4.46% improvement in average BPC over PPMD without LIPT, for our test corpus.

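    A toy flavor of the word-to-marker substitution that a length-index-preserving transform performs; the tiny dictionary, the offset alphabet and the out-of-dictionary handling below are simplifications, not LIPT's actual encoding.

```python
# Toy length-index-preserving word substitution: each dictionary word becomes
# '*', a letter encoding its length, and a letter encoding its offset in the
# length-specific dictionary. The real LIPT encoding details differ.
import string

DICTIONARY = ["the", "and", "for", "that", "with", "this", "information"]
BY_LENGTH = {}
for word in DICTIONARY:
    BY_LENGTH.setdefault(len(word), []).append(word)

def encode_word(word):
    group = BY_LENGTH.get(len(word), [])
    if word in group:
        length_char = string.ascii_lowercase[len(word) - 1]
        offset_char = string.ascii_lowercase[group.index(word)]
        return "*" + length_char + offset_char
    return word                       # out-of-dictionary words pass through

def decode_word(token):
    if token.startswith("*") and len(token) == 3:
        length = string.ascii_lowercase.index(token[1]) + 1
        offset = string.ascii_lowercase.index(token[2])
        return BY_LENGTH[length][offset]
    return token

text = "the information for this study"
transformed = " ".join(encode_word(w) for w in text.split())
assert " ".join(decode_word(t) for t in transformed.split()) == text
print(transformed)                    # "*ca *ka *cc *dc study"
```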