IEEE Transactions on Image Processing

Issue 12 • December 2005

  • Table of contents

    Page(s): c1 - c4
  • IEEE Transactions on Image Processing publication information

    Page(s): c2
  • Model-based color halftoning using direct binary search

    Page(s): 1945 - 1959

    In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous-tone original color image and the color halftone image. We exploit the differences in how human viewers respond to luminance and chrominance information and use the total squared error in a luminance/chrominance-based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement-based color printer dot interaction model to prevent artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in the resulting halftones. We present color halftones that demonstrate the efficacy of our method.
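
    A minimal grayscale sketch of the DBS search loop (not the authors' implementation): it assumes a Gaussian low-pass filter as the visual model and naively recomputes the filtered error after each trial toggle, whereas the paper works in a luminance/chrominance space, adds a measured printer dot-interaction model, and updates the error efficiently.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dbs_halftone(gray, sweeps=5, sigma=1.5):
          # gray: continuous-tone image scaled to [0, 1]; returns a binary halftone.
          halftone = (np.random.rand(*gray.shape) < gray).astype(float)   # initial halftone
          perceived_err = lambda h: np.sum(gaussian_filter(h - gray, sigma) ** 2)
          err = perceived_err(halftone)
          for _ in range(sweeps):
              changed = False
              for i in range(gray.shape[0]):
                  for j in range(gray.shape[1]):
                      halftone[i, j] = 1.0 - halftone[i, j]       # trial toggle of one dot
                      trial_err = perceived_err(halftone)         # naive full recompute (real DBS updates locally)
                      if trial_err < err:
                          err, changed = trial_err, True          # keep the toggle
                      else:
                          halftone[i, j] = 1.0 - halftone[i, j]   # revert
              if not changed:
                  break
          return halftone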

  • Quantization of accumulated diffused errors in error diffusion

    Page(s): 1960 - 1976

    Due to its high image quality and moderate computational complexity, error diffusion is a popular halftoning algorithm for use with inkjet printers. However, error diffusion is an inherently serial algorithm that requires buffering a full row of accumulated diffused error (ADE) samples. For the best performance when the algorithm is implemented in hardware, the ADE data should be stored on the chip on which the error diffusion algorithm is implemented. However, this may result in an unacceptable hardware cost. In this paper, we examine the use of quantization of the ADE to reduce the amount of data that must be stored. We consider both uniform and nonuniform quantizers. For the nonuniform quantizers, we build on the concept of tone-dependency in error diffusion, by proposing several novel feature-dependent quantizers that yield improved image quality at a given bit rate, compared to memoryless quantizers. The optimal design of these quantizers is coupled with the design of the tone-dependent parameters associated with error diffusion. This is done via a combination of the classical Lloyd-Max algorithm and the training framework for tone-dependent error diffusion. Our results show that 4-bit uniform quantization of the ADE yields the same halftone quality as error diffusion without quantization of the ADE. At rates that vary from 2 to 3 bits per pixel, depending on the selectivity of the feature on which the quantizer depends, the feature-dependent quantizers achieve essentially the same quality as 4-bit uniform quantization.
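
    A rough sketch of the uniform-quantization variant only, assuming Floyd-Steinberg weights, input in [0, 1], and ADE values clipped to [-1, 1]; the feature-dependent quantizers designed with Lloyd-Max and tone-dependent training are not shown.

      import numpy as np

      def error_diffusion_quantized_ade(img, bits=4):
          # img: grayscale image in [0, 1]; the accumulated diffused error (ADE)
          # added to each pixel is uniformly quantized to `bits` bits before use.
          h, w = img.shape
          ade = np.zeros((h, w))                      # buffered ADE samples
          out = np.zeros((h, w))
          step = 2.0 / (2 ** bits)                    # uniform step over the assumed [-1, 1] range
          for y in range(h):
              for x in range(w):
                  q_ade = np.clip(np.round(ade[y, x] / step) * step, -1.0, 1.0)
                  u = img[y, x] + q_ade               # modified input
                  out[y, x] = 1.0 if u >= 0.5 else 0.0
                  e = u - out[y, x]                   # error diffused to unprocessed neighbours
                  if x + 1 < w:
                      ade[y, x + 1] += e * 7 / 16
                  if y + 1 < h:
                      if x > 0:
                          ade[y + 1, x - 1] += e * 3 / 16
                      ade[y + 1, x] += e * 5 / 16
                      if x + 1 < w:
                          ade[y + 1, x + 1] += e * 1 / 16
          return out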

  • Hardcopy image barcodes via block-error diffusion

    Page(s): 1977 - 1989

    Error diffusion halftoning is a popular method of producing frequency modulated (FM) halftones for printing and display. FM halftoning fixes the dot size (e.g., to one pixel in conventional error diffusion) and varies the dot frequency according to the intensity of the original grayscale image. We generalize error diffusion to produce FM halftones with user-controlled dot size and shape by using block quantization and block filtering. As a key application, we show how block-error diffusion may be applied to embed information in hardcopy using dot shape modulation. We enable the encoding and subsequent decoding of information embedded in the hardcopy version of continuous-tone base images. The encoding-decoding process is modeled as robust data transmission through an explicitly modeled, noisy print-scan channel. We refer to the encoded printed version as an image barcode due to its high information capacity, which differentiates it from common hardcopy watermarks. The encoding/halftoning strategy is based on a modified version of block-error diffusion. Encoder stability, image quality versus information capacity tradeoffs, and decoding issues with and without explicit knowledge of the base image are discussed.

  • A robust structure-adaptive hybrid vector filter for color image restoration

    Page(s): 1990 - 2001

    A robust structure-adaptive hybrid vector filter is proposed for digital color image restoration in this paper. At each pixel location, the image vector (i.e., pixel) is first classified into one of several signal activity categories by applying a modified quadtree decomposition to the luminance component of the input color image. A weight-adaptive vector filtering operation with an optimal window is then activated to achieve the best tradeoff between noise suppression and detail preservation. Through extensive simulation experiments conducted on a wide range of test color images, the filter has demonstrated performance superior to that of a number of well-known benchmark techniques, in terms of both standard objective measurements and perceived image quality, in suppressing several distinct types of noise commonly considered in color image restoration, including Gaussian noise, impulse noise, and mixed noise.
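
    For orientation, a plain vector median filter over a fixed square window -- just the elementary vector filtering operation; the quadtree-based activity classification and the weight/window adaptation described in the abstract are omitted.

      import numpy as np

      def vector_median_filter(img, win=3):
          # img: H x W x 3 color image; returns the vector median over each win x win window.
          img = np.asarray(img, dtype=float)
          pad = win // 2
          padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
          out = np.empty_like(img)
          h, w, c = img.shape
          for y in range(h):
              for x in range(w):
                  block = padded[y:y + win, x:x + win].reshape(-1, c)
                  # vector median: the sample minimizing total distance to all other samples
                  dists = np.linalg.norm(block[:, None, :] - block[None, :, :], axis=2).sum(axis=1)
                  out[y, x] = block[np.argmin(dists)]
          return out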

  • Optimal erasure protection strategy for scalably compressed data with tree-structured dependencies

    Page(s): 2002 - 2011

    This paper is concerned with the transmission of scalably compressed data sources over lossy channels. Specifically, it addresses packet networks or, more generally, erasure channels. Previous work has generally assumed that the source elements form linear dependencies. The contribution of this paper is an unequal erasure protection algorithm that is able to take advantage of scalable data with more general dependency structures. In particular, the proposed scheme is adapted to data with tree-structured dependencies. The source elements are allocated to clusters of packets according to their dependency structure, subject to constraints on packet size and channel codeword length. Given a packet cluster arrangement, source elements are assigned optimal channel codes subject to a constraint on the total transmission length. Experimental results confirm the benefit associated with exploiting the actual dependency structure of the data.

  • On multirate optimality of JPEG2000 code stream

    Page(s): 2012 - 2023

    Arguably, the most important and defining feature of the JPEG2000 image compression standard is its R-D optimized code stream of multiple progressive layers. This code stream is an interleaving of many scalable code streams of different sample blocks. In this paper, we reexamine the R-D optimality of JPEG2000 scalable code streams under an expected multirate distortion measure (EMRD), which is defined to be the average distortion weighted by a probability distribution of operational rates in a given range, rather than for one or a few fixed rates. We prove that the JPEG2000 code stream constructed by embedded block coding with optimal truncation is almost optimal in the EMRD sense for a uniform rate distribution function, even if the individual scalable code streams have nonconvex operational R-D curves. We also develop algorithms to optimize the JPEG2000 code stream for exponential and Laplacian rate distribution functions while maintaining compatibility with the JPEG2000 standard. Both our analytical and experimental results lend strong support to JPEG2000 as a near-optimal scalable image codec in a fairly general setting.
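
    Read literally, the measure averages the truncation distortion over the assumed rate distribution; in symbols (our notation, not the paper's), with $D(R)$ the distortion of the code stream truncated to rate $R$ and $p(R)$ the operational-rate density on $[R_{\min}, R_{\max}]$,

      $\mathrm{EMRD} = \int_{R_{\min}}^{R_{\max}} D(R)\, p(R)\, \mathrm{d}R ,$

    with $p$ uniform, exponential, or Laplacian in the cases studied.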

  • Feature-based wavelet shrinkage algorithm for image denoising

    Page(s): 2024 - 2039

    A selective wavelet shrinkage algorithm for digital image denoising is presented. The method improves upon other methods proposed in the literature while remaining algorithmically simple, yielding large computational savings; its performance and speed are experimentally compared with established methods. The denoising method involves a two-threshold validation process for real-time selection of wavelet coefficients: coefficients are selected based on their absolute value, spatial regularity, and regularity across multiresolution scales, so that image features are taken into consideration in the selection process. Statistically, most images have regular features that produce connected subband coefficients, so the subbands of wavelet-transformed images largely do not contain isolated coefficients. Accordingly, coefficients are first selected by magnitude, and only the subset of those coefficients that exhibit spatially regular behavior is retained for image reconstruction; the first threshold identifies coefficients of large magnitude and the second identifies coefficients of spatial regularity. The proposed technique improves upon several established wavelet denoising techniques while being computationally efficient enough to facilitate real-time image-processing applications.
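
    An illustrative PyWavelets sketch of the two-threshold selection: a hard magnitude threshold, followed by a crude spatial-regularity test that keeps a coefficient only if enough of its eight neighbours also pass the magnitude threshold. The default threshold and the neighbour count are arbitrary choices, and the cross-scale regularity check is omitted.

      import numpy as np
      import pywt

      def feature_based_shrinkage(img, wavelet="db4", levels=3, t_mag=None, t_neighbors=2):
          coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=levels)
          out = [coeffs[0]]                                    # approximation kept untouched
          for detail in coeffs[1:]:
              kept = []
              for band in detail:                              # horizontal, vertical, diagonal subbands
                  t = t_mag if t_mag is not None else 3.0 * np.median(np.abs(band)) / 0.6745
                  mask = np.abs(band) > t                      # threshold 1: magnitude
                  padded = np.pad(mask.astype(int), 1)
                  neigh = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - padded
                  regular = neigh[1:-1, 1:-1] >= t_neighbors   # threshold 2: spatial regularity
                  kept.append(band * (mask & regular))
              out.append(tuple(kept))
          return pywt.waverec2(out, wavelet)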

  • Is image steganography natural?

    Page(s): 2040 - 2050

    Steganography is the art of secret communication. Its purpose is to hide the presence of information, using, for example, images as covers. We experimentally investigate whether stego-images, bearing a secret message, are statistically "natural". For this purpose, we use recent results on the statistics of natural images and investigate the effect of some popular steganography techniques. We found that these fundamental statistics of natural images are, in fact, generally altered by the hidden "nonnatural" information. Frequently, the change is consistently biased in a given direction. However, for the class of natural images considered, the change generally falls within the intrinsic variability of the statistics and, thus, does not allow for reliable detection unless knowledge of the data-hiding process is taken into account. In the latter case, significant levels of detection are demonstrated.

  • Identification of a discrete planar symmetric shape from a single noisy view

    Page(s): 2051 - 2059

    In this paper, we propose a method for identifying a discrete planar symmetric shape from an arbitrary viewpoint. Our algorithm is based on a newly proposed notion of a view's skeleton. We show that this concept yields projective invariants which facilitate the identification procedure. It is, furthermore, shown that the proposed method may be extended to the case of noisy data to yield an optimal estimate of the shape in question. Substantiating examples are provided.

  • A wrapper-based approach to image segmentation and classification

    Page(s): 2060 - 2072

    The traditional processing flow of segmentation followed by classification in computer vision assumes that the segmentation is able to successfully extract the object of interest from the background image. It is extremely difficult to obtain a reliable segmentation without any prior knowledge about the object that is being extracted from the scene. This is further complicated by the lack of any clearly defined metrics for evaluating the quality of segmentation or for comparing segmentation algorithms. We propose a method of segmentation that addresses both of these issues by using the object classification subsystem as an integral part of the segmentation. This provides contextual information about the objects to be segmented and allows us to use the probability of correct classification as a metric for the quality of the segmentation. We view traditional segmentation as a filter operating on the image that is independent of the classifier, much like the filter methods for feature selection. We propose a new paradigm for segmentation and classification that follows the wrapper methods of feature selection. Our method wraps the segmentation and classification together and uses the classification accuracy as the metric to determine the best segmentation. By using shape as the classification feature, we are able to develop a segmentation algorithm that relaxes the requirement that the object of interest be homogeneous in some low-level image parameter, such as texture, color, or grayscale. This represents an improvement over other segmentation methods that have used classification information only to modify the segmenter parameters, since those algorithms still require an underlying homogeneity in some parameter space. Rather than considering our method as yet another segmentation algorithm, we propose that our wrapper method be considered an image segmentation framework within which existing image segmentation algorithms may be executed. We show the performance of our proposed wrapper-based segmenter on real-world and complex images of automotive vehicle occupants for the purpose of recognizing infants on the passenger seat and disabling the vehicle airbag. This is an interesting application for testing the robustness of our approach due to the complexity of the images, and, consequently, we believe the algorithm will be suitable for many other real-world applications.
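
    The wrapper control flow reduced to a few lines; segment(image, p) and classify(mask) are hypothetical hooks standing in for any existing segmentation algorithm and for the shape-based classifier.

      def wrapper_segmentation(image, candidate_params, segment, classify):
          # Try each candidate parameter setting, segment, classify the resulting
          # object mask by shape, and keep the segmentation the classifier is most
          # confident about: classification confidence is the segmentation-quality metric.
          best_mask, best_label, best_conf = None, None, float("-inf")
          for p in candidate_params:
              mask = segment(image, p)              # hypothetical segmentation hook
              label, confidence = classify(mask)    # hypothetical shape-based classifier
              if confidence > best_conf:
                  best_mask, best_label, best_conf = mask, label, confidence
          return best_mask, best_label, best_conf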

  • Bidirectional labeling and registration scheme for grayscale image segmentation

    Page(s): 2073 - 2081

    In this paper, we introduce a new image segmentation scheme based on bidirectional labeling and registration, which we refer to as BIDS, and prove that its segmentation performance is equivalent to that of the conventional watershed segmentation algorithm. BIDS involves only linear scans of image pixels: it uses one-dimensional operations rather than the queue-based, two-dimensional operations of traditional watershed algorithms, and it provides unique labels for individual homogeneous regions. In addition to achieving the same segmentation results, BIDS is four times less computationally complex than the conventional watershed-by-immersion technique.

  • Reversible data embedding into images using wavelet techniques and sorting

    Page(s): 2082 - 2090

    The proliferation of digital information in our society has spurred a great deal of research into data-embedding techniques that add information to digital content, such as images, audio, and video. In this paper, we investigate high-capacity lossless data-embedding methods that allow one to embed large amounts of data into digital images (or video) in such a way that the original image can be reconstructed from the watermarked image. We present two new techniques: one based on least-significant-bit prediction and Sweldens' lifting scheme, and another that is an improvement of Tian's technique of difference expansion. The new techniques are then compared with various existing embedding methods by looking at capacity-distortion behavior and capacity control.
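
    For concreteness, classic difference expansion on a single 8-bit pixel pair (Tian's original scheme, which the paper improves on); expandability checks, the location map, and overflow handling are omitted.

      def de_embed(x, y, bit):
          # Embed one bit into an 8-bit pixel pair by expanding their difference.
          l = (x + y) // 2              # integer average (low-pass)
          h = x - y                     # difference (high-pass)
          h2 = 2 * h + bit              # expanded difference carrying the payload bit
          return l + (h2 + 1) // 2, l - h2 // 2

      def de_extract(x2, y2):
          # Recover the bit and restore the original pair exactly (the scheme is lossless).
          l = (x2 + y2) // 2
          h2 = x2 - y2
          bit = h2 & 1
          h = h2 // 2                   # Python's floor division matches the scheme's floor
          return (l + (h + 1) // 2, l - h // 2), bit

      # Example: embed a 1 into the pair (100, 60), then invert the embedding.
      wx, wy = de_embed(100, 60, 1)     # -> (121, 40)
      assert de_extract(wx, wy) == ((100, 60), 1)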

  • The contourlet transform: an efficient directional multiresolution image representation

    Page(s): 2091 - 2106

    The limitations of commonly used separable extensions of one-dimensional transforms, such as the Fourier and wavelet transforms, in capturing the geometry of image edges are well known. In this paper, we pursue a "true" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information. The main challenge in exploring geometry in images comes from the discrete nature of the data. Thus, unlike other approaches, such as curvelets, that first develop a transform in the continuous domain and then discretize for sampled data, our approach starts with a discrete-domain construction and then studies its convergence to an expansion in the continuous domain. Specifically, we construct a discrete-domain multiresolution and multidirection expansion using nonseparable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a flexible multiresolution, local, and directional image expansion using contour segments, and, thus, it is named the contourlet transform. The discrete contourlet transform has a fast iterated filter bank algorithm that requires on the order of N operations for N-pixel images. Furthermore, we establish a precise link between the developed filter bank and the associated continuous-domain contourlet expansion via a directional multiresolution analysis framework. We show that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves. Finally, we show some numerical experiments demonstrating the potential of contourlets in several image processing applications.

  • Statistical behavior of joint least-square estimation in the phase diversity context

    Page(s): 2107 - 2116

    The images recorded by optical telescopes are often degraded by aberrations that induce phase variations in the pupil plane. Several wavefront sensing techniques have been proposed to estimate aberrated phases. One of them is phase diversity, for which the joint least-square approach introduced by Gonsalves et al. is a reference method to estimate phase coefficients from the recorded images. In this paper, we rely on the asymptotic theory of Toeplitz matrices to show that Gonsalves' technique provides a consistent phase estimator as the size of the images grows. No comparable result is yielded by the classical joint maximum likelihood interpretation (e.g., as found in the work by Paxman et al.). Finally, our theoretical analysis is illustrated through simulated problems.

  • An information fidelity criterion for image quality assessment using natural scene statistics

    Page(s): 2117 - 2128

    Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Traditionally, image QA algorithms interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by arbitrary signal fidelity criteria. In this paper, we approach the problem of image QA by proposing a novel information fidelity criterion that is based on natural scene statistics. QA systems are invariably involved with judging the visual quality of images and videos that are meant for "human consumption". Researchers have developed sophisticated models to capture the statistics of natural signals, that is, pictures and videos of the visual environment. Using these statistical models in an information-theoretic setting, we derive a novel QA algorithm that provides clear advantages over the traditional approaches. In particular, it is parameterless and outperforms current methods in our testing. We validate the performance of our algorithm with an extensive subjective study involving 779 images. We also show that, although our approach distinctly departs from traditional HVS-based methods, it is functionally similar to them under certain conditions, yet it outperforms them due to improved modeling. The code and the data from the subjective study are available at [1].

  • Fingerprinting protocol for images based on additive homomorphic property

    Page(s): 2129 - 2139

    The homomorphic property of public-key cryptosystems is applied in several cryptographic protocols, such as electronic cash, voting systems, and bidding protocols. Several fingerprinting protocols also exploit the property to achieve an asymmetric system. However, their enciphering rate is extremely low and the implementation of the watermarking technique is difficult. In this paper, we propose a new fingerprinting protocol applying the additive homomorphic property of the Okamoto-Uchiyama encryption scheme. By exploiting this property, the enciphering rate of our fingerprinting scheme can approach that of the underlying cryptosystem. We study the problem of implementing the watermarking technique and propose a successful method to embed encrypted information without knowing the plaintext value. Our scheme also protects the security of both the buyer and the merchant.
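
    A toy illustration of the additive homomorphic property being exploited, using a small-parameter Paillier cryptosystem as a stand-in (the paper uses the Okamoto-Uchiyama scheme, which is additively homomorphic in the same sense); the parameters here are far too small to be secure.

      from math import gcd
      import random

      def paillier_keygen(p=293, q=433):
          # Tiny demo primes only -- utterly insecure, but enough to show the algebra.
          n = p * q
          lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)          # lcm(p-1, q-1)
          g = n + 1
          mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)        # inverse of L(g^lam mod n^2)
          return (n, g), (lam, mu)

      def enc(m, pub):
          n, g = pub
          r = random.randrange(1, n)
          while gcd(r, n) != 1:
              r = random.randrange(1, n)
          return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

      def dec(c, pub, priv):
          n, _ = pub
          lam, mu = priv
          return ((pow(c, lam, n * n) - 1) // n) * mu % n

      # Multiplying ciphertexts adds plaintexts, so an (encrypted) fingerprint can be
      # added to an (encrypted) host value without either party seeing the other's data.
      pub, priv = paillier_keygen()
      host, mark = 120, 7                                       # e.g. a pixel value and a watermark increment
      c = (enc(host, pub) * enc(mark, pub)) % (pub[0] ** 2)
      assert dec(c, pub, priv) == host + mark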

  • Digital watermarking robust to geometric distortions

    Page(s): 2140 - 2150

    In this paper, we present two watermarking approaches that are robust to geometric distortions. The first approach is based on image normalization, in which both watermark embedding and extraction are carried out with respect to an image normalized to meet a set of predefined moment criteria. We propose a new normalization procedure, which is invariant to affine transform attacks. The resulting watermarking scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. The second approach is based on a watermark resynchronization scheme aimed at alleviating the effects of random bending attacks. In this scheme, a deformable mesh is used to correct the distortion caused by the attack. The watermark is then extracted from the corrected image. In contrast to the first scheme, the latter is suitable for private watermarking applications, where the original image is necessary for watermark detection. In both schemes, we employ a direct-sequence code division multiple access approach to embed a multibit watermark in the discrete cosine transform domain of the image. Numerical experiments demonstrate that the proposed watermarking schemes are robust to a wide range of geometric attacks.
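
    An illustrative direct-sequence spread-spectrum embedder and correlation detector operating on the global DCT of the image; the chip patterns, strength alpha, and the use of all coefficients are arbitrary choices here, and the normalization and mesh-based resynchronization stages described in the abstract are omitted.

      import numpy as np
      from scipy.fft import dctn, idctn

      def embed_ds_cdma(img, bits, key=0, alpha=2.0):
          # Each payload bit antipodally modulates its own pseudorandom chip pattern,
          # and all patterns are added to the image's global DCT coefficients.
          coeffs = dctn(np.asarray(img, dtype=float), norm="ortho")
          rng = np.random.default_rng(key)
          chips = rng.choice([-1.0, 1.0], size=(len(bits),) + coeffs.shape)
          for b, c in zip(bits, chips):
              coeffs += alpha * (2 * b - 1) * c
          return idctn(coeffs, norm="ortho")

      def detect_ds_cdma(img, n_bits, key=0, original=None):
          # Correlation detector: the sign of the correlation with each chip pattern gives
          # the bit. Subtracting the original's DCT (when available) removes host
          # interference; blind detection needs a larger alpha or host-rejection filtering.
          coeffs = dctn(np.asarray(img, dtype=float), norm="ortho")
          if original is not None:
              coeffs = coeffs - dctn(np.asarray(original, dtype=float), norm="ortho")
          rng = np.random.default_rng(key)
          chips = rng.choice([-1.0, 1.0], size=(n_bits,) + coeffs.shape)
          return [int(np.sum(coeffs * c) > 0) for c in chips]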

  • Optimal block boundary pre/postfiltering for wavelet-based image and video compression

    Page(s): 2151 - 2158

    This paper presents a pre/postfiltering framework to reduce the reconstruction errors near block boundaries in wavelet-based image and video compression. Two algorithms are developed to obtain the optimal filter, based on boundary filter bank and polyphase structure, respectively. A low-complexity structure is employed to approximate the optimal solution. Performances of the proposed method in the removal of JPEG 2000 tiling artifact and the jittering artifact of three-dimensional wavelet video coding are reported. Comparisons with other methods demonstrate the advantages of our pre/postfiltering framework.

  • Elastic body spline technique for feature point generation and face modeling

    Page(s): 2159 - 2166

    With the advent of the MPEG-4 standard, facial animation has been receiving significant attention lately. A common approach for facial animation is to use a mesh model. The physics-based transformation known as the elastic body spline (EBS) has been proposed to deform the facial mesh model and generate realistic expressions under the assumption that the whole facial image has the same elastic property. In this paper, we partition facial images into different regions and propose an iterative algorithm to find the elastic property of each facial region. By doing so, we can obtain the EBS for mesh vertices in the facial mesh model such that facial animation can be achieved more realistically.

  • Color demosaicking via directional linear minimum mean square-error estimation

    Page(s): 2167 - 2178

    Digital cameras sample scenes using a color filter array arranged in a mosaic pattern (e.g., the Bayer pattern), and the demosaicking of the color samples is critical to image quality. This paper presents a new color demosaicking technique based on optimal directional filtering of the green-red and green-blue difference signals. Under the assumption that the primary difference signals (PDS) between the green and red/blue channels are low-pass, the missing green samples are adaptively estimated in both the horizontal and vertical directions by linear minimum mean square-error estimation (LMMSE). These directional estimates are then optimally fused to further improve the green estimates. Finally, guided by the demosaicked full-resolution green channel, the other two color channels are reconstructed from the LMMSE-filtered and fused PDS. Experimental results show that the presented color demosaicking technique outperforms existing methods in both PSNR and visual quality.
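
    A crude sketch of the fusion step only: two directional green estimates (assumed already interpolated along rows and along columns) are blended with weights inversely proportional to their local variance, a simple surrogate for the paper's LMMSE fusion; the window size and regularization constant are arbitrary.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def fuse_directional_green(g_h, g_v, win=5, eps=1e-6):
          # g_h, g_v: full-resolution green images interpolated along rows and columns.
          def local_var(x):
              m = uniform_filter(x, win)
              return uniform_filter(x * x, win) - m * m
          var_h = local_var(np.asarray(g_h, dtype=float)) + eps
          var_v = local_var(np.asarray(g_v, dtype=float)) + eps
          w_h = var_v / (var_h + var_v)          # the smoother direction gets more weight
          return w_h * g_h + (1.0 - w_h) * g_v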

  • Motion-JPEG2000 codec compensated for interlaced scanning videos

    Page(s): 2179 - 2191

    This paper presents an implementation scheme of Motion-JPEG2000 (MJP2) integrated with invertible deinterlacing. In previous work, we developed an invertible deinterlacing technique that suppresses the comb-tooth artifacts which are caused by field interleaving of interlaced scanning videos and which degrade the quality of scalable frame-based codecs such as MJP2. The technique has two features: sampling density is preserved, and image quality can be recovered by an inverse process. When no codec is placed between the deinterlacer and the inverse process, the original video is perfectly reconstructed; otherwise, it is almost completely recovered. We describe an application scenario in which this invertible deinterlacer enhances signal-to-noise-ratio scalability in frame-based MJP2 coding. The proposed system suppresses the comb-tooth artifacts at low bitrates while enabling quality recovery through the inverse process at high bitrates, all within the standard bitstream format. The main purpose of this paper is to present a system that yields high-quality recovery for an MJP2 codec. We demonstrate that our invertible deinterlacer can be embedded into the discrete wavelet transform employed in MJP2, so that the energy gain factor controlling rate-distortion characteristics can be compensated for optimal compression. Simulation results show that quality recovery is improved by, for example, more than 2.0 dB in peak signal-to-noise ratio by applying the proposed gain compensation when decoding the 8-bit grayscale Football sequence at 2.0 bpp.

  • List of reviewers

    Page(s): 2192 - 2197
  • IEEE Transactions on Image Processing Edics

    Page(s): 2198

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.

Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003