
IEEE Transactions on Image Processing

Issue 5 • May 2014


Displaying Results 1 - 25 of 52
  • [Front cover]

    Page(s): C1
    PDF (116 KB) | Freely Available from IEEE
  • IEEE Transactions on Image Processing publication information

    Page(s): C2
    PDF (130 KB) | Freely Available from IEEE
  • Table of contents

    Page(s): 1929 - 1930
    PDF (449 KB) | Freely Available from IEEE
  • [Blank page]

    Page(s): B1931 - B1932
    PDF (6 KB) | Freely Available from IEEE
  • Table of contents

    Page(s): 1933 - 1935
    PDF (448 KB) | Freely Available from IEEE
  • [Blank page]

    Page(s): B1936
    PDF (5 KB) | Freely Available from IEEE
  • Saliency Tree: A Novel Saliency Detection Framework

    Page(s): 1937 - 1952
    PDF (10422 KB) | HTML

    This paper proposes a novel saliency detection framework termed the saliency tree. For effective saliency measurement, the original image is first simplified using adaptive color quantization and region segmentation to partition it into a set of primitive regions. Then, three measures, i.e., global contrast, spatial sparsity, and object prior, are integrated with regional similarities to generate the initial regional saliency of each primitive region. Next, a saliency-directed region merging approach with a dynamic scale control scheme is proposed to generate the saliency tree, in which each leaf node represents a primitive region and each non-leaf node represents a non-primitive region generated during the region merging process. Finally, by exploiting a node selection criterion based on a regional center-surround scheme, a systematic saliency tree analysis, including salient node selection and regional saliency adjustment and selection, is performed to obtain the final regional saliency measures and to derive a high-quality pixel-wise saliency map. Extensive experimental results on five datasets with pixel-wise ground truths demonstrate that the proposed saliency tree model consistently outperforms state-of-the-art saliency models.

  • LBP-Based Edge-Texture Features for Object Recognition

    Page(s): 1953 - 1964
    PDF (2991 KB) | HTML

    This paper proposes two sets of novel edge-texture features for object recognition: the Discriminative Robust Local Binary Pattern (DRLBP) and the Discriminative Robust Local Ternary Pattern (DRLTP). By investigating the limitations of the Local Binary Pattern (LBP), Local Ternary Pattern (LTP), and Robust LBP (RLBP), DRLBP and DRLTP are proposed as new features. They solve the problem, inherent in LBP and LTP, of failing to discriminate a bright object against a dark background from its inverse. DRLBP also resolves the problem of RLBP whereby LBP codes and their complements in the same block are mapped to the same code. Furthermore, the proposed features retain contrast information necessary for proper representation of object contours, which LBP, LTP, and RLBP discard. The proposed features are tested on seven challenging data sets: INRIA Human, Caltech Pedestrian, UIUC Car, Caltech 101, Caltech 256, Brodatz, and KTH-TIPS2-a. Results demonstrate that the proposed features outperform the compared approaches on most data sets.

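The DRLBP and DRLTP features above extend the standard local binary pattern. As background, here is a minimal Python sketch of the baseline 8-neighbor LBP code on a 3x3 patch (the plain LBP that the paper improves upon, not the proposed descriptors):

```python
def lbp_code(patch):
    """Standard 8-neighbor LBP code for a 3x3 patch: each neighbor is
    thresholded against the center pixel and the resulting bits are
    packed clockwise starting from the top-left corner."""
    center = patch[1][1]
    # neighbor coordinates, clockwise from the top-left corner
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(offsets):
        if patch[r][c] >= center:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # bits set where neighbor >= center
```

Note that inverting the image intensities (ignoring ties) complements the code, which is precisely the bright-object/dark-background ambiguity the DRLBP construction is designed to resolve.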
  • Fast Space-Varying Convolution Using Matrix Source Coding With Applications to Camera Stray Light Reduction

    Page(s): 1965 - 1979
    PDF (3817 KB) | HTML

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators, so direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.

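The computational payoff of replacing a dense operator with sparse factors can be illustrated with a much cruder device than the paper's matrix source coding: simply zeroing near-zero matrix entries before the matrix-vector product. The sketch below is a hedged stand-in for the idea (the paper's method instead codes the operator in a transform domain with controlled distortion):

```python
def sparsify(A, tol):
    """Zero out entries with |a| < tol and return (row, col, value)
    triples. A crude illustration of trading exactness for sparsity,
    not the paper's lossy transform-domain matrix coding."""
    return [(i, j, v) for i, row in enumerate(A)
                      for j, v in enumerate(row) if abs(v) >= tol]

def sparse_matvec(triples, x, nrows):
    """Matrix-vector product using only the retained entries."""
    y = [0.0] * nrows
    for i, j, v in triples:
        y[i] += v * x[j]
    return y

A = [[1.0,   0.001, 0.002],
     [0.003, 2.0,   0.001],
     [0.002, 0.001, 3.0]]
triples = sparsify(A, 0.01)  # keeps only the 3 dominant entries
print(len(triples), sparse_matvec(triples, [1.0, 1.0, 1.0], 3))
```

The product now costs 3 multiplies instead of 9, at the price of a small, controllable error, which is the memory/computation trade-off the abstract describes.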
  • Statistical Model of Quantized DCT Coefficients: Application in the Steganalysis of Jsteg Algorithm

    Page(s): 1980 - 1993
    PDF (2531 KB) | HTML

    The goal of this paper is to propose a statistical model of quantized discrete cosine transform (DCT) coefficients. It relies on a mathematical framework that studies the image processing pipeline of a typical digital camera, instead of fitting empirical data with one of the popular models proposed in the literature. To highlight the accuracy of the proposed model, this paper exploits it for the detection of hidden information in JPEG images. By formulating hidden data detection as a hypothesis testing problem, this paper studies the most powerful likelihood ratio test for the steganalysis of the Jsteg algorithm and establishes its statistical performance theoretically. Based on the proposed model of DCT coefficients, a maximum likelihood estimator of the embedding rate is also designed. Numerical results on simulated and real images emphasize the accuracy of the proposed model and the performance of the proposed test.

  • Spatial Pooling of Heterogeneous Features for Image Classification

    Page(s): 1994 - 2008
    PDF (10090 KB) | HTML

    In image classification tasks, one of the most successful algorithms is the bag-of-features (BoF) model. Although the BoF model has many advantages, such as simplicity, generality, and scalability, it still suffers from several drawbacks, including the limited semantic description of local descriptors, the lack of robust structures upon single visual words, and the absence of efficient spatial weighting. To overcome these shortcomings, various techniques have been proposed, such as extracting multiple descriptors, spatial context modeling, and interest region detection. Although these have been shown to improve the BoF model to some extent, a coherent scheme that integrates the individual modules is still lacking. To address these problems, we propose a novel framework with spatial pooling of complementary features. Our model expands the traditional BoF model in three respects. First, we propose a new scheme for combining texture and edge-based local features at the descriptor extraction level. Next, we build geometric visual phrases to model spatial context upon complementary features for midlevel image representation. Finally, based on a smoothed edgemap, a simple and effective spatial weighting scheme is performed to capture image saliency. We test the proposed framework on several benchmark data sets for image classification. The extensive results show the superior performance of our algorithm over state-of-the-art methods.

  • Heterogeneous Domain Adaptation and Classification by Exploiting the Correlation Subspace

    Page(s): 2009 - 2018
    PDF (2293 KB) | HTML

    We present a novel domain adaptation approach for solving cross-domain pattern recognition problems, i.e., problems in which the data or features to be processed and recognized are collected from different domains of interest. Inspired by canonical correlation analysis (CCA), we utilize the derived correlation subspace as a joint representation for associating data across different domains, and we advance reduced kernel techniques for kernel CCA (KCCA) when a nonlinear correlation subspace is desirable. Such techniques not only make KCCA computationally more efficient but also alleviate potential over-fitting problems. Instead of directly performing recognition in the derived CCA subspace (as prior CCA-based domain adaptation methods did), we advocate exploiting the domain transfer ability of this subspace, in which each dimension has a unique capability for associating cross-domain data. In particular, we propose a novel support vector machine (SVM) with a correlation regularizer, named the correlation-transfer SVM, which incorporates domain adaptation ability into classifier design for cross-domain recognition. We show that our proposed domain adaptation and classification approach can be successfully applied to a variety of cross-domain recognition tasks such as cross-view action recognition, handwritten digit recognition with different features, and image-to-text or text-to-image classification. From our empirical results, we verify that the proposed method outperforms state-of-the-art domain adaptation approaches in terms of recognition performance.

  • Click Prediction for Web Image Reranking Using Multimodal Sparse Coding

    Page(s): 2019 - 2032
    PDF (3267 KB) | HTML

    Image reranking is effective for improving the performance of a text-based image search. However, existing reranking algorithms are limited for two main reasons: 1) the textual meta-data associated with images is often mismatched with their actual visual content and 2) the extracted visual features do not accurately describe the semantic similarities between images. Recently, user click information has been used in image reranking, because clicks have been shown to more accurately describe the relevance of retrieved images to search queries. However, a critical problem for click-based methods is the lack of click data, since only a small number of web images have actually been clicked on by users. Therefore, we aim to solve this problem by predicting image clicks. We propose a multimodal hypergraph learning-based sparse coding method for image click prediction, and apply the obtained click data to the reranking of images. We adopt a hypergraph to build a group of manifolds, which explore the complementarity of different features through a group of weights. Unlike a graph that has an edge between two vertices, a hyperedge in a hypergraph connects a set of vertices, and helps preserve the local smoothness of the constructed sparse codes. An alternating optimization procedure is then performed, and the weights of different modalities and the sparse codes are simultaneously obtained. Finally, a voting strategy is used to describe the predicted click as a binary event (click or no click), from the images' corresponding sparse codes. Thorough empirical studies on a large-scale database including nearly 330 K images demonstrate the effectiveness of our approach for click prediction when compared with several other methods. Additional image reranking experiments on real-world data show the use of click prediction is beneficial to improving the performance of prominent graph-based image reranking algorithms.

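Sparse codes of the kind used above are typically computed with proximal methods whose core step is soft thresholding, the closed-form proximal operator of the ℓ1 penalty. A minimal sketch of that generic ingredient (not the paper's hypergraph-regularized alternating optimization):

```python
def soft_threshold(v, lam):
    """Proximal operator of lam * |x|: the closed-form minimizer of
    0.5 * (x - v)**2 + lam * abs(x). Shrinks v toward zero by lam and
    sets it exactly to zero inside [-lam, lam], producing sparsity."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

print([soft_threshold(v, 1.0) for v in (3.0, 0.5, -2.0)])
```

Applied coordinate-wise inside an iterative solver (e.g., ISTA), this is what drives most of the coefficients of a sparse code to exactly zero.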
  • Images as Occlusions of Textures: A Framework for Segmentation

    Page(s): 2033 - 2046
    Multimedia
    PDF (3402 KB) | HTML

    We propose a new mathematical and algorithmic framework for unsupervised image segmentation, which is a critical step in a wide variety of image processing applications. We have found that most existing segmentation methods are not successful on histopathology images, which prompted us to investigate segmentation of a broader class of images, namely those without clear edges between the regions to be segmented. We model these images as occlusions of random images, which we call textures, and show that local histograms are a useful tool for segmenting them. Based on our theoretical results, we describe a flexible segmentation framework that draws on existing work on nonnegative matrix factorization and image deconvolution. Results on synthetic texture mosaics and real histology images show the promise of the method.

  • Cross-Indexing of Binary SIFT Codes for Large-Scale Image Search

    Page(s): 2047 - 2057
    PDF (4341 KB) | HTML

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage. It also benefits computational efficiency, since similarity can be efficiently measured by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. In addition, we propose a new search strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.

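The efficiency argument for binary codes rests on the fact that Hamming distance is a single XOR plus a popcount. A minimal Python sketch (the code names and 8-bit length are illustrative only; the paper's binary SIFT codes are much longer):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary codes stored as ints:
    XOR leaves a 1 bit wherever the codes differ, then count the 1s."""
    return bin(a ^ b).count("1")

# hypothetical database of binary codes and a query code
codes = {"img1": 0b10110010, "img2": 0b10010110, "img3": 0b01001101}
query = 0b10110011
best = min(codes, key=lambda k: hamming(codes[k], query))
print(best, hamming(codes[best], query))
```

On 64-bit codes this whole comparison is one machine word of work, which is why Hamming-space search scales to web-sized collections.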
  • Image-Difference Prediction: From Color to Spectral

    Page(s): 2058 - 2068
    Multimedia
    PDF (2758 KB) | HTML

    We propose a new strategy to evaluate the quality of multi- and hyperspectral images from the perspective of human perception. We define the spectral image difference as the overall perceived difference between two spectral images under a set of specified viewing conditions (illuminants). First, we analyze the stability of seven image-difference features across illuminants by means of an information-theoretic strategy. We demonstrate, in particular, that in the case of common spectral distortions (spectral gamut mapping, spectral compression, spectral reconstruction), chromatic features vary much more than achromatic ones, despite accounting for chromatic adaptation. Then, we propose two computationally efficient spectral image-difference metrics and compare them to the results of a subjective visual experiment. A significant improvement is shown over existing metrics such as the widely used root-mean-square error.

  • Full-Reference Quality Estimation for Images With Different Spatial Resolutions

    Page(s): 2069 - 2080
    Multimedia
    PDF (2601 KB) | HTML

    Multimedia communication is becoming pervasive because of the progress in wireless communications and multimedia coding. Estimating the quality of visual content accurately is crucial for providing satisfactory service. State-of-the-art visual quality assessment approaches are effective when the input image and the reference image have the same resolution. However, finding the quality of an image whose spatial resolution differs from that of the reference image is still a challenging problem. To solve this problem, we develop a quality estimator (QE) that computes the quality of the input image without resampling the reference or the input images. In this paper, we begin by identifying the potential weaknesses of previous approaches used to estimate the quality of experience. Next, we design a QE to estimate the quality of a distorted image with a lower resolution than the reference image. We also propose a subjective test environment to explore the success of the proposed algorithm in comparison with other QEs. When the input and test images have different resolutions, the subjective tests demonstrate that in most cases the proposed method works better than other approaches. In addition, the proposed algorithm also performs well when the reference image and the test image have the same resolution.

  • Saliency-Based Selection of Gradient Vector Flow Paths for Content Aware Image Resizing

    Page(s): 2081 - 2095
    PDF (6877 KB) | HTML

    Content-aware image resizing techniques take the visual content of images into account during the resizing process. The basic idea behind these algorithms is the removal of vertical and/or horizontal paths of pixels (i.e., seams) containing low-salience information. In this paper, we present a method that exploits the gradient vector flow (GVF) of the image to establish the paths to be considered during resizing. The relevance of each GVF path is derived directly from an energy map related to the magnitude of the GVF associated with the image to be resized. To make the visual content of the images more relevant during content-aware resizing, we also propose selecting the generated GVF paths based on their visual saliency properties. In this way, visually important image regions are better preserved in the final resized image. The proposed technique has been tested, both qualitatively and quantitatively, on a representative data set of 1000 images labeled with corresponding salient objects (i.e., ground-truth maps). Experimental results demonstrate that our method preserves crucial salient regions better than other state-of-the-art algorithms.

  • Single-Image Superresolution of Natural Stochastic Textures Based on Fractional Brownian Motion

    Page(s): 2096 - 2108
    PDF (3402 KB) | HTML

    Texture enhancement presents an ongoing challenge, in spite of the considerable progress made in recent years. Whereas most effort has so far been devoted to the enhancement of regular textures, stochastic textures, which are encountered in most natural images, still pose an outstanding problem. The purpose of enhancing stochastic textures is to recover details that were lost during the acquisition of the image. In this paper, a texture model based on fractional Brownian motion (fBm) is proposed. The model is global and does not entail using image patches. The fBm is a self-similar stochastic process, and self-similarity is known to characterize a large class of natural textures. The fBm-based model is evaluated, and a single-image regularized superresolution algorithm is derived. The proposed algorithm is useful for enhancing a wide range of textures. Its performance is compared with single-image superresolution methods and its advantages are highlighted.

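For reference, fBm with Hurst exponent H is the zero-mean Gaussian process with covariance Cov(B_s, B_t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2; both self-similarity and stationary increments follow from this formula. A small Python check of the increment-variance law Var(B_t - B_s) = |t - s|^{2H} derived from it:

```python
def fbm_cov(s, t, H):
    """Covariance of fractional Brownian motion with Hurst exponent H."""
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(s - t) ** (2 * H))

def increment_var(s, t, H):
    # Var(B_t - B_s) = Var(B_t) + Var(B_s) - 2 Cov(B_s, B_t)
    return fbm_cov(t, t, H) + fbm_cov(s, s, H) - 2 * fbm_cov(s, t, H)

H = 0.7
for s, t in [(0.0, 1.0), (1.0, 3.0), (2.0, 2.5)]:
    assert abs(increment_var(s, t, H) - abs(t - s) ** (2 * H)) < 1e-12
print("stationary increments verified for H =", H)
```

H = 1/2 recovers ordinary Brownian motion; H > 1/2 gives the long-range correlated increments that make fBm a plausible model for natural stochastic textures.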
  • Online Glocal Transfer for Automatic Figure-Ground Segmentation

    Page(s): 2109 - 2121
    PDF (3224 KB) | HTML

    This paper addresses the problem of automatic figure-ground segmentation, which aims to automatically segment all foreground objects out of the background. The underlying idea of this approach is to transfer the segmentation masks of globally and locally (glocally) similar exemplars onto the query image. For this purpose, we propose a novel high-level image representation named the object-oriented descriptor. Using this descriptor, a set of exemplar images glocally similar to the query image is retrieved. Then, using over-segmented regions of these retrieved exemplars, a discriminative classifier is learned on the fly and subsequently used to predict foreground probabilities for the query image. Finally, the optimal segmentation is obtained by combining the online prediction with a typical Markov random field energy optimization. The proposed approach has been extensively evaluated on three datasets: the Pascal VOC 2010 and VOC 2011 segmentation challenges and the iCoseg dataset. Experiments show that the proposed approach outperforms state-of-the-art methods and has the potential to segment large-scale images containing unknown objects that never appear in the exemplar images.

  • Learning Joint Intensity-Depth Sparse Representations

    Page(s): 2122 - 2132
    PDF (2177 KB) | HTML

    This paper presents a method for learning overcomplete dictionaries of atoms composed of two modalities that describe a 3D scene: 1) image intensity and 2) scene depth. We propose a novel joint basis pursuit (JBP) algorithm that finds related sparse features in the two modalities using conic programming, and we integrate it into a two-step dictionary learning algorithm. JBP differs from related convex algorithms in that it finds joint sparsity models with different atoms and different coefficient values for intensity and depth. This is crucial for recovering generative models where the same sparse underlying causes (3D features) give rise to different signals (intensity and depth). We give a bound for the recovery error of the sparse coefficients obtained by JBP, and show numerically that JBP is superior to the group lasso algorithm. When applied to the Middlebury depth-intensity database, our learning algorithm converges to a set of related features, such as pairs of depth and intensity edges or image textures and depth slants. Finally, we show that JBP outperforms state-of-the-art methods on depth inpainting for time-of-flight and Microsoft Kinect 3D data.

  • A Regularized Approach for Geodesic-Based Semisupervised Multimanifold Learning

    Page(s): 2133 - 2147
    PDF (4596 KB) | HTML

    Geodesic distance, as an essential measure of data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel that maps data from a linearly inseparable space to a linearly separable distance space. On this basis, a new semisupervised manifold learning algorithm, the regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of the original data points with feature vectors built from geodesic distances, and a new semisupervised dimension reduction method for these feature vectors. Experiments on the MNIST and USPS handwritten digit data sets, the MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.

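Geodesic distances on a sampled manifold are conventionally approximated by shortest paths over a neighborhood graph, as in Isomap (the paper's semisupervised graph construction refines this baseline). A minimal sketch using Dijkstra's algorithm on a toy chain graph:

```python
import heapq

def geodesic_distances(adj, src):
    """Dijkstra shortest paths over a weighted neighborhood graph, the
    standard approximation of geodesic distance on a sampled manifold.
    adj: {node: [(neighbor, edge_length), ...]}"""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# four samples along a curve: the geodesic (path) distance from node 0
# to node 3 accumulates along the chain rather than cutting across
adj = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)],
       2: [(1, 1.0), (3, 1.0)], 3: [(2, 1.0)]}
print(geodesic_distances(adj, 0))
```

The resulting distance matrix is what geodesic-based methods feed into their downstream embedding or, as in this paper, use as kernel-like feature vectors.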
  • Residual Component Analysis of Hyperspectral Images—Application to Joint Nonlinear Unmixing and Nonlinearity Detection

    Page(s): 2148 - 2158
    PDF (3575 KB) | HTML

    This paper presents a nonlinear mixing model for joint hyperspectral image unmixing and nonlinearity detection. The proposed model assumes that the pixel reflectances are linear combinations of known pure spectral components corrupted by an additional nonlinear term affecting the endmembers and contaminated by additive Gaussian noise. A Markov random field is considered for nonlinearity detection based on the spatial structure of the nonlinear terms. The observed image is segmented into regions where the nonlinear terms, if present, share similar statistical properties. A Bayesian algorithm is proposed to estimate the parameters involved in the model, yielding a joint nonlinear unmixing and nonlinearity detection algorithm. The performance of the proposed strategy is first evaluated on synthetic data. Simulations conducted with real data show the accuracy of the proposed unmixing and nonlinearity detection strategy for the analysis of hyperspectral images.

    Open Access
  • MsLRR: A Unified Multiscale Low-Rank Representation for Image Segmentation

    Page(s): 2159 - 2167
    PDF (1235 KB)

    In this paper, we present an efficient multiscale low-rank representation for image segmentation. Our method begins by partitioning the input images into a set of superpixels, followed by seeking the optimal superpixel-pair affinity matrix, both of which are performed at multiple scales of the input images. Since low-level superpixel features are usually corrupted by image noise, we propose to infer a low-rank refined affinity matrix. The inference is guided by two observations on natural images. First, within a single image, local small-size image patterns tend to recur frequently within the same semantic region, but may not appear in semantically different regions. We refer to this internal image statistic as the replication prior, and we quantitatively justify it on real image databases. Second, the affinity matrices at different scales should be solved consistently, which leads to a cross-scale consistency constraint. We formulate these two purposes in one unified formulation and develop an efficient optimization procedure. The proposed representation can be used for both unsupervised and supervised image segmentation tasks. Our experiments on public data sets demonstrate that the presented method can substantially improve segmentation accuracy.

  • MIMO Radar 3D Imaging Based on Combined Amplitude and Total Variation Cost Function With Sequential Order One Negative Exponential Form

    Page(s): 2168 - 2183
    PDF (12484 KB)

    In inverse synthetic aperture radar (ISAR) imaging, a target is usually regarded as consisting of a few strong (specular) scatterers, and the distribution of these strong scatterers is sparse in the imaging volume. In this paper, we propose to incorporate a sparse signal recovery method into a 3D multiple-input multiple-output (MIMO) radar imaging algorithm. The sequential order one negative exponential (SOONE) function, which forms a homotopy between the ℓ1 and ℓ0 norms, is proposed to measure sparsity. Gradient projection is used to solve a constrained nonconvex SOONE-function minimization problem and recover the sparse signal. However, while the gradient projection method is computationally simple, it is not robust when a matrix in the algorithm is ill-conditioned. We therefore further propose diagonal loading and singular value decomposition methods to improve the robustness of the algorithm. To handle targets with large flat surfaces, a combined amplitude and total-variation objective function is also proposed to regularize the shapes of the flat surfaces. Simulation results show that the proposed gradient projection of SOONE function method is better than orthogonal matching pursuit, CoSaMP, ℓ1-magic, the Bayesian method with Laplace prior, the smoothed ℓ0 method, and ℓ1-ℓs in high-SNR cases for the recovery of ±1 random-spike sparse signals. The quality of the simulated 3D images and real-data ISAR images obtained with the new method is better than that of the conventional correlation method and the minimum ℓ2 norm method, and competitive with the aforementioned sparse signal recovery algorithms.

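The total-variation term in the combined objective above penalizes the sum of absolute differences between neighboring reconstruction values, which is small on the large flat surfaces the regularizer targets. A 1D illustration of discrete anisotropic TV:

```python
def total_variation(x):
    """Anisotropic 1D discrete total variation: the sum of absolute
    first differences. Piecewise-constant (flat) signals score low;
    oscillating signals score high."""
    return sum(abs(b - a) for a, b in zip(x, x[1:]))

flat  = [2.0, 2.0, 2.0, 5.0, 5.0, 5.0]   # one jump: TV = 3
noisy = [2.0, 4.0, 1.0, 5.0, 2.0, 5.0]   # oscillating: TV = 15
print(total_variation(flat), total_variation(noisy))
```

Minimizing this term alongside the amplitude/sparsity term therefore favors reconstructions whose scatterer amplitudes are constant over flat surfaces while still allowing sharp edges between them.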

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.


Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003