IEEE Transactions on Image Processing - new TOC
http://ieeexplore.ieee.org
TOC Alert for Publication #83, 23 June 2016

Contents:

- PiCode: A New Picture-Embedding 2D Barcode
- Super-Resolution of Dynamic Scenes Using Sampling Rate Diversity
- A Unified Framework for Salient Structure Detection by Contour-Guided Visual Search
- Merge Frame Design for Video Stream Switching Using Piecewise Constant Functions
- Image Zoom Completion
- Learning Discriminatively Reconstructed Source Data for Object Recognition With Few Examples
- Iterative Refinement of Possibility Distributions by Learning for Pixel-Based Classification
- Facial Sketch Synthesis Using 2D Direct Combined Model-Based Face-Specific Markov Network
- A Non-Local Low-Rank Approach to Enforce Integrability
- Online Deformable Object Tracking Based on Structure-Aware Hyper-Graph
- Inter-Layer Prediction of Color in High Dynamic Range Image Scalable Compression
- Contour Completion Without Region Segmentation
- Fast Multispectral Imaging by Spatial Pixel-Binning and Spectral Unmixing
  Abstract excerpt: … or more improvement in computational efficiency.
- Dual Diversified Dynamical Gaussian Process Latent Variable Model for Video Repairing
  Abstract: This paper proposes a dual diversified dynamical Gaussian process latent variable model (D2GPLVM) to tackle the video repairing problem. For preservation purposes, videos have to be stored on media such as film or hard disks, which can suffer unexpected data loss, for instance through physical damage. Repairing missing or damaged pixels is therefore essential for video maintenance. Most methods fill in missing holes by synthesizing similar textures from local patches (the neighboring pixels), consecutive frames, or the whole video.
However, these approaches can introduce incorrect context, especially when the missing hole or the number of damaged frames is large, and simple texture synthesis can introduce artifacts in both undamaged and recovered areas. To address these problems, we introduce two diversity-encouraging priors, on the inducing points and on the latent variables, to account for the variety present in existing videos. In a GPLVM, the inducing points form a small subset of the observed data, while the latent variables are a low-dimensional representation of the observed data. Because both are strongly correlated with the observed data, it is essential that they capture distinct aspects of it and represent it fully. The dual diversity-encouraging priors ensure that the learned inducing points and latent variables are more diverse and robust, enabling context-aware, artifact-free video repairing. The resulting objective function is not analytically tractable and is instead solved by variational inference.
Finally, experimental results illustrate the robustness and effectiveness of our method for repairing damaged videos.
- Correction to "Filtering Chromatic Aberration for Wide Acceptance Angle Electrostatic Lenses II—Experimental Evaluation and Software-Based Imaging Energy Analyzer"
  Note excerpt: … [1], the following support information should have been included in the first footnote.
- Multi-View Object Extraction With Fractional Boundaries
- Effective Decompression of JPEG Document Images
- Improving Intra Prediction in High-Efficiency Video Coding
- Fast Single Image Super-Resolution Using a New Analytical Solution for $\ell_2$–$\ell_2$ Problems
  Abstract excerpt: … an $\ell_2$-regularized quadratic model, i.e., an $\ell_2$–$\ell_2$ optimization problem. The flexibility of the proposed SR scheme is shown through the use of various priors/regularizations, ranging from generic image priors to learning-based approaches. For non-Gaussian priors, we show how the analytical solution derived for the Gaussian case can be embedded into traditional splitting frameworks, significantly decreasing the computational cost of existing algorithms. Simulation results on several images with different priors illustrate the effectiveness of our fast SR approach compared with existing techniques.
- Connected Component Model for Multi-Object Tracking
- Noise Power Spectrum Measurements in Digital Imaging With Gain Nonuniformity Correction
  Abstract excerpt: … $n$ for averaging can reduce the remaining photon noise in the gain map and yield accurate NPS values. However, for practical finite $n$, the photon noise also significantly inflates the NPS.
  In this paper, a nonuniform-gain image formation model is proposed and the performance of the gain correction is analyzed theoretically in terms of the signal-to-noise ratio (SNR), which is shown to be $O(\sqrt{n})$. An NPS measurement algorithm based on the gain map is then proposed for any given $n$. Under a weak nonuniform-gain assumption, a second measurement algorithm based on image differences is also proposed. On real radiography image detectors, the proposed algorithms are compared with traditional detrending and subtraction methods, and it is shown that as few as two images ($n=1$) can provide an accurate NPS thanks to the compensation constant $(1+1/n)$.
- FRESH—FRI-Based Single-Image Super-Resolution Algorithm
- Joint Segmentation and Deconvolution of Ultrasound Images Using a Hierarchical Bayesian Model Based on Generalized Gaussian Priors
  Abstract excerpt: … in vivo US images.
- Modeling the Quality of Videos Displayed With Local Dimming Backlight at Different Peak White and Ambient Light Levels
- Understanding Deep Representations Learned in Modeling Users Likes
  Abstract excerpt: … $\sim$15%–20%. Beyond this test-set performance, an attempt is made to qualitatively understand the representations learned by the deep architecture used to model user likes.
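The diversity-encouraging priors described in the D2GPLVM abstract above are, in related work, often realized as determinantal-style log-determinant penalties over the inducing points: the determinant of a kernel matrix grows as the points spread apart, so maximizing it favors diverse subsets. The sketch below illustrates only that general idea; the RBF kernel, the lengthscale, and the log-det form are illustrative assumptions, not the authors' exact prior.

```python
import numpy as np

def diversity_log_prior(points, lengthscale=1.0):
    """Log of a determinantal-style diversity prior over inducing points:
    log det of an RBF kernel matrix. Nearly coincident points make the
    matrix close to singular (very negative log det), while well-spread
    points push it toward the identity (log det near zero)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-0.5 * d2 / lengthscale ** 2)
    # Small jitter keeps slogdet finite for duplicated points.
    sign, logdet = np.linalg.slogdet(K + 1e-10 * np.eye(len(points)))
    return logdet
```

In a variational scheme like the one the abstract mentions, such a term would simply be added to the evidence lower bound being maximized.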
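The $\ell_2$–$\ell_2$ analytical solution highlighted in the fast SR abstract can be illustrated in a simplified setting: with a circulant (periodic) blur and no decimation, the quadratic problem $\min_x \|y - h \ast x\|^2 + \lambda \|x - \bar{x}\|^2$ diagonalizes under the 2D FFT and is solved independently per frequency. This is a generic Tikhonov/Wiener-style sketch under those simplifying assumptions, not the paper's solver, which also handles the decimation operator.

```python
import numpy as np

def l2_l2_closed_form(y, h, x_bar, lam):
    """Closed-form minimizer of ||y - h*x||^2 + lam*||x - x_bar||^2,
    where * is circular 2D convolution. In the Fourier domain each
    frequency decouples: X = (conj(H)*Y + lam*Xb) / (|H|^2 + lam)."""
    H = np.fft.fft2(h, s=y.shape)   # transfer function of the blur kernel
    Y = np.fft.fft2(y)
    Xb = np.fft.fft2(x_bar)
    X = (np.conj(H) * Y + lam * Xb) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

The prior mean `x_bar` stands in for the generic or learned priors the abstract mentions; in a splitting framework it would be the auxiliary variable updated at each iteration.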
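The subtraction-style NPS measurement referenced in the gain-nonuniformity abstract can be sketched generically: differencing two flat-field frames cancels fixed-pattern (gain) structure, and the periodogram of the variance-matched difference estimates the NPS. The function names and the single-ROI periodogram below are assumptions for illustration; the authors' gain-map algorithm is more involved, but the $(1+1/n)$ compensation is the plain division shown.

```python
import numpy as np

def nps_from_difference(img_a, img_b, pixel_pitch=1.0):
    """Periodogram NPS estimate from two flat-field images. Differencing
    cancels fixed-pattern (gain) structure; dividing by sqrt(2) restores
    the noise variance of a single frame."""
    diff = (img_a - img_b) / np.sqrt(2.0)
    diff = diff - diff.mean()               # remove residual DC offset
    ny, nx = diff.shape
    return (np.abs(np.fft.fft2(diff)) ** 2) * (pixel_pitch ** 2) / (nx * ny)

def compensate_gain_noise(nps, n):
    """Divide out the (1 + 1/n) inflation left by residual photon noise
    when the gain map is averaged from n flat-field frames."""
    return nps / (1.0 + 1.0 / n)
```

For white noise the mean of this periodogram equals the pixel variance (Parseval), which gives a quick sanity check on any implementation.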