IEEE Transactions on Circuits and Systems for Video Technology - New Table of Contents
http://ieeexplore.ieee.org
TOC Alert for Publication #76, 17 May 2018
Volume 28, Issue 5

Table of contents (pp. C1–C4)

IEEE Transactions on Circuits and Systems for Video Technology publication information (p. C2)

Blind Dual Watermarking for Color Images’ Authentication and Copyright Protection (pp. 1047–1055)

Collaborative Visual Cryptography Schemes (pp. 1056–1070)
  A ($k$, $n$)-conventional visual cryptography (VC) scheme is designed to share one secret, and each participant takes one share. When some common participants are involved in multiple VC schemes for multiple secrets, each needs to take multiple shares. This requires more shares, which is inconvenient. It is desirable that collaboration between the VC schemes allow each common participant to keep only one share. Simply merging or gluing together two traditional ($k_{1}$, $n_{1}$)- and ($k_{2}$, $n_{2}$)-VC schemes, after making their pixel expansions the same, might facilitate the collaboration and allow each common participant to keep only one share. But there is a security risk: when a subset of $k_{1}$ participants is drawn from the noncommon participants, some from scheme 1 and some from scheme 2, they can reconstruct secret 1, which is inconsistent with the intention of the original scheme. Similarly, $k_{2}$ noncommon participants could reconstruct secret 2. This shortcoming is inherited from the brute-force combination of traditional schemes; a more sophisticated mechanism is therefore required, which is the main task of this paper. We first transform collaborative VC (CVC) schemes into a multiple-secret VC scheme with a general access structure. The construction of the basis matrices in a CVC scheme between two VC schemes is formulated as an integer linear programming problem that minimizes the pixel expansion under the corresponding security and contrast constraints. Collaboration among more VC schemes is also constructed. Finally, experimental results illustrate the construction procedure of the CVC scheme and demonstrate its effectiveness.

Isophote-Constrained Autoregressive Model With Adaptive Window Extension for Image Interpolation (pp. 1071–1086)

Fast Volume Seam Carving With Multipass Dynamic Programming (pp. 1087–1101)

Toward Always-On Mobile Object Detection: Energy Versus Performance Tradeoffs for Embedded HOG Feature Extraction (pp. 1102–1115)
  ... a $19\times$ reduction in I/O energy and a $3.3\times$ reduction in back-end detection energy compared with conventional object detection pipelines.

Salient Region Detection via Discriminative Dictionary Learning and Joint Bayesian Inference (pp. 1116–1129)

Dense and Sparse Labeling With Multidimensional Features for Saliency Detection (pp. 1130–1143)

Deep Recurrent Regression for Facial Landmark Detection (pp. 1144–1157)

Robust Stereoscopic Crosstalk Prediction (pp. 1158–1168)
  ... ($V_{\mathrm{dispc}}$ and $V_{\mathrm{dlogc}}$). Metric $V_{\mathrm{dispc}}$ considers the effect of the disparity map and the color difference map, while $V_{\mathrm{dlogc}}$ addresses the influence of the color contrast map. The prediction performance is evaluated using various types of stereoscopic crosstalk images. By incorporating $V_{\mathrm{dispc}}$ and $V_{\mathrm{dlogc}}$, the new metric $V_{\mathrm{pdlc}}$ is proposed to achieve a higher correlation with the perceived subjective crosstalk scores. Experimental results show that the new metrics achieve better performance than previous methods, which indicates that color information is a key factor in crosstalk visibility prediction. Furthermore, we construct a new data set to evaluate our new metrics.

Fast Hash-Based Inter-Block Matching for Screen Content Coding (pp. 1169–1182)

Access Types Effect on Internet Video Services and Its Implications on CDN Caching (pp. 1183–1196)

Optimizing the Detection Performance of Smart Camera Networks Through a Probabilistic Image-Based Model (pp. 1197–1211)

A Survey of Content-Aware Video Analysis for Sports (pp. 1212–1231)

3D Feature Constrained Reconstruction for Low-Dose CT Imaging (pp. 1232–1247)

EgoSampling: Wide View Hyperlapse From Egocentric Videos (pp. 1248–1259)

Introducing IEEE Collabratec (p. 1260)

Learning has no boundaries (p. 1261)

IEEE Global History Network (p. 1262)

IEEE Transactions on Circuits and Systems for Video Technology publication information (p. C3)
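As background for the Collaborative Visual Cryptography Schemes abstract above: a conventional ($k$, $n$)-VC scheme splits one secret into $n$ shares so that any $k$ of them reconstruct it by physical stacking. The simplest (2, 2) case with pixel expansion 2 can be sketched as below; this illustrates ordinary VC only, not the paper's CVC construction or its integer-linear-programming basis matrices, and all function names here are hypothetical.

```python
import random

# Basis subpixel patterns for a (2, 2)-VC scheme with pixel expansion 2:
# every secret pixel becomes two subpixels in each share (1 = black, 0 = white).
PATTERNS = [(0, 1), (1, 0)]

def make_shares(secret):
    """Split a binary secret image (rows of 0/1) into two random-looking shares."""
    share1, share2 = [], []
    for row in secret:
        r1, r2 = [], []
        for pixel in row:
            p = random.choice(PATTERNS)
            r1.extend(p)
            # White secret pixel: identical patterns (stack shows 1 black subpixel).
            # Black secret pixel: complementary patterns (stack is fully black).
            q = p if pixel == 0 else tuple(1 - s for s in p)
            r2.extend(q)
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(share1, share2):
    """Stacking transparencies acts as a pixelwise OR of the shares."""
    return [[a | b for a, b in zip(r1, r2)]
            for r1, r2 in zip(share1, share2)]
```

Each share alone is a uniformly random pattern (the security constraint); the stack reconstructs the secret with halved contrast (the contrast constraint), which is exactly the pair of constraints the paper's ILP formulation enforces while minimizing pixel expansion.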