IEEE Transactions on Circuits and Systems for Video Technology
- Vol: 22 Issue: 1
- Vol: 22 Issue: 2
- Vol: 22 Issue: 3
- Vol: 22 Issue: 4
- Vol: 22 Issue: 5
- Vol: 22 Issue: 6
- Vol: 22 Issue: 7
- Vol: 22 Issue: 8
- Vol: 22 Issue: 9
- Vol: 22 Issue: 10
- Vol: 22 Issue: 11
- Vol: 22 Issue: 12
- Vol: 21 Issue: 1
- Vol: 21 Issue: 2
- Vol: 21 Issue: 3
- Vol: 21 Issue: 4
- Vol: 21 Issue: 5
- Vol: 21 Issue: 6
- Vol: 21 Issue: 7
- Vol: 21 Issue: 8
- Vol: 21 Issue: 9
- Vol: 21 Issue: 10
- Vol: 21 Issue: 11
- Vol: 21 Issue: 12
- Vol: 20 Issue: 1
- Vol: 20 Issue: 2
- Vol: 20 Issue: 3
- Vol: 20 Issue: 4
- Vol: 20 Issue: 5
- Vol: 20 Issue: 6
- Vol: 20 Issue: 7
- Vol: 20 Issue: 8
- Vol: 20 Issue: 9
- Vol: 20 Issue: 10
- Vol: 20 Issue: 11
- Vol: 20 Issue: 12
- Vol: 19 Issue: 1
- Vol: 19 Issue: 2
- Vol: 19 Issue: 3
- Vol: 19 Issue: 4
- Vol: 19 Issue: 5
- Vol: 19 Issue: 6
- Vol: 19 Issue: 7
- Vol: 19 Issue: 8
- Vol: 19 Issue: 9
- Vol: 19 Issue: 10
- Vol: 19 Issue: 11
- Vol: 19 Issue: 12
- Vol: 8 Issue: 1
- Vol: 8 Issue: 2
- Vol: 8 Issue: 3
- Vol: 8 Issue: 4
- Vol: 8 Issue: 5
- Vol: 8 Issue: 6
- Vol: 8 Issue: 7
- Vol: 8 Issue: 8
- Vol: 18 Issue: 1
- Vol: 18 Issue: 2
- Vol: 18 Issue: 3
- Vol: 18 Issue: 4
- Vol: 18 Issue: 5
- Vol: 18 Issue: 6
- Vol: 18 Issue: 7
- Vol: 18 Issue: 8
- Vol: 18 Issue: 9
- Vol: 18 Issue: 10
- Vol: 18 Issue: 11
- Vol: 18 Issue: 12
- Vol: 17 Issue: 1
- Vol: 17 Issue: 2
- Vol: 17 Issue: 3
- Vol: 17 Issue: 4
- Vol: 17 Issue: 5
- Vol: 17 Issue: 6
- Vol: 17 Issue: 7
- Vol: 17 Issue: 8
- Vol: 17 Issue: 9
- Vol: 17 Issue: 10
- Vol: 17 Issue: 11
- Vol: 17 Issue: 12
- Vol: 16 Issue: 1
- Vol: 16 Issue: 2
- Vol: 16 Issue: 3
- Vol: 16 Issue: 4
- Vol: 16 Issue: 5
- Vol: 16 Issue: 6
- Vol: 16 Issue: 7
- Vol: 16 Issue: 8
- Vol: 16 Issue: 9
- Vol: 16 Issue: 10
- Vol: 16 Issue: 11
- Vol: 16 Issue: 12
- Vol: 15 Issue: 1
- Vol: 15 Issue: 2
- Vol: 15 Issue: 3
- Vol: 15 Issue: 4
- Vol: 15 Issue: 5
- Vol: 15 Issue: 6
- Vol: 15 Issue: 7
- Vol: 15 Issue: 8
- Vol: 15 Issue: 9
- Vol: 15 Issue: 10
- Vol: 15 Issue: 11
- Vol: 15 Issue: 12
- Vol: 14 Issue: 1
- Vol: 14 Issue: 2
- Vol: 14 Issue: 3
- Vol: 14 Issue: 4
- Vol: 14 Issue: 5
- Vol: 14 Issue: 6
- Vol: 14 Issue: 7
- Vol: 14 Issue: 8
- Vol: 14 Issue: 9
- Vol: 14 Issue: 10
- Vol: 14 Issue: 11
- Vol: 14 Issue: 12
- Vol: 13 Issue: 1
- Vol: 13 Issue: 2
- Vol: 13 Issue: 3
- Vol: 13 Issue: 4
- Vol: 13 Issue: 5
- Vol: 13 Issue: 6
- Vol: 13 Issue: 7
- Vol: 13 Issue: 8
- Vol: 13 Issue: 9
- Vol: 13 Issue: 10
- Vol: 13 Issue: 11
- Vol: 13 Issue: 12
- Vol: 12 Issue: 1
- Vol: 12 Issue: 2
- Vol: 12 Issue: 3
- Vol: 12 Issue: 4
- Vol: 12 Issue: 5
- Vol: 12 Issue: 6
- Vol: 12 Issue: 7
- Vol: 12 Issue: 8
- Vol: 12 Issue: 9
- Vol: 12 Issue: 10
- Vol: 12 Issue: 11
- Vol: 12 Issue: 12
- Vol: 9 Issue: 1
- Vol: 9 Issue: 2
- Vol: 9 Issue: 3
- Vol: 9 Issue: 4
- Vol: 9 Issue: 5
- Vol: 9 Issue: 6
- Vol: 9 Issue: 7
- Vol: 9 Issue: 8
- Vol: 11 Issue: 1
- Vol: 11 Issue: 2
- Vol: 11 Issue: 3
- Vol: 11 Issue: 4
- Vol: 11 Issue: 5
- Vol: 11 Issue: 6
- Vol: 11 Issue: 7
- Vol: 11 Issue: 8
- Vol: 11 Issue: 9
- Vol: 11 Issue: 10
- Vol: 11 Issue: 11
- Vol: 11 Issue: 12
- Vol: 10 Issue: 1
- Vol: 10 Issue: 2
- Vol: 10 Issue: 3
- Vol: 10 Issue: 4
- Vol: 10 Issue: 5
- Vol: 10 Issue: 6
- Vol: 10 Issue: 7
- Vol: 10 Issue: 8
- Vol: 28 Issue: 1
- Vol: 28 Issue: 2
- Vol: 28 Issue: 3
- Vol: 28 Issue: 4
- Vol: 28 Issue: 5
- Vol: 28 Issue: 6
- Vol: 28 Issue: 7
- Vol: 28 Issue: 8
- Vol: 28 Issue: 9
- Vol: 28 Issue: 10
- Vol: 28 Issue: 11
- Vol: 28 Issue: 12
- Vol: 27 Issue: 1
- Vol: 27 Issue: 2
- Vol: 27 Issue: 3
- Vol: 27 Issue: 4
- Vol: 27 Issue: 5
- Vol: 27 Issue: 6
- Vol: 27 Issue: 7
- Vol: 27 Issue: 8
- Vol: 27 Issue: 9
- Vol: 27 Issue: 10
- Vol: 27 Issue: 11
- Vol: 27 Issue: 12
- Vol: 26 Issue: 1
- Vol: 26 Issue: 2
- Vol: 26 Issue: 3
- Vol: 26 Issue: 4
- Vol: 26 Issue: 5
- Vol: 26 Issue: 6
- Vol: 26 Issue: 7
- Vol: 26 Issue: 8
- Vol: 26 Issue: 9
- Vol: 26 Issue: 10
- Vol: 26 Issue: 11
- Vol: 26 Issue: 12
- Vol: 25
- Vol: 25 Issue: 1
- Vol: 25 Issue: 2
- Vol: 25 Issue: 3
- Vol: 25 Issue: 4
- Vol: 25 Issue: 5
- Vol: 25 Issue: 6
- Vol: 25 Issue: 7
- Vol: 25 Issue: 8
- Vol: 25 Issue: 9
- Vol: 25 Issue: 10
- Vol: 25 Issue: 11
- Vol: 25 Issue: 12
Volume 28 Issue 12 • Dec. 2018
-
IEEE Transactions on Circuits and Systems for Video Technology publication information
-
Image Denoising via Low Rank Regularization Exploiting Intra and Inter Patch Correlation
Publication Year: 2018, Page(s):3321 - 3332
Cited by: Papers (1)
In image restoration tasks, image priors generally utilize correlation within image contents to predict the latent image signal. In this paper, we propose to jointly exploit both intra- and inter-patch correlation of the input image, so as to further reduce the uncertainty of the unknown signal, and thus improve the prediction of the latent image. The proposed scheme evolves from the low-rank regu...
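As background for this entry, a minimal NumPy sketch of singular value thresholding, the generic low-rank proximal step that most patch-based low-rank regularizers build on; this illustrates the general technique only, not the intra/inter-patch method proposed in the paper, and it assumes similar patches have already been stacked as columns of a matrix.
```python
import numpy as np

def svt(patch_matrix, tau):
    """Singular value thresholding: the basic low-rank proximal step.

    patch_matrix: 2-D array whose columns are vectorized similar patches.
    tau: soft-threshold applied to the singular values.
    """
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # shrink small singular values to zero
    return (U * s_shrunk) @ Vt            # low-rank estimate of the patch group

# toy usage: a rank-1 patch group corrupted by noise
rng = np.random.default_rng(0)
clean = np.outer(rng.standard_normal(64), rng.standard_normal(32))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = svt(noisy, tau=1.0)
```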
-
Shape-Preserving Object Depth Control for Stereoscopic Images
Publication Year: 2018, Page(s):3333 - 3344
Cited by: Papers (2)
In the field of 3-D technology, it is an interesting and meaningful problem to control object depth in 3-D space. Recently, some depth control methods for stereoscopic images have been proposed, which usually employ a depth map or directly process color images to implement depth control. These methods have two main disadvantages. First, their results usually suffer from obj...
-
Learning Parts-Based and Global Representation for Image Classification
Publication Year: 2018, Page(s):3345 - 3360
Nonnegative matrix factorization (NMF), a well-known matrix factorization technique, has been widely used in pattern recognition and computer vision. NMF represents the input data matrix as a product of two nonnegative factors. As NMF is based on the Euclidean distance, which is sensitive to noise or errors in the data, several robust NMF methods have been proposed. Mainly focusing on parts-based repr...
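For orientation, a minimal sketch of plain Frobenius-norm NMF with Lee-Seung multiplicative updates, i.e., the baseline factorization V ≈ WH that the abstract starts from; the robust, parts-based-plus-global model proposed in the paper is not reproduced here.
```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-10):
    """Plain Frobenius-norm NMF via Lee-Seung multiplicative updates: V ~ W @ H."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis (the "parts")
    return W, H

# toy usage: factor a random nonnegative matrix into 5 parts
V = np.abs(np.random.default_rng(1).random((100, 40)))
W, H = nmf(V, rank=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative reconstruction error
```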
-
MSMCT: Multi-State Multi-Camera Tracker
Publication Year: 2018, Page(s):3361 - 3376
Visual tracking of multiple persons simultaneously is an important tool for group behaviour analysis. In this paper, we demonstrate that multi-target tracking in a network of non-overlapping cameras can be formulated in a framework, where the association among all given target hypotheses both within and between cameras is performed simultaneously. Our approach helps to overcome the fragility of mu...
-
Once for All: A Two-Flow Convolutional Neural Network for Visual Tracking
Publication Year: 2018, Page(s):3377 - 3386
Cited by: Papers (1)
The main challenges of visual object tracking arise from the arbitrary appearance of the objects that need to be tracked. Most existing algorithms try to solve this problem by training a new model to regenerate or classify each tracked object. As a result, the model needs to be initialized and retrained for each new object. In this paper, we propose to track different objects in an object-independ...
-
Rate-Distortion Optimized Sparse Coding With Ordered Dictionary for Image Set Compression
Publication Year: 2018, Page(s):3387 - 3397
Cited by: Papers (3)
Image set compression has recently emerged as an active research topic due to the rapidly increasing demand for cloud storage. In this paper, we propose a novel framework for image set compression based on rate-distortion optimized sparse coding. Specifically, given a set of similar images, one representative image is first identified according to the similarity among these images, and a dictio...
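As a generic illustration of the sparse-coding building block mentioned in the abstract (not the paper's rate-distortion optimized scheme or its ordered dictionary), a small sketch that codes one image block over a redundant dictionary with orthogonal matching pursuit; scikit-learn is assumed to be available, and the dictionary here is random rather than learned from representative images.
```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# toy dictionary: 64-D atoms (e.g., vectorized 8x8 candidate blocks), unit-normalized
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0, keepdims=True)

# block to represent: a sparse combination of a few atoms plus noise
truth = np.zeros(256)
truth[[3, 70, 201]] = [1.5, -0.8, 0.6]
block = D @ truth + 0.01 * rng.standard_normal(64)

# sparse coding: approximate the block with at most 5 atoms
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
omp.fit(D, block)
print(np.flatnonzero(omp.coef_))   # indices of the selected atoms
```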
-
Fast Integer Motion Estimation With Bottom-Up Motion Vector Prediction for an HEVC Encoder
Publication Year: 2018, Page(s):3398 - 3411
Cited by: Papers (1)
Although advanced motion vector prediction (AMVP) modes based on motion estimation (ME) are selected significantly less often due to the merge mode newly adopted in high-efficiency video coding (HEVC), integer ME (IME) still occupies a large amount of computation in HEVC because HEVC supports a highly flexible block partitioning structure. The introduction of the merge mode in HEVC substantially aff...
-
Improved Efficiency on Adaptive Arithmetic Coding for Data Compression Using Range-Adjusting Scheme, Increasingly Adjusting Step, and Mutual-Learning Scheme
Publication Year: 2018, Page(s):3412 - 3423
Context-based adaptive arithmetic coding (CAAC) has high coding efficiency and is adopted by the majority of advanced compression algorithms. In this paper, five new techniques are proposed to further improve the performance of CAAC. They make the frequency table (the table used to estimate the probability distribution of data according to the past input) of CAAC converge to the true probability d...
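For context, a minimal sketch of the kind of adaptive frequency table the abstract refers to, i.e., a count-based symbol model whose probability estimates gradually converge toward the source statistics; the paper's five proposed techniques (range adjusting, increasingly adjusting step, mutual learning, and so on) are not reproduced here.
```python
class AdaptiveFrequencyTable:
    """Count-based symbol model of the kind an adaptive arithmetic coder consults.

    Probabilities are re-estimated after every coded symbol, so the model
    adapts toward the true distribution of the past input.
    """
    def __init__(self, alphabet_size, rescale_limit=1 << 14):
        self.counts = [1] * alphabet_size        # start uniform, no zero probabilities
        self.total = alphabet_size
        self.rescale_limit = rescale_limit

    def probability(self, symbol):
        return self.counts[symbol] / self.total

    def update(self, symbol):
        self.counts[symbol] += 1
        self.total += 1
        if self.total >= self.rescale_limit:     # keep counts bounded, favor recent data
            self.counts = [(c + 1) // 2 for c in self.counts]
            self.total = sum(self.counts)

# toy usage: the model adapts toward a biased binary source
model = AdaptiveFrequencyTable(alphabet_size=2)
for bit in [1, 1, 0, 1, 1, 1, 0, 1]:
    model.update(bit)
print(round(model.probability(1), 3))
```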
-
Adaptive Quantization Parameter Selection For H.265/HEVC by Employing Inter-Frame Dependency
Publication Year: 2018, Page(s):3424 - 3436
Cited by: Papers (2)
Rate-distortion optimization (RDO), which aims at minimizing the coding distortion at a target bitrate, is widely applied in video coding. Conventionally, RDO is performed independently on each individual frame to avoid high computational complexity. However, extensive use of temporal/spatial predictions results in strong coding dependencies among neighboring frames, which make the current RDO be no...
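As background, the conventional per-block RDO that the abstract contrasts against picks the coding mode minimizing the Lagrangian cost J = D + λR. A toy sketch of that baseline decision follows; the mode names and numbers are made up for illustration and are not from the paper.
```python
def select_mode(candidates, lam):
    """Pick the coding mode with the smallest Lagrangian cost J = D + lambda * R.

    candidates: iterable of (mode_name, distortion, rate_in_bits) tuples.
    lam: Lagrange multiplier trading distortion against rate (tied to QP in practice).
    """
    return min(candidates, key=lambda c: c[1] + lam * c[2])

# toy usage: three hypothetical modes for one block
modes = [("merge", 120.0, 6), ("inter_2Nx2N", 95.0, 38), ("intra", 80.0, 70)]
print(select_mode(modes, lam=1.2))   # "merge" wins at this lambda
```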
-
Object Shape Approximation and Contour Adaptive Depth Image Coding for Virtual View Synthesis
Publication Year: 2018, Page(s):3437 - 3451
Cited by: Papers (1)
A depth image provides partial geometric information of a 3D scene, namely the shapes of physical objects as observed from a particular viewpoint. This information is important when synthesizing images of different virtual camera viewpoints via depth-image-based rendering (DIBR). It has been shown that depth images can be efficiently coded using contour-adaptive codecs that preserve edge sharpness...
-
Efficient H.264-to-HEVC Transcoding Based on Motion Propagation and Post-Order Traversal of Coding Tree Units
Publication Year: 2018, Page(s):3452 - 3466
In this paper, we propose a fast H.264-to-HEVC transcoder composed of a motion propagation algorithm and a fast mode decision framework. The motion propagation algorithm creates a motion vector candidate list at the coding tree unit (CTU) level and, thereafter, selects the best candidate at the prediction unit level. This method eliminates computational redundancy by pre-computing the prediction e...
-
Pipelines for HDR Video Coding Based on Luminance Independent Chromaticity Preprocessing
Publication Year: 2018, Page(s):3467 - 3477
We consider the chromaticity in high dynamic range (HDR) video coding and show the advantages of a constant luminance color space for encoding. For this, we introduce two constant luminance HDR video coding pipelines, which convert the source video to linear Yu'v'. A content-dependent scaling of the chromaticity components serves as a color quality parameter. This reduces perceivable color artifacts...
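For reference, a small sketch of the standard conversion from linear RGB to luminance Y and CIE 1976 u'v' chromaticity, assuming BT.709 primaries and a D65 white point; this shows only the color-space step implied by the abstract, not the paper's full coding pipeline or its content-dependent chromaticity scaling.
```python
import numpy as np

# BT.709 / sRGB primaries, D65 white point: linear RGB -> CIE XYZ
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def linear_rgb_to_Yuv_prime(rgb, eps=1e-12):
    """Map linear RGB to luminance Y and CIE 1976 u'v' chromaticity."""
    xyz = rgb @ RGB_TO_XYZ.T
    X, Y, Z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    denom = X + 15.0 * Y + 3.0 * Z + eps
    return Y, 4.0 * X / denom, 9.0 * Y / denom

# toy usage: D65 white should land near u' = 0.1978, v' = 0.4683
Y, u, v = linear_rgb_to_Yuv_prime(np.array([1.0, 1.0, 1.0]))
print(round(float(u), 4), round(float(v), 4))
```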
-
A Novel Video Coding Framework Using a Self-Adaptive Dictionary
Publication Year: 2018, Page(s):3478 - 3491
In this paper, we propose to use a self-adaptive redundant dictionary, consisting of all possible inter and intra prediction candidates, to directly represent the frame blocks in a video sequence. The self-adaptive dictionary generalizes the conventional predictive coding approach by allowing adaptive linear combinations of prediction candidates, which is solved by a rate-distortion-aware L0-norm...
-
Replicating Coded Content in Crowdsourcing-Based CDN Systems
Publication Year: 2018, Page(s):3492 - 3503
Recently, crowdsourcing-based content delivery networks (CDNs) have emerged as a promising technology that can distribute massive video content to a vast number of Internet users by crawling bandwidth and storage resources from Internet end devices. Any ordinary Internet user with excess resources can be recruited into such systems as a mini-server. Different from edge servers equipped with dedicated ...
-
Distribution Sensitive Product Quantization
Publication Year: 2018, Page(s):3504 - 3515
Product quantization (PQ) seems to have become the most efficient framework for performing approximate nearest neighbor (ANN) search for high-dimensional data. However, almost all existing PQ-based ANN techniques uniformly allocate the precious bit budget to each subspace. This is not optimal, because data are often not evenly distributed among different subspaces. A better strategy is to achieve an im...
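As a baseline illustration (the uniform bit-allocation PQ that the abstract argues is suboptimal, not the distribution-sensitive method proposed here), a short sketch of standard product quantization with one 256-entry codebook, i.e., 8 bits, per subspace; scikit-learn's KMeans is assumed to be available.
```python
import numpy as np
from sklearn.cluster import KMeans

def train_pq(X, n_subspaces=4, n_centroids=256):
    """Standard product quantization: split vectors into subspaces, k-means each."""
    sub_dim = X.shape[1] // n_subspaces
    codebooks = []
    for i in range(n_subspaces):
        sub = X[:, i * sub_dim:(i + 1) * sub_dim]
        codebooks.append(KMeans(n_clusters=n_centroids, n_init=4, random_state=0).fit(sub))
    return codebooks

def encode_pq(X, codebooks):
    """Each vector becomes one centroid index per subspace (here: 8 bits each)."""
    sub_dim = X.shape[1] // len(codebooks)
    codes = [cb.predict(X[:, i * sub_dim:(i + 1) * sub_dim])
             for i, cb in enumerate(codebooks)]
    return np.stack(codes, axis=1).astype(np.uint8)

# toy usage: 128-D vectors compressed to 4 bytes per vector
X = np.random.default_rng(0).standard_normal((2000, 128)).astype(np.float32)
codebooks = train_pq(X, n_subspaces=4, n_centroids=256)
codes = encode_pq(X, codebooks)
print(codes.shape)   # (2000, 4)
```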
-
Threshold-Guided Design and Optimization for Harris Corner Detector Architecture
Publication Year: 2018, Page(s):3516 - 3526
Cited by: Papers (1)
High-speed corner detection is an essential step in many real-time computer vision applications, e.g., object recognition, motion analysis, and stereo matching. Hardware implementation of corner detection algorithms, such as the Harris corner detector (HCD), has become a viable solution for meeting the real-time requirements of these applications. A major challenge lies in the design of power, energy and...
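For context, a plain software reference of the classic Harris response R = det(M) - k * trace(M)^2 computed from a Gaussian-smoothed structure tensor; this illustrates the algorithm being accelerated, not the hardware architecture or threshold-guided optimization proposed in the paper. SciPy is assumed to be available.
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(gray, sigma=1.5, k=0.04):
    """Classic Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    Iy, Ix = np.gradient(gray.astype(np.float64))
    # structure tensor entries, smoothed over a local window
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# toy usage: a white square on black yields strong responses at its corners
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
R = harris_response(img)
corners = np.argwhere(R > 0.1 * R.max())
print(len(corners))
```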
-
Screen Orientation Aware DRAM Architecture for Mobile Video and Graphic Applications
Publication Year: 2018, Page(s):3527 - 3538
State-of-the-art mobile image and graphic applications demand not only a lot of computing power, but also high-quality memory services. Moreover, depending on the screen orientations of mobile systems, image and graphic data can be accessed in a complicated manner. Since conventional dynamic random access memories (DRAMs) do not provide successive image and graphic pixels on the same column ad...
-
Linear Disentangled Representation Learning for Facial Actions
Publication Year: 2018, Page(s):3539 - 3544
The limited annotated data available for the recognition of facial expressions, and particularly of action units, makes it hard to train a deep network which can learn disentangled invariant features. However, a supervised linear model is undemanding in terms of training data. In this paper, we propose an elegant linear model to untangle facial actions from expressive face videos which contain a mixture of ...
-
A New Distortion Function Design for JPEG Steganography Using the Generalized Uniform Embedding Strategy
Publication Year: 2018, Page(s):3545 - 3549
Nowadays, the prevailing approach to steganography is the minimal embedding distortion framework, which includes an optimizable distortion function for each cover element and an encoding method to minimize the distortion. With the emergence of syndrome-trellis codes, the distortion function plays an increasingly important role in modern adaptive image steganography. In this letter, a new disto...
-
Special Issue on Large-scale Visual Sensor Networks: Architectures and Applications
-
IEEE Transactions on Circuits and Systems for Video Technology information for authors
-
2018 Index IEEE Transactions on Circuits and Systems for Video Technology Vol. 28
-
IEEE Transactions on Circuits and Systems for Video Technology publication information
Aims & Scope
IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) covers the circuits and systems aspects of all video technologies. General, theoretical, and application-oriented papers with a circuits and systems perspective are encouraged for publication in TCSVT on or related to image/video acquisition, representation, presentation and display; processing, filtering and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication and networking; storage, retrieval, indexing and search; and/or hardware and software design and implementation.
Meet Our Editors
Editor-in-Chief
Shipeng Li
iFLYTEK Co. Ltd.
No. 666 West Wangjiang Road
Hi-Tech Zone, Hefei, China 230088
Peer Review Support Services
Desiree Noel
IEEE Publishing Operations
d.noel@ieee.org
732-562-2644