IEEE Transactions on Circuits and Systems for Video Technology
- Vol: 22 Issue: 1
- Vol: 22 Issue: 2
- Vol: 22 Issue: 3
- Vol: 22 Issue: 4
- Vol: 22 Issue: 5
- Vol: 22 Issue: 6
- Vol: 22 Issue: 7
- Vol: 22 Issue: 8
- Vol: 22 Issue: 9
- Vol: 22 Issue: 10
- Vol: 22 Issue: 11
- Vol: 22 Issue: 12
- Vol: 21 Issue: 1
- Vol: 21 Issue: 2
- Vol: 21 Issue: 3
- Vol: 21 Issue: 4
- Vol: 21 Issue: 5
- Vol: 21 Issue: 6
- Vol: 21 Issue: 7
- Vol: 21 Issue: 8
- Vol: 21 Issue: 9
- Vol: 21 Issue: 10
- Vol: 21 Issue: 11
- Vol: 21 Issue: 12
- Vol: 20 Issue: 1
- Vol: 20 Issue: 2
- Vol: 20 Issue: 3
- Vol: 20 Issue: 4
- Vol: 20 Issue: 5
- Vol: 20 Issue: 6
- Vol: 20 Issue: 7
- Vol: 20 Issue: 8
- Vol: 20 Issue: 9
- Vol: 20 Issue: 10
- Vol: 20 Issue: 11
- Vol: 20 Issue: 12
- Vol: 19 Issue: 1
- Vol: 19 Issue: 2
- Vol: 19 Issue: 3
- Vol: 19 Issue: 4
- Vol: 19 Issue: 5
- Vol: 19 Issue: 6
- Vol: 19 Issue: 7
- Vol: 19 Issue: 8
- Vol: 19 Issue: 9
- Vol: 19 Issue: 10
- Vol: 19 Issue: 11
- Vol: 19 Issue: 12
- Vol: 8 Issue: 1
- Vol: 8 Issue: 2
- Vol: 8 Issue: 3
- Vol: 8 Issue: 4
- Vol: 8 Issue: 5
- Vol: 8 Issue: 6
- Vol: 8 Issue: 7
- Vol: 8 Issue: 8
- Vol: 18 Issue: 1
- Vol: 18 Issue: 2
- Vol: 18 Issue: 3
- Vol: 18 Issue: 4
- Vol: 18 Issue: 5
- Vol: 18 Issue: 6
- Vol: 18 Issue: 7
- Vol: 18 Issue: 8
- Vol: 18 Issue: 9
- Vol: 18 Issue: 10
- Vol: 18 Issue: 11
- Vol: 18 Issue: 12
- Vol: 17 Issue: 1
- Vol: 17 Issue: 2
- Vol: 17 Issue: 3
- Vol: 17 Issue: 4
- Vol: 17 Issue: 5
- Vol: 17 Issue: 6
- Vol: 17 Issue: 7
- Vol: 17 Issue: 8
- Vol: 17 Issue: 9
- Vol: 17 Issue: 10
- Vol: 17 Issue: 11
- Vol: 17 Issue: 12
- Vol: 16 Issue: 1
- Vol: 16 Issue: 2
- Vol: 16 Issue: 3
- Vol: 16 Issue: 4
- Vol: 16 Issue: 5
- Vol: 16 Issue: 6
- Vol: 16 Issue: 7
- Vol: 16 Issue: 8
- Vol: 16 Issue: 9
- Vol: 16 Issue: 10
- Vol: 16 Issue: 11
- Vol: 16 Issue: 12
- Vol: 15 Issue: 1
- Vol: 15 Issue: 2
- Vol: 15 Issue: 3
- Vol: 15 Issue: 4
- Vol: 15 Issue: 5
- Vol: 15 Issue: 6
- Vol: 15 Issue: 7
- Vol: 15 Issue: 8
- Vol: 15 Issue: 9
- Vol: 15 Issue: 10
- Vol: 15 Issue: 11
- Vol: 15 Issue: 12
- Vol: 14 Issue: 1
- Vol: 14 Issue: 2
- Vol: 14 Issue: 3
- Vol: 14 Issue: 4
- Vol: 14 Issue: 5
- Vol: 14 Issue: 6
- Vol: 14 Issue: 7
- Vol: 14 Issue: 8
- Vol: 14 Issue: 9
- Vol: 14 Issue: 10
- Vol: 14 Issue: 11
- Vol: 14 Issue: 12
- Vol: 13 Issue: 1
- Vol: 13 Issue: 2
- Vol: 13 Issue: 3
- Vol: 13 Issue: 4
- Vol: 13 Issue: 5
- Vol: 13 Issue: 6
- Vol: 13 Issue: 7
- Vol: 13 Issue: 8
- Vol: 13 Issue: 9
- Vol: 13 Issue: 10
- Vol: 13 Issue: 11
- Vol: 13 Issue: 12
- Vol: 12 Issue: 1
- Vol: 12 Issue: 2
- Vol: 12 Issue: 3
- Vol: 12 Issue: 4
- Vol: 12 Issue: 5
- Vol: 12 Issue: 6
- Vol: 12 Issue: 7
- Vol: 12 Issue: 8
- Vol: 12 Issue: 9
- Vol: 12 Issue: 10
- Vol: 12 Issue: 11
- Vol: 12 Issue: 12
- Vol: 9 Issue: 1
- Vol: 9 Issue: 2
- Vol: 9 Issue: 3
- Vol: 9 Issue: 4
- Vol: 9 Issue: 5
- Vol: 9 Issue: 6
- Vol: 9 Issue: 7
- Vol: 9 Issue: 8
- Vol: 11 Issue: 1
- Vol: 11 Issue: 2
- Vol: 11 Issue: 3
- Vol: 11 Issue: 4
- Vol: 11 Issue: 5
- Vol: 11 Issue: 6
- Vol: 11 Issue: 7
- Vol: 11 Issue: 8
- Vol: 11 Issue: 9
- Vol: 11 Issue: 10
- Vol: 11 Issue: 11
- Vol: 11 Issue: 12
- Vol: 10 Issue: 1
- Vol: 10 Issue: 2
- Vol: 10 Issue: 3
- Vol: 10 Issue: 4
- Vol: 10 Issue: 5
- Vol: 10 Issue: 6
- Vol: 10 Issue: 7
- Vol: 10 Issue: 8
- Vol: 28 Issue: 1
- Vol: 28 Issue: 2
- Vol: 28 Issue: 3
- Vol: 28 Issue: 4
- Vol: 28 Issue: 5
- Vol: 28 Issue: 6
- Vol: 28 Issue: 7
- Vol: 28 Issue: 8
- Vol: 28 Issue: 9
- Vol: 28 Issue: 10
- Vol: 28 Issue: 11
- Vol: 28 Issue: 12
- Vol: 27 Issue: 1
- Vol: 27 Issue: 2
- Vol: 27 Issue: 3
- Vol: 27 Issue: 4
- Vol: 27 Issue: 5
- Vol: 27 Issue: 6
- Vol: 27 Issue: 7
- Vol: 27 Issue: 8
- Vol: 27 Issue: 9
- Vol: 27 Issue: 10
- Vol: 27 Issue: 11
- Vol: 27 Issue: 12
- Vol: 26 Issue: 1
- Vol: 26 Issue: 2
- Vol: 26 Issue: 3
- Vol: 26 Issue: 4
- Vol: 26 Issue: 5
- Vol: 26 Issue: 6
- Vol: 26 Issue: 7
- Vol: 26 Issue: 8
- Vol: 26 Issue: 9
- Vol: 26 Issue: 10
- Vol: 26 Issue: 11
- Vol: 26 Issue: 12
- Vol: 25 Issue: 1
- Vol: 25 Issue: 2
- Vol: 25 Issue: 3
- Vol: 25 Issue: 4
- Vol: 25 Issue: 5
- Vol: 25 Issue: 6
- Vol: 25 Issue: 7
- Vol: 25 Issue: 8
- Vol: 25 Issue: 9
- Vol: 25 Issue: 10
- Vol: 25 Issue: 11
- Vol: 25 Issue: 12
Volume 27 Issue 10 • Oct. 2017
- IEEE Transactions on Circuits and Systems for Video Technology publication information
- Reducing Image Compression Artifacts by Structural Sparse Representation and Quantization Constraint Prior
  Publication Year: 2017, Page(s): 2057-2071. Cited by: Papers (8)
  The block discrete cosine transform (BDCT) has been widely used in current image and video coding standards, owing to its good energy compaction and decorrelation properties. However, because of independent quantization of DCT coefficients in each block, BDCT usually gives rise to visually annoying blocking compression artifacts, especially at low bit rates. In this paper, to reduce blocking artif...
  (See the block-DCT quantization sketch after this list.)
- Variable Bandwidth Weighting for Texture Copy Artifact Suppression in Guided Depth Upsampling
  Publication Year: 2017, Page(s): 2072-2085. Cited by: Papers (3)
  In this paper, we mathematically analyze one of the most challenging issues in color image-guided depth upsampling: the texture copy artifacts. The optimal guidance weights denoted by balanced weights are proposed to best suppress texture copy artifacts. To both suppress texture copy artifacts and preserve depth discontinuities, a new general weighting scheme called variable bandwidth weighting is...
  (See the guided-filter upsampling sketch after this list.)
- Depth Estimation Using an Infrared Dot Projector and an Infrared Color Stereo Camera
  Kensuke Hisatomi; Masanori Kano; Kensuke Ikeya; Miwa Katayama; Tomoyuki Mishina; Yuichi Iwadate; Kiyoharu Aizawa
  Publication Year: 2017, Page(s): 2086-2097
  This paper proposes a method of estimating depth from two kinds of stereo images: color stereo images and infrared stereo images. An infrared dot pattern is projected on a scene by a projector so that infrared cameras can capture the scene textured by the dots and the depth can be estimated even where the surface is not textured. The cost volumes are calculated for the infrared and color stereo im...
- A Novel Hybrid Kinect-Variety-Based High-Quality Multiview Rendering Scheme for Glass-Free 3D Displays
  Publication Year: 2017, Page(s): 2098-2117
  This paper presents a new hybrid Kinect-variety-based synthesis scheme that renders artifact-free multiple views for autostereoscopic/automultiscopic displays. The proposed approach does not explicitly require dense scene depth information for synthesizing novel views from arbitrary viewpoints. Instead, the integrated framework first constructs a consistent minimal image-space parameterization of ...
- Foreground Removal Approach for Hole Filling in 3D Video and FVV Synthesis
  Publication Year: 2017, Page(s): 2118-2131. Cited by: Papers (3)
  The depth-image-based rendering is a key technique for 3D video and free viewpoint video synthesis. One of the critical problems in current synthesis methods is that the background (BG) occluded by the foreground objects might be exposed in the new view, and some holes are produced in the synthesized video. However, most of the traditional hole-filling approaches may bring some blurry effect or ar...
- Image Segmentation Using Linked Mean-Shift Vectors and Global/Local Attributes
  Publication Year: 2017, Page(s): 2132-2140. Cited by: Papers (1)
  This paper proposes novel noniterative mean-shift-based image segmentation that uses global and local attributes. The existing mean-shift-based methods use a fixed range bandwidth, and hence their accuracy is dependent on the range spectrum of an image. To resolve this dependency, this paper proposes to modify the range kernel in the mean-shift process to be anisotropic. The modification is conduc...
  (See the mean-shift filtering sketch after this list.)
- Residual-Consensus Driven Linear Matching
  Publication Year: 2017, Page(s): 2141-2152
  Linear matching (LM) is a simple and effective method for solving image matching problems. In many cases, image matching problems are nonlinear due to involvement of the geometric transformations; therefore, an essential step for utilizing linear models for image matching is to linearize the geometric transformation matrices that introduce nonlinear terms into image matching problems. Existing LM ...
- Graph Regularized and Locality-Constrained Coding for Robust Visual Tracking
  Publication Year: 2017, Page(s): 2153-2164. Cited by: Papers (5)
  Visual tracking is complicated due to factors, such as occlusion, background clutter, abrupt target motion, and illumination variations, among others. In recent years, subspace representation and sparse coding techniques have demonstrated significant improvements in tracking. However, performance gain in tracking has been at the expense of losing locality and similarity attributes among the instan...
- Group Structure Preserving Pedestrian Tracking in a Multicamera Video Network
  Publication Year: 2017, Page(s): 2165-2176. Cited by: Papers (1)
  Pedestrian tracking in video has been a popular research topic with many practical applications. In order to improve tracking performance, many ideas have been proposed, among which the use of geometric information is one of the most popular directions in recent research. In this paper, we propose a novel multicamera pedestrian tracking framework, which incorporates the structural information of p...
- Low-Rank-Based Nonlocal Adaptive Loop Filter for High-Efficiency Video Compression
  Publication Year: 2017, Page(s): 2177-2188. Cited by: Papers (11)
  In video coding, the in-loop filtering has emerged as a key module due to its significant improvement on compression performance since H.264/Advanced Video Coding. Existing incorporated in-loop filters in video coding standards mainly take advantage of the local smoothness prior model used for images. In this paper, we propose a novel adaptive loop filter utilizing image nonlocal prior knowledge b...
- SSIM-Motivated Two-Pass VBR Coding for HEVC
  Publication Year: 2017, Page(s): 2189-2203
  We propose a structural similarity (SSIM)-motivated two-pass variable bit rate control algorithm for High Efficiency Video Coding. Given a bit rate budget, the available bits are optimally allocated at group of pictures (GoP), frame, and coding unit (CU) levels by hierarchically constructing a perceptually uniform space with an SSIM-inspired divisive normalization mechanism. The Lagrange multiplier...
  (See the SSIM computation sketch after this list.)
- Online-Learning-Based Mode Prediction Method for Quality Scalable Extension of the High Efficiency Video Coding (HEVC) Standard
  Publication Year: 2017, Page(s): 2204-2215. Cited by: Papers (4)
  SHVC, the scalable extension of High Efficiency Video Coding (HEVC), uses advanced inter-layer prediction features in addition to the advanced compression tools of HEVC to improve the compression performance. Using combined features has brought us improved compression performance at the cost of huge computational complexity for the SHVC encoder. This complexity is mainly because of the inter/i...
- Adaptive Search Range for HEVC Motion Estimation Based on Depth Information
  Publication Year: 2017, Page(s): 2216-2230. Cited by: Papers (5)
  High Efficiency Video Coding achieves twofold coding efficiency improvement compared with its predecessor H.264/MPEG-4 Advanced Video Coding. However, it suffers from high computational complexity due to its quad-tree structure in motion estimation (ME). This paper exposes the use of depth maps in the multiview video plus depth format for relieving the computational burden. The depth map provides ...
  (See the block-matching search-range sketch after this list.)
- Perceptually Driven Nonuniform Asymmetric Coding of Stereoscopic 3D Video
  Publication Year: 2017, Page(s): 2231-2245. Cited by: Papers (2)
  Asymmetric stereoscopic video coding has already proven its effectiveness in reducing the bandwidth required for stereoscopic 3D delivery without degrading the visual quality. This approach, in which the left and right views are encoded with different levels of quality, relies on the perceptual theory of binocular suppression. However, to ensure comfortable 3D viewing, the just-noticeable level of...
- Complexity Reduction by Modified Scale-Space Construction in SIFT Generation Optimized for a Mobile GPU
  Publication Year: 2017, Page(s): 2246-2259. Cited by: Papers (3)
  Scale-invariant feature transform (SIFT) is one of the most widely used local features for computer vision in mobile devices. A mobile graphic processing unit (GPU) is often used to run computer-vision applications using SIFT features, but the performance in such a case is not powerful enough to generate SIFT features in real time. This paper proposes an efficient scheme to optimize the SIFT algor...
  (See the SIFT extraction sketch after this list.)
- A Low-Complexity Pedestrian Detection Framework for Smart Video Surveillance Systems
  Publication Year: 2017, Page(s): 2260-2273. Cited by: Papers (3)
  Pedestrian detection is a key problem in computer vision and is currently addressed with increasingly complex solutions involving compute-intensive features and classification schemes. In this scope, histogram of oriented gradients (HOG) in conjunction with linear support vector machine (SVM) classifier is considered to be the single most discriminative feature that has been adopted as a stand-alo...
  (See the HOG + SVM detection sketch after this list.)
- Decomposing Joint Distortion for Adaptive Steganography
  Publication Year: 2017, Page(s): 2274-2280. Cited by: Papers (5)
  Recent advances on adaptive steganography imply that the security of steganography can be improved by exploiting the mutual impact of modifications between adjacent cover elements, such as pixels of images, which is called a nonadditive distortion model. In this paper, we propose a framework for nonadditive distortion steganography by defining joint distortion on pixel blocks. To reduce the comple...
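For "Reducing Image Compression Artifacts by Structural Sparse Representation and Quantization Constraint Prior": a minimal sketch of the block-DCT quantization baseline that produces the blocking artifacts the paper targets. This is not the paper's restoration method; the quantization step q and the synthetic ramp image are illustrative assumptions.

```python
# Illustration of how independent 8x8 block-DCT quantization produces
# blocking artifacts (the degradation the paper aims to reduce).
import numpy as np
from scipy.fft import dctn, idctn

def bdct_quantize(image, block=8, q=40.0):
    """Quantize each block's DCT coefficients with a uniform step q."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=np.float64)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = image[y:y + block, x:x + block].astype(np.float64)
            coeff = dctn(patch, type=2, norm='ortho')
            coeff = np.round(coeff / q) * q          # independent per-block quantization
            out[y:y + block, x:x + block] = idctn(coeff, type=2, norm='ortho')
    return np.clip(out, 0, 255)

if __name__ == "__main__":
    # Smooth synthetic ramp; after quantization, discontinuities appear
    # at the 8x8 block boundaries.
    img = np.tile(np.linspace(0, 255, 64), (64, 1))
    degraded = bdct_quantize(img, q=40.0)
    boundary_jump = np.abs(np.diff(degraded[:, 7:9], axis=1)).mean()
    print(f"mean jump across a block boundary: {boundary_jump:.2f}")
```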
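For "Variable Bandwidth Weighting for Texture Copy Artifact Suppression in Guided Depth Upsampling": a sketch of a conventional color-guided upsampling baseline (bicubic upscaling followed by a guided filter from OpenCV's ximgproc module, which requires opencv-contrib-python). The paper's balanced and variable-bandwidth weights are not reproduced; the radius, eps, and synthetic inputs are assumptions.

```python
# Baseline color-guided depth upsampling. Texture in the color guide can
# leak into the smoothed depth map: the "texture copy" artifact the paper
# analyzes. Requires opencv-contrib-python for cv2.ximgproc.
import cv2
import numpy as np

def guided_depth_upsample(low_res_depth, color_guide, radius=8, eps=1e-4):
    """Upsample a low-resolution depth map to the guide's resolution."""
    h, w = color_guide.shape[:2]
    up = cv2.resize(low_res_depth.astype(np.float32), (w, h),
                    interpolation=cv2.INTER_CUBIC)
    guide = color_guide.astype(np.float32) / 255.0
    # Edge-preserving smoothing steered by the color image.
    return cv2.ximgproc.guidedFilter(guide, up, radius, eps)

if __name__ == "__main__":
    if not hasattr(cv2, "ximgproc"):
        raise SystemExit("install opencv-contrib-python for cv2.ximgproc")
    depth = np.zeros((40, 40), np.float32); depth[:, 20:] = 1.0   # synthetic depth
    guide = np.dstack([np.random.randint(0, 255, (160, 160), np.uint8)] * 3)
    print(guided_depth_upsample(depth, guide).shape)
```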
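For "Image Segmentation Using Linked Mean-Shift Vectors and Global/Local Attributes": a usage sketch of the fixed-bandwidth mean-shift filtering whose range-bandwidth dependence the paper addresses. The sp and sr values are assumptions, and the paper's anisotropic range kernel and vector linking are not shown.

```python
# Fixed-bandwidth mean-shift filtering: the conventional baseline whose
# dependence on a single range bandwidth (sr) the paper removes by making
# the range kernel anisotropic. Segmentation methods then group ("link")
# pixels whose mean-shift vectors converge to the same mode.
import cv2
import numpy as np

def mean_shift_filter(bgr_image, sp=21, sr=30):
    """sp: spatial bandwidth, sr: range (color) bandwidth, both fixed here."""
    return cv2.pyrMeanShiftFiltering(bgr_image, sp, sr)

if __name__ == "__main__":
    img = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)  # placeholder frame
    print(mean_shift_filter(img).shape)
```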
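For "SSIM-Motivated Two-Pass VBR Coding for HEVC": a sketch of computing the SSIM score that such a rate controller optimizes, using scikit-image. The frames are placeholders; nothing here reproduces the two-pass bit allocation or the divisive-normalization machinery.

```python
# Computing the SSIM score an SSIM-motivated rate controller would try to
# keep uniform across GoPs, frames, and CUs. Placeholder frames only.
import numpy as np
from skimage.metrics import structural_similarity

original = np.random.randint(0, 255, (256, 256), dtype=np.uint8)   # stand-in source frame
noise = np.random.normal(0, 8, original.shape)                      # stand-in coding error
compressed = np.clip(original.astype(np.float64) + noise, 0, 255).astype(np.uint8)

score = structural_similarity(original, compressed, data_range=255)
print(f"SSIM of the reconstructed frame: {score:.4f}")
```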
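For "Adaptive Search Range for HEVC Motion Estimation Based on Depth Information": a plain full-search block-matching sketch in which search_range is the parameter the paper adapts from depth maps. The block size, frames, and fixed range below are illustrative assumptions, not the HEVC quad-tree ME.

```python
# Full-search block matching: SAD cost over a +/- search_range window.
# The window size drives ME complexity, which is why adapting it per block
# (as the paper does from depth) saves computation.
import numpy as np

def block_motion_search(cur, ref, y, x, block=16, search_range=8):
    """Return the motion vector (dy, dx) minimizing SAD for one block."""
    h, w = ref.shape
    target = cur[y:y + block, x:x + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                continue
            cand = ref[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

if __name__ == "__main__":
    ref = np.random.randint(0, 255, (64, 64)).astype(np.uint8)
    cur = np.roll(ref, shift=(2, -3), axis=(0, 1))   # expected best match at (-2, +3)
    mv, sad = block_motion_search(cur, ref, 16, 16, search_range=8)
    print("estimated MV:", mv, "SAD:", sad)
```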
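For "Complexity Reduction by Modified Scale-Space Construction in SIFT Generation Optimized for a Mobile GPU": a sketch of conventional CPU SIFT extraction with OpenCV (4.4 or later, where SIFT is in the main module). The paper's GPU-oriented scale-space modification is not shown, and the image path is a placeholder.

```python
# Conventional CPU SIFT extraction. The paper's contribution is a modified,
# mobile-GPU-friendly scale-space construction, which this baseline call
# does not include.
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
if img is None:
    raise SystemExit("put a test image at frame.png")

sift = cv2.SIFT_create(nfeatures=500)                  # cap keypoints for speed
keypoints, descriptors = sift.detectAndCompute(img, None)
print(f"{len(keypoints)} SIFT keypoints detected")
```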
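For "A Low-Complexity Pedestrian Detection Framework for Smart Video Surveillance Systems": a sketch of the HOG plus linear SVM baseline the abstract refers to, using OpenCV's pretrained people detector. The frame path and the detection parameters are illustrative.

```python
# HOG + linear SVM pedestrian detection, the compute-intensive baseline the
# paper's low-complexity framework is positioned against.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("surveillance_frame.jpg")   # placeholder path
if frame is None:
    raise SystemExit("put a test image at surveillance_frame.jpg")

# Sliding-window detection over an image pyramid; stride and scale trade
# accuracy against the computational load the paper targets.
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
print(f"{len(rects)} pedestrian candidates")
```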
Aims & Scope
IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) covers the circuits and systems aspects of all video technologies. General, theoretical, and application-oriented papers with a circuits and systems perspective are encouraged for publication in TCSVT on or related to image/video acquisition, representation, presentation and display; processing, filtering and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication and networking; storage, retrieval, indexing and search; and/or hardware and software design and implementation.
Meet Our Editors
Editor-in-Chief
Shipeng Li
iFLYTEK Co. Ltd.
No. 666 West Wangjiang Road
Hi-Tech Zone, Hefei, China 230088
Peer Review Support Services
Desiree Noel
IEEE Publishing Operations
d.noel@ieee.org
732-562-2644