IEEE Transactions on Circuits and Systems for Video Technology
- Vol: 28 Issue: 1
- Vol: 28 Issue: 2
- Vol: 28 Issue: 3
- Vol: 28 Issue: 4
- Vol: 28 Issue: 5
- Vol: 28 Issue: 6
- Vol: 28 Issue: 7
- Vol: 28 Issue: 8
- Vol: 28 Issue: 9
- Vol: 28 Issue: 10
- Vol: 28 Issue: 11
- Vol: 28 Issue: 12
- Vol: 27 Issue: 1
- Vol: 27 Issue: 2
- Vol: 27 Issue: 3
- Vol: 27 Issue: 4
- Vol: 27 Issue: 5
- Vol: 27 Issue: 6
- Vol: 27 Issue: 7
- Vol: 27 Issue: 8
- Vol: 27 Issue: 9
- Vol: 27 Issue: 10
- Vol: 27 Issue: 11
- Vol: 27 Issue: 12
- Vol: 26 Issue: 1
- Vol: 26 Issue: 2
- Vol: 26 Issue: 3
- Vol: 26 Issue: 4
- Vol: 26 Issue: 5
- Vol: 26 Issue: 6
- Vol: 26 Issue: 7
- Vol: 26 Issue: 8
- Vol: 26 Issue: 9
- Vol: 26 Issue: 10
- Vol: 26 Issue: 11
- Vol: 26 Issue: 12
- Vol: 25 Issue: 1
- Vol: 25 Issue: 2
- Vol: 25 Issue: 3
- Vol: 25 Issue: 4
- Vol: 25 Issue: 5
- Vol: 25 Issue: 6
- Vol: 25 Issue: 7
- Vol: 25 Issue: 8
- Vol: 25 Issue: 9
- Vol: 25 Issue: 10
- Vol: 25 Issue: 11
- Vol: 25 Issue: 12
- Vol: 22 Issue: 1
- Vol: 22 Issue: 2
- Vol: 22 Issue: 3
- Vol: 22 Issue: 4
- Vol: 22 Issue: 5
- Vol: 22 Issue: 6
- Vol: 22 Issue: 7
- Vol: 22 Issue: 8
- Vol: 22 Issue: 9
- Vol: 22 Issue: 10
- Vol: 22 Issue: 11
- Vol: 22 Issue: 12
- Vol: 21 Issue: 1
- Vol: 21 Issue: 2
- Vol: 21 Issue: 3
- Vol: 21 Issue: 4
- Vol: 21 Issue: 5
- Vol: 21 Issue: 6
- Vol: 21 Issue: 7
- Vol: 21 Issue: 8
- Vol: 21 Issue: 9
- Vol: 21 Issue: 10
- Vol: 21 Issue: 11
- Vol: 21 Issue: 12
- Vol: 20 Issue: 1
- Vol: 20 Issue: 2
- Vol: 20 Issue: 3
- Vol: 20 Issue: 4
- Vol: 20 Issue: 5
- Vol: 20 Issue: 6
- Vol: 20 Issue: 7
- Vol: 20 Issue: 8
- Vol: 20 Issue: 9
- Vol: 20 Issue: 10
- Vol: 20 Issue: 11
- Vol: 20 Issue: 12
- Vol: 19 Issue: 1
- Vol: 19 Issue: 2
- Vol: 19 Issue: 3
- Vol: 19 Issue: 4
- Vol: 19 Issue: 5
- Vol: 19 Issue: 6
- Vol: 19 Issue: 7
- Vol: 19 Issue: 8
- Vol: 19 Issue: 9
- Vol: 19 Issue: 10
- Vol: 19 Issue: 11
- Vol: 19 Issue: 12
- Vol: 18 Issue: 1
- Vol: 18 Issue: 2
- Vol: 18 Issue: 3
- Vol: 18 Issue: 4
- Vol: 18 Issue: 5
- Vol: 18 Issue: 6
- Vol: 18 Issue: 7
- Vol: 18 Issue: 8
- Vol: 18 Issue: 9
- Vol: 18 Issue: 10
- Vol: 18 Issue: 11
- Vol: 18 Issue: 12
- Vol: 17 Issue: 1
- Vol: 17 Issue: 2
- Vol: 17 Issue: 3
- Vol: 17 Issue: 4
- Vol: 17 Issue: 5
- Vol: 17 Issue: 6
- Vol: 17 Issue: 7
- Vol: 17 Issue: 8
- Vol: 17 Issue: 9
- Vol: 17 Issue: 10
- Vol: 17 Issue: 11
- Vol: 17 Issue: 12
- Vol: 16 Issue: 1
- Vol: 16 Issue: 2
- Vol: 16 Issue: 3
- Vol: 16 Issue: 4
- Vol: 16 Issue: 5
- Vol: 16 Issue: 6
- Vol: 16 Issue: 7
- Vol: 16 Issue: 8
- Vol: 16 Issue: 9
- Vol: 16 Issue: 10
- Vol: 16 Issue: 11
- Vol: 16 Issue: 12
- Vol: 15 Issue: 1
- Vol: 15 Issue: 2
- Vol: 15 Issue: 3
- Vol: 15 Issue: 4
- Vol: 15 Issue: 5
- Vol: 15 Issue: 6
- Vol: 15 Issue: 7
- Vol: 15 Issue: 8
- Vol: 15 Issue: 9
- Vol: 15 Issue: 10
- Vol: 15 Issue: 11
- Vol: 15 Issue: 12
- Vol: 14 Issue: 1
- Vol: 14 Issue: 2
- Vol: 14 Issue: 3
- Vol: 14 Issue: 4
- Vol: 14 Issue: 5
- Vol: 14 Issue: 6
- Vol: 14 Issue: 7
- Vol: 14 Issue: 8
- Vol: 14 Issue: 9
- Vol: 14 Issue: 10
- Vol: 14 Issue: 11
- Vol: 14 Issue: 12
- Vol: 13 Issue: 1
- Vol: 13 Issue: 2
- Vol: 13 Issue: 3
- Vol: 13 Issue: 4
- Vol: 13 Issue: 5
- Vol: 13 Issue: 6
- Vol: 13 Issue: 7
- Vol: 13 Issue: 8
- Vol: 13 Issue: 9
- Vol: 13 Issue: 10
- Vol: 13 Issue: 11
- Vol: 13 Issue: 12
- Vol: 12 Issue: 1
- Vol: 12 Issue: 2
- Vol: 12 Issue: 3
- Vol: 12 Issue: 4
- Vol: 12 Issue: 5
- Vol: 12 Issue: 6
- Vol: 12 Issue: 7
- Vol: 12 Issue: 8
- Vol: 12 Issue: 9
- Vol: 12 Issue: 10
- Vol: 12 Issue: 11
- Vol: 12 Issue: 12
- Vol: 11 Issue: 1
- Vol: 11 Issue: 2
- Vol: 11 Issue: 3
- Vol: 11 Issue: 4
- Vol: 11 Issue: 5
- Vol: 11 Issue: 6
- Vol: 11 Issue: 7
- Vol: 11 Issue: 8
- Vol: 11 Issue: 9
- Vol: 11 Issue: 10
- Vol: 11 Issue: 11
- Vol: 11 Issue: 12
- Vol: 10 Issue: 1
- Vol: 10 Issue: 2
- Vol: 10 Issue: 3
- Vol: 10 Issue: 4
- Vol: 10 Issue: 5
- Vol: 10 Issue: 6
- Vol: 10 Issue: 7
- Vol: 10 Issue: 8
- Vol: 9 Issue: 1
- Vol: 9 Issue: 2
- Vol: 9 Issue: 3
- Vol: 9 Issue: 4
- Vol: 9 Issue: 5
- Vol: 9 Issue: 6
- Vol: 9 Issue: 7
- Vol: 9 Issue: 8
- Vol: 8 Issue: 1
- Vol: 8 Issue: 2
- Vol: 8 Issue: 3
- Vol: 8 Issue: 4
- Vol: 8 Issue: 5
- Vol: 8 Issue: 6
- Vol: 8 Issue: 7
- Vol: 8 Issue: 8
Volume 27 Issue 4 • April 2017
IEEE Transactions on Circuits and Systems for Video Technology publication information
Publication Year: 2017, Page(s): C2
-
Introduction to the Special Section on Augmented Video
Publication Year: 2017, Page(s):713 - 715
-
Video Stabilization for Strict Real-Time Applications
Publication Year: 2017, Page(s):716 - 724
Cited by: Papers (12)
Offline or deferred solutions are frequently employed for high quality and reliable results in current video stabilization. However, neither of these solutions can be used for strict real-time applications. In this paper, we propose a practical and robust algorithm for real-time video stabilization. To achieve this, a novel and efficient motion model based on inter-frame homography estimation is p...
-
Weighted Low-Rank Decomposition for Robust Grayscale-Thermal Foreground Detection
Publication Year: 2017, Page(s):725 - 738
Cited by: Papers (4)
This paper investigates how to fuse grayscale and thermal video data for detecting foreground objects in challenging scenarios. To this end, we propose an intuitive yet effective method called weighted low-rank decomposition (WELD), which adaptively pursues the cross-modality low-rank representation. Specifically, we form two data matrices by accumulating sequential frames from the grayscale and t...
-
Light-Field Depth Estimation via Epipolar Plane Image Analysis and Locally Linear Embedding
Yongbing Zhang ; Huijin Lv ; Yebin Liu ; Haoqian Wang ; Xingzheng Wang ; Qian Huang ; Xinguang Xiang ; Qionghai Dai
Publication Year: 2017, Page(s):739 - 747
Cited by: Papers (14)
In this paper, we propose a novel method for 4D light-field (LF) depth estimation exploiting the special linear structure of an epipolar plane image (EPI) and locally linear embedding (LLE). Without high computational complexity, depth maps are locally estimated by locating the optimal slope of each line segmentation on the EPIs, which are projected by the corresponding scene points. For each pixe...
-
Depth Estimation by Parameter Transfer With a Lightweight Model for Single Still Images
Publication Year: 2017, Page(s):748 - 759
Cited by: Papers (4)
In this paper, we propose a novel method for automatic depth estimation from color images using parameter transfer. By modeling the correlation between color images and their depth maps with a set of parameters, we get a database of parameter sets. Given an input image, we extract the high-level features to find the best matched image sets from the database. Then the set of parameters correspondin...
-
Video-Based Outdoor Human Reconstruction
Publication Year: 2017, Page(s):760 - 770
Cited by: Papers (5)
A human body scanning system of great practical convenience, which can be used in an outdoor environment, is proposed. The system uses only a single conventional video camera without the aid of special sensors or controlled illumination. We leverage the structure-from-motion calibration results directly and improve the available video-based dense 3D reconstruction by integrating the surface smoot...
-
SPA: Sparse Photorealistic Animation Using a Single RGB-D Camera
Publication Year: 2017, Page(s):771 - 783
Cited by: Papers (1)
Photorealistic animation is a desirable technique for computer games and movie production. We propose a new method to synthesize plausible videos of human actors with new motions using a single cheap RGB-D camera. A small database is captured in a usual office environment, which happens only once for synthesizing different motions. We propose a marker-less performance capture method using sparse d...
-
A Hybrid Approach for Facial Performance Analysis and Editing
Publication Year: 2017, Page(s):784 - 797
Cited by: Papers (1)
As of today, fine-grained editing of facial performances in movie and video production requires either retouching every single frame or creating a highly detailed CGI model of the actor, both of which are restricted to high-budget productions. In this paper, we present an example-based approach for facial performance editing that achieves realistic results with standard equipment and very little ma...
-
An Integrated Platform for Live 3D Human Reconstruction and Motion Capturing
Dimitrios S. Alexiadis ; Anargyros Chatzitofis ; Nikolaos Zioulis ; Olga Zoidi ; Georgios Louizis ; Dimitrios Zarpalas ; Petros Daras
Publication Year: 2017, Page(s):798 - 813
Cited by: Papers (13)
The latest developments in 3D capturing, processing, and rendering provide means to unlock novel 3D application pathways. The main elements of an integrated platform, which target tele-immersion and future 3D applications, are described in this paper, addressing the tasks of real-time capturing, robust 3D human shape/appearance reconstruction, and skeleton-based motion tracking. More specifically,...
-
A Mixed Reality Telepresence System for Collaborative Space Operation
Allen J. Fairchild ; Simon P. Campion ; Arturo S. García ; Robin Wolff ; Terrence Fernando ; David J. Roberts
Publication Year: 2017, Page(s):814 - 827
Cited by: Papers (7)
This paper presents a mixed reality (MR) system that results from the integration of a telepresence system and an application to improve collaborative space exploration. The system combines free viewpoint video with immersive projection technology to support nonverbal communication (NVC), including eye gaze, interpersonal distance, and facial expression. Importantly, these features can be interpre...
-
Design, Implementation, and Evaluation of a Point Cloud Codec for Tele-Immersive Video
Publication Year: 2017, Page(s):828 - 842
Cited by: Papers (30)
We present a generic and real-time time-varying point cloud codec for 3D immersive video. This codec is suitable for mixed reality applications in which 3D point clouds are acquired at a fast rate. In this codec, intra frames are coded progressively in an octree subdivision. To further exploit interframe dependencies, we present an inter-prediction algorithm that partitions the octree voxel space...
-
Magic Glasses: From 2D to 3D
Publication Year: 2017, Page(s):843 - 854
Cited by: Papers (1)
This paper proposes a virtual 3D eyeglasses try-on system driven by a 2D Internet image of a human face wearing a pair of eyeglasses. The main technical challenge of this system is the automatic 3D eyeglasses model reconstruction from the 2D glasses on a frontal human face. Against this challenge, this paper first proposes an eyeglasses segmentation method using a convolutional neural network...
-
Light Field Compressed Sensing Over a Disparity-Aware Dictionary
Publication Year: 2017, Page(s):855 - 865
Cited by: Papers (7)
Light field (LF) acquisition faces the challenge of extremely bulky data. Available hardware solutions usually compromise the sensor resource between spatial and angular resolutions. In this paper, a compressed sensing framework is proposed for the sampling and reconstruction of a high-resolution LF based on a coded aperture camera. First, an LF dictionary based on perspective shifting is proposed...
-
Visual Tracking via Probabilistic Hypergraph Ranking
Publication Year: 2017, Page(s):866 - 879
Cited by: Papers (2)
Online object tracking is a challenging issue because the appearance of an object tends to change due to intrinsic or extrinsic factors. In this paper, we propose a tracking algorithm based on probabilistic hypergraph ranking. First, three types of hypergraphs are constructed to encode local affinity information. Then, a probabilistic hypergraph is built by combining three distinct hypergraphs lin...
-
Consistency-Constrained Nonnegative Coding for Tracking
Publication Year: 2017, Page(s):880 - 891
Cited by: Papers (1)
A novel visual object tracking method based on consistency-constrained nonnegative coding (CNC) is proposed in this paper. For computational efficiency, superpixels are first extracted from each observed video frame. Then CNC is performed on the obtained superpixels, where the locality on the manifold is preserved by enforcing temporal and spatial smoothness. The coding...
-
Extended Selective Encryption of H.264/AVC (CABAC)- and HEVC-Encoded Video Streams
Publication Year: 2017, Page(s):892 - 906
Cited by: Papers (10)
This paper proposes an extended selective encryption (SE) method for both H.264/Advanced Video Coding (AVC) (CABAC) and High Efficiency Video Coding (HEVC) streams, addressing the main security issue that SE faces: content protection, related to the amount of information leakage through a protected video. Our contribution is the improvement in the visual distortion induced by SE approaches. Pr...
-
Real-Time Feature-Based Video Stabilization on FPGA
Publication Year: 2017, Page(s):907 - 919
Cited by: Papers (10)
Digital video stabilization is an important video enhancement technology that aims to remove unwanted camera vibrations from video sequences. Trading off between stabilization performance and real-time hardware implementation feasibility, this paper presents a feature-based full-frame video stabilization method and a novel complete fully pipelined architectural design to implement it on field-prog...
-
Automatically Creating Adaptive Video Summaries Using Constraint Satisfaction Programming: Application to Sport Content
Publication Year: 2017, Page(s):920 - 934
This paper addresses automatic video summarization. We propose a novel approach that relies on constraint satisfaction programming (CSP). An expert defines the general rules for summary generation. These rules are written as constraints. The (final) user can define additional constraints or enter high-level parameters of predefined constraints. This has many advantages. It clearly separates summar...
-
Aims & Scope
IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) covers the circuits and systems aspects of all video technologies. General, theoretical, and application-oriented papers with a circuits and systems perspective are encouraged for publication in TCSVT on or related to image/video acquisition, representation, presentation and display; processing, filtering and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication and networking; storage, retrieval, indexing and search; and/or hardware and software design and implementation.
Meet Our Editors
Editor-in-Chief
Shipeng Li
iFLYTEK Co. Ltd.
No. 666 West Wangjiang Road
Hi-Tech Zone, Hefei, China 230088
Peer Review Support Services
Desiree Noel
IEEE Publishing Operations
d.noel@ieee.org
732-562-2644