
IEEE Transactions on Circuits and Systems for Video Technology

Issue 7 • Date Oct. 2000


Displaying Results 1 - 21 of 21
  • Introduction to the special issue on recent advances in picture compression [Guest Editorial]

    Publication Year: 2000 , Page(s): 1013
    Freely Available from IEEE
  • Rate-scalable object-based wavelet codec with implicit shape coding

    Publication Year: 2000 , Page(s): 1068 - 1079
    Cited by:  Papers (2)

    In this paper, we present an embedded approach for coding image regions with arbitrary shapes. Our scheme separates the objects in the transform domain instead of the image domain, so that only one transform for the entire image is required. We define a new shape-adaptive embedded zerotree wavelet coding (SA-EZW) technique for encoding the coefficients corresponding to specific objects in gray-scale and color-image segments by implicitly representing their shapes, thereby forgoing the need for separately coding the region boundary. At the decoder, the shape information can be recovered without separate and explicit shape coding. The implicit shape coding makes the bit stream for the object fully rate scalable, since no explicit bit allocation is needed for the object shape. This makes the scheme particularly suitable when content-based functionalities are desired under constrained user bit rates, and it enables precise bit-rate control while avoiding the problem of contour coding. We show that our algorithm addresses content-based scalability and improves coding efficiency when compared with the "chroma keying" technique, an implicit shape-coding technique adopted by the current MPEG-4 standard.

  • A multi-metric objective picture-quality measurement model for MPEG video

    Publication Year: 2000 , Page(s): 1208 - 1213
    Cited by:  Papers (35)  |  Patents (13)

    Different coding schemes introduce different artifacts into the decoded pictures, making it difficult to design an objective quality model capable of measuring all of them. A feasible approach is to design a picture-quality model for each kind of known distortion and to combine the results from the models according to the perceptual impact of each type of impairment. In this letter, a multi-metric model comprising a perceptual model and a blockiness detector, designed for MPEG video, is proposed. Very high correlation between the objective scores from the model and the subjective assessment results has been achieved.

  • Motion vector size-compensation based method for very low bit-rate video coding

    Publication Year: 2000 , Page(s): 1192 - 1197
    Cited by:  Papers (3)

    In this paper, a new method to achieve better compression efficiency in low bit-rate video coding is proposed. It is based on a global bit-rate reduction at the macroblock level, optimizing the number of bits needed to code each macroblock as a whole by means of motion-vector and header size compensation. The best motion vector and coding mode for each block of the current picture are selected not only by seeking the best prediction for the block, but also by accounting for the number of bits needed to code the associated headers, introducing a penalization term into the cost function. This method improves video-compression efficiency at all qualities, but especially for low-quality video coding, where the efficiency improvement can reach 17%. Its implementation is simple and compatible with most video-compression standards (H.263, MPEG, etc.). Results of the algorithm in a state-of-the-art H.263+ codec are presented, and demonstrate that the efficiency enhancement is achieved with a minimal increase in processing time, and even a decrease under some conditions.

  • TCP-friendly Internet video streaming employing variable frame-rate encoding and interpolation

    Publication Year: 2000 , Page(s): 1164 - 1177
    Cited by:  Papers (10)  |  Patents (16)

    A feedback-based Internet video transmission scheme based on ITU-T H.263+ is presented. The proposed system continually adapts its stream size and manages packet loss recovery in response to network condition changes. It consists of multiple components: TCP-friendly end-to-end congestion control and available-bandwidth estimation, encoding frame-rate control and delay-based smoothing at the sender, media-aware packetization and packet loss recovery tied to congestion control, and quality-recovery tools such as motion-compensated frame interpolation at the receiver. These components are designed to meet a low computational-complexity requirement so that the whole system can operate in real time. Among these, the receiver-based video-aware congestion control mechanism, the variable frame-rate H.263+ encoding, and the fast motion-compensated frame interpolation are the key features. Through their seamless integration, it is demonstrated that network adaptivity is enhanced enough to mitigate packet loss and bandwidth fluctuation, resulting in a smoother video experience at the receiver.
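    The "TCP-friendly" target rate in schemes of this kind is conventionally derived from a steady-state TCP throughput model. The abstract does not give the paper's exact estimator, so the sketch below uses the well-known simplified (Mathis) approximation; the function name and example numbers are illustrative assumptions, not taken from the paper:

```python
import math

def tcp_friendly_rate(mtu_bytes, rtt_s, loss_rate):
    """Simplified steady-state TCP throughput model (bytes/s).

    A video sender that keeps its bit rate at or below this value
    shares bandwidth fairly with competing TCP flows.  This is the
    widely used Mathis approximation, R = (MTU * sqrt(3/2)) /
    (RTT * sqrt(p)), not necessarily the paper's exact estimator.
    """
    if loss_rate <= 0:
        return float("inf")  # no observed loss: model places no constraint
    return (mtu_bytes * math.sqrt(1.5)) / (rtt_s * math.sqrt(loss_rate))

# Example: 1500-byte packets, 100 ms RTT, 1% packet loss
rate = tcp_friendly_rate(1500, 0.1, 0.01)  # ~1.8e5 bytes/s
```

In a feedback loop such as the one described above, the receiver would report measured `rtt_s` and `loss_rate`, and the encoder's frame rate and quantization would be adapted to stay under the resulting rate.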

  • New fast binary pyramid motion estimation for MPEG2 and HDTV encoding

    Publication Year: 2000 , Page(s): 1015 - 1028
    Cited by:  Papers (10)  |  Patents (2)

    A novel fast binary pyramid motion estimation (FBPME) algorithm is presented in this paper. The proposed FBPME scheme is based on binary multiresolution layers, exclusive-OR (XOR) Boolean block matching, and an N-scale tiling search scheme. Each video frame is converted into a pyramid structure of K-1 binary layers with resolution decimation, plus one integer layer at the lowest resolution. At the lowest resolution layer, the N-scale tiling search is performed to select initial motion-vector candidates. Motion-vector fields are then gradually refined with the XOR Boolean block-matching criterion and the N-scale tiling search in the higher binary layers. FBPME performs several thousand times faster than the conventional full-search block-matching scheme at the same PSNR performance and visual quality. It also dramatically reduces the bus bandwidth and on-chip memory requirements. Moreover, hardware complexity is low due to its binary nature. Fully functional software MPEG-2 MP@ML encoders and Advanced Television Systems Committee high-definition television encoders based on the FBPME algorithm have been implemented. An FBPME hardware architecture has been developed and is being incorporated into single-chip MPEG encoders. A wide range of video sequences at various resolutions has been tested. The proposed algorithm is also applicable to other digital video compression standards such as H.261, H.263, and MPEG-4.
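    The XOR Boolean block-matching criterion lends itself to a very small sketch: on a binarized pyramid layer, the matching cost is simply the number of mismatching bits, computable with bitwise operations and a popcount alone, which is what makes the scheme cheap in hardware. This is a generic illustration of the idea, not the paper's implementation:

```python
import numpy as np

def xor_match_cost(ref_block, cand_block):
    """Boolean block-matching cost: count of mismatching binary pixels.

    On binarized layers, the sum of XORed bits replaces the usual sum
    of absolute differences used on integer layers.
    """
    return int(np.count_nonzero(np.logical_xor(ref_block, cand_block)))

ref = np.array([[1, 0], [0, 1]], dtype=bool)
cand = np.array([[1, 1], [0, 1]], dtype=bool)
print(xor_match_cost(ref, cand))  # 1 mismatching pixel
```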

  • The coding ecology: image coding via competition among experts

    Publication Year: 2000 , Page(s): 1049 - 1058
    Cited by:  Papers (1)

    We consider the image-coding problem as a competitive ecology of specialists, each vying for the task of coding a portion of an image. Each specialist, or expert, derives its competitive advantage from its ability to concisely describe an underlying visual event (e.g., shadows, motion, occluding objects). In this paper, the metaphor of an auction informs the design of a predictive coder. Experts locate candidate regions of support, characterize the activity within, and submit their proposals in the form of bids passed along to an auctioneer. We describe the designs for six such experts, and examine candidate strategies for the decision unit (the auctioneer). We define protocols for comparing the bids, and compare overall coding performance using several examples.

  • Efficient image segmentation for region-based motion estimation and compensation

    Publication Year: 2000 , Page(s): 1029 - 1039
    Cited by:  Papers (13)

    An intra-frame segmentation strategy to assist region-based motion estimation and compensation is presented. It is based on the multiresolution application of histogram clustering and a probabilistic relaxation-labeling algorithm, followed by a local gradient-based bottom-up merging procedure. Specially suited for region-based video coding, it strongly differs from other proposals in that it generates arbitrarily shaped image regions with pixel accuracy at a low computational cost, while allowing full reconstruction of the segmentation at the decoder without the transmission of any region description information.

  • Matching pursuits video coding: dictionaries and fast implementation

    Publication Year: 2000 , Page(s): 1103 - 1115
    Cited by:  Papers (18)  |  Patents (19)

    Matching pursuits over a basis of separable Gabor functions has been demonstrated to outperform DCT methods for displaced frame difference coding in video compression. Unfortunately, apart from very low bit-rate applications, the algorithm involves an extremely high computational load. This paper contains an original contribution to the issues of dictionary selection and fast implementation for matching pursuits video coding. First, it is shown that the PSNR performance of existing matching pursuits codecs can be improved and the implementation cost reduced by a better selection of dictionary functions. Second, dictionary factorization is put forward to further reduce implementation costs. A reduction of the computational load by a factor of 20 is achieved compared to implementations reported to date. For a majority of test conditions, this reduction is supplemented by an improvement in reconstruction quality. Finally, a pruned full-search algorithm is introduced, which offers significant quality gains compared to the better-known heuristic fast-search algorithm, while keeping the computational cost low.

  • Binary subband decomposition and concatenated arithmetic coding

    Publication Year: 2000 , Page(s): 1059 - 1067
    Cited by:  Papers (1)

    This paper proposes a new subband coding approach for the compression of document images, based on nonlinear binary subband decomposition followed by concatenated arithmetic coding. We choose the sampling-exclusive-OR (XOR) subband decomposition to exploit its beneficial characteristics: it conserves the alphabet size of the symbols and provides a small region of support while retaining the perfect reconstruction property. We propose a concatenated arithmetic coding scheme to alleviate the degradation of predictability caused by subband decomposition, where three high-pass subband coefficients at the same location are concatenated and then encoded by an octave arithmetic coder. The proposed concatenated arithmetic coding is performed on a conditioning context properly selected by exploiting the nature of the sampling-XOR subband filter bank, as well as by taking advantage of the noncausal prediction capability of subband coding. We also introduce a unicolor map to efficiently represent the large uniform regions frequently appearing in document images. Simulation results show that each of the functional blocks proposed in the paper performs very well, and consequently, the proposed subband coder provides good compression of document images.
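    The alphabet-conserving, perfectly reconstructing character of a sampling-XOR decomposition can be illustrated in 1-D: the low band keeps the even samples, and the high band records the XOR of each even/odd pair. The sketch below is one plausible 1-D form of such a filter bank, written as an assumption for illustration; the paper's 2-D construction and context modeling are more elaborate:

```python
def sampling_xor_analysis(bits):
    """Split a binary sequence (even length) into low/high subbands.

    low[n] = x[2n];  high[n] = x[2n] XOR x[2n+1].
    Both subbands stay binary (alphabet preserved), and the original
    sequence is exactly recoverable.
    """
    low = bits[0::2]
    high = [a ^ b for a, b in zip(bits[0::2], bits[1::2])]
    return low, high

def sampling_xor_synthesis(low, high):
    out = []
    for l, h in zip(low, high):
        out += [l, l ^ h]  # x[2n] = low[n]; x[2n+1] = low[n] XOR high[n]
    return out

x = [0, 1, 1, 1, 0, 0]
low, high = sampling_xor_analysis(x)
assert sampling_xor_synthesis(low, high) == x  # perfect reconstruction
```

Note how a constant run of bits produces an all-zero high band, which is the kind of structure the concatenated arithmetic coder can then exploit.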

  • Apparent 3-D camera velocity-extraction and applications

    Publication Year: 2000 , Page(s): 1185 - 1191

    In this paper, we describe a robust method for the extraction of apparent 3-D camera velocity and 3-D scene structure information. Our method extracts the apparent 3-D camera velocity in a fully automated way, without any knowledge of 3-D scene content information as used in current methods. This has the advantage that it can be used to fully automate the generation of natural-looking virtual/augmented environments, as well as in video-database browsing. First, we describe our method for the robust extraction of 3-D parameters. This method combines the eight-point structure-from-motion method with a statistical technique to automatically select feature points in the image, irrespective of 3-D content information. Second, we discuss two applications which use the results of the 3-D parameter extraction. The first application is the generation of sprite layers using 3-D camera velocity information to represent an eight-parameter perspective image-to-sprite mapping plus 3-D scene depth information for the sprite layering. The second application is the use of 3-D camera velocity for indexing large video databases according to a set of seven independent types of camera motion. In this application, we discuss a formal video description structure, the camera-motion descriptor, which was successfully included in the working draft of the MPEG-7 video standard.

  • Encoding and reconstruction of incomplete 3-D video objects

    Publication Year: 2000 , Page(s): 1198 - 1207
    Cited by:  Papers (1)  |  Patents (2)

    A new approach for compact representation, MPEG-4 encoding, and reconstruction of video objects captured by an uncalibrated system of multiple cameras is presented. The method is based on the incomplete 3-D (I3D) technique, which was initially investigated for stereo video objects captured by parallel cameras. Non-overlapping portions of the object are extracted from the reference views, each view contributing the portion it captures at the highest resolution. This way, the redundancy of the initial multiview data is reduced. The areas extracted from the reference views are denoted areas of interest. The output of the analysis stage, i.e., the areas of interest and the corresponding parts of the disparity fields, is encoded in the MPEG-4 bitstream. The disparity fields define the correspondence relations between the reference views. View synthesis is performed by disparity-oriented reprojection of the areas of interest into the virtual view plane, and can be seen as an intermediate postprocessing stage between the decoder and the scene compositor. This work extends the technique from parallel stereo views to arbitrarily configured multi-views with new analysis and synthesis algorithms. Moreover, a two-way interaction is built between the analysis and reconstruction stages, which provides a tradeoff between final image quality and the amount of data transmitted. The focus is on a low-complexity solution enabling online processing while preserving the MPEG-4 compatibility of the I3D representation. It is finally shown that our method yields quite convincing results despite the minimal data used and the approximations involved.

  • Spatio-temporal scalability for MPEG video coding

    Publication Year: 2000 , Page(s): 1088 - 1093
    Cited by:  Papers (18)  |  Patents (1)

    The existing and standardized solutions for spatial scalability are not satisfactory; therefore, new approaches are being actively explored. The goal of this paper is to improve the spatial scalability of MPEG-2 for progressive video. In order to avoid the excessively large base-layer bitstreams produced by some previously proposed spatially scalable coders, spatio-temporal scalability is proposed for video compression systems. The coder produces two bitstreams: the base-layer bitstream corresponds to pictures with reduced spatial and temporal resolution, while the enhancement-layer bitstream carries the information needed to retrieve images at full spatial and temporal resolution. In the base layer, temporal resolution reduction is obtained by B-frame data partitioning, i.e., by placing every second frame (a B-frame) in the enhancement layer. Subband (wavelet) analysis is used to provide spatial decomposition of the signal. Full compatibility with the MPEG-2 standard is ensured in the base layer. Compared to single-layer MPEG-2 encoding at bit rates below 6 Mbit/s, the bit-rate overhead for scalability is less than 15% in most cases.

  • A two-layered wavelet-based algorithm for efficient lossless and lossy image compression

    Publication Year: 2000 , Page(s): 1094 - 1102
    Cited by:  Papers (16)  |  Patents (9)

    In this paper, we propose a wavelet-based image-coding scheme allowing lossless and lossy compression simultaneously. Our two-layered approach combines the best of both worlds: as a first stage, it uses a high-performing wavelet-based or wavelet-packet-based coding technique for lossy compression in the low bit range. For the second (optional) stage, we extend the concept of reversible integer wavelet transforms to the more flexible class of adaptive reversible integer wavelet packet transforms, which are based on the generation of a whole library of bases, from which the best representation for a given residue between the reconstructed lossy compressed image and the original image is chosen using a fast-search algorithm. We present experimental results demonstrating that our compression algorithm yields a rate-distortion performance similar or superior to the best currently published pure lossy still-image-coding methods. At the same time, the lossless compression performance of our two-layered scheme is comparable to that of state-of-the-art pure lossless image-coding schemes. Compared to other combined lossy/lossless coding schemes, such as the emerging JPEG-2000 still-image-coding standard, PSNR improvements of up to 3 dB are achieved for a set of standard test images.

  • A fast full-search motion-estimation algorithm using representative pixels and adaptive matching scan

    Publication Year: 2000 , Page(s): 1040 - 1048
    Cited by:  Papers (29)

    A full-search block-matching algorithm for motion estimation carries a significant computational load. To address this, extensive research into fast motion-estimation algorithms has been carried out. However, most such algorithms suffer some degradation of the predicted image due to the reduced computation. To reduce the heavy computation of the full-search algorithm, we propose a fast block-matching algorithm based on an adaptive matching scan and representative pixels, without any degradation of the predicted image. Using a Taylor series expansion, we obtain the representative pixels and show that the block-matching errors between the reference block and candidate blocks are proportional to the block complexity. With this result, we propose a fast full-search algorithm with an adaptive scan direction in block matching. Experimentally, our proposed algorithm is very efficient in terms of computational speedup, and is the fastest among the conventional full-search algorithms. It is therefore useful for VLSI implementation of video encoders for real-time encoding.
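    The interplay of partial-distortion early termination and a complexity-driven scan order described above can be sketched as follows: SAD accumulation for a candidate stops as soon as the running sum exceeds the best SAD found so far, and scanning the highest-gradient rows first makes that exit happen sooner. All names are hypothetical, and this is a schematic reading of the technique, not the authors' code:

```python
import numpy as np

def full_search_adaptive(ref, frame, top, left, radius=3):
    """Exhaustive block matching with partial-distortion elimination.

    The motion vector is identical to plain full search; only the
    number of pixel operations is reduced, so the predicted image is
    not degraded.
    """
    h, w = ref.shape
    # Rows ordered by complexity (row-gradient magnitude), largest first.
    grad = np.abs(np.diff(ref.astype(int), axis=1)).sum(axis=1)
    order = np.argsort(-grad)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            cand = frame[y:y + h, x:x + w].astype(int)
            sad, rejected = 0, False
            for r in order:                       # adaptive scan order
                sad += int(np.abs(ref[r].astype(int) - cand[r]).sum())
                if best_sad is not None and sad >= best_sad:
                    rejected = True               # partial-distortion early exit
                    break
            if not rejected:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

# Toy check: the reference block sits one pixel down-right of the search origin.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(16, 16))
ref = frame[5:9, 5:9]
mv, sad = full_search_adaptive(ref, frame, top=4, left=4)  # mv == (1, 1), sad == 0
```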

  • Joint rate control with look-ahead for multi-program video coding

    Publication Year: 2000 , Page(s): 1159 - 1163
    Cited by:  Papers (14)  |  Patents (18)

    In this paper, we present a new joint rate control algorithm for a multi-program video compression system using MPEG-2 compatible video encoders. The proposed joint rate control is based on both feedback and look-ahead approaches. It dynamically distributes the channel bandwidth among the program encoders according to the relative complexities of the programs, using picture and coding statistics. As opposed to previous work in this area, our algorithm does not restrict the encoders to operate with identical group of pictures (GOP) structures, i.e., the GOP boundaries need not be synchronized among the different encoders. The proposed algorithm allows adaptive distribution of the channel bandwidth among the programs, even at the start of encoding. Furthermore, it ensures a quick reaction to scene changes, where a pure feedback approach incurs an unavoidable delay. Experimental results show that the proposed joint rate control with look-ahead yields improved picture quality in comparison with a pure feedback approach.
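    The core allocation step, distributing the channel budget in proportion to measured program complexities, reduces to a one-liner. The paper's actual scheme additionally uses look-ahead statistics and GOP-aware adjustments that are omitted in this sketch; the function and numbers are illustrative:

```python
def allocate_bandwidth(total_bits, complexities):
    """Distribute a channel bit budget among program encoders in
    proportion to their measured complexities (the core idea of joint
    rate control for statistical multiplexing)."""
    s = sum(complexities)
    return [total_bits * c / s for c in complexities]

# A program twice as complex receives twice the bits.
print(allocate_bandwidth(9000, [1.0, 2.0]))  # [3000.0, 6000.0]
```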

  • A software-based real-time MPEG-2 video encoder

    Publication Year: 2000 , Page(s): 1178 - 1184
    Cited by:  Papers (7)

    Dedicated hardware has previously been required to perform real-time MPEG-2 video encoding. However, with increases in clock frequency and the introduction of video-specific instruction sets, general-purpose processors can now approximate the function and performance of single-function hardware. In this paper, we describe a software-only MPEG-2 (MP@ML) video encoder implemented on a personal computer using an Intel Pentium III processor. This encoder is capable of real-time operation while consuming less than 70% of the processor. The main contribution of this work is a set of algorithmic simplifications that significantly reduces the computational load of the encoding process while only slightly degrading the subjective video quality compared to more exhaustive encoders.

  • Spatial scalable video coding using a combined subband-DCT approach

    Publication Year: 2000 , Page(s): 1080 - 1087
    Cited by:  Papers (16)  |  Patents (11)

    A combined subband-DCT approach for spatial scalable video coding is presented. The high-resolution input signal is decomposed into four spatial subband signals. The low-frequency subband is used as the low-resolution signal and is separately coded in the base-layer bitstream, and the high-frequency subband signals are coded in the enhancement-layer bitstream. The low-resolution signal is reconstructed from the base-layer bitstream, and the high-resolution signal is reconstructed using both the base- and the enhancement-layer bitstreams. Similar to MPEG, DCT-based hybrid coding techniques are applied for the coding of the subband signals, but an improved motion-compensated prediction is used for the low-resolution signal. Additionally, SNR scalability is introduced to allow a flexible bit allocation for the base and the enhancement layer. Experimental results at a bit rate of 6 Mbit/s show that the reference coder, the MPEG spatial scalable profile (SSP), incurs a loss of more than 2.2-dB peak signal-to-noise ratio (PSNR) compared with nonscalable MPEG-2 coding at the same bit rate, whereas the proposed combined subband-DCT scheme limits the decrease to less than 0.4 dB in PSNR.

  • A frame-layer bit allocation for H.263+

    Publication Year: 2000 , Page(s): 1154 - 1158
    Cited by:  Papers (29)  |  Patents (13)

    In typical block-based video coding, the rate-control scheme allocates a target number of bits to each frame of a video sequence and selects the block quantization parameters to meet the frame targets. In this work, we present a new technique for assigning such targets. This method has been adopted in the test model TMN10 of H.263+, but it is applicable to any video coder and is particularly useful for those that use B frames. Our approach selects the frame targets using formulas that result from combining an analytical rate-distortion optimization with a heuristic technique that compensates for the distortion dependency among frames. The method does not require pre-analyses, and encodes each frame only once; hence, it is geared toward low-complexity real-time video coding. We compare this new frame-layer bit allocation in TMN10 to that in MPEG-2's TM5 for a variety of bit rates and video sequences.

  • Optimal 2-D hierarchical content-based mesh design and update for object-based video

    Publication Year: 2000 , Page(s): 1135 - 1153
    Cited by:  Papers (8)  |  Patents (1)

    Representation of video objects (VOs) using hierarchical 2-D content-based meshes for accurate tracking and level-of-detail (LOD) rendering has been previously proposed, where a simple suboptimal hierarchical mesh design algorithm was employed. However, it was concluded that the performance of the tracking and rendering very much depends on how well each level of the hierarchical mesh structure fits the VO under consideration. To this end, this paper proposes an optimized design of hierarchical 2-D content-based meshes with a shape-adaptive simplification and a temporal update mechanism for object-based video. The particular contributions of this work are: (1) analysis of the optimal number of nodes for the initial fine level-of-detail mesh design; (2) adaptive shape simplification across hierarchy levels; (3) optimization of the interior-node decimation method to remove only a maximal independent set, preserving the Delaunay topology across hierarchy levels for better bitrate-versus-quality performance; and (4) a mesh-update mechanism which serves to update a temporally 2-D dynamic mesh in case of occlusion due to 3-D motion and self-occlusion. The proposed optimized and temporally updated hierarchical mesh representation can be applied in object-based video coding, retrieval, and manipulation.

  • Noise estimation for blocking artifacts reduction in DCT coded images

    Publication Year: 2000 , Page(s): 1116 - 1120
    Cited by:  Papers (13)  |  Patents (6)

    This paper proposes a postprocessing algorithm that can reduce the blocking artifacts in discrete cosine transform (DCT) coded images. To analyze blocking artifacts as noise components residing across two neighboring blocks, we use 1-D pixel vectors made of pixel rows or columns spanning two neighboring blocks. We model the blocky noise in each pixel vector as a shape vector weighted by the boundary discontinuity. The boundary discontinuity of each vector is estimated from the difference between the pixel gradient across the block boundary and that of the internal pixels. We make minimum mean squared error (MMSE) estimates of the shape vectors, indexed by the local image activity, based on the noise statistics prior to postprocessing. Once the estimated shape vectors are stored in the decoder, the proposed algorithm eliminates the noise components by simply subtracting from each pixel vector an appropriate shape vector multiplied by the boundary discontinuity. The experimental results show that the proposed algorithm is highly effective in reducing blocking artifacts from both subjective and objective viewpoints, at a low computational burden.
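    The final correction step described in the abstract, subtracting a shape vector weighted by the estimated boundary discontinuity, can be sketched on a toy 1-D pixel vector. The discontinuity estimate follows the paper's idea (boundary gradient minus typical internal gradient), but the shape vector below is a made-up example, not an MMSE-trained one:

```python
import numpy as np

def deblock_vector(v, shape_vec):
    """Subtract a weighted shape vector from a pixel vector that
    spans a block boundary (length 2N, boundary between v[N-1], v[N])."""
    v = np.asarray(v, dtype=float)
    n = len(v) // 2
    grads = np.diff(v)
    inner = (grads.sum() - grads[n - 1]) / (len(grads) - 1)  # typical internal gradient
    disc = grads[n - 1] - inner                              # boundary discontinuity
    return v - disc * shape_vec

# Toy example: a smooth ramp with a spurious +6 step at the block boundary.
ramp = np.arange(8) * 2.0
noisy = ramp.copy()
noisy[4:] += 6.0
shape = np.array([0, 0, 0, 0, 1, 1, 1, 1.0])  # illustrative shape vector
print(deblock_vector(noisy, shape))           # recovers the smooth ramp
```

In the actual scheme, the shape vector would be looked up from a small table indexed by local image activity, so the per-vector cost stays at one subtraction per pixel.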


Aims & Scope

The emphasis is focused on, but not limited to:
1. Video A/D and D/A
2. Video Compression Techniques and Signal Processing
3. Multi-Dimensional Filters and Transforms
4. High-Speed Real-Time Circuits
5. Multi-Processor Systems—Hardware and Software
6. VLSI Architecture and Implementation for Video Technology


Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Dan Schonfeld
Multimedia Communications Laboratory
ECE Dept. (M/C 154)
University of Illinois at Chicago (UIC)
Chicago, IL 60607-7053
tcsvt-eic@tcad.polito.it

Managing Editor
Jaqueline Zelkowitz
tcsvt@tcad.polito.it