An error exponent is derived for woven convolutional codes (WCCs) with one tailbiting component code. This error exponent is compared with that of the original WCC construction. It is shown that, for a WCC with outer warp, a better error exponent is obtained if the inner code is terminated by the tailbiting method. Furthermore, it is shown that the decoding error probability decreases exponentially with the square of the memory of the constituent convolutional encoders, while the decoding complexity grows exponentially only with that memory.
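The scaling claim in the last sentence can be illustrated numerically. The sketch below is not from the paper: the constants c1 and c2 are hypothetical placeholders, and the two functions simply model an error bound of the form exp(-c1·m²) against a decoding complexity of the form exp(c2·m), to show that the bound shrinks far faster than the complexity grows as the encoder memory m increases.

```python
import math

# Hypothetical constants chosen purely for illustration; the paper's
# actual error exponent depends on the code construction and channel.
C1 = 0.5  # rate of error-probability decay (per m^2)
C2 = 1.0  # rate of complexity growth (per m)

def error_bound(m: int) -> float:
    """Illustrative upper bound on decoding error probability: exp(-C1 * m^2)."""
    return math.exp(-C1 * m * m)

def complexity(m: int) -> float:
    """Illustrative decoding complexity, exponential in the memory m: exp(C2 * m)."""
    return math.exp(C2 * m)

for m in (2, 4, 8):
    print(f"m={m}: error bound ~ {error_bound(m):.3e}, complexity ~ {complexity(m):.3e}")
```

Under these assumed constants, doubling the memory squares the (already exponential) improvement in the error bound while only squaring the complexity, which is the favorable trade-off the abstract highlights.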