Abstract:
Convolutional Sparse Representations (CSRs) approximate an original signal with a sum of convolutions of dictionary filters and their coefficient maps. Convolutional Dictionary Learning (CDL) is the problem of obtaining a set of convolutional dictionaries for CSRs. An effective way to obtain high-fidelity dictionaries for arbitrary images is to learn from an enormous number of images; however, standard CDL is limited by memory capacity. This paper tackles robust dictionary design for an enormous number of training images by using the l_{1} norm for the error term instead of the l_{2} norm generally used in CDL. Furthermore, our method employs a consensus framework to reduce memory consumption. Without the consensus framework, dictionary learning is limited to roughly 100 training images at most, whereas our method obtains dictionaries from far more: 1,000 and 10,000 images in the experiments. As for dictionary fidelity, for 100 test images the dictionary designed with the l_{1} error term yields images with about 3 dB higher PSNR than the one designed with the l_{2} error term at equivalent coefficient sparseness.
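To make the distinction concrete, the following is a minimal sketch (not the paper's algorithm) of a CSR reconstruction and the two data-fidelity terms the abstract contrasts; the toy 1-D signal, filter count, and lengths are illustrative assumptions.

```python
import numpy as np

# A CSR approximates a signal s as sum_k (d_k * x_k), the sum of
# convolutions of dictionary filters d_k with coefficient maps x_k.
# Standard CDL measures the residual with an l2 term; the paper's
# robust variant uses an l1 term, which penalizes outliers less harshly.

rng = np.random.default_rng(0)
n = 64
s = rng.standard_normal(n)                        # toy 1-D signal
d = [rng.standard_normal(8) for _ in range(4)]    # 4 filters of length 8
x = [rng.standard_normal(n) for _ in range(4)]    # coefficient maps

# CSR reconstruction via circular convolution in the Fourier domain
recon = sum(
    np.real(np.fft.ifft(np.fft.fft(dk, n) * np.fft.fft(xk)))
    for dk, xk in zip(d, x)
)

r = s - recon
l2_fidelity = 0.5 * np.sum(r ** 2)   # error term of standard CDL
l1_fidelity = np.sum(np.abs(r))      # robust error term studied here
```

In an actual CDL solver the coefficient maps and filters are optimized jointly under a sparsity penalty; here they are random, so only the structure of the objective is shown.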
Published in: 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)
Date of Conference: 14-17 December 2021
Date Added to IEEE Xplore: 03 February 2022
Conference Location: Tokyo, Japan