
A Consensus Framework for Convolutional Dictionary Learning based on L1 Norm Error



Abstract:

Convolutional Sparse Representations (CSRs) approximate an original signal by a sum of convolutions of dictionary filters with their coefficient maps. Convolutional Dictionary Learning (CDL) is the problem of obtaining such a set of dictionary filters for CSRs. An effective way to obtain high-fidelity dictionaries for arbitrary images is to learn from an enormous number of training images; however, standard CDL is limited by memory capacity. This paper addresses robust dictionary design for an enormous number of training images by using an $\ell_1$ norm data-fidelity term in place of the $\ell_2$ norm commonly used in CDL. Furthermore, our method employs a consensus framework to reduce memory consumption. Without the consensus framework, dictionary learning can handle at most about 100 training images, whereas our method obtains dictionaries from far larger sets: 1,000 and 10,000 images in the experiments. Regarding dictionary fidelity, for 100 test images the dictionary designed with the $\ell_1$ error term yields images with about 3 dB higher PSNR than the dictionary designed with the $\ell_2$ error term at equivalent coefficient sparsity.
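
The CSR model and the two data-fidelity terms compared in the abstract can be made concrete with a short numerical sketch. The snippet below is only an illustration under assumed shapes and random placeholder data (the variable names s, D, and X, the image size, and the filter count are not taken from the paper): it synthesizes a signal as the sum of convolutions of dictionary filters with sparse coefficient maps and evaluates both the standard $\ell_2$ (squared-error) fidelity and the $\ell_1$ fidelity used for the robust design.

```python
# Minimal sketch (not the authors' implementation): reconstruct a signal from a
# convolutional sparse representation and compare the l1 / l2 data-fidelity terms
# discussed in the abstract. Shapes, names, and random data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

H, W = 64, 64      # image size (assumed for illustration)
M, F = 8, 9        # number of filters, filter support (F x F)

s = rng.standard_normal((H, W))                  # "original" signal (placeholder)
D = rng.standard_normal((M, F, F))               # dictionary filters d_m
X = np.where(rng.random((M, H, W)) < 0.05,       # sparse coefficient maps x_m
             rng.standard_normal((M, H, W)), 0.0)

# CSR synthesis model: s_hat = sum_m d_m * x_m (circular convolution via FFT)
Df = np.fft.rfft2(D, s=(H, W))
Xf = np.fft.rfft2(X, s=(H, W))
s_hat = np.fft.irfft2((Df * Xf).sum(axis=0), s=(H, W))

r = s - s_hat
print("l2 error term: 0.5*||r||_2^2 =", 0.5 * np.sum(r ** 2))  # standard CDL fidelity
print("l1 error term:     ||r||_1   =", np.sum(np.abs(r)))     # robust fidelity used here
print("sparsity penalty:  ||X||_1   =", np.sum(np.abs(X)))
```

In a full CDL algorithm, a dictionary-update subproblem with the $\ell_1$ fidelity is typically handled with ADMM-style splitting, and a global-consensus formulation would let the training images be processed in groups that share a single averaged dictionary, which is presumably how the memory footprint stays bounded; the sketch above evaluates only the objective terms and does not implement that learning loop.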
Date of Conference: 14-17 December 2021
Date Added to IEEE Xplore: 03 February 2022

Conference Location: Tokyo, Japan


