Improving Inference Latency and Energy of Network-on-Chip based Convolutional Neural Networks through Weights Compression