Accelerating Deep Learning Inference via Model Parallelism and Partial Computation Offloading (IEEE Journals & Magazine)