Exploring In-Memory Accelerators and FPGAs for Latency-Sensitive DNN Inference on Edge Servers