Accelerating Native Inference Model Performance in Edge Devices using TensorRT (IEEE Conference Publication)