Multi-Node Inference Architectures for Low-Latency LLM Serving