QuantLLM: A Hybrid Classical-Quantum LLM Transformer with Adaptive Routing Framework for Inference Latency Minimization (IEEE Conference Publication)