Edge Intelligence Optimization for Large Language Model Inference with Batching and Quantization