Towards Efficient Key-Value Cache Management for Prefix Prefilling in LLM Inference (IEEE Conference Publication)