GPTQT: Quantize Large Language Models Twice to Push the Efficiency