Abstract:
Large Language Models (LLMs) have gained much attention in the Software Engineering (SE) community, particularly for code-related tasks. A common approach is to fully fine-tune these models, but this is a computationally heavy, time-consuming process that is not accessible to everyone. More importantly, with billions of parameters in these models, fully fine-tuning them for new tasks or domains is infeasible or inefficient. This technical briefing covers the alternative approach, Parameter-Efficient Fine-Tuning (PEFT): it discusses state-of-the-art PEFT techniques, reflects on the few studies that apply PEFT in Software Engineering, and considers how adapting current PEFT architectures from natural language processing could improve performance on code-related tasks.
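To make the contrast with full fine-tuning concrete, the following is a minimal sketch of one representative PEFT technique, LoRA (low-rank adaptation), written from scratch in PyTorch. The abstract names PEFT only in general, so the class name, rank r, and scaling alpha below are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Wraps a frozen pretrained linear layer with a trainable
        low-rank update: y = W x + (alpha / r) * B A x."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # pretrained weights stay frozen
            # A is small random, B is zero, so the update starts at zero
            self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scaling = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

    # Usage: only the two low-rank matrices receive gradients.
    layer = LoRALinear(nn.Linear(768, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(trainable)  # 12288 (= 8*768 + 768*8) instead of 590592 for the full layer

Because only the low-rank factors are updated, the number of trainable parameters drops by orders of magnitude, which is what makes fine-tuning billion-parameter models tractable on modest hardware.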
Published in: 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)
Date of Conference: 14-20 April 2024
Date Added to IEEE Xplore: 20 June 2024