
Technical Briefing on Parameter Efficient Fine-Tuning of (Large) Language Models for Code-Intelligence


Abstract:

Large Language Models (LLMs) have gained much attention in the Software Engineering (SE) community, specifically for code-related tasks. Although a common approach is to fine-tune these models fully, full fine-tuning is computationally heavy and time-consuming, and therefore not accessible to everyone. More importantly, with billions of parameters in these models, fully fine-tuning them for each new task or domain is infeasible or inefficient. This technical briefing covers the alternative approach, Parameter-Efficient Fine-Tuning (PEFT): it discusses state-of-the-art techniques, reflects on the few existing studies of PEFT in Software Engineering, and considers how adapting current PEFT architectures from natural language processing could enhance performance on code-related tasks.
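
As a concrete illustration of the idea, the sketch below applies LoRA (Low-Rank Adaptation), one widely used PEFT technique, to a small code model via the Hugging Face peft library. The base model is frozen and only small low-rank adapter matrices injected into the attention projections are trained, so the number of trainable parameters is a small fraction of the full model. The model choice, target modules, and hyperparameters here are illustrative assumptions and are not prescribed by the briefing itself.

```python
# Minimal LoRA sketch (assumed setup; not from the briefing):
# fine-tune a small code LLM by training only low-rank adapters.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative base model; any causal code LLM works similarly.
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

config = LoraConfig(
    r=8,                           # rank of the low-rank update matrices
    lora_alpha=16,                 # scaling factor applied to the update
    target_modules=["qkv_proj"],   # attention projection(s) to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the model: base weights are frozen, only adapters are trainable.
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

The wrapped model can then be trained with a standard training loop or trainer; because only the adapter weights receive gradients, the memory and compute cost is far lower than fully fine-tuning the billions of base parameters.
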
Date of Conference: 14-20 April 2024
Date Added to IEEE Xplore: 20 June 2024
Conference Location: Lisbon, Portugal

