Fine-Tuning Pre-Trained Language Model for Urgency Classification on Food Safety Feedback


Abstract:

The Singapore Food Agency (SFA) receives hundreds of feedback reports on Singapore's food safety every week, which are time-consuming and costly to manage. A prompt response to urgent food safety feedback is crucial in cases such as food poisoning outbreaks. Automating feedback urgency classification can help SFA officers prioritise feedback efficiently and effectively, so that they can respond quickly to urgent cases. In this paper, we propose an approach that fine-tunes a BERT-based pre-trained language model for feedback urgency classification, formulated as a sequence classification task. In addition, to speed up the labeling of task-specific feedback data, we propose a process that combines zero-shot text classification and decision tree methods to label data with minimal human supervision. We have conducted experiments to evaluate the proposed fine-tuned BERT model and compared it with fine-tuned DistilBERT and XLNet models on the feedback urgency classification task. The results show that the proposed fine-tuned BERT model achieves promising performance, outperforming the fine-tuned DistilBERT and XLNet models by 7% and 5% in macro-averaged F1-score, respectively.
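The weakly supervised labeling step mentioned in the abstract could be sketched as follows. A zero-shot classifier scores each feedback text against candidate labels, and a simple decision rule maps those scores to an urgency label. The candidate label set, the 0.5 threshold, the decision rule, and the `facebook/bart-large-mnli` model choice below are all illustrative assumptions, not the paper's published configuration (the paper uses decision tree methods; a hand-written threshold rule stands in for them here).

```python
# Hypothetical label set for Singapore food safety feedback (assumed, not from the paper).
CANDIDATE_LABELS = ["food poisoning", "hygiene lapse", "general enquiry"]


def label_urgency(scores: dict, threshold: float = 0.5) -> str:
    """Map zero-shot scores to an urgency label.

    Assumed rule: feedback is urgent when 'food poisoning' is the
    top-scoring label and its score clears the threshold.
    """
    top = max(scores, key=scores.get)
    if top == "food poisoning" and scores[top] >= threshold:
        return "urgent"
    return "non-urgent"


def weak_label(texts: list) -> list:
    """Label raw feedback texts with minimal human supervision."""
    # NLI-based zero-shot pipeline; the model choice is an assumption.
    from transformers import pipeline

    clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
    labels = []
    for out in clf(texts, candidate_labels=CANDIDATE_LABELS):
        # The pipeline returns parallel "labels" and "scores" lists per text.
        scores = dict(zip(out["labels"], out["scores"]))
        labels.append(label_urgency(scores))
    return labels
```

The resulting weak labels would then serve as training targets for fine-tuning the BERT sequence-classification model described in the paper.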
Date of Conference: 06-07 September 2023
Date Added to IEEE Xplore: 27 October 2023
Conference Location: Bandung, Indonesia

