I. Introduction
Butterflies are pivotal for ecosystems, particularly as pollinators that aid plant reproduction and biodiversity. However, while they contribute positively to the environment, some species can also act as pests, causing significant damage to crops and forests. This duality underscores the need for effective butterfly species management, beginning with accurate species identification. Among the methods for automatic recognition, imaging-based techniques have proven highly effective over the years. Traditionally, these systems have relied on RGB images captured by standard cameras [1].

With traditional machine learning techniques, models are built from a variety of features extracted from labeled butterfly images. The Gray Level Co-occurrence Matrix (GLCM) is a widely used method for texture feature extraction from butterfly images, and has been combined with classifiers such as K-Nearest Neighbors (K-NN) [2], Multinomial Logistic Regression (MLR) [3], and the Rough Set approach [4]. Texture features based on Local Binary Patterns (LBP) have also been explored with a Support Vector Machine (SVM) classifier [5]. Deep learning techniques relying on Convolutional Neural Networks (CNNs) have been studied as well [6]; Zhu et al. [7] applied a CNN to an image database of 10 butterfly species. For larger species databases (i.e., with more than 80 species), advanced deep learning architectures such as YOLO [8] and ResNet [9] have proven effective for classification.
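To make the traditional feature-plus-classifier pipeline concrete, the following minimal sketch (not drawn from any of the cited works) extracts GLCM texture descriptors from grayscale images and feeds them to a K-NN classifier using scikit-image and scikit-learn. The synthetic data, image size, GLCM distances/angles, selected texture properties, and the choice of n_neighbors=3 are all illustrative assumptions, not parameters reported in the literature above.

```python
# Illustrative sketch: GLCM texture features + K-NN classification,
# in the spirit of the RGB-image pipelines cited above.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier


def glcm_features(gray_u8):
    """Compute a small GLCM texture descriptor from an 8-bit grayscale image."""
    glcm = graycomatrix(
        gray_u8,
        distances=[1],
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256,
        symmetric=True,
        normed=True,
    )
    # Four common GLCM properties, flattened over the distance/angle grid.
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])


# Placeholder data: random 64x64 grayscale "wing patches" for 3 hypothetical species.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(30, 64, 64), dtype=np.uint8)
labels = np.repeat([0, 1, 2], 10)

# Build the feature matrix and fit a K-NN classifier.
X = np.vstack([glcm_features(img) for img in images])
knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(knn.predict(X[:5]))  # predictions on the first few (training) samples
```

In a real system, the random arrays would be replaced by labeled butterfly images converted to grayscale, and the model would be evaluated on a held-out test split rather than on the training samples.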