Abstract:
Following an exponential increase in the number of applications created every year, there are currently over 2.5 million apps in the Google Play Store. Consequently, there has been a sharp rise in the number of apps downloaded by users on their devices. However, limited research has been done on navigability, grouping, and searching of applications on these devices. Current methods of app classification require manual labelling or extracting information from the app store. Such methods are not only resource-intensive and time-consuming but are also not scalable. To overcome these issues, the authors propose a novel architecture for classification of applications into categories, utilizing only the information available in their application packages (APKs) - consequently removing any external dependency and making the entire process completely on-device. A multimodal deep learning approach is followed in a 2-phase training scheme by independently training neural models on distinct sets of information extracted from the APKs and assimilating and fine-tuning the learned weights to incorporate combined knowledge. Our experiments show significant improvement in the evaluation metrics for app classification and clustering over the set benchmarks. The proposed architecture enables a fully on-device solution for app categorization.
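The abstract only outlines the two-phase multimodal scheme, so the following is a minimal sketch of what such a pipeline could look like. All names here (the two APK-derived modalities, feature dimensions, layer sizes, and the fusion head) are illustrative assumptions, not the paper's actual architecture: each modality encoder is first trained independently with its own classification head (phase 1), then the pretrained encoders are assimilated into a fused classifier and fine-tuned jointly (phase 2).

```python
import torch
import torch.nn as nn

# Hypothetical feature dimensions for two APK-derived modalities
# (e.g., permission/manifest vectors and code/string embeddings);
# the paper's actual feature sets and sizes are not specified here.
PERM_DIM, TEXT_DIM, HIDDEN, NUM_CATEGORIES = 256, 300, 128, 32

class ModalityEncoder(nn.Module):
    """Encoder plus classification head for one modality (phase-1 training)."""
    def __init__(self, in_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, HIDDEN), nn.ReLU())
        self.head = nn.Linear(HIDDEN, NUM_CATEGORIES)

    def forward(self, x):
        z = self.encoder(x)
        return self.head(z), z

class FusionClassifier(nn.Module):
    """Phase 2: reuse the pretrained encoders and fine-tune them jointly."""
    def __init__(self, enc_a, enc_b):
        super().__init__()
        self.enc_a, self.enc_b = enc_a.encoder, enc_b.encoder
        self.head = nn.Linear(2 * HIDDEN, NUM_CATEGORIES)

    def forward(self, xa, xb):
        # Concatenate the two learned representations and classify.
        z = torch.cat([self.enc_a(xa), self.enc_b(xb)], dim=-1)
        return self.head(z)

# Phase 1: each encoder would be trained independently on its own feature set.
perm_model, text_model = ModalityEncoder(PERM_DIM), ModalityEncoder(TEXT_DIM)
# ... train perm_model and text_model separately with cross-entropy ...

# Phase 2: assimilate the learned weights and fine-tune the fused model.
fused = FusionClassifier(perm_model, text_model)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(fused.parameters(), lr=1e-4)

xa = torch.randn(8, PERM_DIM)                 # dummy batch of permission features
xb = torch.randn(8, TEXT_DIM)                 # dummy batch of text features
y = torch.randint(0, NUM_CATEGORIES, (8,))    # dummy category labels

loss = loss_fn(fused(xa, xb), y)
loss.backward()
optimizer.step()
```

Because the sketch uses only information extracted from the APK as input features, the same fused model could in principle run fully on-device once trained; the split into exactly two modalities is an assumption made for brevity.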
Date of Conference: 03-05 February 2020
Date Added to IEEE Xplore: 12 March 2020
ISBN Information:
Print on Demand (PoD) ISSN: 2325-6516