Abstract:
Machine learning technologies have permeated our daily lives through a wide range of applications such as fraud detection, product recommendation, data mining, and image recognition. Businesses that need to process and analyze huge amounts of data are competing to be the first to adopt these solutions. Nonetheless, developing classification algorithms with machine learning frameworks is time-consuming, costly, and requires a team with strong technical capabilities. To reduce these expenses, market-leading cloud providers have started to offer the Machine-Learning-as-a-Service (MLaaS) cloud delivery model. However, businesses and users still face the challenge of deciding which platform to adopt. In this paper, we evaluate the machine learning classifiers and performance of the BigML, Microsoft Azure ML Studio, IBM Watson ML Studio, and Google AutoML Tables platforms on the classification of multi-class datasets, using the micro-averaged F-score, training time, and cost, to enable users to make a more informed decision. Since the choice of classifier can have a crucial impact on the micro-averaged F-score, we trained all pre-built algorithms offered by each platform on the given multi-class datasets to conduct a comprehensive investigation. The results show that Google AutoML provides the highest micro-averaged F-score, but it is costly and requires more training time. This research will enable developers of intelligent edge computing services that rely on MLaaS to select the platform best suited to their applications' needs.
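For reference, the micro-averaged F-score used to compare the platforms can be reproduced from exported predictions; the snippet below is a minimal sketch, assuming scikit-learn is available, and the label arrays y_true and y_pred are hypothetical placeholders rather than data from the paper.

    # Minimal sketch: micro-averaged F-score for a multi-class problem.
    # Assumes scikit-learn; y_true and y_pred are hypothetical placeholder
    # arrays, e.g. labels exported from an MLaaS platform's batch predictions.
    from sklearn.metrics import f1_score

    y_true = [0, 1, 2, 2, 1, 0, 2]   # ground-truth class labels
    y_pred = [0, 2, 2, 2, 1, 0, 1]   # labels predicted by the trained classifier

    # Micro averaging pools true positives, false positives, and false negatives
    # across all classes before computing precision, recall, and the F-score.
    micro_f1 = f1_score(y_true, y_pred, average="micro")
    print(f"micro-averaged F-score: {micro_f1:.3f}")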
Date of Conference: 13-15 August 2021
Date Added to IEEE Xplore: 12 October 2021