I. Introduction
Machine learning has become an essential tool for analyzing complex data and making predictions in fields such as finance, healthcare, and technology. Artificial neural networks (ANNs) have been the dominant machine learning technique in recent years owing to their high accuracy on predictive tasks. However, ANNs, and deep neural networks (DNNs) in particular, suffer from limitations such as high computational complexity and a lack of interpretability. These limitations have led researchers to explore alternative machine learning methods, including the Tsetlin machine (TM), a relatively new logic-based approach proposed by Granmo in 2018 [1]. Recent studies have shown that the TM is a promising alternative to DNNs with several advantages: it is interpretable, has low complexity, and employs a logic-based learning mechanism whose inherent parallelism makes it well suited to native hardware implementation, promising substantially better performance than traditional DNNs. The TM has been actively developed over the last few years and has demonstrated competitive accuracy on a number of benchmarks [2]. In summary, higher energy efficiency, native hardware support, and design productivity make the TM attractive for embedded applications and hardware acceleration [3], [4].
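To make the logic-based mechanism mentioned above concrete, the following is a minimal sketch of TM inference for a single binary output. The clause patterns here are hand-specified for illustration (encoding XOR, a pattern a trained TM can represent); in an actual TM they are learned by teams of Tsetlin automata, and the function names are placeholders, not an API from any TM library.

```python
# Hedged sketch of Tsetlin machine (TM) inference. Each clause is a
# conjunction (AND) of literals, where a literal is an input bit or
# its negation. Positive-polarity clauses vote for the class,
# negative-polarity clauses vote against it.

def clause(x, include, include_negated):
    """Evaluate one conjunctive clause on a Boolean input vector x.
    `include` lists indices whose literal is x[i]; `include_negated`
    lists indices whose literal is NOT x[i]."""
    return all(x[i] for i in include) and all(not x[i] for i in include_negated)

def tm_predict(x, positive_clauses, negative_clauses):
    """Classify x by a clause vote: matching positive clauses add +1,
    matching negative clauses add -1; the sign gives the prediction."""
    score = sum(clause(x, inc, neg) for inc, neg in positive_clauses) \
          - sum(clause(x, inc, neg) for inc, neg in negative_clauses)
    return 1 if score >= 0 else 0

# Illustrative hand-written clause set encoding XOR(x0, x1):
pos = [([0], [1]),    # x0 AND NOT x1
       ([1], [0])]    # x1 AND NOT x0
neg = [([0, 1], []),  # x0 AND x1
       ([], [0, 1])]  # NOT x0 AND NOT x1

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, tm_predict(x, pos, neg))
```

Because each clause is a readable Boolean expression and all clauses evaluate independently, this structure illustrates both the interpretability and the parallelism that make the TM attractive for hardware implementation.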