Abstract:
The importance of "talking" and "eating" in preserving one’s well-being has been underscored. Automatic recognition of daily conversation and eating behavior has promising implications for healthcare management and monitoring systems for the older population. In recent years, the performance of speech recognition models has improved dramatically. However, incorporating sounds produced during eating behaviors, particularly those associated with actions such as chewing and swallowing, into current speech recognition models is challenging. Moreover, the simultaneous recognition of speech and eating behaviors has not been accomplished with sufficient accuracy, despite their high relevance to speech. This study proposes a model for simultaneous speech and eating behavior recognition through multitask learning of two tasks: "speech recognition" and "eating behavior recognition." The model utilizes data collected with a biological sound collection device that makes direct contact with the skin. To validate the effectiveness of the proposed method, we conducted evaluation experiments on a publicly available dataset with respect to both participants and corpus.
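The abstract describes multitask learning over two tasks that share one input signal. A common realization of this idea is a shared encoder feeding two task-specific output heads, trained with a weighted sum of the per-task losses. The following is a minimal NumPy sketch of that general pattern only; the layer sizes, class sets (`VOCAB`, `N_EVENTS`), loss weighting `alpha`, and all function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions, not from the paper).
FEAT_DIM = 40   # acoustic features per frame from the body-conduction signal
HIDDEN = 64     # shared-encoder width
VOCAB = 30      # speech-recognition output symbols
N_EVENTS = 3    # eating-behavior classes, e.g. chew / swallow / other

# Shared encoder (a single tanh layer standing in for a real acoustic model).
W_enc = rng.normal(scale=0.1, size=(FEAT_DIM, HIDDEN))
# Two task-specific heads branching off the shared representation.
W_asr = rng.normal(scale=0.1, size=(HIDDEN, VOCAB))
W_eat = rng.normal(scale=0.1, size=(HIDDEN, N_EVENTS))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(frames):
    """frames: (T, FEAT_DIM) -> per-frame posteriors for both tasks."""
    h = np.tanh(frames @ W_enc)                 # shared representation
    return softmax(h @ W_asr), softmax(h @ W_eat)

def multitask_loss(p_asr, p_eat, y_asr, y_eat, alpha=0.5):
    """Weighted sum of the two frame-level cross-entropies.

    The interpolation weight alpha is an assumption; in practice it is
    tuned so neither task dominates training.
    """
    t = np.arange(len(y_asr))
    ce_asr = -np.log(p_asr[t, y_asr]).mean()
    ce_eat = -np.log(p_eat[t, y_eat]).mean()
    return alpha * ce_asr + (1 - alpha) * ce_eat
```

Because the encoder parameters receive gradients from both loss terms, features useful to one task (e.g. distinguishing chewing noise from speech) can regularize the other, which is the usual motivation for training the two recognizers jointly rather than separately.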
Date of Conference: 29 October 2024 - 01 November 2024
Date Added to IEEE Xplore: 28 November 2024