Abstract:
This paper addresses the development of a multi-modal human assistant. The assistant helps humans based on a given utterance, aided by machine vision. The utterance, in Modern Standard Arabic (MSA) or Egyptian dialect, can be a question about something in the assistant's environment or a request that the assistant will be able to accomplish by coupling with a robot in upcoming work. The utterance is processed through a combination of previously used techniques, such as natural language processing (NLP), sentence similarity, and pattern matching, rather than by any one technique alone. These techniques are tweaked to produce an algorithm that can handle an utterance even when two or more languages are present.
Published in: 2022 2nd International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC)
Date of Conference: 08-09 May 2022
Date Added to IEEE Xplore: 01 June 2022