Natural human-computer interaction based on multimodal feature fusion involves complex intelligent architectures that must cope with unexpected errors and mistakes made by users. Such architectures should react to events that occur simultaneously, possibly with redundancy across different input media. Intelligent-agent-based generic architectures for multimedia multimodal dialog protocols are proposed. Global agents are decomposed into their relevant components, and each element is modeled separately using timed colored Petri nets. The elementary models are then linked together to obtain the full architecture. Generic components of the application are monitored by an agent-based expert system that performs dynamic changes, namely reconfiguration, adaptation, and evolution, at the architectural level. For validation, the proposed multi-agent architecture and its dynamic reconfiguration are applied to practical examples.
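To make the modeling idea concrete, the sketch below shows a minimal timed colored Petri net in the spirit of the abstract: tokens carry a colour (here standing in for an input modality such as speech or gesture) and a timestamp, and a transition fires only when every input place holds a suitably coloured token whose time has arrived, which is one way simultaneous, redundant multimodal events can be fused. This is an illustrative assumption of the authors' formalism, not their actual model; all class and place names (`Token`, `Transition`, `speech_in`, `gesture_in`, `fused`) are hypothetical.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Token:
    colour: str       # token colour, e.g. the input modality ("speech", "gesture")
    timestamp: float  # earliest time at which the token is available


@dataclass
class Transition:
    name: str
    inputs: dict   # input place name -> required token colour
    outputs: dict  # output place name -> produced token colour
    delay: float   # firing delay added to produced tokens


class TimedColouredPetriNet:
    """Minimal timed colored Petri net: places hold timestamped coloured tokens."""

    def __init__(self):
        self.places = {}  # place name -> list of tokens

    def add_place(self, name):
        self.places[name] = []

    def put(self, place, token):
        self.places[place].append(token)

    def enabled(self, t, now):
        # A transition is enabled when every input place holds a token of the
        # required colour whose timestamp has already passed.
        return all(
            any(tok.colour == colour and tok.timestamp <= now
                for tok in self.places.get(place, []))
            for place, colour in t.inputs.items()
        )

    def fire(self, t, now):
        if not self.enabled(t, now):
            raise RuntimeError(f"transition {t.name} not enabled at t={now}")
        # Consume one matching token per input place.
        for place, colour in t.inputs.items():
            toks = self.places[place]
            for i, tok in enumerate(toks):
                if tok.colour == colour and tok.timestamp <= now:
                    del toks[i]
                    break
        # Produce output tokens, delayed by the transition's firing time.
        for place, colour in t.outputs.items():
            self.places[place].append(Token(colour, now + t.delay))


# Usage sketch: fuse a speech event and a gesture event into one command.
net = TimedColouredPetriNet()
for p in ("speech_in", "gesture_in", "fused"):
    net.add_place(p)
net.put("speech_in", Token("speech", 0.0))
net.put("gesture_in", Token("gesture", 0.2))
fuse = Transition("fuse",
                  inputs={"speech_in": "speech", "gesture_in": "gesture"},
                  outputs={"fused": "command"},
                  delay=0.5)
net.fire(fuse, now=1.0)
```

In a full architecture, such elementary nets would be built per component and then linked by sharing places, with a supervisory agent rewriting the net structure to realize the dynamic reconfiguration described above.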
Date of Conference: 2002