Alerting systems are a prevalent and integral part of modern cockpits. When alerting systems serve their intended roles, they can increase safety by monitoring for, and directing pilot attention to, developing hazards. However, numerous studies have identified problematic types of interaction between pilots and alerting systems. Preventing such interactions requires a methodology that can capture and clarify the extent to which pilots will innately agree with, and ultimately rely upon, alerting systems. This paper details the development of Human-Automated Judgment Learning (HAJL), a new methodology that attempts to provide these capabilities. The first section provides background on modeling and measuring pilot interaction with alerting systems. The next section presents the HAJL methodology. An initial experiment substantiating the HAJL methodology is then described.