Human-automated judge learning (HAJL) is a methodology providing a three-phase process, quantitative measures, and analytical methods to support the design of information analysis automation. HAJL's measures capture the human's and the automation's judgment processes, relevant features of the environment, and the relationships among them. Specific measures include the judgment achievement of the human and of the automation, conflict between them, compromise and adaptation by the human toward the automation, and the human's ability to predict the automation. HAJL's utility is demonstrated herein using a simplified air traffic conflict prediction task. HAJL was able to capture patterns of behavior within and across the three phases with measures of individual judgments and of human-automation interaction. Its measures were also used for statistical tests of aggregate effects across human judges. Two between-subject manipulations were crossed to investigate HAJL's sensitivity to interventions in the human's training (sensor noise during training) and in display design (information from the automation about its judgment strategy). HAJL identified that the design intervention affected conflict and compromise with the automation, that participants learned from the automation over time, and that those with higher individual judgment achievement were also better able to predict the automation.
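As an illustrative sketch only (not the paper's actual formulation), lens-model-style analogues of two of these measures could be computed from per-trial data: achievement as the correlation between a judge's ratings and the environmental criterion, and human-automation agreement (the inverse of conflict) as the correlation between the human's and the automation's judgments. The variable names and sample data below are hypothetical.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-trial data: the true state of the environment (criterion)
# and the corresponding human and automation judgments.
criterion  = [1.0, 0.2, 0.8, 0.1, 0.9, 0.3]
human      = [0.9, 0.3, 0.7, 0.2, 0.8, 0.4]
automation = [0.8, 0.1, 0.9, 0.3, 1.0, 0.2]

human_achievement = pearson(human, criterion)       # how well the human tracks the environment
auto_achievement  = pearson(automation, criterion)  # how well the automation tracks it
agreement         = pearson(human, automation)      # high agreement implies low conflict
```

Tracking these quantities across HAJL's three phases would show, for example, whether agreement with the automation rises over time while the human's own achievement is maintained.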