I. Introduction
Existing scholarship has long identified gender biases in hiring practices. Conscious and unconscious gender biases influence human decision-making and have harmed the representation of women in the labour force [1]–[5]. In the past two decades, however, computer-based automated decision-making (ADM) has become increasingly prevalent in recruitment. Given its supposed pragmatism, automation is often assumed to be more impartial, scientific, and mathematical, and thereby to mitigate the very issue of human bias [6]. ADM has thus emerged as an apparent solution to the growing challenges of recruitment [7], [8]. However, the literature has increasingly recognised the vulnerability of ADM to unfair decisions [7], [9], [10], and has attempted to dissect the issue of fairness along intersecting dimensions of race, gender, ability, sexuality, and others [8], [11]–[19].