I. Introduction
Student-facing learning analytics (SFLA) dashboards do not currently provide students with effective feedback for self-regulation of learning, because their designs are insufficiently underpinned by educational research and they lack rigorous evaluation in authentic learning environments [1]. Learning analytics (LA) and SFLA dashboards claim to focus on the measurement, analysis, and reporting of learning [2], [3], so as to understand and optimize it. However, LA research is rarely focused on learning itself [4]; designs of LA systems are seldom based on educational theory [1]; and there is a lack of methodology for designing feedback automation [5].

For pragmatic reasons, national educational policy is driving the adoption of asynchronous, digitally mediated learning experiences [6], [7]. Effective learning during these experiences requires a degree of challenge [8], yet that challenge can cause disengagement if students receive no feedback when they need it [9], [10]. Such feedback could be provided by SFLA dashboards, but research has focused predominantly on staff-facing LA [2], and only 18% of the articles in a review of SFLA [11] actually report the effect of the system on student achievement, behavior, or skills.