A Pragmatic Framework for Assessing Learning Outcomes in Competency-Based Courses

Contribution: A competency assessment framework that enables learning analytics for course monitoring and continuous improvement. Our work fills the gap in systematic methods for competency assessment in higher education. Background: Many institutions are shifting toward competency-based education (CBE), thus encouraging their educators to start evaluating their students under this paradigm. Previous research shows that structured assessment models are fundamental in guiding educators toward this adoption. Intended Outcomes: An assessment model for CBE that is easy to adopt and use, while facilitating the application of learning analytics techniques. Application Design: The new framework considerably extends a prior model we proposed three years ago. Two engineering competency-based courses used the framework for assessment. Assessment rubrics were prepared and used for evaluating and collecting the students' data progressively, thus enabling the use of learning analytics for decision-making. Findings: Thanks to the model: 1) students received a detailed report of their achievements, including a thorough explanation and justification of the evaluation criteria and 2) instructors could improve the course and provide objective evidence of their actions to quality assurance agencies. As a result, the framework is presently being used in 15 courses across eight different degree programs at the Pontifical Catholic University of Valparaiso (PUCV).


Hector Vargas, Ruben Heradio, Gonzalo Farias, Zhongcheng Lei, Member, IEEE, and Luis de la Torre

Index Terms: Assessment, competency, engineering education, learning outcome.

I. INTRODUCTION
In recent years, academic institutions all over the world have declared their conversion toward a competency-based education (CBE) model [1], [2], [3], [4], [5]. In CBE, unlike traditional education, students must not only acquire disciplinary theory (knowledge) but also the skills and attitudes (experience) necessary to cope with real-world working problems [6], [7], [8], which promises great benefits. Nevertheless, despite the long time this teaching model has been present in the education system, its practical implementation still involves a variety of complexities that undermine its effectiveness and success. Without intending to make an exhaustive review of the subject, some arguments supporting this statement are provided as follows.
A literature review on the use of CBE in engineering higher education [9] sought to identify the existing gaps toward a successful implementation. One of the gaps detected was the lack of consensus regarding how study programs should be structured and how competencies should be evaluated. The study also concluded that educators usually differ in their definitions of CBE, particularly in what constitutes competency mastery and what is considered an appropriate assessment.
An example of the consequences of the above was reported in [10]. The study aimed to determine whether the changes resulting from the conversion to a CBE model reached a practical level by analyzing teachers' conceptualizations of their teaching procedure. The research concluded that teachers had not internalized their role in CBE and that traditional education models still prevailed. Although teachers are critical to the success or failure of changes as important as this one, they had understood neither the CBE fundamentals in depth nor their specific role as a catalyst. The teaching staff's worries about their own competency in CBE were previously documented in [11] and [12].
Curricula are typically defined at different abstraction levels, e.g., the global educational framework in a country (macro-curriculum), the career programs in each university (meso-curriculum), and the content and assessment plans of each subject in a career (micro-curriculum). Unfortunately, the actual implementation of learning outcomes and competencies at the micro-curricular level often strays from their macro-curricular/meso-curricular theoretical designs [12], [13]. Thus, new methodologies are needed to keep the CBE design consistent at all abstraction levels [14].
It is widely known in academic management that higher education institutions undergo frequent accreditation processes and updates to improve their curricula and teaching methods, as well as to maintain their educational standards. To ensure quality, universities worldwide rely on external Quality Assurance Agencies for Higher Education to review their internal processes [15]. These agencies strictly monitor the fulfillment of graduate profiles according to established quality standards. However, universities face challenges in providing concrete evidence that their curricula are being rigorously implemented, particularly in CBE programs [16]. Therefore, accountability is crucial for universities to keep their accreditation and show their commitment to quality education [17].
The current solutions to these problems seek, on the one hand, to create procedures to design CBE study plans whose macro-curricular/meso-curricular structures are aligned with the micro-curricular level [18] and, on the other hand, to develop standardized mechanisms for students' assessment in these courses [19]. The latter is key to overcoming the implementation difficulties: if teachers correctly understand how to evaluate in CBE, the model's chances of success increase significantly, as the assessment is where the macro-curricular design materializes.
Assessment in CBE is the measurement of students' competency against a standard of performance [20]. Operationally, it is a process of collecting evidence to analyze students' progress and achievement. The literature suggests that CBE assessment practices differ widely among universities, and little work has focused on identifying best practices [21]. In addition, a variety of ad hoc software tools [22], [23], [24], [25], [26] have been developed to facilitate the evaluation of students under the competency-based model.
This article complements related work by providing a uniform and systematic way to assess students in CBE courses. Our approach's generality has been tested over the last seven years in multiple courses at the Pontifical Catholic University of Valparaiso (PUCV), Chile. As a result, PUCV is currently fostering the usage of our assessment framework in all its courses.
Our framework offers the following contributions.
1) Contribution 1: A systematic way to design competency-based courses and their evaluation strategies.
2) Contribution 2: Straightforward and pragmatic guidance for teachers to make the most of CBE concepts.
3) Contribution 3: Evidence on students' competency achievement, which is typically required by Quality Assurance Agencies.
4) Contribution 4: Support for learning analytics that help to monitor and improve the courses.
5) Contribution 5: A decompositional structure that can encompass all curriculum levels, thus supporting curriculum analysis and consistency checking at all levels.
6) Contribution 6: A procedure for lowering the CBE adoption barrier, helping to convert traditional grades into competency grades with a CBE-based pedagogical sense.

The remainder of this article is organized as follows. Section II presents our competency assessment framework. Section III illustrates its application in two engineering courses. Section IV summarizes and discusses our framework's benefits and drawbacks. Finally, Section V provides some concluding remarks.

II. PRAGMATIC CBE ASSESSMENT MODEL
Our Competency Assessment and Monitoring (C-A&M+) framework supports: 1) specifying how students are evaluated, making explicit the rationale behind it in terms of competencies and learning outcomes, and providing students with detailed rubrics that justify their results and 2) tracking students' achievements to analyze the course as a whole, thus enabling its improvement.

A. Students' Evaluation
Fig. 1 shows the metamodel [27] of C-A&M+, which is an evolution of the prior C-A&M framework we presented in [28]. When C-A&M+ is instantiated for a particular course, abstract elements are subsequently decomposed into more concrete elements, thus facilitating the top-down design of the course through stepwise refinements [29]. Decompositions have associated weights that account for the percent contribution of each subelement (i.e., for their relative importance). These weights support the bottom-up assessment calculation of abstract elements by aggregating the assessments of more concrete elements through weighted averages. For instance, imagine that an element X is composed of three others (A, B, and C), with weights 20%, 30%, and 50%, respectively. Then, X's assessment value would be inferred from A's, B's, and C's assessment values as X = 0.2A + 0.3B + 0.5C.
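This bottom-up roll-up can be sketched in a few lines of code. The following is an illustrative sketch only; the element names, weights, and scores are hypothetical, not taken from any actual course:

```python
# Bottom-up weighted aggregation over a C-A&M+-style decomposition.
# `decomposition` maps each abstract element to its children and their
# weights (which must sum to 1); `leaf_scores` holds the directly
# measured scores of concrete elements.

def aggregate(node, leaf_scores, decomposition):
    """Return the score of `node`: its direct score if it is concrete,
    otherwise the weighted average of its children's scores."""
    if node in leaf_scores:
        return leaf_scores[node]
    return sum(weight * aggregate(child, leaf_scores, decomposition)
               for child, weight in decomposition[node].items())

decomposition = {"X": {"A": 0.2, "B": 0.3, "C": 0.5}}
leaf_scores = {"A": 5.0, "B": 6.0, "C": 4.0}
print(aggregate("X", leaf_scores, decomposition))  # 0.2*5 + 0.3*6 + 0.5*4 ≈ 4.8
```

Because the same rule applies at every level of the metamodel, one function like this can roll specific indicators up into general indicators, those into learning outcomes, and learning outcomes into competencies.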
In C-A&M (our framework's previous version), the assessment model described the need to create concrete assessment indicators associated with the Learning Outcomes (LOs) so that these could afterward be collected from the Assessment Tools (ATs). However, this relationship was not explicitly defined, leaving the responsibility to the course teaching staff. C-A&M+ provides a structured methodology to define such indicators and to use them strategically to create assessment rubrics for the ATs. This is a key point of C-A&M+ because the evaluative cycle closes precisely here, when students receive marks objectively assessed through the rubrics, which teachers also use to provide the corresponding feedback.
The indicators are decomposed into two levels: General Indicators (GIs) and Specific Indicators (SIs). A GI corresponds to the measurement of an intermediate learning level between an LO and an SI, whereas an SI is the direct measurement of a specific knowledge, skill, or attitude in the student.
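One possible encoding of this two-level indicator decomposition, together with a sanity check that every decomposition's weights sum to 100%. All names and weights below are hypothetical examples, not part of the framework itself:

```python
# LO -> GI -> SI decomposition encoded as nested weight maps.
# Each entry maps an element to its children and their relative weights.

lo_decomposition = {
    "LO1.1": {"GI1": 0.6, "GI2": 0.4},   # learning outcome -> general indicators
    "GI1":   {"SI1": 0.5, "SI2": 0.5},   # general indicator -> specific indicators
    "GI2":   {"SI3": 1.0},
}

def weights_consistent(tree):
    """True if every decomposition's weights sum to 1 (within tolerance)."""
    return all(abs(sum(children.values()) - 1.0) < 1e-9
               for children in tree.values())

print(weights_consistent(lo_decomposition))  # True
```

A check like this makes the consistency requirement of the decomposition explicit and machine-verifiable before any grades are computed.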

TABLE I ASSESSMENT RUBRIC FOR AN AT
Assessment rubrics for ATs can be easily derived from the indicator decomposition. Table I shows a template for building an assessment rubric for an AT based on the information specified by a C-A&M+ model. The "Excellent" column represents the full achievement of all SIs of a particular GI, and the subsequent columns represent decreasing scores assigned to the indicator.
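A rough sketch of how such decreasing score columns could be generated from a GI's weight. The level labels, the linear down-scaling, and the 7-point ceiling are illustrative assumptions, not values prescribed by the framework or Table I:

```python
# Derive one rubric row's score columns from a general indicator's weight:
# "Excellent" means full achievement of the GI's SIs, and later columns
# decrease. Labels and fractions below are hypothetical.

LEVELS = ("Excellent", "Good", "Fair", "Poor")
FRACTIONS = (1.0, 0.75, 0.5, 0.25)

def rubric_row(gi_weight, max_score=7.0):
    """Score each achievement column contributes for one GI."""
    return {level: round(gi_weight * max_score * frac, 2)
            for level, frac in zip(LEVELS, FRACTIONS)}

row = rubric_row(0.4)   # a GI worth 40% of its AT
print(row)              # "Excellent" column = 0.4 * 7.0 = 2.8
```

Generating the rows programmatically keeps each rubric consistent with the weights declared in the indicator tables.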

B. Course's Evaluation
C-A&M+ stores students' achievements at different abstraction levels, thus supporting further analysis of the entire course from diverse perspectives to identify its strengths and weaknesses. For example, imagine the instructor wants to test whether a course innovation introduced this year positively affects a particular competency C. As C-A&M+ gathers all students' achievements in terms of competencies and learning outcomes, the instructor can summarize the C results and compare them with those of prior years. Section III exemplifies this kind of analysis.
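The year-over-year comparison just described reduces to grouping a competency's scores by course edition and comparing summary statistics. A minimal sketch, with grades fabricated for illustration (Chilean 1-7 scale):

```python
# Compare a competency's results across two course editions.
# The grades below are fabricated for illustration only.

from statistics import median

c_scores = {
    "2019": [3.8, 4.2, 4.5, 3.9, 5.0],
    "2020": [4.4, 4.9, 5.1, 4.0, 5.3],   # edition with the innovation
}

summary = {year: median(scores) for year, scores in c_scores.items()}
print(summary)  # median per edition
print("higher median after the innovation:", summary["2020"] > summary["2019"])
```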

III. EXAMPLES OF COURSES ASSESSED WITH C-A&M +
This section illustrates the use of C-A&M + with two engineering courses taught at PUCV.

A. Automatic Control Course
C-A&M+ was applied to the Automatic Control Course, a compulsory course allocated in the sixth semester of the Electronic Engineering master's degree at the PUCV Faculty of Engineering. Every year, around 90-100 students enroll in this 16-week course, which covers the main topics of linear control systems. The lectures (4 h per week) are delivered to the entire group of students. Following the lectures, students participate in weekly simulation sessions (2 h per week) in pairs, where they apply the control theory they have learned using specialized software tools. The course content is organized into six units.
The second of these competencies, C2, reads: models and simulates processes to represent their behavior, optimize their parameters, and improve their operating conditions. The starting point for applying the assessment model is to define learning outcomes that connect the contents of the course with the competencies. In this case, the following three learning outcomes were defined.
The student will be able to:
LO1.1: apply methodologies of control system analysis to solve discipline problems.
LO1.2: apply methodologies of control system design to solve discipline problems.
LO2.1: model and simulate control systems to solve discipline problems.

Equation (1) summarizes the Cs' decomposition into LOs. C1 is developed through LO1.1, which is related to control system analysis (units 1-4), and LO1.2, which is associated with control system design (units 5 and 6). Both learning outcomes contribute the same weight (50%) to the competency. Likewise, C2 is developed through LO2.1, which is related to control system modeling and simulation; this single learning outcome contributes fully (100%) to the corresponding competency:

C1 = 0.5 LO1.1 + 0.5 LO1.2,  C2 = 1.0 LO2.1    (1)

Subsequently, a set of assessment indicators is defined for each LO. In this case, a one-to-one assessment strategy was applied, e.g., LO1.1 = AT1. AT1 and AT2 correspond to traditional written tests (individual), while AT3 is a simulation-based homework (in groups of two). Tables II-IV outline the general and specific indicators for AT1-AT3.
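Given the one-to-one strategy and the weights above, a student's competency grades follow directly from the AT grades. A worked sketch; the grades are hypothetical, and the pairings LO1.2 = AT2 and LO2.1 = AT3 are assumed by analogy with the stated LO1.1 = AT1:

```python
# Competency grades for one student under the decomposition in (1).
# AT grades are hypothetical; LO1.1 = AT1 is stated in the text, while
# LO1.2 = AT2 and LO2.1 = AT3 follow the same one-to-one pattern.

at = {"AT1": 4.0, "AT2": 5.2, "AT3": 6.1}   # grades on the 1-7 scale

lo = {"LO1.1": at["AT1"], "LO1.2": at["AT2"], "LO2.1": at["AT3"]}
c1 = 0.5 * lo["LO1.1"] + 0.5 * lo["LO1.2"]  # equal 50% weights
c2 = 1.0 * lo["LO2.1"]                      # single LO, full weight
print(c1, c2)  # ≈ 4.6 and 6.1
```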
Analyzing Students' Outcomes for Course Improvement: C-A&M+ was first implemented in the fifth edition of the Automatic Control Course, in 2016. It has proven useful for keeping students informed about their grades and for evaluating their performance based on competencies and learning outcomes. Additionally, it has helped teachers monitor the progress of the course over time by measuring, recording, and tracking students' academic performance. This information is then used to implement corrective actions that can improve the attainment of competencies in future editions. Teachers were deeply concerned about the low competency achievement in the initial course versions. For instance, in the second semester of 2016, most students were unable to achieve C1.
C-A&M+ has the advantage of supporting transitions between different assessment abstraction levels, which aids in identifying the root cause of an educational problem.
Fig. 3 shows the students' progress with regard to LO1.1 and LO1.2. LO1.1 appears to have been the main factor behind the low achievement of C1 in the second semester of 2016, as its median was only 3.4, while LO1.2 had a median of 4.1. Therefore, teachers decided to focus on improving the teaching of LO1.1.
To achieve this, they revised the teaching materials for control system analysis and reorganized the course schedule to allow for more class time on LO1.1. This resulted in a slight improvement in the students' grades for LO1.1 during the first semester of 2017, but it came at the expense of lower performance on LO1.2 due to its reduced class time. The teachers then shifted their efforts toward improving the students' performance on LO1.2.
As depicted in Fig. 3, teachers have been facing challenges in finding the right balance between LO1.1 and LO1.2 throughout the multiple course editions. The actions taken by the teachers have produced a cumulative effect on C1, which finally became noticeable in the first semester of 2018.
Fig. 4 shows another application scenario. In 2020, we evaluated the use of an interactive simulation tool called linear control system design (LCSD) [30], which is free and specifically designed for teaching the fundamentals of control engineering. In particular, we were interested in checking LCSD's impact on LO1.1. To evaluate this impact on student performance, we observed two distinct groups: a treatment group that used LCSD in their course and a control group that did not. Based on the data shown in Fig. 4, the treatment group appears to have slightly higher scores than the control group. Further analysis examined the differences between the two groups at the Specific Indicators level, as shown in Fig. 5 and detailed in Table II. This analysis revealed that LCSD had a notably positive impact on SI5, SI6, and SI8.
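At its simplest, such a treatment/control comparison is descriptive: compute each group's summary statistics for LO1.1 and compare them. A sketch with fabricated grades:

```python
# Descriptive comparison of two groups' LO1.1 grades (fabricated data).

from statistics import mean, median

treatment = [4.8, 5.1, 4.2, 5.6, 4.9]   # students who used LCSD
control   = [4.3, 4.7, 4.0, 5.0, 4.4]   # students who did not

print("treatment median:", median(treatment))
print("control median:  ", median(control))
print("difference in means:", round(mean(treatment) - mean(control), 2))
```

A formal significance test (e.g., a rank-based test) could be layered on top; the comparison in the paper is reported descriptively from the boxplots.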

B. Control Laboratory Course
This section presents the Control Laboratory Course, which students take once the Automatic Control Course in Section III-A is passed. It is a 16-week course with 60-80 students per year. Lectures (2 h per week to the whole group of students) cover the main topics of control applications from a practical point of view, which are later applied in weekly hands-on laboratory sessions (4 h per week in groups of 10-14 students, organized into pairs depending on the available lab setups).
Each lab session addresses several pieces of knowledge, skills, and attitudes that students should demonstrate; the division of topics among the lab sessions is as follows.
Lab 1: Process modeling and identification.
Lab 2: Time response and performance specifications.
Lab 3: Analysis and design of controllers I.
Lab 4: Analysis and design of controllers II.

Equation (2) details the competency assessment calculations. Students are required to provide four technical reports (AT1-AT4) during the course, one for each lab module. Equation (3) describes the ATs' contribution to the LOs. Tables V and VI summarize the general and specific indicators for AT1 and for AT2-AT4, respectively.

Educators should grade the ATs as objectively as possible. For that purpose, assessment rubrics are typically used. They are important not only for grading but also for providing feedback to students afterward. In this sense, our assessment methodology also provides a way to easily create rubrics from the indicator tables; for instance, Table VII shows the rubric built for AT2-AT4. Although educators often apply rubrics manually, digital assessment rubrics are available in most learning management systems (LMSs). Our university uses the widely known Moodle LMS, which supports rubric-based assessment [31]. Fig. 7 shows a grading process example with Moodle rubrics for a report of the Control Laboratory Course. This view is available when the instructor selects a particular student's report from the students' grading table. The header, highlighted as 1, offers contextual information about the course, the specific AT being evaluated, the name and email of the student under evaluation, and the date when the report was submitted. The report is displayed in the lower left corner 2, and the instructor can review it using the toolbar located at 3. From here, the instructor moves through the report's pages, uses the icon-type indicators, and writes comments to trace the student's performance. The assessment rubric is displayed as a table embedded in the lower right corner of the screen, or it can be enlarged by clicking its expand-arrowheads icon, as shown at 4. For each criterion, the instructor selects an achievement level by clicking on the corresponding cell, which is shaded green after selection. Additionally, the instructor can add comments and attach feedback files 5.
It is important to mention that, unless there is a valid reason, students should have access to a rubric preview beforehand, so they are aware of the evaluation criteria. After the instructor has evaluated all reports, Moodle can be used to gather the assessment indicators for each student and AT for performing learning analytics, as explained in the following section.
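Once grades are exported from the LMS, aggregating them per indicator is straightforward. A sketch assuming a simple CSV layout; the column names are hypothetical, and Moodle's actual export format varies by version and configuration:

```python
# Aggregate exported rubric grades per general indicator.
# The CSV layout below is a hypothetical example: one row per student,
# assessment tool, and general indicator.

import csv
import io
from collections import defaultdict
from statistics import median

raw = """student,at,indicator,score
ana,AT1,GI1,5.5
ana,AT1,GI2,4.0
ben,AT1,GI1,6.2
ben,AT1,GI2,3.1
"""

by_indicator = defaultdict(list)
for row in csv.DictReader(io.StringIO(raw)):
    by_indicator[row["indicator"]].append(float(row["score"]))

for gi, scores in sorted(by_indicator.items()):
    print(gi, "median:", median(scores))
```

The resulting per-indicator medians feed directly into boxplot-style analyses such as those in Figs. 2-8.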
Analyzing Students' Outcomes for Course Improvement: The boxplots in Fig. 8 compare students' achievements at the levels of competencies, learning outcomes, and assessment tools. A common misconception regarding distance-learning laboratory courses is that competencies cannot be fully achieved because students do not manipulate actual equipment directly. In contrast, Fig. 8(a) shows that all course competencies were thoroughly achieved with good performance. Moreover, the competencies concerning the theoretical and practical activities, C1 and C2, show similar performance in both median and dispersion.
Obviously, if any learning outcome had been related to direct contact with actual equipment or other practical issues only feasible in situ, this might not have been the case. In this sense, we think signal noise and disturbance emulation played a key role.
Finally, C3 also performed very well, since the teachers frequently insisted on these aspects during the course.
Competency achievement can be examined in more detail by analyzing the learning outcome results (see Fig. 8(b)). For instance, students found the theoretical issues of controller design (LO1.2) more complicated than process modeling and identification (LO1.1). This matches what teachers expected, given the ATs' difficulty. The same happens with the use of computational tools for these purposes: students performed better when using software tools for process modeling and identification (LO2.1) than for controller analysis and design (LO2.2).
This examination shows teachers that extra effort must be put into controller analysis and design in subsequent course editions.
Finally, the assessment tool results reveal that students achieved lower performance in AT3 than in the other ATs, which could raise red flags. However, as described in Section III-B, AT3 and AT4 concern controller analysis and design, and AT3 is the first time students encounter this challenge. As they gained experience, their performance improved in the next practical session, AT4.

IV. RESULTS AND DISCUSSION
Our work was initiated to assist Chilean universities in transitioning to CBE. This transition became particularly crucial after the National Accreditation Commission (CNA-Chile) published the "New Quality Criteria and Standards for Education" [32] on September 30, 2021. These criteria and standards adopt CBE, and complying with them became mandatory for all Chilean universities starting October 1, 2023.
Implementing a CBE program involves not only assessing students' competencies and learning outcomes but also monitoring courses to ensure their proper implementation and continuous improvement. However, as far as we know, no systematic approach covers both aspects. Therefore, we devised C-A&M+.
Since this article's first author is the coordinator of the two courses described in Section III, we have been using C-A&M+ in them since 2016. As outlined in Section III and reported in more detail in [28], [30], [33], [34], [35], and [36], C-A&M+ has proved its usefulness: students are provided with a comprehensive report of their accomplishments, including a detailed explanation and justification of all evaluation criteria. Additionally, instructors can monitor and enhance the course over time, as well as offer objective proof of their actions to quality assurance agencies.
These two successful pilot applications motivated PUCV to encourage the use of C-A&M+ in all its courses. At the moment, C-A&M+ is being used in 15 courses. In all of them, the instructors decided to adopt C-A&M+ voluntarily. According to their feedback, C-A&M+ supports the implementation of student assessment under CBE and allows course progress to be tracked.

V. CONCLUSION
Our article has presented a framework called C-A&M+ for students' assessment within the increasingly popular CBE paradigm. C-A&M+ not only assists instructors in evaluating students but also in monitoring courses for their continuous improvement.
The practicality of C-A&M+ has been demonstrated through its application over seven years in two control engineering courses at PUCV in Chile. By breaking down competencies into learning outcomes, indicators, and assessment tools, C-A&M+ facilitates the creation of rubrics that establish consistent grading criteria. In addition, C-A&M+ supports the analysis of courses at various levels of abstraction, resulting in their continual improvement over time.
As a result, PUCV is currently encouraging its teaching staff to use C-A&M+. At the moment, 15 courses across eight different university degrees have adopted C-A&M+.
Hector Vargas received the M.Sc. degree in electrical engineering from the De la Frontera University, Temuco, Chile, in 2001, and the Ph.D. degree in computer science from UNED, Madrid, Spain, in 2010.
Since 2010, he has been with the Electrical Engineering School, Pontificia Universidad Católica de Valparaíso, Valparaiso, Chile. His current research interests include dynamic system simulation, automatic control, industrial automation, IoT applications, and engineering education.
Ruben Heradio received the M.Sc. degree in computer science from the Polytechnic University of Madrid, Madrid, Spain, in 2000, and the Ph.D. degree in software engineering and computer systems from UNED, Madrid, in 2007.
He is currently a Full Professor with the Software Engineering and Computer Systems Department, UNED Computer Engineering School, UNED. His research interests include software engineering, computational logic, e-learning, and bibliometrics.
Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.
Gonzalo Farias received the M.Sc. degree in computer science from the De la Frontera University, Temuco, Chile, in 2001, and the Ph.D. degree in computer science from UNED, Madrid, Spain, in 2010.
Since 2012, he has been with the Electrical Engineering School, Pontificia Universidad Catolica de Valparaiso, Valparaiso, Chile. His current research interests include machine learning, simulation and control of dynamic systems, and engineering education.
Zhongcheng Lei (Member, IEEE) received the B.S. degree in automation and the Ph.D. degree in mechatronic engineering from Wuhan University, Wuhan, China, in 2014 and 2019, respectively.
He is currently an Associate Researcher with the School of Electrical Engineering and Automation, Wuhan University. His current research interests include networked control systems, web-based remote and virtual laboratories, and digital twins.
Luis de la Torre received the M.Sc. degree in physics from the Complutense University of Madrid, Madrid, Spain, in 2008, and the Ph.D. degree in computer science from UNED, Madrid, in 2013.
He is a Professor with the Computer Science and Automatic Control Department, UNED. His current research interests include virtual and remote labs, distance education, and HTTP protocols and technologies for networked control systems with event-based control techniques. He has published over 20 articles in international journals on these and other topics.

Fig. 2 displays a boxplot that illustrates the progress of students' C1 fulfillment in five consecutive course editions, from 2016 to 2018. On average, there were 60.4 students per course. It is worth noting that in the Chilean university system, grades range from 1 to 7, with 4 being the minimum score required to pass the course. Figs. 2-8 show a highlighted red line that indicates the boundary between the successful and unsuccessful achievement of competencies, learning outcomes, indicators, etc.

Lab 3: Analysis and design of controllers I.
Lab 4: Analysis and design of controllers II.

Defining How Students are Assessed: This course contributes to the following three competencies of the graduation profile. The student:
C1: designs and conducts experiments to analyze and generate results related to the discipline.
C2: models and simulates processes to represent their behavior, optimize their parameters, and improve their operating conditions.
C3: communicates ideas clearly and coherently through their native language in an academic context.
Competencies C1-C3 are decomposed into the following learning outcomes.

Fig. 7. Example of an assessment rubric and its digital implementation in Moodle.

Fig. 8. Students' grades in the 2020 edition of the Control Laboratory Course at the levels of (a) competencies, (b) learning outcomes, and (c) assessment tools.
In particular, the Automatic Control Course contributes to developing two of them, which are described as follows.

Unit 1: Introduction to automatic control.
Unit 2: Time response and performance specifications.
Unit 3: Control system analysis (time domain).
Unit 4: Control system analysis (frequency domain).
Unit 5: Design of classical controllers.
Unit 6: State-space control.

Defining How Students are Assessed: The current graduate profile of the study program promotes a total of 17 competencies (Cs).

TABLE II INDICATORS AND WEIGHTS FOR AT1 (Automatic Control Course)

TABLE III INDICATORS AND WEIGHTS FOR AT2 (Automatic Control Course)

TABLE IV INDICATORS AND WEIGHTS FOR AT3 (Automatic Control Course)

Fig. 2. Students' results for C1 (Automatic Control Course).
LO3.2: develop technical reports whose writing and use of grammar present appropriate quality levels.

Fig. 6 depicts the C-A&M+ model for the Control Laboratory Course, providing an overview of the model's decompositional relationships.

TABLE V INDICATORS AND WEIGHTS FOR AT1 (Control Laboratory Course)

TABLE VI INDICATORS AND WEIGHTS FOR AT2, AT3, AND AT4 (Control Laboratory Course)

TABLE VII ASSESSMENT RUBRIC FOR AT2, AT3, AND AT4 (Control Laboratory Course)

Table VII shows the assessment rubric defined for AT2, AT3, and AT4 of the Control Laboratory Course, based on Table VI. Note that LO1.2 and LO2.2 are fully evaluated with AT2, AT3, and AT4, while LO3.1 and LO3.2 are evaluated only in part (the LOs related to C3 are evaluated in all ATs). Similar rubrics were created for all ATs in both courses. If educators systematically use assessment rubrics, learning analytics techniques can be applied based on them.