A Nonradial Super Efficiency DEA Framework Using an MCDM to Measure the Research Efficiency of Disciplines at Chinese Universities

Evaluating the efficiency of scientific research plays an important role in accelerating technological innovation and optimizing the allocation of research resources. Most studies have focused on measuring research efficiency from a macro perspective, ignoring differences at the discipline level. Furthermore, existing methods have failed to discriminate between evaluation results and to account for the fact that research has variable returns to scale. To address this, in this paper we propose a multiple-criteria decision-making (MCDM) nonradial super efficiency data envelopment analysis (NRSDEA) model, which uses an output-oriented nonradial SDEA method to manage infeasibility problems and integer decision variable constraints. In addition, we used a Malmquist index to decompose the productivity changes in the research efficiency of the statistics discipline at different universities in China. Finally, we verified the rationality and effectiveness of the proposed method.


I. INTRODUCTION
Universities are the core institutions responsible for scientific and technological innovation within a country, and they also provide innovative personnel training, which can affect scientific research levels and regional innovation. In recent years, research investments in Chinese universities have increased rapidly. However, the research efficiency gap has widened at the discipline level.
To allocate limited resources effectively and to promote development between similar disciplines, it is necessary to measure research efficiency [1], [2]. Many researchers have studied the overall efficiency of scientific research in universities, but rarely from a disciplinary perspective [3], [4]. In fact, discipline development is a valid way for universities to improve their scientific research capabilities. By hiring staff dedicated to teaching, training personnel, and conducting research on teaching, a discipline can promote innovation at both the university and the regional level [5], [6]. Therefore, it is more practical and meaningful to evaluate the scientific research abilities of different universities or regions from the perspective of a particular discipline.
In this study, we propose a multiple-criteria decision-making (MCDM) method based on a nonradial super efficiency data envelopment analysis (NRSDEA) to examine the research efficiency of a discipline. The motivations for the study are as follows.
First, an output-oriented NRSDEA method is introduced to deal with the infeasibility problems and integer decision variable constraints inherent in the SDEA. The SDEA is commonly used to measure efficiency, but it can be infeasible for some decision-making units (DMUs) under variable returns to scale (VRS), resulting in low discrimination. Furthermore, in the case of discrete data, the efficiency results obtained through an SDEA may be impractical. Hence, we improved nonradial SDEA models with VRS conditions, mainly focusing on the rules governing research activities and integer programming in mixed-data circumstances.
Second, we combine a multiple-criteria decision-making method with the NRSDEA and propose an MCDM-NRSDEA method. Research efficiency involves multiple factors, also called criteria, which usually differ in importance. The subjective weights used in the nonradial SDEA method are easily affected by human bias. To address this, we combined MCDM with the NRSDEA method, which minimizes (or maximizes) the slack for each DMU and maximizes (or minimizes) performance to obtain a more neutral set of weights by investigating the lower and upper bounds of the possible weights.
Third, contrary to research that has studied this topic from a university or regional perspective, this paper captures the factors affecting research efficiency from a micro perspective. Most of the literature has ignored the significant differences between different disciplines in terms of scientific research efficiency. Here, we use the discipline of statistics as an example, designing a corresponding indicator system and analyzing scientific research efficiency, which will help relevant universities analyze the discipline's advantages and disadvantages so as to improve management.
The rest of the paper is organized as follows: Section 2 briefly reviews the literature. Section 3 introduces the methods used in this paper. Section 4 introduces the proposed MCDM-NRSDEA method and explains the selection of indicators and research objects. Section 5 presents a case study, and Section 6 concludes and discusses possible directions for future research.

II. LITERATURE REVIEW
In this section, we review the existing literature on the following issues: (1) the selection of decision-making units; (2) the buildup of efficiency measurement models; and (3) the establishment of input and output indicators.

A. SELECTION OF DECISION-MAKING UNITS (DMUs)
The objects used to evaluate the research efficiency of universities can be divided into three levels (according to the hierarchy of decision-making units): macro, medium, and micro. Macro research concerns the scientific research efficiency of entire regions, and the regions where schools are located are the corresponding DMUs [7], [8]. The medium research level assumes that universities represent DMUs and mainly compares the efficiency of scientific research at the university level, while micro research evaluates the research efficiency of departments or disciplines [9], [11].
It has been argued that concentrating on the micro units responsible for research production provides more detailed insight into the production efficiency of research activities [12]. However, because of the type of data available, most studies have analyzed research efficiency from a macro or medium perspective. Examples include an evaluation of research efficiency in European universities and an evaluation of innovation efficiency in various regions of China [13], [14].

B. BUILDUP OF MODELS
DEA-based models have primarily been used as efficiency evaluation methods [15]-[17]. They accommodate multiple input and output indicators and can simultaneously evaluate the efficiency of decision-making units (DMUs) with multiple inputs and outputs. However, the discrimination of the evaluation results decreases when DEA-based models include too many evaluation indicators [18], [19], i.e., the efficiency of most DMUs is 1. It is thus easy to get poor results when evaluating research efficiency using only the DEA model. To address this low-discrimination problem, Andersen and Petersen [20] proposed the SDEA model. This model excludes the unit under evaluation from the reference set, which helps to distinguish between DMUs in terms of efficiency [21].
Nevertheless, two problems still exist when the SDEA model is applied to research efficiency measurements. First, the hypothesis of constant returns to scale (CRS) means the results are not consistent with the actual situation [22]. Given this, the BCC model was added to the SDEA model, and the hypothesis of CRS was replaced with VRS. However, the SDEA is not feasible for some DMUs under VRS [23]. Second, most SDEA models use radial measures to calculate the efficiency of DMUs, which assumes that inputs and outputs change proportionally. In reality, this assumption is inconsistent with many situations, and it is more appropriate to use a nonradial SDEA model for efficiency analyses in complex situations [24]. The existing literature has determined the weights of indicators in nonradial SDEA models largely on the basis of prior information and preferences [25], [26]; therefore, the results are highly susceptible to subjective preferences.
Many scholars have thus tried to further extend DEA-based methods. The multiple-criteria decision-making method has been one of the most popular methods of the last 20 years, focusing on aggregating the information of entities in a comprehensive way [27]-[31]. However, the determination of indicator weights remains an open question. Some scholars have argued that DEA and MCDM methods share certain similarities, so many researchers have combined DEA with MCDM methods to determine appropriate indicator weights [32]-[35].

C. THE ESTABLISHMENT OF AN INPUT-OUTPUT INDICATOR SYSTEM FOR SCIENTIFIC RESEARCH
Input and output indicator systems are a critical component in measuring research efficiency. Generally, scientific research inputs include intangible and tangible inputs. Since intangible inputs, such as policy preferences, are difficult to quantify, only tangible inputs are considered in most studies. Tangible inputs mainly include human and material capital. The former generally includes the number of students, scientific researchers, faculty, and staff, and the latter generally includes scientific research expenditures and the number of key laboratories [36]-[39].
Scientific research outputs reflect the achievements obtained, which can be measured both quantitatively and qualitatively. Quantitative measurements include any outputs that can be counted, such as the number of monographs, patents, academic papers, projects, and graduate students [40]. Although the quality of output cannot be examined quantitatively, alternative indicators can be used. For instance, the annual average number of citations, journal quality, and revenue generated from technology transfers can effectively reflect the quality of a paper [41].

III. EXPOSITION OF THE EXISTING METHODS
In this section, we introduce the existing DEA methods and address several problems outlined in the literature review.

A. TRADITIONAL DEA METHODOLOGY
A DEA is an established methodology that uses specific mathematical programming models to evaluate the relative efficiency of a set of comparable DMUs. The traditional DEA model is based on an assumption of CRS [42]. To weaken this assumption, a DEA model with VRS was established. Suppose each DMU has q input indicators and s output indicators; then, a DEA model (with VRS) can be expressed as follows:

$$\begin{aligned} \min\ & \theta - \varepsilon\Big(\sum_{i=1}^{q} s_i^{-} + \sum_{r=1}^{s} s_r^{+}\Big) \\ \text{s.t.}\ & \sum_{j=1}^{n} \lambda_j X_{ij} + s_i^{-} = \theta X_{ij_0}, \quad i = 1, 2, \ldots, q \\ & \sum_{j=1}^{n} \lambda_j Y_{rj} - s_r^{+} = Y_{rj_0}, \quad r = 1, 2, \ldots, s \\ & \sum_{j=1}^{n} \lambda_j = 1, \quad \lambda_j,\ s_i^{-},\ s_r^{+} \ge 0 \end{aligned} \tag{1}$$

where n is the number of DMUs, $X_{ij}$ is the value of the ith input indicator of the jth DMU, and $Y_{rj}$ is the value of the rth output indicator of the jth DMU. $X_{ij_0}$ and $Y_{rj_0}$ are the corresponding values for the $j_0$th DMU under evaluation. $\lambda_j$ indicates the weight of the jth DMU, $\varepsilon$ is a non-Archimedean constant, $\theta$ refers to the efficiency score, and $s_i^{-}$ and $s_r^{+}$ are the slack variables, which indicate the excesses of input i and the shortfalls of output r, respectively.
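As a concrete illustration, the input-oriented VRS model described above can be solved as a small linear program. The sketch below uses SciPy's `linprog` on a hypothetical dataset (four DMUs, one input, one output); the ε-weighted slack terms are omitted for simplicity, so only the radial score θ is computed.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, j0):
    """Input-oriented DEA efficiency (VRS) of DMU j0.

    Decision vector: [theta, lambda_1, ..., lambda_n].
    The epsilon-weighted slack terms are omitted, so only the
    radial score theta is returned.
    """
    n = X.shape[1]  # X is (q, n), Y is (s, n): one column per DMU
    c = np.zeros(1 + n)
    c[0] = 1.0  # minimize theta
    # sum_j lambda_j X_ij - theta * X_ij0 <= 0  (input constraints)
    A_in = np.hstack([-X[:, [j0]], X])
    # -sum_j lambda_j Y_rj <= -Y_rj0  (output constraints)
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(X.shape[0]), -Y[:, j0]])
    # convexity constraint sum_j lambda_j = 1 (the VRS condition)
    A_eq = np.hstack([[[0.0]], np.ones((1, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]

# Hypothetical data: one input (row of X), one output (row of Y)
X = np.array([[2.0, 4.0, 6.0, 5.0]])
Y = np.array([[2.0, 4.0, 5.0, 4.0]])
print(round(bcc_efficiency(X, Y, 3), 4))  # DMU 4 is dominated by DMU 2
```

For the dominated fourth DMU the score falls below 1, while DMUs on the frontier score exactly 1.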
However, the ability of the traditional DEA model to discriminate between evaluation results decreases when too many evaluation indicators are involved, so many DMUs receive an efficiency score of 1. When evaluating the research efficiency of disciplines at various universities, the DEA model therefore often provides poor evaluation results.

B. SUPER EFFICIENCY DEA (SDEA) METHODOLOGY
To address the problem of low discrimination, Andersen and Petersen [20] proposed a super efficiency DEA model. This model is constructed as follows:

$$\begin{aligned} \min\ & \theta \\ \text{s.t.}\ & \sum_{j=1, j \ne j_0}^{n} \lambda_j X_{ij} \le \theta X_{ij_0}, \quad i = 1, 2, \ldots, q \\ & \sum_{j=1, j \ne j_0}^{n} \lambda_j Y_{rj} \ge Y_{rj_0}, \quad r = 1, 2, \ldots, s \\ & \lambda_j \ge 0 \end{aligned} \tag{2}$$

The elimination of inefficient observations (θ < 1) makes no difference to the spanning of the reference set, so the efficiency scores of inefficient DMUs are the same as those of the CCR model. For an efficient DMU (θ = 1), however, the elimination of the efficient observation may change the production frontier: while remaining efficient, the DMU's score can increase in proportion to the amount by which its input vector could grow before it reached the frontier spanned by the remaining reference points.
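To see how removing the evaluated DMU from the reference set produces scores above 1, the following sketch adapts the same linear-programming approach to the super efficiency case under CRS; the data are hypothetical, with the first DMU strictly dominating the frontier spanned by the others.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency_crs(X, Y, j0):
    """Input-oriented super efficiency (CRS): DMU j0 is excluded
    from the reference set, so efficient DMUs can score above 1."""
    n = X.shape[1]
    others = [j for j in range(n) if j != j0]
    c = np.zeros(1 + len(others))
    c[0] = 1.0  # minimize theta
    # sum_{j != j0} lambda_j X_ij - theta * X_ij0 <= 0
    A_in = np.hstack([-X[:, [j0]], X[:, others]])
    # -sum_{j != j0} lambda_j Y_rj <= -Y_rj0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y[:, others]])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(X.shape[0]), -Y[:, j0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + len(others)))
    return res.x[0]

# Hypothetical data: DMU 1 lies strictly above the CRS frontier of the rest
X = np.array([[2.0, 4.0, 6.0, 5.0]])
Y = np.array([[3.0, 4.0, 5.0, 4.0]])
print(round(super_efficiency_crs(X, Y, 0), 4))  # > 1: super-efficient
```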
However, Equation (2), a radial model, only describes the degree of input of radial expansion on the production frontier (with certain input conditions). The proportional change in each input is kept consistent, which does not comply with the law of production. For instance, researchers and funding can be regarded as input indicators, but changes in each of these categories are usually not proportional. Therefore, a nonradial model is more suitable for effectively evaluating realistic situations [43].

C. NONRADIAL SDEA METHODOLOGY
Considering the problems described above, many scholars have built nonradial SDEA models [44]. The model is constructed as follows:

$$\begin{aligned} \max\ & \sum_{r=1}^{s} w_r \theta_{r_0} \\ \text{s.t.}\ & \sum_{j=1, j \ne j_0}^{n} \lambda_j X_{ij} \le X_{ij_0}, \quad i = 1, 2, \ldots, q \\ & \sum_{j=1, j \ne j_0}^{n} \lambda_j Y_{rj} \ge \theta_{r_0} Y_{rj_0}, \quad r = 1, 2, \ldots, s \\ & \lambda_j \ge 0 \end{aligned} \tag{3}$$

where $w_r$ is the relative weight of the rth indicator and $\theta_{r_0}$ is an unknown multiplier attached to the rth output. Using linear programming, a nonradial DEA calculates an efficiency value ($\theta_{r_0}, \forall r$) for each output indicator and then integrates them into a single efficiency score by weighting. Obviously, the weights of the indicators must be determined beforehand when using a nonradial model. At present, most studies that have used nonradial models have determined the set of weights based on experience [45], [46], for example with the AHP method or the Delphi method [47]. However, these methods do not accurately reflect the characteristics of each decision variable, and consequently the evaluation may not be fair. We therefore decided that the DEA should be combined with MCDM to solve the weighting problem.
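A minimal sketch of an output-oriented nonradial super efficiency model of this kind is given below, assuming a prespecified weight vector w and hypothetical data (one input, two outputs, three DMUs). Each output receives its own expansion factor, and a weighted score below 1 signals super efficiency.

```python
import numpy as np
from scipy.optimize import linprog

def nonradial_sdea_output(X, Y, w, j0):
    """Output-oriented nonradial super efficiency under VRS:
    one expansion factor theta_r per output, aggregated with weights w.
    Decision vector: [theta_1, ..., theta_s, lambda_j (j != j0)]."""
    s, n = Y.shape
    others = [j for j in range(n) if j != j0]
    c = np.concatenate([-np.asarray(w), np.zeros(len(others))])  # maximize
    # inputs of the reference combination may not exceed those of DMU j0
    A_in = np.hstack([np.zeros((X.shape[0], s)), X[:, others]])
    # theta_r * Y_rj0 - sum_{j != j0} lambda_j Y_rj <= 0
    A_out = np.hstack([np.diag(Y[:, j0]), -Y[:, others]])
    A_eq = np.hstack([np.zeros((1, s)), np.ones((1, len(others)))])  # VRS
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([X[:, j0], np.zeros(s)]),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + len(others)))
    return -res.fun  # weighted score; < 1 indicates super efficiency

# Hypothetical data: one input, two outputs; DMU 3 dominates the other two
X = np.array([[2.0, 2.0, 2.0]])
Y = np.array([[4.0, 2.0, 4.0],
              [2.0, 4.0, 4.0]])
print(nonradial_sdea_output(X, Y, [0.5, 0.5], 2))
```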

D. MALMQUIST PRODUCTIVITY INDEX
In the MCDM-NRSDEA model, the Malmquist productivity index is as follows:

$$M_i\big(x_i^{t+1}, y_i^{t+1}, x_i^{t}, y_i^{t}\big) = \left[ \frac{D_i^{t}\big(x_i^{t+1}, y_i^{t+1}\big)}{D_i^{t}\big(x_i^{t}, y_i^{t}\big)} \times \frac{D_i^{t+1}\big(x_i^{t+1}, y_i^{t+1}\big)}{D_i^{t+1}\big(x_i^{t}, y_i^{t}\big)} \right]^{1/2} \tag{4}$$

where $(x_i^t, y_i^t)$ denotes the input and output datasets for time period t, and $D_i^t(x_i^t, y_i^t)$ represents the efficiency score obtained for time period t (with the datasets for that same time period).
Moreover, Effch is defined as the DMU efficiency ratio between period t + 1 and period t, obtained by benchmarking the DMU against the frontier; Effch > 1 demonstrates a DMU efficiency improvement between t and t + 1. Correspondingly, Effch < 1 indicates that the efficiency of the DMU has decreased. Techch represents a frontier shift effect and is usually attributable to technical change. Techch < 1 means that the frontier in period t + 1 moves unfavorably compared to the frontier in period t, so the frontier shift negatively affects efficiency.
As long as Techch > 1, the DMU makes technical progress during the time period between t and t + 1 [48], [49].
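Given the four efficiency scores of a DMU measured against the period-t and period-(t + 1) frontiers, the index and its decomposition reduce to simple arithmetic; the values below are purely illustrative.

```python
from math import sqrt, isclose

# Illustrative efficiency scores D^frontier(period) for one DMU
d_t_t, d_t_t1 = 0.80, 0.90      # measured against the period-t frontier
d_t1_t, d_t1_t1 = 0.75, 0.88    # measured against the period-(t+1) frontier

# Malmquist index: geometric mean of the two frontier perspectives
tfpch = sqrt((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t))
effch = d_t1_t1 / d_t_t                                # catch-up effect
techch = sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))   # frontier shift

assert isclose(tfpch, effch * techch)  # Tfpch = Effch x Techch
print(tfpch > 1, effch > 1, techch > 1)
```

The identity Tfpch = Effch × Techch holds by construction, so the two factors split any productivity change into a catch-up component and a frontier-shift component.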

IV. THE MCDM-NRSDEA METHOD
To solve the problems described above, we created a new MCDM-NRSDEA model, which is described in this section. The proposed model is composed of two submodels, an MCDM-DEA and an improved NRSDEA. The MCDM-DEA model calculates the weights used in the improved NRSDEA model, and the improved NRSDEA model considers the data attributes of certain variables, which can effectively improve discrimination and solve the infeasibility problems of an SDEA with VRS.
DEA-based models calculate the efficiency of DMUs over a single period only and cannot capture dynamic changes in efficiency. Therefore, we introduced a Malmquist productivity index to explore the dynamic changes in efficiency for each DMU. Figure 1 depicts the framework of the proposed MCDM-NRSDEA method.
The steps of the MCDM-NRSDEA method are as follows:

Step 1: Linear normalization of indicators

The collected data should be normalized to eliminate the impact of the indicator units. A linear normalization method was thus adopted:

$$X_{ij}' = \frac{X_{ij} - X_{i,\min}}{X_{i,\max} - X_{i,\min}} \tag{5}$$

where $X_{ij}$ denotes the ith indicator of the jth DMU, and $X_{i,\max}$ and $X_{i,\min}$ denote the maximum and minimum values of the ith indicator, respectively. After being normalized by Eq. (5), the processed data are substituted into the MCDM-DEA model to determine the weight of each indicator in the improved NRSDEA model.
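The normalization step is straightforward; a minimal sketch for a single indicator across all DMUs:

```python
def min_max_normalize(values):
    """Linear min-max normalization of one indicator across all DMUs,
    mapping the raw scores onto [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical raw scores of one indicator for five DMUs
print(min_max_normalize([10, 20, 15, 30, 25]))
```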

Step 2: Derive weights from the MCDM-DEA model
The construction of the MCDM-NRSDEA model involves the determination of weights. To ensure that research efficiency evaluations are fair, we calculated common weights by establishing an MCDM-DEA model [50].
Suppose N DMUs are evaluated by M indicators (m = 1, 2, . . . , M), where $X_{im}$ denotes the normalized score for the mth indicator of the ith DMU, and K is a real decision variable. The model is as follows:

$$\begin{aligned} \min\ & \sum_{i=1}^{N} d_i \\ \text{s.t.}\ & \sum_{m=1}^{M} w_m^1 X_{im} + d_i = K, \quad i = 1, 2, \ldots, N \\ & \sum_{m=1}^{M} w_m^1 = 1, \quad 0 \le K \le 1, \quad w_m^1 \ge \varepsilon, \quad d_i \ge 0 \end{aligned} \tag{6}$$

where $d_i$ refers to the slack of the ith DMU and $w_m^1$ denotes the weight of the mth indicator. By minimizing the total slack, Eq. (6) seeks a set of weights under which all DMUs achieve their best possible performance; these ''best'' weights can be considered the upper bound of the possible weights assigned to the indicators. A complementary model is then solved:

$$\begin{aligned} \max\ & \sum_{i=1}^{N} d_i \\ \text{s.t.}\ & \sum_{m=1}^{M} w_m^2 X_{im} + d_i = K, \quad i = 1, 2, \ldots, N \\ & \sum_{m=1}^{M} w_m^2 = 1, \quad 0 \le K \le 1, \quad w_m^2 \ge \varepsilon, \quad d_i \ge 0 \end{aligned} \tag{7}$$

where $w_m^2$ denotes the weight of the mth indicator. Unlike Eq. (6), Eq. (7) seeks to maximize the slack lower bounds and thus minimize the upper bound of the sum of the indicator weights. Therefore, it identifies a set of weights under which all DMUs have their worst possible performance. The ''worst'' weights can be considered the lower bound of the possible weights assigned to the indicators; together with Eq. (6), they bound the admissible weights from both sides. The weights obtained from Eq. (6) and Eq. (7) are linearly combined to obtain the final weights, without loss of generality:

$$w_m = \alpha w_m^1 + (1 - \alpha) w_m^2, \quad 0 \le \alpha \le 1 \tag{8}$$

The specification of α depends on the preference of the decision-maker and implies a choice that falls between ''best'' and ''worst''. Without any available prior information, we can assume α = 0.5.
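The final combination step of Eq. (8) is a simple convex combination of the two weight vectors; the sketch below uses hypothetical ''best'' and ''worst'' weights for three indicators.

```python
def combine_weights(w_best, w_worst, alpha=0.5):
    """Convex combination of the 'best' and 'worst' common weights."""
    return [alpha * b + (1 - alpha) * w for b, w in zip(w_best, w_worst)]

# Hypothetical weight vectors from Eq. (6) and Eq. (7), three indicators
w1 = [0.50, 0.30, 0.20]   # 'best' weights (upper bound)
w2 = [0.20, 0.40, 0.40]   # 'worst' weights (lower bound)
w = combine_weights(w1, w2)   # alpha = 0.5: no prior information
print(w)
```

Because both input vectors sum to 1, any α in [0, 1] again yields weights summing to 1.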
Step 3: Research efficiency obtained through the improved NRSDEA method

For a discipline with low research efficiency, output needs to be raised rather than reduced. Therefore, an output-oriented DEA model is consistent with the long-term goal of improving the research level of a discipline. In addition, disciplines are defined by certain threshold attributes, which must comply with the minimum rules set by education authorities. The members of a discipline located at this threshold tend to carry out low-intensity research activities to complete the necessary teaching tasks, and non-increasing returns to scale are expected in this case. However, when discipline inputs exceed a second threshold, more human and material resources are devoted to research activities. Because the research outputs of disciplines are bounded (e.g., the number of papers is limited by journal space), variable returns to scale (VRS) arise in research activities.
To overcome the infeasibility problem of the VRS super efficiency model, the parameter $\delta_i$ was used [23], which represents extra savings from the ith input variable of DMU$_0$ (i.e., it permits the inputs of the evaluated DMU to be enlarged when the original super efficiency model has no feasible solution). Meanwhile, due to the data attributes of some indicators, integer data constraints were imposed on the model. Suppose there are n DMUs, and each DMU has q input indicators and s output indicators. Then $X_{ij}$ denotes the score of the ith input indicator for the jth DMU, and $Y_{rj}$ is the score of the rth output indicator for the jth DMU (j = 1, 2, . . . , n; i = 1, 2, . . . , q; r = 1, 2, . . . , s). An improved NRSDEA model was thus established:

$$\begin{aligned} \min\ & \sum_{i=1}^{q} v_{ij_0} X_{ij_0} + v_{j_0} \\ \text{s.t.}\ & \sum_{r=1}^{s} u_{rj_0} Y_{rj_0} = 1 \\ & \sum_{i=1}^{q} v_{ij_0} X_{ij} - \sum_{r=1}^{s} u_{rj_0} Y_{rj} + v_{j_0} \ge 0, \quad j \ne j_0 \\ & u_{rj_0} \ge \varepsilon w_r, \quad \varepsilon \le v_{ij_0} \le M, \quad v_{j_0}\ \text{free} \end{aligned} \tag{9}$$

where $X_{ij_0}$ is the value of the ith input indicator of the $j_0$th DMU, $Y_{rj_0}$ is the value of the rth output indicator of the $j_0$th DMU, $u_{rj_0}$ indicates the weight of the rth output indicator of the $j_0$th DMU, $v_{ij_0}$ indicates the weight of the ith input indicator of the $j_0$th DMU, and $v_{j_0}$ is a real number. $w_r$ denotes a given weight vector, $\varepsilon$ is a non-Archimedean constant, and M is a sufficiently large positive number. Without loss of generality, let M = 10^6. The dual form of Eq. (9) can be expressed as:

$$\begin{aligned} \max\ & \sum_{r=1}^{s} w_r \gamma_r - M \sum_{i=1}^{q} \delta_i + \varepsilon \Big( \sum_{i=1}^{q} s_i^{-} + \sum_{r=1}^{s} s_r^{+} \Big) \\ \text{s.t.}\ & \sum_{j=1, j \ne j_0}^{n} \lambda_j X_{ij} + s_i^{-} = X_{ij_0} + \delta_i, \quad i = 1, 2, \ldots, q \\ & \sum_{j=1, j \ne j_0}^{n} \lambda_j Y_{rj} - s_r^{+} = \gamma_r Y_{rj_0}, \quad r = 1, 2, \ldots, s \\ & \sum_{j=1, j \ne j_0}^{n} \lambda_j = 1, \quad \sum_{j=1, j \ne j_0}^{n} \lambda_j Y_{rj} \in \mathbb{Z}_{+}\ \text{for integer-valued indicators} \\ & \lambda_j,\ \delta_i,\ s_i^{-},\ s_r^{+} \ge 0 \end{aligned} \tag{10}$$

where the integrality requirement applies to the integer decision variables, $\lambda_j$ indicates the weight of the jth DMU, and $s_i^{-}$ and $s_r^{+}$ refer to the input and output slacks, respectively. $\gamma_r$ is the expansion multiplier of the rth output indicator, which, together with $\delta_i$, prevents the infeasibility problems of the SDEA; $w_r$ denotes the weight of $\gamma_r$, which overcomes the shortcomings of traditional radial DEA models and makes full use of prior information about the variables.
The weights $w_r$ are provided by the MCDM-DEA model. Thus, we have

$$\eta = \frac{1}{\sum_{r=1}^{s} w_r \gamma_r^{*}} \tag{11}$$

where $\eta$ is research efficiency and $\gamma_r^{*}$ is the optimal solution of Eq. (10). To comply with the rules governing different disciplines, the MCDM-NRSDEA model not only allows the outputs to vary in different proportions, but also fully considers the data attributes of the variables. It also solves the infeasibility problem of the SDEA model and effectively evaluates the research efficiency of each discipline.

Step 4: Research efficiency decomposed by the Malmquist productivity index
To further analyze the dynamic change characteristics of total factor productivity (TFP) and the efficiency indexes of different research activities, we added a Malmquist productivity index. According to Eq. (4), research efficiency can be broken down into pure technical efficiency changes and scale efficiency changes. Hence, research efficiency can be analyzed dynamically. To ensure the stability of the results, other methods were used for comparison.

V. APPLICATIONS FOR RESEARCH EFFICIENCY EVALUATIONS OF VARIOUS DISCIPLINES
In this section, we explore the proposed MCDM-NRSDEA method by evaluating the research efficiency of the statistics discipline in Chinese universities.

A. SELECTION OF INDICATORS AND DESCRIPTIONS OF DMUs
Research efficiency measures the proportional relationship between research inputs and outputs. The establishment of research input and output indicator systems is thus the basis of the evaluation. The ''Fourth Chinese University Subject Rankings (CUSR)'' list has an evaluation indicator system that includes faculty, resources, the quality of personnel training, the level of scientific research, social contributions, and discipline reputation. For this system, input indicators are denoted as NFSD, NSSD, ESTD, and CAD, and output indicators are denoted as NDJPD, NWPD, NPED, and NCPPD. Explanations of the indicators are shown in Table 1.
Notably, due to insufficient information about university disciplines in the science and technology statistical yearbooks, NFSD is derived from the on-boarding time of staff in the current year $t_0$, and the annual NSSD is obtained through a calculation of the present ratio of NFSD to NSSD. ESTD is calculated as the ratio of NFSD to the number of faculty members at a university (NFSU), multiplied by the university's total expenditures on science and technology (ESTU):

$$\text{ESTD} = \frac{\text{NFSD}}{\text{NFSU}} \times \text{ESTU}$$

Similarly, CAD is equal to the ratio of NFSD to NFSU, multiplied by the area of the university campus (CAU):

$$\text{CAD} = \frac{\text{NFSD}}{\text{NFSU}} \times \text{CAU}$$

NCPPD is related to publication time. In general, the earlier a paper is published, the more likely it is to be cited. Therefore, this study used NCPPD to compare papers published at different times: NCPPD refers to the actual citations of papers during a specific and recent time period.
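The apportioning of university-level figures to the discipline can be sketched as follows; all figures are hypothetical and serve only to illustrate the ratio-based construction of ESTD and CAD.

```python
# Hypothetical figures for one statistics department and its university
NFSD = 40          # full-time staff in the statistics discipline
NFSU = 2000        # faculty members at the whole university
ESTU = 5.0e8       # university S&T expenditure (CNY)
CAU = 3.0e6        # campus area (square meters)

share = NFSD / NFSU   # the discipline's share of university faculty
ESTD = share * ESTU   # S&T expenditure apportioned to the discipline
CAD = share * CAU     # campus area apportioned to the discipline
print(ESTD, CAD)
```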

B. DETERMINATION OF DMUs
From the perspective of model requirements, DMUs must satisfy a homogeneity assumption. However, great differences exist between universities in terms of the development of various disciplines. For example, in the ''Fourth CUSR'', discipline grades ranged from C− to A+, and the publication covered only 70% of all participating universities. Differences in the development of disciplines result in outliers, which is not conducive to a fair evaluation. Moreover, due to the state's funding system, higher-ranking universities have more opportunities to receive government support, including preferential policies and financial subsidies. To obtain high-quality data, only universities that ranked in the top 50% nationally were included in the reference set. We chose universities awarded discipline grades greater than C+ by the Ministry of Education in the latest CUSR.
From the available data, we selected 20 universities with statistics departments that received a discipline grade of C+ or above in the ''Fourth CUSR'', including Peking University.

C. ANALYSIS OF THE RESEARCH EFFICIENCY RESULTS
Using the MCDM-DEA model, the weights of the eight subindicators from 2012 to 2019 were obtained and then applied to the improved NRSDEA model to measure the research efficiency of the statistics departments of the sample universities. Table 2 demonstrates that the overall research efficiency of the statistics departments of these sample universities was relatively high. More specifically, the average research efficiency remained between 0.90 and 1.20 over the years included in the study. However, the efficiency of the sample universities also showed a downward trend between 2013 and 2018: with the exception of 2013 and 2019, research efficiency declined to varying degrees. One reason is that the Chinese government has attached increasing importance to the scientific research and innovation capabilities of universities. Since the ''12th Five-Year Plan'', China has significantly increased its research funding and expanded the scale of research activities. However, improvements in research efficiency depend on a variety of factors, e.g., a university's background, environment, and research bases. As an emerging science and technology powerhouse, China may be unable to achieve efficient growth in the short term.
In addition, the research efficiency of statistics departments depends on a university's total strength. As shown in Table 2 and Figure 3, from 2012 to 2019, the input and output levels of research from the statistics departments of grade A-class universities were generally higher than those of grade B-class universities. The reason lies in the different attributes of the universities: grade A-class universities have advantages over grade B-class universities under the existing evaluation regulations.
Grade A-class universities can be divided into two categories. The first category includes China's top humanities and social sciences universities. Compared to science and engineering universities, these universities attach more importance to building up different disciplines, and the annual funds obtained and allocated to statistics departments are thus ample. These universities also provide abundant resources for teachers, which common universities have difficulty matching. Accordingly, in terms of absolute output, no other universities come close. The second category includes common universities, which have a strong advantage in terms of the discipline of statistics. As a flagship discipline, statistics is highly valued by such universities, and therefore statistics departments receive the best resources that a university can provide. As a result, statistics departments in these common universities produce a large amount of scientific research output.
Grade B-class universities, such as THU and SJTU, focus primarily on science and engineering. These universities thus do not pay as much attention to statistics compared to grade A-class universities, and thus the research outputs and inputs of these universities are lower.
However, if research efficiency is used as the standard measurement for discipline evaluations, the results are closely related to a university's overall strength. This can be seen in our comparison between double-first-class universities and common universities. The research efficiency of the statistics discipline at double-first-class universities was higher than that of common universities during most years of the study. In particular, most double-first-class universities whose statistics disciplines were graded as B-class still had higher research efficiency than common universities that were graded as A-class.
Considering the index system of the CUSR, we believe that for common universities to enter the double-first-class ranks, they should improve the output efficiency of their scientific research ''inputs''. If they cannot, they should focus on nonscientific research areas, e.g., introducing talent, international exchange, and student training. Some double-first-class universities, which may perform redundant activities to a certain degree, should formulate relevant policies to make better use of scientific resources.

D. COMPARISONS TO EXISTING MODELS
To further demonstrate the rationality of the approach proposed in this paper, we empirically compared it to three other methods (the traditional DEA, the SDEA, and the nonradial SDEA). For illustration purposes, Table 3 lists the research efficiency of the statistics departments of the 20 universities in 2017. The second column in Table 3 shows the results calculated using the traditional DEA with VRS conditions. Eleven DMUs (55%) were considered efficient, indicating the low discrimination capability of the traditional DEA method.
The third column provides the results when the output-oriented SDEA was utilized, with VRS conditions. The SDEA approach could solve the problem of low discrimination: universities with a score of 1 in the second column had different efficiency scores in the third column. For example, PKU, RUC, and NKU all obtained a score of 1 in the second column, but their efficiency scores in the third column were 1.1912, 2.1268, and 1.0083, respectively.
The fourth column provides the results of the output-oriented nonradial SDEA, where the MCDM-DEA method was used to determine the weights. The efficiency scores in the third and fourth columns are quite different. For example, the efficiency score of Peking University was 1.1912 (third column), but it decreased to 0.7677 (fourth column).
Moreover, the correlation between the SDEA and the nonradial SDEA was evaluated using Spearman's rank correlation test. As shown in Table 4, there was no significant correlation between the two methods. In other words, the results of the SDEA model did not accurately reflect the respective rankings of research efficiency.
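Spearman's rank correlation test of the kind used here is available in SciPy; the scores below are hypothetical and merely illustrate the call.

```python
from scipy.stats import spearmanr

# Hypothetical efficiency scores of six DMUs under two models
sdea_scores = [1.19, 2.13, 1.01, 0.85, 0.77, 0.95]
nrsdea_scores = [0.77, 0.74, 1.05, 0.60, 0.90, 0.88]

# rho measures agreement between the two rankings; pvalue tests rho = 0
rho, pvalue = spearmanr(sdea_scores, nrsdea_scores)

# Sanity check: two identical rankings give a perfect correlation of 1
rho_same, _ = spearmanr([1, 2, 3, 4], [10, 20, 30, 40])
print(round(rho, 3), rho_same)
```

A ρ close to 1 would indicate that the two models produce essentially the same ranking; an insignificant ρ, as reported in Table 4, indicates that they do not.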
Neither the SDEA nor the nonradial SDEA method could solve the infeasibility problem. Taking this problem into account, we introduced an improved nonradial SDEA model, yielding the proposed MCDM-NRSDEA. The results in the fourth and fifth columns are quite different, but the results in the fifth column are more in line with the practical situation. For example, in the case of RUC, the efficiency score was 0.7414 (fourth column), which places it in a middle or lower position. However, the efficiency score of RUC became 1.2853 (fifth column) when we used the proposed method, which is in line with RUC having the best statistics research capabilities in China. These results demonstrate that the method proposed in this paper simultaneously solves the problems of low DMU discrimination and SDEA infeasibility.

Table 5 shows the geometric average efficiency scores of the four classes of universities for each of the four models, and all four models revealed some consistent characteristics. For instance, the statistics research efficiency of double-first-class universities was higher than that of common universities, and in some cases the statistics research efficiency of grade B-class universities was higher than that of grade A-class universities, which indicates that the proposed method is robust and efficient.

E. MALMQUIST PRODUCTIVITY INDEX AND ITS DECOMPOSITION
To dynamically evaluate research efficiency and analyze trends in the research activity index (TFP), Tfpch was calculated and decomposed for the given disciplines from 2012 to 2019. Table 6 shows that the average annual Malmquist productivity index was 0.985, which means that the TFP of statistical research activities slightly decreased. Notably, the average values of Techch and Effch were 0.992 and 0.993, respectively, showing that both technical change and efficiency change within production technology hindered the improvement of TFP. Furthermore, Effch can be decomposed into pure technical efficiency changes (Pech) and scale efficiency changes (Sech). The main reason for the decrease in Effch was the decline in Sech. In 2011, statistics was upgraded from a second-level discipline to a first-level discipline. As a result, the statistics discipline is now an independent unit in terms of discipline management and resource allocation, which has created a convenient environment for research. However, research management and resource allocation capabilities have failed to keep up with the growth rate of research investments, resulting in redundancy in resource investments. Therefore, research management mechanisms and factor allocation structures still need to be adjusted.
From the perspective of universities, the average dynamic change in the TFP of grade A-class universities was 0.984 from 2012 to 2019, and the average change in the TFP of grade B-class universities was 0.985. As can be seen from the decomposition of TFP, the TFP of grade A-class universities declined due to the decline in Effch, which exhibited an average annual decrease of 2.1%. In addition, the production frontier of research activities shifted inward, which caused the Techch of grade B-class universities to decrease by 1.9%.
Therefore, for grade A-class universities, in order to maintain the current status of their statistics departments, more ''inputs'' should be diverted to nonscientific research fields. In addition, grade B-class universities should step up the pace of innovation and prioritize a rise in the academic rankings by achieving progress with innovation and technology.

VI. CONCLUSIONS
Evaluating the performance of specific disciplines can enhance university development and is also an effective way to promote innovation at a regional level. Here, we proposed an MCDM-NRSDEA model to estimate the research efficiency of different statistics departments. First, we demonstrated that the model has more discriminating power than traditional DEA methods. Second, by taking a linear combination of the optimal weight and the worst weight of each indicator as the final weight, the new model yields a more flexible and objective set of indicator weights, avoiding the influence of subjective preference. Third, the improved NRSDEA model conforms to the laws governing scientific research activities and has a stronger tolerance for different data types. For instance, we used five integer variables in our case study, where the MCDM-NRSDEA method was applied to calculate the research efficiency of statistics departments in Chinese universities. The results showed that for common universities to achieve a double-first-class ranking, they should improve their scientific research efficiency. Conversely, some double-first-class universities should divert more ''inputs'' to nonscientific research fields to maintain their current status.
In future research, we will further integrate DEA-based models and MCDM-based methods to address efficiency measurements for different types of data, e.g., by combining a fuzzy interval DEA with a hybrid multiple-attribute decision-making method. In addition, our method will be applied to other similar discipline evaluation problems, such as talent training efficiency, resource utilization efficiency, and research achievement efficiency.