Parameter Estimation Effect of the Homogeneously Weighted Moving Average Chart to Monitor the Mean of Autocorrelated Observations With Measurement Errors

In statistical process monitoring, the usual assumption when designing monitoring schemes is that the process parameters are known and that measurements are perfect, with independent and identically distributed observations. However, in real-life situations, these assumptions rarely hold. Hence, in this paper, the Phase II performance of the homogeneously weighted moving average (HWMA) $\bar{X}$ monitoring scheme under the combined effect of autocorrelation and measurement errors is investigated when the unknown process parameters are estimated from an in-control Phase I dataset. Two models are considered: the first-order autoregressive model for within-sample autocorrelation and the linear covariate model for measurement system error (with constant and linearly increasing variance). Sampling strategies based on skipping some observations, as well as mixing different subgroup samples and taking multiple measurements, are implemented to reduce the negative effect of autocorrelation and measurement errors. Since these sampling strategies incur costs, increasing the slope coefficient of the linear covariate model is considered as an alternative that compensates for the negative effect of measurement errors. The new HWMA $\bar{X}$ scheme is shown to have some interesting detection abilities compared to its competitors. A real-life example is used to illustrate the implementation of the proposed monitoring scheme.


Index Terms: ARL (average run-length), AR(1) (first-order autoregressive model).

I. INTRODUCTION

Various elements within the production industries can lead to process instability and irreversible eventualities such as product defects or inconsistent product quality. This is one of the main reasons that the statistical process monitoring (SPM) field came into existence. Many practitioners view SPM not only as a solution to industrial problems but also as essential in refining capability, through which the variability in any statistical process can be reduced; see for instance [1] and [2]. A control chart, or monitoring scheme, is the most widely used SPM tool for monitoring quality characteristics in industrial and non-industrial applications. There are two main causes of quality variation in SPM: common (or chance) causes and special (or assignable) causes. A statistical process that operates with only common causes of variation is said to be in-control (IC). These causes of variation are unavoidable, as they are naturally present in any repetitive process; hence, they are regarded as an inherent part of a statistical process. On the other hand, a statistical process that operates in the presence of special causes of variation is said to be out-of-control (OOC). Unlike common causes, these causes of variation can be detected and controlled. There are two main types of monitoring schemes, namely memory-less schemes (i.e. Shewhart-type) and memory-type schemes (e.g. CUSUM, EWMA, GWMA and HWMA). The Shewhart, CUSUM, EWMA and GWMA schemes were first developed by [3]- [6], respectively. The HWMA scheme (which is the focus of this paper) is a recently developed memory-type scheme proposed by [7]. When implementing any of these monitoring schemes, it is important to note whether the underlying distribution parameters are known (denoted Case K) or unknown (denoted Case U). In Case K, a monitoring scheme can be implemented directly by using the known parameters to search for the corresponding design parameters such that the resulting control limits yield the desired nominal IC run-length values.
However, in Case U, the monitoring procedure needs to be implemented in a two-phase approach, i.e. Phase I and Phase II (see the review publications [8]- [10] for more details). The retrospective implementation of a monitoring scheme is done in Phase I in order to estimate the distribution parameters and determine the control limits using an IC reference sample. In Phase II, the estimated parameters and control limits are then prospectively used to monitor any departure from the IC state established in Phase I.
Note that perfect measurements almost never exist in real-life applications because, as stated in the review paper on measurement errors by [11]: ''. . . wherever there is a human involvement, an exact measurement is a rare phenomenon in any manufacturing and service environment; hence a difference between the real quantities and the measured ones will always exist even with highly sophisticated advanced measuring instruments.'' For an excellent account of how to monitor observations subjected to measurement errors, readers are referred to [12]- [14]. For recent contributions to monitoring schemes under the effect of measurement errors, see [15]- [20]. Moreover, a majority of applications of the SPM methodology are based on the assumption that the serially-generated sampled observations come from an independent and identically distributed (i.i.d.) process. However, in real life, this assumption is often violated and, consequently, leads to poor performance because the autocorrelation of the observations is not taken into account; see [21]- [25]. Since the combined effect of autocorrelation and measurement errors has a more pronounced negative effect on the performance of monitoring schemes, in this paper a combination of the first-order autoregressive model and the linear covariate error model is used to capture autocorrelation and measurement errors, respectively. Research works that have studied the combined effect of autocorrelation and measurement errors for Shewhart-type and CUSUM-type monitoring schemes, as well as for process capability, are documented in [26]- [35].
The focus of this paper is on the HWMA scheme. This relatively new memory-type scheme allocates a specific weight to the current sample and then distributes the remaining weight homogeneously (i.e. equally) among the previous samples. Unlike for the other memory-type schemes in the SPM literature, there have been only a handful of studies. To be precise, [7] first proposed the HWMA scheme to monitor the process mean of i.i.d. observations in Cases K and U and discussed its robustness to non-normality. Thereafter, [36] investigated the use of an auxiliary variable in the form of a regression estimator as an unbiased estimate of the process mean in Cases K and U, and robustness to non-normality was illustrated. Next, [37] and [38] proposed the double and hybrid HWMA schemes, respectively, and studied their robustness to non-normality. The double (hybrid) model entails using the same (different) smoothing parameters to design a monitoring scheme. While [37] studied both Cases K and U, [38] investigated Case K only. More recently, [39] proposed a bivariate HWMA scheme based on linear profiles to monitor the intercept, slope and variance parameters using the Bayesian estimation framework and illustrated its efficiency over a number of competitors in Case U. For the multivariate scenario, [40] and [41] studied the performance of the HWMA scheme in detecting shifts in the process mean vector in Cases K and U, respectively. For nonparametric schemes, [42] studied the performance of the HWMA scheme based on the sign and signed-rank statistics to monitor symmetric and skewed distributions, which is applicable in the Case K scenario.
Therefore, this paper aims at studying the performance of the HWMA $\bar{X}$ scheme in Case U using sampling strategies based on skipping and mixing samples to reduce the negative effect of autocorrelation and measurement errors. Thus, the key difference between this paper and the existing work on the HWMA scheme is that, here, the sampled observations are not assumed to have perfect measurements (i.e. different levels of measurement errors are introduced) and the within-sample observations are not assumed to be i.i.d. (i.e. different levels of within-sample correlation are introduced). More importantly, a unified model taking into account autocorrelation and measurement errors is incorporated into the HWMA scheme's design.
The rest of this paper is organised as follows: In Section II, the manner in which the process mean is computed when observations are subjected to autocorrelation and measurement errors under different sampling strategies is illustrated. Section III introduces the HWMA scheme for autocorrelated observations with measurement errors in Case U. The empirical discussion is presented in Section IV. An illustrative example using real-life data is provided in Section V. Finally, concluding remarks are provided in Section VI.

II. AUTOCORRELATED PROCESS WITH MEASUREMENT ERRORS IN CASE U

A. PHASE I AND PHASE II ANALYSIS
As stated in the Introduction, the estimation of the process parameters (i.e. the mean $\mu_0$ and standard deviation $\sigma_0$) significantly reduces the performance of any monitoring scheme; see for instance [8]- [10]. Thus, the scheme's capability to respond swiftly to changes in the statistical process weakens; hence, the effect of parameter estimation when the underlying process mean is under the combined effect of autocorrelation and measurement errors needs to be investigated. The process parameters are estimated in Phase I (using m reference samples, each of size n) when the process is deemed to be IC. The unbiased estimators of $\mu_0$ and $\sigma_0$ are defined by $\hat{\mu}_0 = \frac{1}{m}\sum_{i=1}^{m}\bar{X}_i$ and $\hat{\sigma}_0 = \bar{S}/c_4$, where $\bar{X}_i$ is the i-th Phase I sample mean, $\bar{S}$ is the average of the m Phase I sample standard deviations and $c_4$ is the usual unbiasing constant; see for instance [7] and [37].
In Phase II, let the sequence of observations $\{X_{t,i}: t = 1, 2, \ldots$ and $i = 1, 2, \ldots, n\}$ be a set of samples from an autocorrelated $N(\mu_0 + \delta\sigma_0, \sigma_0^2)$ distribution that fits a stationary AR(1) model given by
$$X_{t,i} = \mu_0 + \delta\sigma_0 + \phi\,(X_{t,i-1} - \mu_0 - \delta\sigma_0) + \varepsilon_{t,i},$$
where $\phi$ is the level of autocorrelation, assumed to satisfy $0 < \phi < 1$, and the $\varepsilon_{t,i}$ are i.i.d. $N(0, \sigma_\varepsilon^2)$ random variables, with $\sigma_0 = \sigma_\varepsilon/\sqrt{1-\phi^2}$; without loss of generality, it is assumed that $\sigma_\varepsilon = 1$. While dependence is assumed within each sample $\{X_{t,i}\}$, any two samples $\{X_{t,i}\}$ and $\{X_{l,i}\}$ with $t \neq l$ are independent (i.e. there is no cross-correlation); this is in line with the derivation in [43] for subgroup observations.
It is further assumed that the true values $X_{t,i}$ in Phase II are only observed through $\{X^*_{t,i,j}: t = 1, 2, \ldots;\ i = 1, 2, \ldots, n;\ j = 1, 2, \ldots, r\}$, which follow a $N(A + B\mu_0,\ B^2\sigma_0^2 + \sigma_M^2)$ distribution and are given by
$$X^*_{t,i,j} = A + B\,X_{t,i} + \varepsilon_{t,i,j},$$
where A and B are the intercept and slope coefficients of the measurement system location error. Note that $\varepsilon_{t,i,j} \sim N(0, \sigma_M^2)$ is a random error due to measurement that is distributed independently of $X_{t,i}$, where $\sigma_M^2$ is the variance of the measurement system. Finally, r denotes the number of measurements taken on each sampled subgroup unit; for more discussion of the multiple-measurement sampling strategy, see [12]- [14] and [44].
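To make the combined model concrete, the two equations above can be simulated directly. The sketch below (function names are ours, not the paper's) first generates a within-sample AR(1) subgroup with stationary standard deviation $\sigma_0$, then produces r noisy measurements $X^* = A + B X + \varepsilon$ per sampled unit:

```python
import numpy as np

def gen_subgroup(n, mu0, sigma0, phi, delta=0.0, rng=None):
    """Generate one subgroup of n within-sample AR(1) observations.

    The process mean is mu0 + delta*sigma0; sigma_eps is chosen so that
    the stationary standard deviation equals sigma0.
    """
    rng = np.random.default_rng() if rng is None else rng
    mean = mu0 + delta * sigma0
    sigma_eps = sigma0 * np.sqrt(1.0 - phi**2)
    x = np.empty(n)
    # start the chain from its stationary distribution
    x[0] = rng.normal(mean, sigma0)
    for i in range(1, n):
        x[i] = mean + phi * (x[i - 1] - mean) + rng.normal(0.0, sigma_eps)
    return x

def measure(x, A, B, sigma_M, r, rng=None):
    """Return r noisy measurements X* = A + B*x + error for each true value."""
    rng = np.random.default_rng() if rng is None else rng
    return A + B * x[:, None] + rng.normal(0.0, sigma_M, size=(x.size, r))
```

Averaging `measure(...)` over its second axis reduces the measurement-error variance by a factor of r, which is exactly why the multiple-measurement strategy helps.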

B. COMPUTATION OF THE PROCESS MEAN
Assume that a sample of n observations in Phase II is available from the sequence $\{X^*_{t,i,j}\}$ at each sampling point. Hence, with a single standard set of measurements (r = 1), the process mean of the n observations is calculated as
$$\bar{X}^*_t = \frac{1}{n}\sum_{i=1}^{n} X^*_{t,i}. \qquad (5)$$
Note that (5) denotes the manner in which the process mean is calculated when no remedial approach is incorporated to reduce the negative effect of autocorrelation and measurement errors. The mixed-s-skip sampling strategy proposed in [31] entails merging two samples, at times t − 1 and t, skipping s observations before sampling, to form a single rational subgroup of size n. When the mixed-s-skip sampling strategy (used to reduce the negative effect of autocorrelation) is combined with multiple measurements (used to reduce the negative effect of measurement errors), i.e. mixed-s-skip with r measurements, denoted mixed-s&r, the process mean is calculated as in (6).

C. $\bar{X}^*$ SCHEME WITH A CONSTANT VARIANCE

Proceeding in a similar fashion as in [30], it can be shown that, for the Shewhart $\bar{X}^*$ scheme with no remedial approach (i.e. an autocorrelated process with a constant measurement system variance), the variance of $\bar{X}^*_t$ is given by (7). Thus, letting $\gamma = \sigma_M/\sigma_0$ (i.e. the standardized ratio of the measurement system variability to the process variability), then, with some algebraic manipulation, it follows that (7) can be written as
$$\mathrm{Var}(\bar{X}^*_t) = \rho\,\frac{\sigma_0^2}{n}, \qquad (8)$$
where $\rho$ denotes a placeholder that depends on which sampling strategy is being implemented. Since for any two samples $\{X_{t,i}\}$ and $\{X_{l,i}\}$ with $t \neq l$ there is no cross-correlation, this implies that the covariance between distinct subgroup means is zero (equation (9)). Hence, after some basic algebraic manipulation, the $\rho$ expression for each of the following sampling strategies is given by: • the no remedy strategy, in (10); • the mixed-s&r strategy, in (11), as shown at the bottom of the next page.
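As a rough check on how autocorrelation and measurement errors inflate the subgroup-mean variance, the no-remedy $\rho$ can be sketched from the model assumptions (within-sample AR(1), independent measurement noise, $\gamma = \sigma_M/\sigma_0$). This is a hedged reconstruction, not the paper's equations (10)-(11), and the function name is illustrative:

```python
def rho_no_remedy(phi, gamma, n, B=1.0, r=1):
    """Variance-inflation factor rho so that Var(X-bar*_t) = rho * sigma0^2 / n.

    Sketch for the no-remedy strategy, from
        Var(X-bar*) = B^2 Var(X-bar) + sigma_M^2 / (n r),
    where Var(X-bar) for a within-sample AR(1) process is
        (sigma0^2 / n) * [1 + (2/n) * sum_{k=1}^{n-1} (n-k) phi^k].
    """
    c_phi = 1.0 + (2.0 / n) * sum((n - k) * phi**k for k in range(1, n))
    return B**2 * c_phi + gamma**2 / r
```

With $\phi = 0$, $\gamma = 0$ and $B = 1$ this returns 1, matching the remark in Section III that $\rho = 1$ in the i.i.d., error-free case.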

D. $\bar{X}^*$ SCHEME WITH A LINEARLY INCREASING VARIANCE
Note that in some cases the measurement error variance $\sigma_M^2$ should not be considered constant but rather an increasing function of the process mean, i.e. $\sigma_M^2 = C + D\mu_0$ and thus $\gamma = \sqrt{C + D\mu_0}/\sigma_0$, where C and D are two constants depending on the variability error of the measurement system. Hence, for a linearly increasing variance, the corresponding $\rho$ expressions are as follows: • the no remedy strategy, in (12); • the mixed-s&r strategy, in (13), as shown at the bottom of the next page.
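Under this model, the ratio $\gamma$ is no longer a free constant but a function of the process mean. A one-line sketch (the function name is illustrative, and C, D are assumed nonnegative):

```python
import math

def gamma_linear(mu0, sigma0, C, D):
    """gamma = sigma_M / sigma0 when sigma_M^2 = C + D*mu0
    (linearly increasing measurement-system variance)."""
    return math.sqrt(C + D * mu0) / sigma0
```

For example, with $\mu_0 = 4$, $\sigma_0 = 2$, $C = 1$ and $D = 0.75$, the measurement variance is $1 + 3 = 4$ and $\gamma = 1$.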

III. HWMA $\bar{X}^*$ SCHEME IN CASE U USING THE MIXED-s&r SAMPLING STRATEGY
The plotting statistic of the HWMA $\bar{X}^*$ scheme is defined by
$$H^*_t = \lambda \bar{X}^*_t + (1-\lambda)\,\bar{\bar{X}}^*_{t-1}, \qquad (14)$$
where $\bar{X}^*_t$ is as defined in (6) and $\bar{\bar{X}}^*_{t-1}$ is the mean of the previous $t-1$ subgroup sample means, calculated by
$$\bar{\bar{X}}^*_{t-1} = \frac{1}{t-1}\sum_{j=1}^{t-1}\bar{X}^*_j, \qquad \text{with } \bar{\bar{X}}^*_0 = \hat{\mu}_0. \qquad (15)$$
From the latter and (8), when $t = 1$ (conditional on the Phase I estimates), $\mathrm{Var}(\bar{\bar{X}}^*_0) = 0$. However, when $t > 1$,
$$\mathrm{Var}(\bar{\bar{X}}^*_{t-1}) = \frac{1}{(t-1)^2}\sum_{j=1}^{t-1}\mathrm{Var}(\bar{X}^*_j). \qquad (16)$$
Simplifying (16) further and using (8) and (9) (i.e. no cross-correlation between subgroups), it follows that, when $t > 1$, (16) reduces to
$$\mathrm{Var}(\bar{\bar{X}}^*_{t-1}) = \frac{1}{t-1}\,\rho\,\frac{\sigma_0^2}{n}. \qquad (17)$$
Using (1), it follows that the IC mean of $H^*_t$ is given by $E(H^*_t) = A + B\mu_0$. Due to the different expressions in (15) and (17) when $t = 1$ and $t > 1$, the variance of (14) is given by
$$\mathrm{Var}(H^*_t) = \lambda^2\,\rho\,\frac{\sigma_0^2}{n} \quad \text{and} \quad \mathrm{Var}(H^*_t) = \left(\lambda^2 + \frac{(1-\lambda)^2}{t-1}\right)\rho\,\frac{\sigma_0^2}{n},$$
respectively, where $\rho$ depends on which sampling strategy is implemented. Note that when $\phi = 0$ and $\sigma_M = 0$, $\rho$ is simply equal to 1. Thus, the time-varying lower and upper control limits (i.e. $LCL_t$ and $UCL_t$) of the HWMA $\bar{X}^*$ scheme are defined by
$$LCL_t = A + B\hat{\mu}_0 - L^*\sqrt{\mathrm{Var}(H^*_t)} \qquad (20a)$$
and
$$UCL_t = A + B\hat{\mu}_0 + L^*\sqrt{\mathrm{Var}(H^*_t)}, \qquad (20b)$$
respectively, where $L^* > 0$ is the control limit constant, set such that the IC ARL is approximately equal to some pre-specified $ARL_0$. Thus, the HWMA $\bar{X}^*$ scheme gives a signal if $H^*_t \geq UCL_t$ or $H^*_t \leq LCL_t$. When the process has been running for a long time, $\frac{1}{t-1}\frac{\sigma_0^2}{n} \to 0$ and thus the control limits in (20a) and (20b) reduce to the following asymptotic ones:
$$LCL = A + B\hat{\mu}_0 - L^*\lambda\sqrt{\rho\,\frac{\sigma_0^2}{n}} \quad \text{and} \quad UCL = A + B\hat{\mu}_0 + L^*\lambda\sqrt{\rho\,\frac{\sigma_0^2}{n}}. \qquad (21)$$
Therefore, the operational procedure of the HWMA $\bar{X}^*$ scheme under the combined effect of autocorrelation and measurement errors is summarized in Figure 1.
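A minimal sketch of the plotting statistic and its time-varying limits, assuming $\mathrm{Var}(H^*_t) = \rho\,(\sigma_0^2/n)\,[\lambda^2 + (1-\lambda)^2/(t-1)]$ for $t > 1$ (and $\lambda^2\rho\,\sigma_0^2/n$ for $t = 1$), with the chart centered at the estimated IC mean of the measured process; function and argument names are ours:

```python
import numpy as np

def hwma_chart(xbars, mu_hat, sigma0, lam, Lstar, n, rho=1.0):
    """HWMA statistics H*_t with time-varying limits.

    mu_hat is the estimated IC mean of the *measured* process
    (A + B*mu0 under the linear covariate model); rho is the
    variance-inflation placeholder of the chosen sampling strategy
    (rho = 1 recovers the standard HWMA for i.i.d., error-free data).
    """
    H, lcl, ucl, prev = [], [], [], []
    for t, xb in enumerate(xbars, start=1):
        # mean of the previous t-1 subgroup means; mu_hat when t = 1
        xbb = mu_hat if t == 1 else float(np.mean(prev))
        H.append(lam * xb + (1.0 - lam) * xbb)
        w = lam**2 if t == 1 else lam**2 + (1.0 - lam)**2 / (t - 1)
        half = Lstar * np.sqrt(w * rho * sigma0**2 / n)
        lcl.append(mu_hat - half)
        ucl.append(mu_hat + half)
        prev.append(xb)
    return np.array(H), np.array(lcl), np.array(ucl)
```

As t grows, the $(1-\lambda)^2/(t-1)$ term vanishes and the limits narrow toward the asymptotic ones in (21).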

IV. EMPIRICAL RESULTS

A. RUN-LENGTH METRICS
Run-length refers to the number of charting statistics plotted on a monitoring scheme before the first OOC signal is observed. The mean and standard deviation of the run-length distribution are the most widely used performance metrics of a monitoring scheme and are referred to as the ARL and SDRL, respectively. In this paper, the empirical run-length values are calculated using Monte Carlo simulations in SAS® v9.4 software. Monte Carlo simulation can be used with relative ease to calculate the run-length distribution and its associated characteristics, provided the number of simulation runs is large enough. A simulation algorithm for the HWMA $\bar{X}^*$ scheme is given in the Appendix. Note that, due to space restrictions, the analysis in this paper is based on a nominal $ARL_0 = 500$; other nominal $ARL_0$ values yield similar conclusions. In addition, the EARL and ESDRL metrics are also used to evaluate the overall performance of the schemes considered over a range of shift values. Mathematically, the EARL and ESDRL are defined by
$$EARL = \int_{\delta_{\min}}^{\delta_{\max}} ARL(\delta)\,f(\delta)\,d\delta \quad \text{and} \quad ESDRL = \int_{\delta_{\min}}^{\delta_{\max}} SDRL(\delta)\,f(\delta)\,d\delta, \qquad (22)$$
where $ARL(\delta)$ and $SDRL(\delta)$ are the ARL and SDRL as functions of the shift $\delta$ in the parameter under surveillance, and the shifts within the interval $[\delta_{\min}, \delta_{\max}]$ occur according to a probability density function $f(\delta)$, which is usually unknown. In the absence of any particular information, it is usually assumed that shifts in the process mean occur with equal probability, so that $f(\delta) = 1/(\delta_{\max} - \delta_{\min})$, i.e. a Uniform$(\delta_{\min}, \delta_{\max})$ distribution. Note that (22) can also be estimated with Riemann sums, which are respectively given by
$$EARL \approx \frac{1}{\Delta}\sum_{i=1}^{\Delta} ARL(\delta_i) \quad \text{and} \quad ESDRL \approx \frac{1}{\Delta}\sum_{i=1}^{\Delta} SDRL(\delta_i), \qquad (23)$$
with $\delta_i \in (\delta_{\min}, \delta_{\max}]$, where $\Delta$ is the number of increments from $\delta_{\min}$ to $\delta_{\max}$. To preserve writing space, increments of 0.1 are used in the summations in (23), with $\delta_{\min} = 0$ and $\delta_{\max} = 2$.
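The Riemann-sum estimate in (23) is straightforward to code. A small sketch assuming a caller-supplied ARL(δ) function (the same helper works for the ESDRL by passing SDRL(δ) instead):

```python
def earl(arl_fn, delta_min=0.0, delta_max=2.0, step=0.1):
    """Riemann-sum approximation of EARL over (delta_min, delta_max]
    assuming uniformly distributed shifts; arl_fn maps delta -> ARL(delta)."""
    n_inc = int(round((delta_max - delta_min) / step))
    deltas = [delta_min + step * i for i in range(1, n_inc + 1)]
    return sum(arl_fn(d) for d in deltas) / len(deltas)
```

With the paper's settings (increments of 0.1 on (0, 2]), this averages ARL(δ) over the 20 grid points 0.1, 0.2, ..., 2.0.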

B. HWMA $\bar{X}^*$ SCHEME WITH A CONSTANT MEASUREMENT SYSTEM VARIANCE

1) NEGATIVE EFFECT OF AUTOCORRELATION AND MEASUREMENT ERRORS
The combined negative effect of increasing levels of autocorrelation and measurement errors is illustrated empirically in Table 1 for the HWMA $\bar{X}^*$ scheme when $\phi$ and $\gamma$ are increased from 0. Note that when $\delta = 0$, the slight differences in the IC ARL values are due to simulation error, not ARL-bias. At each value of $\delta > 0$, the ARL and SDRL are smallest when $\phi$ and $\gamma$ are equal to 0; when $\phi$ and $\gamma$ are greater than zero, the performance of the HWMA $\bar{X}^*$ scheme deteriorates. For instance, when $\delta = 0.1$, the ARL is equal to 161.88, 218.41, 302.69 and 380.94 when both $\phi$ and $\gamma$ are 0, 0.2, 0.5 and 0.9, respectively. A similar pattern is observed in Table 1 for the SDRL, EARL and ESDRL. This shows that there is a significant deterioration in the performance of the HWMA $\bar{X}^*$ scheme as $\phi$ and $\gamma$ increase.
In the next sub-section, the mixed-s&r sampling strategy is implemented to reduce the negative effect of autocorrelation and measurement errors.

2) MIXED-s&r SAMPLING STRATEGY
Assuming that $\phi = \gamma = 0.5$, Table 2 illustrates the performance of the HWMA $\bar{X}^*$ scheme using the mixed-s&r sampling strategy, showing the effect of increasing the s and r values to reduce the negative effect of autocorrelation and measurement errors, respectively. Here, $\%\mathrm{Diff}_A$ and $\%\mathrm{Diff}_{SD}$ denote the percentage reduction in the EARL and ESDRL (× 100%) relative to the no-remedy strategy. At the bottom of Table 2, since the EARL and ESDRL values decrease as the values of s and r increase, the mixed-s&r sampling strategy yields an improved performance for the HWMA $\bar{X}^*$ scheme. Moreover, based on $\%\mathrm{Diff}_A$ and $\%\mathrm{Diff}_{SD}$, the overall performance of the HWMA $\bar{X}^*$ scheme improves as s and r increase. Note that the use of large values of s and r incurs production costs and requires time and effort. Alternatively, to slightly reduce the negative effect of measurement errors, the slope coefficient of the covariate error model can be increased to lower the OOC ARL values. The latter is illustrated in Table 3 for B ∈ {1, 2, 3}. It is observed from Table 3 that, for any $\delta > 0$, the OOC ARL decreases slightly as B increases; for instance, the OOC ARL values of the HWMA $\bar{X}^*$ scheme using the mixed-3&4 sampling strategy with $\delta = 0.2$ are equal to 48.94, 46.51 and 46.14 when B = 1, 2 and 3, respectively. Moreover, it is observed in Table 3 that keeping B constant and increasing s and r yields an improved OOC performance; for instance, when B = 3, the OOC ARL values at $\delta = 0.1$ are equal to 208.04 and 174.88 for the mixed-1&2 and mixed-3&4 strategies, respectively.
Next, the effect of the magnitude of the smoothing parameter ($\lambda$), the Phase I number of subgroups (m) and the Phase II sample size (n) on the Phase II OOC ARL performance of the HWMA $\bar{X}^*$ scheme with the mixed-s&r sampling strategy is illustrated in Figures 2, 3 and 4, respectively. Firstly, it is observed in Figure 2 that the larger the value of $\lambda$, the higher the OOC ARL becomes. Hence, to ensure that the HWMA $\bar{X}^*$ scheme with the mixed-s&r sampling strategy has a good OOC performance in most situations, lower values of $\lambda$ need to be used as optimal design parameters. Secondly, it is observed in Figure 3 that the larger the value of m, the lower the OOC ARL becomes in Phase II monitoring. Hence, to ensure that the HWMA $\bar{X}^*$ scheme with the mixed-s&r sampling strategy has a better detection ability, higher values of m are suggested wherever possible. Note that m = ∞ corresponds to the parameters-known case (i.e. Case K) and, based on Figure 3, the HWMA $\bar{X}^*$ scheme has a better OOC ARL performance in Case K than in any of the Case U scenarios (i.e. m = 20, 50, 100 and 500). Finally, it is observed in Figure 4 that the larger the value of n, the lower the OOC ARL values. Hence, to ensure that the HWMA $\bar{X}^*$ scheme with the mixed-s&r sampling strategy has a better detection ability, higher values of n are recommended wherever possible.

[TABLE 1. The effect of increasing $\phi$ and $\gamma$ on the HWMA $\bar{X}^*$ scheme's ARL and SDRL using the no-remedy sampling strategy when m = 100, n = 5, $\lambda$ = 0.1, L* = 3.33 and a nominal $ARL_0$ = 500.]

3) IC AND OOC ROBUSTNESS STUDY
The IC and OOC robustness to non-normality of the HWMA $\bar{X}^*$ scheme is investigated in Table 4. A monitoring scheme is said to be IC and OOC robust if the IC characteristics of the run-length distribution are the same, or significantly close, across all continuous distributions. To check this, the IC and OOC ARL values in Table 4 are computed for some symmetric (heavy-tailed) and asymmetric distributions, including the normal distribution, the t(v) distribution and the standard double exponential distribution with $\mu = 0$ and $\beta = 1$, denoted DEXP(0, 1).
For a fair comparison, the above distributions are transformed such that their means and standard deviations are equal to 0 and 1, respectively. For the different values of m, it is apparent that the HWMA $\bar{X}^*$ scheme is not IC robust for some non-normal distributions. That is, based on the IC ARL values, the following findings can be observed from Table 4: • Regardless of the Phase I sample size, the proposed HWMA $\bar{X}^*$ scheme is IC robust under the normal distribution.
• Under the t(v) distributions, regardless of the Phase I sample size (including m = ∞, i.e. Case K), the HWMA $\bar{X}^*$ scheme is not IC robust for small degrees of freedom (v), whereas it is IC robust when v is large, that is, when v ≥ 30. Note that when v is this large, the t(v) distribution is approximately equal to the normal distribution.
• Under the standard DEXP(0, 1) distribution, the HWMA $\bar{X}^*$ scheme is not at all IC robust. Table 4 shows that the HWMA $\bar{X}^*$ scheme has a better IC robustness under the normal distribution than under skewed and heavy-tailed distributions. While the distributions in Table 4 have a similar OOC performance for moderate-to-large shifts, the standard DEXP(0, 1) distribution has the worst performance for all considered values of m and is thus not OOC robust compared to the other distributions.
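For reproducing such a robustness study, the non-normal draws must be rescaled to mean 0 and standard deviation 1, as stated above. A sketch assuming NumPy's generators (the t(v) standard deviation is $\sqrt{v/(v-2)}$ and the Laplace(0, 1), i.e. DEXP, standard deviation is $\sqrt{2}$; the function name is ours):

```python
import numpy as np

def standardized_draws(dist, size, rng, v=None):
    """Draw from the normal, t(v) or double-exponential distribution and
    rescale to mean 0 and standard deviation 1, as in the robustness study."""
    if dist == "normal":
        return rng.standard_normal(size)
    if dist == "t":                       # requires v > 2 for a finite sd
        return rng.standard_t(v, size) / np.sqrt(v / (v - 2))
    if dist == "dexp":                    # Laplace(0, 1) has sd sqrt(2)
        return rng.laplace(0.0, 1.0, size) / np.sqrt(2.0)
    raise ValueError(f"unknown distribution: {dist}")
```

Feeding these standardized draws into the Phase I/Phase II simulation in place of the normal ones reproduces the comparison behind Table 4.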

4) HWMA $\bar{X}^*$ SCHEME WITH A LINEARLY INCREASING MEASUREMENT SYSTEM VARIANCE
Note that the HWMA $\bar{X}^*$ scheme using the no-remedy strategy with a linearly increasing measurement system variance also exhibits a pattern similar to that shown in Table 1. That is, as the values of $\phi$ and $\gamma$ increase, the performance of the HWMA $\bar{X}^*$ scheme deteriorates. In addition, as the linearly increasing variance parameters (i.e. C and D) increase, the performance of the HWMA $\bar{X}^*$ scheme decreases; see Table 5. It is observed in Table 5 that, with B and D fixed, increasing C yields a deteriorating OOC performance. Similarly, with B and C fixed, increasing D yields a deteriorating OOC performance. The EARL values show the same pattern as C and D increase.
Next, with respect to the OOC ARL and EARL, it is shown in Table 6 that, as s and r increase, the mixed-s&r strategy yields an improved HWMA $\bar{X}^*$ scheme performance when the process is subjected to a linearly increasing measurement system variance.
Below, a summary of additional results for the HWMA $\bar{X}^*$ scheme using the mixed-s&r sampling strategy with a linearly increasing variance is given. These empirical results are not shown here, to preserve writing space, as they are similar to those shown for the constant variance case in the previous subsection: • Similar to Table 3, with C and D fixed, increasing B yields smaller OOC ARL values.
• Similar to Figure 2, using higher values of $\lambda$ yields a deteriorating OOC performance. • Similar to Figures 3 and 4, wherever possible, increasing m or n leads to an improved Phase II OOC performance.
• Similar to Table 4, the HWMA $\bar{X}^*$ scheme with a linearly increasing measurement system variance is not IC robust to non-normal distributions for any of the different values of B, C and D. When considering different values of B, C, D, $\lambda$, m and n, it is observed that increasing r excessively yields a high production cost with very little improvement in OOC performance. Therefore, as a rule of thumb, no more than 3 or 4 sets of measurements are recommended; more specifically, for $0 < \gamma < 0.4$, $0.4 < \gamma < 0.8$ and $\gamma > 0.8$, the recommended values of r are 2, 3 and 4, respectively. Next, high values of s result in a better OOC detection ability than that yielded by high values of r. Thus, in big data applications, the value of s can be increased to reasonably large values; more specifically, for $0 < \phi < 0.3$, $0.3 < \phi < 0.5$ and $0.5 < \phi < 0.8$, the recommended values of s are 1, 2 and 3 to 4, respectively. Note, though, that for $0.8 < \phi < 1$, any possible value of s > 4 can be used, as this will yield an improved HWMA $\bar{X}^*$ scheme performance.

5) COMPARISON WITH OTHER COMPETING SAMPLING STRATEGIES
It is important to note that, whenever s = 1 in the mixed-s&r sampling strategy, it corresponds to the mix&r sampling strategy. The mix&r sampling strategy is a combination of the mixed-samples strategy and the r-multiple-measurements strategy proposed by [12] and [45], respectively. Thus, when s = 1 in all the theoretical expressions and empirical discussions in Sections 2 to 4 corresponding to the HWMA $\bar{X}^*$ scheme with the mixed-s&r strategy, the results hold for the HWMA $\bar{X}^*$ scheme with the mix&r strategy. The manner in which the process mean is calculated under the s-skip with r multiple measurements (denoted s&r) sampling strategy is given in (24). Next, the expression of $\rho$ corresponding to the s&r sampling strategy when the measurement system is subjected to a constant variance is given in (25); when the measurement system is subjected to a linearly increasing variance, it is given in (26). Thus, the plotting statistic of the HWMA $\bar{X}^*$ scheme with the s&r strategy is obtained by substituting (24) into (14). The corresponding constant and linearly increasing variance control limits are obtained by substituting (25) and (26), respectively, into (20a), (20b) and (21).
It is observed from Figure 5 that, at each shift value, the no-remedy strategy has the worst OOC ARL performance, whereas the mixed-s&r strategy yields the best OOC ARL performance, followed by the s&r and mix&r strategies. These results are observed in both Case K and Case U. Moreover, in Figure 5, it is observed that each of the sampling strategies yields a better OOC performance in Case K than in Case U; this is due to the parameter estimation effect. It is also observed in Figure 5 that increasing the values of s and r in Case K and Case U yields smaller OOC ARL values for each of the sampling strategies. Finally, for the linearly increasing variance scenario, a pattern similar to that in Figure 5 is observed; hence, to preserve writing space, it is not shown here.

V. ILLUSTRATIVE EXAMPLE
To illustrate how the HWMA $\bar{X}^*$ scheme is implemented in Case U with mixed-s&r as a remedial strategy, the Phase II dataset based on the weights ($Y^*_{t,i,j}$) of yogurt cups from [30] is used (see Table 7). This dataset contains 20 samples, each of size 5 (i.e. n = 5), taken every hour, with each unit weighed twice (i.e. r = 2). The important assumption in this illustration is that the IC mean and standard deviation are unknown; hence, they are estimated in a Phase I analysis with m = 100. The estimation formulae for the mean and standard deviation given above yielded $\hat{\mu}_0 = 124.90$g and s = 0.76g, so that $\hat{\sigma}_0 = s/c_4 \approx 0.81$g. The resulting HWMA $\bar{X}^*$ charting statistics under the mixed-1&2 and mixed-2&2 sampling strategies are shown in Table 8, with their corresponding plots shown in Figure 6. For this specific example, an OOC signal is observed for the first time on the 16th and 15th subgroups for the HWMA $\bar{X}^*$ scheme using the mixed-1&2 and mixed-2&2 sampling strategies, respectively. This indicates that increasing s (with measurement errors already accounted for by taking r = 2 sets of measurements) reduces the negative effect of autocorrelation and thus gives an OOC signal at an earlier sampling point.

VI. CONCLUSION
Given that, in the currently available literature on HWMA monitoring schemes, there is no research work investigating the combined negative effect of autocorrelation and measurement errors, this paper investigates this important real-life scenario. Some important run-length derivations are provided so that accounting for the combined negative effect of autocorrelation and measurement errors can be better understood. To mathematically model these two factors, an AR(1) process and a linear covariate error model, together with sampling strategies based on skipping observations and taking multiple measurements, are implemented. While sampling strategies based on skipping and taking multiple measurements significantly improve performance, they also increase the production cost, time and effort of efficiently implementing the HWMA scheme. Thus, the use of relatively high values of the slope design parameter B is recommended to improve performance. It is worth mentioning that the intercept coefficient has no effect on the run-length performance of the HWMA scheme.
In addition, wherever possible, large values of m and n, coupled with a relatively small λ value, need to be used to improve the Phase II OOC performance.
For future research purposes, the use of the mixed-s&r, mix&r and s&r sampling strategies to improve detection ability needs to be investigated for the other three memory-type monitoring schemes (i.e. CUSUM, EWMA and GWMA) in both Case K and Case U, as such studies do not currently exist, and the results need to be compared to the HWMA ones proposed here. Since no research work exists on the economic or economic-statistical design of any monitoring scheme (i.e. Shewhart or memory-type) under the combined negative effect of autocorrelation and measurement errors, following the i.i.d.-based procedures in [46] and [47], we intend to address this topic in a separate investigation.

APPENDIX
The computation of the IC and OOC run-length (RL) properties of the HWMA $\bar{X}^*$ scheme in the case of a standard normal distribution, using w simulation runs, is described in this Appendix. The computation is done in two stages. In the first stage, a search is conducted for the design parameter(s) that give an attained IC ARL as close as possible to the nominal $ARL_0$. If such design parameters exist, they are called the optimal design parameters. In the second stage, these optimal design parameters are used to compute the OOC ARL values.
Assuming that the parameters of the distribution have already been estimated from Phase I, the RL properties of the HWMA $\bar{X}^*$ scheme can be computed using the following Monte Carlo algorithm:
• First stage
Step 1. Specify the desired nominal $ARL_0$, m, n, w and $\lambda$.
Step 2. (a) Fix a first value of L* and calculate the control limits, then go to Step 3. (b) If required, increase (or decrease) L* and recalculate the control limits so that the attained IC ARL gets closer to the nominal $ARL_0$. Step 3. Randomly generate a sample from the IC process distribution. Calculate the charting statistic and compare it to the control limits found in Step 2.
If the charting statistic plots between the control limits, then collect the next subgroup, calculate its charting statistic and compare it to the control limits. Continue this process until a sample point plots beyond the control limits. Then record the number of subgroups plotted until the OOC signal occurs; this represents one value of the IC RL ($RL_0$) distribution. Repeat Step 3 a total of w times to find the (w × 1) $RL_0$ vector. Step 4. Once the $RL_0$ vector is obtained, calculate the attained IC ARL $\left(= \frac{1}{w}\sum_{i=1}^{w} RL_{0,i}\right)$. If the attained IC ARL is equal or sufficiently close to the nominal $ARL_0$, go to Step 5. Otherwise, go back to Step 2(b) (i.e., if the attained IC ARL is considerably greater (smaller) than the nominal value, make the control limits narrower (wider) and repeat Steps 3 and 4).
Step 5. The design parameter L * found in Step 4 is called the optimal design parameter. Record the optimal L * and its corresponding control limits. Thus, the search of the optimal L * is completed.
• Second stage Step 6. For a specific shift δ (δ = 0), randomly generate a test sample from the IC process distribution. Calculate the charting statistic(s) and compare to the control limit(s) found in Step 5. If the charting statistic plots between the control limits, then collect the next subgroup and calculate its charting statistic and compare it to the control limits. Continue this process until a sample point plots beyond the control limits. Then record the number of subgroups plotted until an OOC signal occurs. This number represents one value of the RL 1 distribution. Repeat Step 6 a total of r times to find the (r × 1)RL 1 vector.
Step 7. Once the $RL_1$ vector is obtained, calculate the OOC ARL value $\left(= \frac{1}{w}\sum_{i=1}^{w} RL_{1,i}\right)$.
Step 8. The computation of the characteristics of the $RL_1$ distribution is completed.
Note that in Steps 4 and 7, other characteristics of the RL distribution, such as the standard deviation of the run-length (SDRL), can also be computed using PROC UNIVARIATE in SAS® v9.4 software.
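The two-stage algorithm can be sketched for the simplest setting, an i.i.d. normal, error-free process ($\rho$ = 1) with known parameters; the function names and the small number of runs below are illustrative, not the paper's:

```python
import numpy as np

def one_run_length(lam, Lstar, n, delta, rng, mu0=0.0, sigma0=1.0, t_max=10**6):
    """Steps 3/6 of the algorithm: plot H_t against the time-varying
    limits until the first OOC signal and return the run length."""
    prev_sum = 0.0
    for t in range(1, t_max + 1):
        # subgroup mean of n i.i.d. normal observations, shifted by delta
        xbar = rng.normal(mu0 + delta * sigma0, sigma0 / np.sqrt(n))
        xbb = mu0 if t == 1 else prev_sum / (t - 1)
        H = lam * xbar + (1.0 - lam) * xbb
        w = lam**2 if t == 1 else lam**2 + (1.0 - lam)**2 / (t - 1)
        if abs(H - mu0) > Lstar * np.sqrt(w * sigma0**2 / n):
            return t
        prev_sum += xbar
    return t_max

def attained_arl(lam, Lstar, n, delta=0.0, w=2000, seed=0):
    """Steps 4/7: average the w simulated run lengths."""
    rng = np.random.default_rng(seed)
    return float(np.mean([one_run_length(lam, Lstar, n, delta, rng)
                          for _ in range(w)]))
```

In the first-stage search, L* is adjusted until `attained_arl(..., delta=0)` is close to the nominal $ARL_0$; the second stage then reuses that L* with $\delta \neq 0$.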
MAONATLALA THANWANE received the diploma degree in compilation of official statistics from the Eastern Africa Statistical Training Center in Tanzania (known as EASTC), and the B.Sc. degree (Hons.) in statistics from the University of South Africa, where he is currently pursuing the M.Sc. degree in statistics. He is also working with Statistics South Africa (StatsSA) as a Data Analyst (Survey Statistician).