Visualizing Benefits: Evaluating Healthcare Information System Using IS-Impact Model

Reducing costs and optimizing operations are major challenges in many large-scale organizations, including healthcare authorities. Research shows that despite ongoing investments in healthcare information systems (IS), the promised benefits are often only partially realized, or not realized at all. Part of the solution is to ensure that evaluation methodologies are available to clearly identify the success of these initiatives, and from there to articulate and mitigate the deficiencies or move to an alternative technology. The literature asserts that few practitioners have implemented a standardized evaluation approach. Using an established model, the Information System Impact (IS-impact) model, we propose a modified evaluation model to assess, and a visualization tool to visualize, the success of an information system from a healthcare perspective. The modified IS-Impact model includes six constructs: individual impact, organizational impact, provincial alignment impact, system quality, information quality, and service quality. We applied the modified IS-impact model and the proposed visualization tool to an existing healthcare software solution. An empirical study conducted at a healthcare authority, with responses from 150 participants who use the healthcare IS, confirmed that the proposed model and the visualization tool are valid and reliable for measuring healthcare system success. The evaluation model and the visualization tool are found to be efficient in narrowing the scope of inquiry from the general to the specific and in quickly identifying the gaps and successes within the established software solution for the healthcare authority. Healthcare and clinical informatics researchers will benefit from this research when evaluating ongoing or newly established healthcare information systems.


I. INTRODUCTION
The topic of benefit realization as it pertains to Information Systems (IS) is well represented in the business community from both an operational and strategic perspective. The influence that software applications have, both positive and negative, on organizations and individuals is significant. It has been postulated that an organization's success and survival depend upon its choice of technology, which allows it to adapt and transform [1]. Within complex
organizational structures, such as in Canadian healthcare authorities, IS solutions are implemented to improve services and support better healthcare outcomes. However, implementing a healthcare information system is complex, and it must be done in an expedient, safe and responsible manner that benefits patients, staff, and the broader organization [2]. In most healthcare organizations, this is done in the context of increasing costs, changing demographics, an aging population and finite resources. The need is for timely and effective solutions that mitigate issues and create efficiencies. To this end, there is an ever-expanding list of proposed IS-related projects, but this occurs in an atmosphere of dissatisfaction with the lack of benefits surfacing post-implementation [3]. Further, in many cases, the promised value does not materialize [4]. Meinert et al. [5] state that the key challenge with health technologies is often not only the design or the innovation itself, but the lack of policies and frameworks that can enable adoption, sustainability and scalability. Coupled with the increasing demands in the healthcare sector, the need to ensure resources are allocated appropriately is paramount.
The value of IS can only be achieved through usage, which is influenced by the user's perception of the technology and ultimately translates into its adoption or lack thereof. In a multitude of industries, the success of IS is measured by the ultimate gains from those solutions [6]. While evaluation methods in the healthcare setting utilize some traditional instruments, such as return on investment, these focus only on tangible outcomes, such as an increase in the number of performed procedures. This narrow focus has significant drawbacks when used in isolation from other factors, such as how the technology affects a user's own value in the system [7]. Targeted users of the application may choose not to use the system, or use only a portion of it, which impacts the ability, efficiency, and overall health of the system and eventually diminishes its returns from a value perspective [8], [9].
The alignment of a medical professional's core values with the tasks associated with an IS-centered solution is critical for its success [10]. For example, IS solutions requiring additional data entry from a nurse may be viewed negatively, as the additional steps may be regarded as a further barrier between the nurse and the patient [10], [11]. If the task of collecting additional data is looked at only in isolation, management may adopt varying approaches without fully understanding the source of the issue, or invest in additional measures that may be unnecessary. Be it hiring additional staff, abandoning the system entirely, using only parts of the system, and so forth, in all instances there is a cost. Some costs are direct while others are less transparent, from requiring additional staff to perform data entry to affecting the quality of a downstream report. A similar concern has been reported by Orenstein et al. [12] regarding communication errors while using IS in intensive care units. For organizations, the question is one of balancing priorities between the cost and the overall benefits. Ideally, IS-based solutions should reduce burdens on professionals and return their time to care while still contributing to the broader organizational success.
This research takes a systematic approach that incorporates and makes visible the codependences between the tangible and intangible aspects of an IS implementation. In this light, it is proposed that a modified version of the information system impact (IS-impact) model [8] be used, as it provides flexibility in recognizing the relationships among people, processes, and technology. The IS-impact model uses a holistic set of measures to determine the success of an information system as determined by the impact on individuals and organizations as they experience the quality of the technology, either through the information available or the availability of the technology [7]. Specifically, this research has been applied against an existing software application in a health authority in Canada to determine if the application positively impacts the professionals within the public healthcare sector. An outcome of the model is a ''success score'' that quantifies the impact of the IS solution on the professionals and their leadership. Two main contributions of this work are the following:
• A modified evaluation model based on IS-Impact [8] has been developed to assess the success of an information system from a healthcare perspective.
• A visualization tool has been developed to visualize the success scores and the inter-relationships among the various aspects of a system obtained from the evaluation model. The evaluation model and the visualization tool together help in narrowing down the scope of inquiry from the general to the specific and quickly identifying the gaps within the established software solution for the healthcare authority. To our knowledge, this is the only evaluation system of this kind that evaluates a healthcare information system and visualizes the evaluation results simultaneously to facilitate decision making of the healthcare authority.
The rest of the paper is organized as follows. Section II provides a discussion of the related background along with the two well-reputed models, IS-impact [8] and IS-success [13], from which this research is derived. Both models provide the basis for the questions and subsequent analyses on our public health application. In Section III, the paper identifies the research methodologies used and addresses the two additional dimensions that we have added to align the evaluation model with healthcare applications. In Section IV, we discuss the results found in the survey and their impact on determining the degree of success for the public health application under review. This section also looks at the strength of the correlations between each of the dimensions and uses the visualization tool to identify the gaps and successes within the established software solution for the public health application. Finally, we draw conclusions with some limitations and future recommendations in Section V.


II. BACKGROUND
Smith and McKeen [14] suggest that technology can no longer be simply viewed as a productivity tool, nor can supporting technology be regarded as merely running and maintaining systems. Otherwise, the ability to generate value would be reduced to simple purchasing decisions [6], [14], [15]. Researchers need to adopt a broader perspective that recognizes the impact of technology on people and processes.

To increase satisfaction and productivity, a system-oriented view, one that takes a holistic perspective, would serve this goal. Ghosh and Sahney [16] expand on this idea in that they regard any technology system as two related subsystems: technology and social. The technology subsystem includes the tools, techniques, and devices that are used to achieve a set of tasks. The social subsystem includes the less tangible aspects of the organization, such as culture, flexibility, and degree of integration. This approach stipulates that both subsystems must be acknowledged and managed, and that issues that appear in one will impede or obstruct the other. The synchronicity of the two is often expressed in various terms, such as benefits, benefit realization, success, or value.
DeLone and McLean [13] developed a highly regarded and influential IS-success model to identify the success and impediments of IS. Using a socio-technical lens, it recognizes that technologies are embedded within business processes. Further, the structure of the model prominently places the user at the centre, meaning that those using the system will ultimately determine whether benefits or ''dis-benefits'' are realized. The IS-success model has been leveraged extensively in a variety of fields including human resources [6], education [17], mobile banking [18], and organizational IS solutions in general [19]. However, there are a limited number of examples of the IS-success model applied to healthcare. Specifically, the model has been used to validate an emergency medical information system [7], medical health records [20], and to measure the success of a cleaning logistics system in a medical centre [21]. The health records research is particularly relevant, as the case study in this work focuses on the same type of application.
DeLone and McLean [13] first published a set of 119 success measures that were mapped to the six categories of the IS-success model. Their model was derived after a review of 180 studies and came to include the interrelated dimensions [22] presented in Table 1. The IS-success model emphasizes the inter-relationships among the various aspects of a system within an organization. The model also posits that the dependent variable for IS research is Success and that Success is achieved through these six interrelated dimensions, where, for example, System Quality and Information Quality impact system Usage both independently and jointly [23]. Although the model does not prescribe explicitly what Success means, it recognizes that the relationship among the dimensions is complex, as experienced by those using the system. Success is therefore interpreted by those impacted directly by the system and measured by their ability and willingness to use either all or part of the system to complete their tasks.
However, the IS-success model is not without its critics. In response, Gable et al. [8] reframed the IS-success model to address several underlying concerns and, in doing so, created a new model, the IS-impact model, which focuses on a varied set of perceptual metrics. Of the several changes made, the dimension Use was removed, the authors arguing that it was redundant, as satisfaction is a consequence of success and should not be defined as a dimension. Additionally, the process/causal aspects of the model were removed, meaning that there is no pre-defined beginning or ending. To achieve this, the IS-impact model combined and grouped several of the previously identified constructs into two, as presented in Table 2. Echoing Ghosh and Sahney's [16] description of the technology and social sub-systems, the IS-impact model's structure also emphasizes a sociotechnical approach. In this case, however, Social is the equivalent of Impact and Technology is the equivalent of Quality. Success is determined by the impact the application has had on Individuals and Organizations. Again, the definition of Success in one organization is not assumed to be the same in another [24]. Moreover, if there is a perception of failure, the sets of interrelated dimensions provide leadership with the ability to diagnose areas of weakness. By removing the process aspects implied in the IS-success model, additional flexibility is introduced. Not only can the model be run at different times in a system's lifespan (from ideation to pilot to implementation and beyond), there is no determined beginning or ending: responses can be initiated at any point. From a system evaluation standpoint, each time it is engaged, leadership can gain a better understanding of how the system is perceived and, if any enhancements have been applied, identify where the impact, positive or negative, occurs.
The IS-impact model also introduces a temporal nuance. As stated, the Impact dimension recognizes the effect of the application on the user. Karahanna et al. [25] have highlighted that user adoption is shaped by past experiences with other software applications. The technological aspects of the solution (Quality) are implemented by analysts and programmers incorporating future or near-future techniques. This means that the confluence of the two can create an interesting convergence of past, present and future. Like the IS-success model, the IS-impact model has also been used to evaluate a wide range of information systems including human resources, education, and other software solutions.

III. RESEARCH METHODOLOGY
We have reviewed various models relevant to techniques and technology for evaluating information systems to find the most appropriate model for this research from a healthcare perspective. While most of the models are concerned with the measurement of companies, institutions, and financial profits, the IS-Impact model covers the aspects that affect the quality and success of using any system, such as a healthcare information system [8]. This model is found to be most useful for measuring healthcare systems because it comprises 41 measures across six dimensions: System Quality, Information Quality, Use, Satisfaction, Individual Impact, and Organizational Impact.
Success, as it pertains to this work, is a multi-dimensional concept. The IS-impact model is an ideal vehicle from which to explore how to evaluate and ultimately determine the value of our information systems by identifying where the impact occurs. For this study, the model has been adjusted to better accommodate the complexities of the healthcare environment. Extending either the IS-success model or the IS-impact model has been done numerous times [6], which speaks to the flexibility of both models. For this work, we have added two new constructs, Provincial Alignment and Service, which were not present in the IS-impact model (see Figure 3). This approach recognizes that satisfaction or impact needs to be considered from various perspectives, be it the Individual, Organization or Provincial lens, as the impact of the Service, Information or System on each of them may differ. The ability to articulate which aspects of the IS are successful/unsuccessful assists in validating the importance and effectiveness of the application and in determining whether additional steps to mitigate issues are needed.

A. PROVINCIAL ALIGNMENT
Health authorities have an obligation to ensure that funds are spent in a judicious and effective manner. For IS solutions, this is best done by creating a network of systems that are cost-effective and efficient and that support the needs of staff and clients. The notion of aligning IS with organizational objectives is well represented in the literature [26].
Healthcare in British Columbia is under the purview of the provincial government and is largely funded by taxpayers. Consequently, the organization is held accountable for supporting the strategic direction set by the provincial government. The inclusion of an Alignment dimension provides transparency for all levels of leadership to see the linkage and mapping between the application and the organization's strategic direction. The strategic direction is described in the provincial government's service plan and is summarized and grouped under the new Alignment dimension [27]. The service plan objectives are as follows: (i) access/support better patient-centred care; (ii) prevention and health promotion; (iii) improved primary and community care; (iv) improved access to specialist services; (v) access to quality diagnostic services; (vi) access to clinically and cost-effective pharmaceuticals; (vii) support the review and assist in improving acute services; and (viii) support access to appropriate residential care.

B. SERVICE QUALITY
While Service Quality was included in the IS-success model [13], it was removed from the IS-impact model [8]. The dimension has been reintroduced here. Some research studies also identify Service Quality as a key factor connecting IS resources with the expectations of individuals and, by extension, of the organization [28], [29]. The formal inclusion of the construct recognizes that support is needed to ensure an effective integration of business processes and values with the user base. As the Service Quality dimension serves as a bridge between the objectives of the organization and the technology, it recognizes that the implemented system is no longer just a product but a service [30].
The inclusion of Service Quality also speaks to an additional cost, which runs from additional people answering questions and calls, to documentation and videos that support the end users. It represents an additional operational cost, and its inclusion as a separate dimension provides a more detailed, comprehensive view of the total environment. Identifying it as a separate dimension ensures that the conversations pertaining to its impact and its role in the broader ecology are adequate and supported. Moreover, in some situations, the execution and costs associated with the support are undertaken by one healthcare authority, with the application hosted and expensed to another entity, be it another healthcare authority or a government department. Further, after an IS application is first piloted and subsequently rolled out to a broader audience, the role of support is often reassigned and, in some cases, assigned to a service or support desk/group. Creating a separate dimension again allows for a modular set of questions that can be asked before and after the change in responsibility. Deficiencies become more visible and are not buried within a larger set of system metrics.

C. HYPOTHESES
Using the score from each of the constructs, the constructs were evaluated and tested against one another using regression analysis. This approach allows for the analysis of both small and medium datasets while still identifying whether relationships exist between the higher-level summaries (i.e., Impact and Quality) and the drivers behind them.
As with Petter and Fruhling [31], a summative scales approach was used to calculate a Success score for each dimension and then further aggregated to create an overall score. These scores represent a defined set of factors and are therefore formative in nature as they are a composite of multiple measures [32]. As the constructs are formative as opposed to reflective, the reliability of each of the items within the scales does not need to be determined [31]. This higher-level view, with the ability to drill down into the results, allows leadership to gain a perspective on the overall impact emanating from perceptions of people, processes and technologies.
In this approach, Success, as previously identified by DeLone and McLean [13], is the dependent variable. The independent variables flow from both Impact and Quality. Additionally, it is postulated that there are relationships between the dimensions with the Impact and Quality constructs. For example, the Quality construct is impacted by the dimensions under the Impact construct. The statistical analysis will assist in determining which of these metrics have the biggest level of impact when evaluating such technologies.
The IS-impact model is designed and tested using a formative model where there is an expectation that the measures have a high degree of inter-correlation. If, for example, there are low values associated with the Impact construct, it is expected that either the Information or the System dimension (or both) would support the narrative around the cause.
Using univariate regression analysis, the research looks to confirm the validity of the application by examining the latent data structures (see Figure 4). This research works to confirm the hypotheses in Table 3 by testing whether there is a relationship between each of the sets of dimensions. An aggregated value is calculated to quantify the success of the software solution. As the IS-impact model purports that multiple factors ultimately determine whether an IS is successful, a regression methodology is needed to identify whether there are impediments or supports between the dimensions within the Quality and Impact constructs. The additional rigor of a regression model serves not only to provide better analysis and clarity over how the success score is influenced, but can also be used to identify patterns, allowing the business to anticipate future impacts when similar conditions surface. The accuracy of the extended IS-impact model has been evaluated using the coefficient of determination (R²).
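As a concrete illustration of the univariate step, the sketch below fits a least-squares line between two dimension scores and reports R². The function name and the per-respondent values are our own illustration, not the survey data or the paper's actual analysis code.

```python
# Illustrative sketch of a univariate regression: fit y on x by least
# squares and report the coefficient of determination (R^2). The score
# lists below are made-up placeholders, not the survey results.

def r_squared(x, y):
    """R^2 of a univariate least-squares fit of y on x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical per-respondent scores for two dimensions:
system_quality = [2.5, 3.0, 3.5, 4.0, 4.5, 5.0]
individual_impact = [2.8, 3.1, 3.4, 4.2, 4.4, 5.1]
print(round(r_squared(system_quality, individual_impact), 3))
```

A value near 1 would support the hypothesized relationship between the two dimensions; a value near 0 would suggest no linear association.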

D. SURVEY QUESTIONNAIRE
Adapted from the IS-impact model [8], an online survey of 48 seven-point Likert-scale questions was used to evaluate a Public Health application. The Individual dimension comprised 3 questions, with 7 for Organization, 8 for Alignment, 10 for Information, 15 for System and 5 for Service. The survey questionnaire used in this research is listed in Table 4. For each question, survey recipients were asked to select a response on a continuum between Strongly Agree, Agree, Somewhat Agree, Neutral (undecided), Somewhat Disagree, Disagree and Strongly Disagree. Each value was assigned a number between 1 and 7, and these ordinal numbers were used for the analysis. Values were then calculated for each construct, generating a summative score for each summary point. Finally, the average of Quality and Impact determines the Success score of the IS implementation under evaluation. The higher the number, the greater the success and the smaller the perceived gap in issues arising from the implementation.
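The mapping and summation step can be sketched as follows. The dimension names and question counts follow the survey design above; the direction of the coding (Strongly Disagree = 1 through Strongly Agree = 7, matching the 1 = strongly dissatisfied, 7 = strongly satisfied continuum) and all function and variable names are our own illustration.

```python
# Sketch of the Likert scoring: map each seven-point label to its
# ordinal value, then take a summative score per dimension. Illustrative
# only; the layout of the real survey data is not specified in the paper.

LIKERT = {
    "Strongly Disagree": 1, "Disagree": 2, "Somewhat Disagree": 3,
    "Neutral": 4, "Somewhat Agree": 5, "Agree": 6, "Strongly Agree": 7,
}

# Number of questions per dimension (48 in total).
DIMENSIONS = {
    "Individual": 3, "Organization": 7, "Alignment": 8,
    "Information": 10, "System": 15, "Service": 5,
}

def dimension_score(responses):
    """Summative score for one dimension: mean of the ordinal responses."""
    values = [LIKERT[r] for r in responses]
    return sum(values) / len(values)

# Example: one respondent's three Individual-impact answers.
individual = ["Somewhat Disagree", "Neutral", "Somewhat Disagree"]
print(round(dimension_score(individual), 2))  # 3.33
```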
Users were not required to answer all 48 questions. The ability to ''opt out'' of questions reflected respondents' comfort with the area that a question focused on. For example, users who did not feel comfortable answering questions pertaining to the impact on the organization did not need to answer the related seven questions. This could be due to many factors, including their position in the organization or their amount of time and exposure. The intent was to have greater confidence that the survey accurately reflected the opinions of the reviewer. However, the data analysis only used those responses where all 48 questions were answered. Aside from the 8 Alignment questions, the survey questions came from Gable's research [8]. The Alignment questions were derived from the set of Ministry of Health metrics.

IV. EVALUATION AND DISCUSSION
This section focuses on the set of hypotheses that are based on the modified version of the IS-impact model. This work strives to use the model to articulate the perceptions of those working with a publicly deployed healthcare IS solution. Using the sociotechnical approach, the research looks to identify how technology has impacted individuals and in turn transformed their organizations.

A. PUBLIC HEALTH APPLICATION
In this research, we have considered a public health application, which we term the Public Health application. The application under consideration has a suite of modules written to address and manage key day-to-day responsibilities within its portfolio. Public Health is an important part of publicly funded systems and is responsible for key activities within the health authority, namely focusing on prevention and promotion. Key areas of responsibility include: (i) access to safe drinking water and food; (ii) province-wide vaccination programs; (iii) reporting on the health of British Columbians; (iv) preventing and managing outbreaks of disease; (v) providing at-home visits by public health nurses to young, vulnerable, first-time mothers; and (vi) encouraging people to adopt healthy behaviors [33].
The Public Health application under consideration is a custom designed health record application, which is used to track and monitor population data, including tracking immunizations. It also assists healthcare professionals in areas such as detecting and managing communicable diseases (e.g., tuberculosis, influenza, and others) which may pose an outbreak threat.

B. SAMPLE
Approximately 180 participants (Public Health employees) were invited to participate in the survey. Responses from 150 participants were received. However, only those that completed all 48 questions were used; subsequently, 115 responses were used in the analysis. Most users of the application are nurses, who also made up the largest percentage of our participants, though people with administrative titles who worked with the application were also invited to participate. Figure 5 shows the various IS-impact dimensions by the number of responses.

C. SUCCESS SCORE
Using summative scales, a numeric representation of Success is calculated. The Success number signifies the gap between the perceptions of the users and the various people and groups responsible for the design, implementation and maintenance of the application. The lower the score, the greater the gap, suggesting to leadership that more work may need to be done. The Public Health application under consideration was given a score out of 7. Scores appear on a continuum between 1 (strongly dissatisfied) and 7 (strongly satisfied) as perceived by the users answering the 48 questions. Each question from the survey was mapped to one of the six dimensions. These were then summarized into one of the two constructs, and a score was given to each dimension and construct.
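The roll-up from dimensions to constructs to an overall Success score can be sketched as below. The dimension-to-construct mapping (Individual, Organization and Alignment under Impact; System, Information and Service under Quality) and the final averaging of the two constructs follow the model description; treating a construct score as the plain mean of its dimension scores is our simplifying assumption, and the input values are placeholders, not the survey results.

```python
# Sketch of the score roll-up: dimensions feed their parent construct,
# and Success is the average of the two construct scores. Assumption:
# a construct score is the mean of its dimension scores. Placeholder data.

CONSTRUCTS = {
    "Impact": ["Individual", "Organization", "Alignment"],
    "Quality": ["System", "Information", "Service"],
}

def roll_up(dimension_scores):
    """Return per-construct scores and the overall Success score."""
    construct_scores = {
        name: sum(dimension_scores[d] for d in dims) / len(dims)
        for name, dims in CONSTRUCTS.items()
    }
    success = sum(construct_scores.values()) / len(construct_scores)
    return construct_scores, success

# Placeholder dimension scores on the 1-7 scale:
scores = {"Individual": 3.6, "Organization": 3.1, "Alignment": 3.9,
          "System": 2.9, "Information": 4.2, "Service": 4.0}
constructs, success = roll_up(scores)
print(constructs, round(success, 2))
```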
To understand the scores and the relationships of the extended IS-Impact model, a visualization tool was created to allow leadership to explore the data through the lens of the extended IS-Impact model. The scores were brought into the tool and visualized through a chord diagram. The outer ring identifies all six dimensions, the inner ring displays the constructs to which they map, and an overall success value is then displayed (see Figure 6). The tool also allows users to explore in more detail each of the constructs and dimensions that make up the IS-Impact model. By clicking on a dimension or construct, the metrics and line graph are filtered to display further detail supporting the system score provided. Further, by clicking on each of the metrics (see Figure 7), the line graph is filtered to show the data supporting the score. While the contents of the list box show filtered data, as does the line graph, the chord diagram continues to provide the ''bird's eye view'' of the evaluation, namely the consistent display of dimensions. By providing this representation, the user can quickly jump between different metrics while exploring the data. The success score for the Public Health application is 3.62 (see Table 5 and Figure 6) and is derived from the lower-level constructs, which are then divided into Impact and Quality. The score places the overall satisfaction or success of the application within the ''somewhat dissatisfied'' range. The numeric value of the success score indicates that issues exist from the users' perspective. Both constructs, Quality and Impact, reflect this result, as do the detailed measures to which they map (see Figure 7).
When grouping the variants of Dissatisfied against the Satisfied (where Neutral is excluded), the Individual, Organization, Alignment and System dimensions fall into the Dissatisfied category (see Table 6). All these dimensions have more than 50% of respondents in one of the dissatisfied categories. The System dimension shows the lowest number of satisfied responses at just below 10% and the highest number of dissatisfied responses at 66.37%.
The System dimension is comprised of 15 measures, where 8 of the 15 were given a value between 2.15 and 2.9 and listed as Dissatisfied (Reliability, Sophistication, Flexibility, Ease of Use, Efficiency, Customizability, System Accuracy, and Ease of Learning). An additional three (Integration, User Requirements and Systems Features) were listed as somewhat dissatisfied. Only four of the measures were rated as neutral (Data Accuracy, Database Contents, Data Currency, and Convenience of Access).
The lowest measure was Reliability (see Figure 7), which mapped to the question ''Is (the IS) always up and running as necessary?''. When the leadership responsible for this application were asked what they thought the respondents might be responding to, they pointed out several ongoing issues: late entries due to downtime sessions, downtime that continues to occur, and the need for nurses to find time to re-enter data. Nurses get access to the application only when they are at their office, and as these are Public Health nurses, travel to various locations and sites is commonplace. When offsite, handwritten notes are made and later entered into the system. This is at times done by another nurse rather than a clerk, as it is patient data that is being transcribed.
From a broad perspective, this suggests that the Information dimension (which represents the inputs and outputs of the system) is seen as neutral or slightly better, but the System dimension (the technology that hosts it) is seen as an impediment. Specifically, the highest rated measure under Information was Importance, meaning that users of the system understood that the information was important to their work. Likewise, through the high rating of 5 for the Uniqueness measure, respondents also recognized that the information was not available elsewhere.
From an Impact standpoint, when looking at both the Individual and Organizational dimensions, the model suggests that there are key areas where gaps exist within both. A closer examination of the Individual dimension, which has a score of only 3.62 (see Figure 6), shows that two of its three metrics are in the 3-point range: Individual Productivity with a value of 3.07 and Decision Effectiveness at 3.58. This indicates that the application did not positively improve users' effectiveness or productivity, though there is some indication that it did better at assisting them in identifying and retaining the information necessary to achieve the tasks of their position.
The Organization dimension scored slightly worse, with an overall score of 3.11 (see Figure 6). Three of its seven metrics ranged from 2.64 to 2.92, the lowest being Reduced Staffing Costs. With scores in the dissatisfied to strongly dissatisfied range, the application has not only failed to improve productivity but has done the inverse, as the organization incurred additional staffing costs to achieve the goals of the application. This core issue of who accesses and who enters the data may factor into any of the low-scoring metrics in either the Impact or the Quality dimensions. It could show up in Reliability (as leadership has suggested), in that the product is not available for nurses to use remotely. It certainly shows up from a staffing-costs standpoint, in that the health organization needed to hire additional qualified people, specifically nurses, to perform the data entry. The scores were not surprising to leadership, who were quick to apply a narrative to explain the gap. The purpose of the IS-impact model is to identify where the areas of concern are and where the gaps have developed. By focusing on these areas specifically and questioning why the gaps are interpreted as they are, leadership can then plan and evaluate potential mitigations with confidence. Each dimension was tested for reliability using Cronbach's coefficient alpha (see Table 7). A value of 0.70 or above indicates a reliable measurement of internal consistency [34]. All dimensions returned very strong values, the smallest being 0.811 and the highest 0.945, suggesting the question set is highly consistent.
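As a sketch, the reliability check described above can be reproduced as follows; the `cronbach_alpha` helper and the sample responses are illustrative assumptions, not the study's survey data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's coefficient alpha for a (respondents x items) matrix.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
    """
    k = items.shape[1]                          # number of items in the dimension
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of each respondent's total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point Likert responses for one dimension (5 respondents x 3 items).
responses = np.array([
    [3, 4, 3],
    [2, 3, 3],
    [5, 5, 4],
    [4, 4, 5],
    [3, 3, 2],
], dtype=float)

alpha = cronbach_alpha(responses)
print(round(alpha, 3))  # → 0.865; values >= 0.70 suggest acceptable internal consistency
```

A value computed this way per dimension reproduces the kind of internal-consistency check reported in Table 7.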

D. CORRELATION
The summative scales approach is used to create an overall number out of seven for comparison purposes. However, understanding the relationships between the constructs, especially when looking to itemize and prioritize improvements, can be accomplished by analyzing the correlation between each pair of constructs in isolation. As each dimension represents a set of people, processes, and/or technologies, this allows leadership to better understand the degree of success and begin deciding where to invest resources based on impact.
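The summative-scales step can be sketched as follows; the dimension and measure names and their scores are hypothetical placeholders, and the sketch assumes an unweighted mean at each level of aggregation.

```python
# Hypothetical measure-level means on the 7-point scale (not the study's data).
dimensions = {
    "System":      {"Reliability": 2.9, "Ease of Use": 3.4, "Flexibility": 3.8},
    "Information": {"Importance": 5.1, "Uniqueness": 5.0, "Timeliness": 3.9},
    "Service":     {"Responsiveness": 4.4, "Sincerity": 4.7},
}

# Each dimension score is the unweighted mean of its measure scores.
dim_scores = {d: sum(m.values()) / len(m) for d, m in dimensions.items()}

# The overall success score is the mean of the dimension scores, out of 7.
overall = sum(dim_scores.values()) / len(dim_scores)
print({d: round(s, 2) for d, s in dim_scores.items()}, round(overall, 2))
```

The same roll-up, applied to the real survey data, yields the dimension scores and the overall score discussed throughout this section.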
The interpretation of these relationships, and the clarity provided by univariate analysis, is key here, as the issues that surface should inform management of what actions they may or may not choose. Using univariate rather than multivariate analysis also provides a clearer narrative from which to support or reject suppositions made by leadership, which is also why this approach is favored in many areas and industries [35]. Subsequently, two univariate models, Spearman's correlation and R², were applied.
As ordinal values were used in the survey, Spearman's correlation was selected, and the results indicate relationships between each of the constructs, most of them between moderate and strong (see Table 8). Some variations in strength do exist. The Service dimension, for example, shows notably lower values, ranging between 0.328 and 0.443 (see Table 8). These values indicate that a relationship exists, albeit not as strong as with the other dimensions. The data from this case study confirm that there is a strong correlation between the dimensions in this research. However, the needs of the business also necessitate that a degree of prediction is confirmed as part of the model. While Spearman's correlation clearly identifies that a correlation exists between constructs, using an additional method such as R² allows for greater confidence in a tool that recognizes predictability, which helps leadership identify areas of concern and, more importantly, prioritize and address issues. Looking at the same dataset using R² provides a slightly different perspective: while the value range differs from Spearman's, it reflects a similar grouping of values. By applying univariate linear regression, these comparisons are made dimension to dimension. The eventual question for leadership is, if they were to invest time and resources to address one dimension, where would they begin and, as part of their decision making, in which other dimensions would they expect to see an impact. As each dimension represents a different segment of the entire environment, leadership would need to understand the potential for change and prioritize accordingly.
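A minimal sketch of the two univariate checks is shown below; the rank-correlation helper handles the ties typical of Likert data by averaging tied ranks, and the generated scores are synthetic placeholders, not the survey data.

```python
import numpy as np

def average_ranks(x: np.ndarray) -> np.ndarray:
    """Ranks 1..n, with tied values assigned the mean of their ranks."""
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):          # average the ranks of tied scores
        tied = x == v
        ranks[tied] = ranks[tied].mean()
    return ranks

def spearman_rho(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman's correlation: Pearson correlation of the rank-transformed data."""
    return float(np.corrcoef(average_ranks(x), average_ranks(y))[0, 1])

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """Coefficient of determination for a univariate linear regression of y on x."""
    return float(np.corrcoef(x, y)[0, 1] ** 2)

# Hypothetical per-respondent dimension scores on the 7-point scale
# (illustrative only; correlated by construction).
rng = np.random.default_rng(0)
system = rng.integers(1, 8, size=150).astype(float)
organization = np.clip(system + rng.normal(0, 1.5, size=150), 1, 7)

print(f"rho = {spearman_rho(system, organization):.3f}, "
      f"R^2 = {r_squared(system, organization):.3f}")
```

Run dimension against dimension, these two numbers correspond to the Table 8 entries and the univariate R² values used for prioritization.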
For example, the dimensions compared against the Service dimension show low results. Conversely, the relationship between Organization and System shows the highest values. Thus, when considering the findings from both Spearman's and R² in our univariate approach, if those working with this Public Health application were looking to positively impact the overall system, one place to begin further examination would be the System dimension, as it has the lowest quality score at 3.37 (see Table 5). Leadership would target the lowest measures: Reliability, Sophistication, Flexibility, Ease of Use, and Efficiency. The conversation around solutions would then extend to how improvements to the system would positively affect the measures of the Individual, Organization, or Alignment dimensions. In particular, the expectation is that solutions in one dimension would be targeted to positively impact the other dimensions. Questions about what to improve then extend beyond just Reliability and connect with how that could impact staffing costs, productivity, outcomes, or capacity (Organization), or decision effectiveness and individual productivity (Individual).

E. ALIGNMENT AND SERVICES
This healthcare-centric version of the IS-impact model postulated that the additions of both the Alignment and Service dimensions were important to provide leadership with a better understanding of their existing systems. The inclusion of alignment provided a view as to how the application was perceived to align with the goals of the broader provincial objectives as they pertain to healthcare. The overall score, however, was considered low at 3.08 (see Figure 6 and Table 5). Among the 8 measures from which this summative score was created, 5 ranged from 2.56 to 2.88.
As the alignment goals encompass a wide number of initiatives and objectives, it is not realistic to score high on all items. However, considering that the Public Health application is an area in which the government would like to see greater emphasis, it is surprising that none of the scores were above 3.79. Among the available measures, the first 5 metrics would not be expected to score high. This is understandable, as improvements in the Public Health application would not directly improve access to specialist services, diagnostic services, cost-effective pharmaceuticals, or residential or acute services. Theoretically, however, it could be perceived as providing better community/primary care and patient-centred care, and above all else as assisting in promotion and prevention. It is therefore of great interest that the value attributed to the most likely candidate is merely 3.79. The survey participants did not see the linkage between this measure and the broader role of the application.
Two key possibilities come to mind. The first is that the survey participants understood the question but did not feel that the application was able to perform its key tasks and provide the support needed to perform this function. The other is that those using the application do not understand its broader goals. For example, it is possible that a nurse, while performing vaccinations, may not have a complete picture of how the aggregation of all the public health tasks, including hers, would be interpreted at the provincial level.
If the issue pertains to communication around goals, the numbers seen in this research could increase if the organization invested in additional communication sessions, in particular communication focused on how this technology assists the organization in delivering the objective. If, however, the reason for the low score is that participants do not believe that this application assists them in achieving better outcomes, then it becomes another indicator that there are critical issues within the application that must be resolved. Further work would be required to ascertain whether either of these, or another alternative, drove the responses as calculated.
The Service dimension was also newly introduced in this implementation of the IS-impact model. It was included to make transparent a key component of the total software solution. Jantti et al. [36] argue that this functionality is a critical connection among the user, IT service providers, and third-party providers, and is located at the heart of the customer interface. As described by Jantti et al., one of its most important tasks pertains to communication, be it progress on tasks, updates, resolution of issues, etc. It is proposed that if these services are not available, the value of the system and the benefits of the investment are at risk of not being realized.
For the organization hosting a solution, having a separate block not only provides visibility around this role but also helps determine whether there are issues in how that role is carried out, since providing this functionality comes at a cost. If one of the barriers to delivering value is the service desk, then this too needs to be understood, with potential mitigation plans scoped and developed.
The Service score for this application is the highest of all the dimensions at 4.53. The scores for each of the individual metrics within the dimension are all within the 4 range, the highest being that the staff are interested and sincere when working with a user who is having issues. As the service desk component is supported by the Public Health group, as opposed to the vendor, seeing these numbers in the context of the greater application may continue to justify the resources and allocation of budget.
When viewed in the broader context, bigger and more pressing issues exist within the System dimension, so investing in changes here would likely not be warranted. Moreover, from the R² perspective, the relationship between this dimension and all others is the weakest, suggesting that improvements targeting this dimension would likely not have a strong impact elsewhere. From a prioritization standpoint, changes made here should rank lower.

F. EVALUATION OF THE VISUALIZATION TOOL
The survey was pulled into the newly created IS-impact tool and presented to leadership responsible for the application. In addition to the presentation, each person was asked a series of qualitative questions in an attempt to determine if the visualization supported sensemaking.
The basis for the evaluation of the toolset is sensemaking. In short, the supposition is that if the tool supports sensemaking, then it will be useful for decision making. Sensemaking has been defined as a ''cognitive process'' that allows people to interpret and contextualize information to derive knowledge and support actions [12]. The interpretation of sensemaking and how it can be applied is quite broad. Lam et al. [27] provide a set of steps, or facets, engaged in that set of cognitive processes. The four facets that make up the Visual Data Analysis and Reasoning (VDAR) methodology are Hypothesis Generation, Data Exploration, Knowledge Discovery, and Decision Making. Leadership were asked questions about the tool to determine if it supported them through each of these facets.
Hypothesis generation encapsulates the idea that the visualization presented will either support or challenge the ideas of the users, which will lead to action. Leadership was asked if the success score supported or surprised their understanding of the application. Further, they were asked to work through the tool to better understand how the data supports the aggregation of values.
This tool uses a top-down visualization design, meaning that users are presented with an overall score and can then move down through the details to understand the reasoning behind it. Leadership was asked whether the score they were seeing matched or contradicted their expectations. For the Public Health application, the success score matched expectations.
For the Public Health application, the overall score of 3.62 out of 7 was not surprising to leadership and was in the expected range. Additionally, by drilling into the Organization dimension, there was interest around the mixed responses to ''improved outcomes'', which maps to the question ''(the IS) has resulted in improved outcomes or outputs''. The presentation of the graph suggested to leadership that there were mixed opinions among survey respondents, introducing curiosity around how this question was interpreted. As there is a range of values, it might suggest either a need for technical enhancements or a need for better communication around the role of the application. Further analysis and inquiry will be needed.
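As a sketch of how such a spread of responses to a single question can be made visible, the response counts below are hypothetical, not the actual survey data.

```python
from collections import Counter

# Hypothetical 7-point responses to the ''improved outcomes'' question
# (illustrative only; deliberately bimodal to show ''mixed opinions'').
responses = [2, 6, 3, 5, 2, 6, 1, 7, 3, 5, 6, 2]
dist = Counter(responses)

# A simple text histogram: two clusters of bars at opposite ends of the
# scale are the visual cue that respondents disagreed on this question.
for score in range(1, 8):
    print(f"{score}: {'#' * dist.get(score, 0)}")
```

A distribution view like this, rather than a single mean, is what prompted leadership's curiosity about how the question was interpreted.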

V. CONCLUSION
This work presents a modified IS-impact model applied against an existing Public Health application. The model was selected due to its flexibility and its sociotechnical approach, meaning that the evaluation focuses on the impact the system has on its users and the organization as of a point in time. The methodology, and therefore the tool, can be run multiple times during the software's lifecycle, especially as new functionality is introduced. Based on the modified IS-impact model, strengths and areas of improvement were identified in the Public Health application. Gaps were identified in all dimensions, indicating that work lies ahead for those looking to manage and improve the adoption of the application.
From the Information dimension, the survey suggests that the availability and timeliness metrics are areas for investigation. An earlier set of documents provided by the Public Health portfolio included comments from nurses who noted that they were not able to see notes from other healthcare professionals, especially those in other Canadian health authorities, which impinges on collaboration. They also believed that not all the data was timely or in a format that they preferred. The System dimension contained the lowest rated metric, Reliability. Issues of availability persist and impact the users directly, as they are not able to work in the system anywhere other than from their offices, and even when at an office location, the system is often unavailable. Among all the dimensions, the Service dimension shows the highest score; however, as the scores are still in the 4 range (out of 7), improvement around response time remains an area for review.
In the Impact construct, both the Individual and Organization dimensions reveal low productivity metrics. As demonstrated through R², both of these dimensions are moderately connected to the system issues, and it seems reasonable to assume that issues regarding reliability would impact productivity. Moreover, the Organization dimension's lowest score pertains to reducing staffing costs, which the introduction of this software did not achieve; on the contrary, additional nurses were hired to input data into the application.
Lastly, the Alignment dimension indicates that the respondents perceive little linkage between this application and the strategic direction of the organization as set by the Provincial Government. This could indicate that the users of the system are not aware of the overarching goals of the application and the organization, or that they feel this application does not assist in achieving this goal. Nonetheless, it represents a gap.
The overall score of 3.62 suggests that there are numerous items within the Public Health application for review. For leadership, it represents an opportunity to work through the gaps presented in each of the dimensions to determine what can be addressed. As indicated, some aspects may have internal solutions, such as education. Others should trigger a conversation with the vendor to see if they can be addressed, or potentially whether a new solution is needed.
This work supports the initial findings of other research in terms of both approach and applicability. Petter and Fruhling [31], for example, concluded from their examination of the IS Success model that the model should be altered to accommodate the nature of the applications under evaluation. In their instance, they looked to understand how their system would provide decision makers with the appropriate course of action in the event of a man-made or natural disaster. Their application also existed within the Public Health space, and they noted that for their user base, the strength of the System and Information dimensions were key determinants of whether users would use the system. This relationship is present in this work as well. Roky and Meriouh [28] also found that the strength of Information was a key determinant of usage in their study pertaining to the automotive industry. Our implementation likewise illustrated the importance of the Information dimension and the value attached to the data stored, though it also found areas for improvement.
Others with modified models have also found success. Alkhalaf et al. [37] used a modified IS Success/Impact model to successfully measure the impact of their application on e-learning students, and Alhussain [38] did so for an electronic blackboard system. That work modified both the Individual and Organization dimensions to gain greater clarity on how they could respond more quickly to change and adapt to the needs of their students. Additionally, Alshardan et al. [19] implemented the IS-impact model and also noted that the Service dimension was absent. This has important implications for policy makers and managers, specifically in helping them determine which interventions assist in the effective allocation of scarce public funding. We also made modifications to better capture alignment with the direction of the provincial government, in addition to adding the Service dimension, recognizing it as a critical means to connect people and technology.
Small and large, publicly funded and private, organizations in all these scenarios have taken advantage of flexible evaluation models to articulate and quantify the level and type of impact post-implementation. While the lower success score for this application was not surprising to leadership, it quantified the impact their users were experiencing. Having been presented with the findings, leadership voiced the need to focus on improving reliability, minimizing disruption to business delivery, improving the interface based on the ''ease of use'' and ''overall productivity'' metric results, and making the data more accessible based on ''availability of information''. Leadership was also pleased with the high support metrics, which in turn solidified the funding for that function. Each priority can now be addressed systematically through the scores shown via the model.

VI. LIMITATIONS OF THIS STUDY
There are several limitations to this research. This work represents an evaluation of just a single application; to further validate it, the evaluation would need to be applied to multiple large applications. Due to the high level of participation, the statistics are highly representative; however, to ensure validity, repetition using similarly sized systems is needed. Moreover, running the survey at a later date would highlight any changes in perceptions, which could then be analyzed against any changes made to the environment.
While the data involves a set of current systems working within a health authority, there continue to be opportunities to apply this method and tool against a larger set of applications and responses. A larger sample size will help ensure that the modified model remains valid and reproducible at other health authorities in Canada.
As this model can be applied at any time during the lifetime of an application, there is also the opportunity to use it against systems during the ideation phase, and then to compare the score and its analysis at the pilot phase, after initial rollout, and at a set anniversary date. The tool provides a standardized methodology for creating a baseline, which can subsequently be used to identify how well the system is performing in its environment. As gaps are identified and additional enhancements are made to people, processes, and technologies, so too should these changes be noted within the results, with an aim to evaluate, re-evaluate, and determine if mitigations are needed.
MAIGA CHANG (Member, IEEE) is currently a Full Professor with the School of Computing Information and Systems, Athabasca University, Canada. He has given more than 105 talks and lectures in different conferences, universities, and events. He has participated in more than 310 international conferences and workshops as a Program Committee Member. He has (co-)authored more than 225 edited books, special issues, book chapters, journal and international conference papers. His research interests include game-based learning, training and assessment, learning behavior pattern analysis and extraction, learning analytics and academic analytics, data mining and artificial intelligence, health informatics, natural language processing, intelligent agent technology, multi-agent systems, mobile health, mobile learning, and ubiquitous learning.
He is also an Advisory Board Member of the Journal of Computers and Applied Science Education. He is also the Chair of the IEEE Technical

BRENDA LIGGETT received the Master of Arts degree in leadership and training from Royal Roads University.
She is currently the Vice President (System Optimization) and the Chief Financial Officer at Fraser Health, Surrey, BC, Canada, where she leads system optimization with the purpose of fully leveraging integrated analytics to improve clinical, operational, and business processes with Fraser Health. She is also the Fraser Health Executive Liaison for Lower Mainland consolidated services, which includes integrated protection services, laboratory, and medical imaging. As Chief Financial Officer, she provides leadership and oversight with Fraser Health's strategic financial planning, budgeting, and financial services. She also leads accounting services, business support, financial planning, capital development and has general management oversight for Public Private Partnership (P3) projects. She also has responsibility for the Business Initiatives and Support Services (BISS) and the supply chain and accounts payable functions for the BC Clinical and Support Services (BCCSS).
Ms. Liggett is a member of the Chartered Professional Accountants Society of British Columbia. She is a Chartered Professional Accountant (CPA, CMA). VOLUME 8, 2020