Abstract:
Researchers in the artificial intelligence community who design decision support systems for medicine are aware of the need to respond to real clinical issues in a problem-driven approach, rather than as a purely academic exercise. They recognise that their systems must meet the specific goals of the domain requirements and be thoroughly evaluated for acceptability. Attempts at compliance, however, are hampered by a lack of guidelines. Evaluation can be thought of as either subjectivist or objectivist. Subjectivist evaluation appears to be addressed in the literature, as does some objectivist evaluation, but the core evaluation of performance accuracy appears to be the area that receives least attention in evaluation papers. It is hoped to rectify this by concentrating on the methodology of formal quantitative evaluation and disseminating the information, allowing progression towards guidelines for a sufficiency of performance evaluation. Omitting this core evaluation leaves unanswered the questions "Does the system do what it claims?" and "Is it more accurate than current methods?" Such questioning is essential for providing evidence that a genuine scientific process has been applied to meet the safety-critical requirements of medical systems.
Date of Conference: 25-28 October 2001
Date Added to IEEE Xplore: 07 November 2002
Print ISBN: 0-7803-7211-5
Print ISSN: 1094-687X