1. Kohn MS, Sun J, Knoop S, Shabo A, Carmeli B, Sow D, Syeda-Mahmood T, Rapp W. IBM's Health Analytics and Clinical Decision Support. Yearb Med Inform 2014; 9:154-62. [PMID: 25123736] [DOI: 10.15265/iy-2014-0002]
Abstract
OBJECTIVES: This survey explores the role of big data and health analytics developed by IBM in supporting the transformation of healthcare by augmenting evidence-based decision-making.
METHODS: Some problems in healthcare and strategies for change are described. It is argued that change requires better decisions, which, in turn, require better use of the many kinds of healthcare information. Analytic resources that address each of the information challenges are described, and examples of the role of each resource are given.
RESULTS: There are powerful analytic tools that utilize the various kinds of big data in healthcare to help clinicians make more personalized, evidence-based decisions. Such resources can extract relevant information and provide insights that clinicians can use to make evidence-supported decisions. There are early suggestions that these resources have clinical value. As with all analytic tools, they are limited by the amount and quality of data.
CONCLUSION: Big data is an inevitable part of the future of healthcare. There is a compelling need to manage and use big data to make better decisions to support the transformation of healthcare to the personalized, evidence-supported model of the future. Cognitive computing resources are necessary to manage the challenges of employing big data in healthcare. Such tools have been and are being developed. The analytic resources themselves do not drive healthcare transformation; they support it.
Affiliation(s)
- Martin S. Kohn, MD, MS, FACEP, FACPE, Chief Medical Scientist, Jointly Health (Big Data Analytics for Remote Patient Monitoring), 120 Vantis, #570, Aliso Viejo, CA 92656, USA.
2. Rodríguez-González A, Torres-Niño J, Valencia-Garcia R, Mayer MA, Alor-Hernandez G. Using experts feedback in clinical case resolution and arbitration as accuracy diagnosis methodology. Comput Biol Med 2013; 43:975-86. [DOI: 10.1016/j.compbiomed.2013.05.003]
Affiliation(s)
- Alejandro Rodríguez-González, Bioinformatics at the Centre for Plant Biotechnology and Genomics UPM-INIA, Polytechnic University of Madrid, Parque Científico y Tecnológico de la U.P.M., Campus de Montegancedo, Pozuelo de Alarcón, 28223 Madrid, Spain.
3. Kaplan B. Evaluating informatics applications--some alternative approaches: theory, social interactionism, and call for methodological pluralism. Int J Med Inform 2001; 64:39-56. [PMID: 11673101] [DOI: 10.1016/s1386-5056(01)00184-8]
Abstract
A review of evaluation literature concerning CDSSs indicates that randomized controlled clinical trials (RCTs) are the 'gold standard' for evaluation. While this approach is excellent for studying system or clinical performance, it is not well suited to answering questions concerning whether systems will be used or how they will be used. Because lack of use of CDSS has been of concern for some years, other evaluation research designs are needed to address those issues. This paper critiques RCT and experimental evaluation approaches and presents alternative approaches to evaluation that address questions outside the scope of the usual RCT and experimental designs. A wide range of literature is summarized to illustrate the value of evaluations that take into account social, organizational, professional, and other contextual considerations. Many of these studies go beyond the usual measures of systems performance or physicians' behavior by focusing on 'fit' of the system with other aspects of professional and organizational life. Because there is little explicit theory that informs many evaluations, the paper then reviews CDSS evaluations informed by social science theories. Lastly, it proposes a theoretical social science base of social interactionism. An example of such an approach is given. It involves a CDSS in psychiatry and is based on Kaplan's 4Cs, which focus on communication, control, care, and context. Although the example is a CDSS, the evaluation approach also is useful for clinical guideline implementation and other medical informatics applications. Similarly, although the discussion is about social interactionism, the more important point is the need to broaden evaluation through a variety of methods and approaches that investigate social, cultural, organizational, cognitive, and other contextual concerns. Methodological pluralism and a variety of research questions can increase understanding of many influences concerning informatics applications development and deployment.
Affiliation(s)
- B. Kaplan, Center for Medical Informatics, Yale University School of Medicine, New Haven, CT, USA.
4.
Abstract
This paper reviews the clinical decision support systems (CDSS) literature, with a focus on evaluation. The literature indicates a general consensus that clinical decision support systems have the potential to improve care. Evidence is more equivocal for guidelines and for systems to aid physicians with diagnosis. There also is general consensus that a variety of systems are little used despite demonstrated or potential benefits. In the evaluation literature, the main emphasis is on how clinical performance changes. Most studies use an experimental or randomized controlled clinical trial (RCT) design to assess system performance or to focus on changes in clinical performance that could affect patient care. Few studies involve field tests of a CDSS and almost none use a naturalistic design in routine clinical settings with real patients. In addition, there is little theoretical discussion, although papers are permeated by a rationalist perspective that excludes contextual issues related to how and why systems are used. The studies mostly concern physicians rather than other clinicians. Further, CDSS evaluation studies appear to be insulated from evaluations of other informatics applications. Consequently, there is a lack of information useful for understanding why CDSSs may or may not be effective, resulting in less informed decisions about these technologies and, by extension, other medical informatics applications.
Affiliation(s)
- B. Kaplan, Center for Medical Informatics, Yale University School of Medicine, New Haven, CT, USA.
5. Razzouk D, Shirakawa I, Mari JDJ. Sistemas inteligentes no diagnóstico da esquizofrenia [Intelligent systems in the diagnosis of schizophrenia]. Braz J Psychiatry 2000. [DOI: 10.1590/s1516-44462000000500012]
6.
Abstract
As increasingly powerful informatics systems are designed, developed, and implemented, they inevitably affect larger, more heterogeneous groups of people and more organizational areas. In turn, the major challenges to system success are often more behavioral than technical. Successfully introducing such systems into complex health care organizations requires an effective blend of good technical and good organizational skills. People who have low psychological ownership in a system and who vigorously resist its implementation can bring a "technically best" system to its knees. However, effective leadership can sharply reduce the behavioral resistance to change, including to new technologies, and so achieve a more rapid and productive introduction of informatics technology. This paper looks at four major areas: why information system failures occur, the core theories supporting change management, the practical applications of change management, and change management efforts in informatics.
Affiliation(s)
- N. M. Lorenzi, University of Cincinnati, Cincinnati, OH 45267-0663, USA.
7. Delaney BC, Fitzmaurice DA, Riaz A, Hobbs FD. Can computerised decision support systems deliver improved quality in primary care? Interview by Abi Berger. BMJ 1999; 319:1281. [PMID: 10559035] [PMCID: PMC1129060] [DOI: 10.1136/bmj.319.7220.1281]
8. Forsythe DE. "It's Just a Matter of Common Sense": Ethnography as Invisible Work. Comput Support Coop Work 1999. [DOI: 10.1023/a:1008692231284]
9. Fitzmaurice DA, Hobbs FD, Delaney BC, Wilson S, McManus R. Review of computerized decision support systems for oral anticoagulation management. Br J Haematol 1998; 102:907-9. [PMID: 9734638] [DOI: 10.1046/j.1365-2141.1998.00858.x]
Abstract
Computerized decision support systems (CDSS) are available to assist clinicians in the therapeutic management of oral anticoagulation. We report the findings relating to CDSS for oral anticoagulation management from a primary-care-based systematic review that largely focused on near-patient testing. Seven papers covering four different systems were reviewed. The methodology of these papers was generally poor, although one randomized controlled trial showed improved therapeutic control associated with computerized management compared with human performance.
Affiliation(s)
- D. A. Fitzmaurice, Department of General Practice, Medical School, University of Birmingham, Edgbaston, UK.
10. Hejlesen OK, Andreassen S, Frandsen NE, Sørensen TB, Sandø SH, Hovorka R, Cavan DA. Using a double blind controlled clinical trial to evaluate the function of a Diabetes Advisory System: a feasible approach? Comput Methods Programs Biomed 1998; 56:165-73. [PMID: 9700431] [DOI: 10.1016/s0169-2607(98)00023-6]
Abstract
This paper assesses the feasibility of using a double blind controlled clinical trial to evaluate the function of a decision support system by applying such a design to the evaluation of a Diabetes Advisory System (DIAS). DIAS is based on a model of human carbohydrate metabolism and is designed as an interactive clinical tool which can be used to predict the effects of changes in insulin dose or food intake on the blood glucose concentration in patients with insulin dependent diabetes. It can also be used to identify risk periods for hypoglycaemia and to provide advice on insulin dose. The latter feature was evaluated in the present study. We believe double blind controlled clinical trials are prerequisites for clinical application of many decision support systems, and conclude that the present double blind controlled clinical trial is a suitable evaluation method for the function of DIAS.
Affiliation(s)
- O. K. Hejlesen, Department of Medical Informatics, Aalborg University, Denmark.
11.
Abstract
A medical diagnostic decision support system (DDSS) has been developed for and tested in general practice. Two major issues have been addressed: diagnostic support and usefulness. The diagnostic support pertains to the ability of the system to generate diagnostic hypotheses from a set of patient data. The usefulness is approached by creating a computer system which can be used simultaneously with the doctor-patient consultation. The support function operates by matching symptoms from the patient data base with symptom configurations contained in the knowledge base. The support is presented as a list of diagnostic hypotheses ranked by degree of concordance. A user-friendly interface has been constructed with a comprehensive set of clinical terms within which the doctor can locate a desired symptom and store it with a single keystroke. With another keystroke the doctor can check the stored data and ask for support at any moment during the process. The overall purpose is to invite the doctor to rethink and re-examine his steps and to reconsider possible alternatives in the light of the presented diagnostic information. In our view it has to be the doctor who makes the final judgement. A test with the system in general practice revealed good performance of the system and an astonishing proficiency of the participating doctors in its use during the consultation. Twenty doctors solved five patient cases, entering 2000 clinical items within acceptable limits of consultation time. In 96% of the cases the correct diagnosis appeared in the differential diagnosis list. The doctors' diagnostic accuracy was 43%. The use of standardised terminology as an option for further development is discussed. The role of the doctor in computer-aided diagnostics remains open to debate. A computer-aided diagnostic support system in general practice appears to be feasible.
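The ranking step this abstract describes, matching recorded patient symptoms against knowledge-base symptom configurations and ordering hypotheses by degree of concordance, can be sketched as follows. This is a minimal illustration with invented data and a Jaccard-style overlap score; it is not the actual algorithm or knowledge base of the system under review:

```python
# Hypothetical sketch of concordance-based hypothesis ranking.
# The diagnoses, symptoms, and overlap measure are illustrative
# assumptions, not the system's real content.

def rank_hypotheses(patient_symptoms, knowledge_base):
    """Return diagnoses ranked by degree of concordance (set overlap)."""
    patient = set(patient_symptoms)
    scored = []
    for diagnosis, configuration in knowledge_base.items():
        config = set(configuration)
        # Jaccard-style score: shared symptoms over all symptoms involved.
        concordance = len(patient & config) / len(patient | config)
        scored.append((diagnosis, round(concordance, 2)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

kb = {  # invented symptom configurations
    "migraine": {"headache", "nausea", "photophobia"},
    "tension headache": {"headache", "neck stiffness"},
    "gastroenteritis": {"nausea", "vomiting", "diarrhoea"},
}
ranking = rank_hypotheses({"headache", "nausea"}, kb)
```

The set-overlap score simply stands in for whatever concordance measure the original system used; the ranked differential-diagnosis list is the point of the sketch.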
Affiliation(s)
- J. Ridderikhoff, Department of Family Medicine, Erasmus University Rotterdam, The Netherlands.
12. Pelz J, Arendt V, Kunze J. Computer assisted diagnosis of malformation syndromes: an evaluation of three databases (LDDB, POSSUM, and SYNDROC). Am J Med Genet 1996; 63:257-67. [PMID: 8723119] [DOI: 10.1002/(sici)1096-8628(19960503)63:1<257::aid-ajmg44>3.0.co;2-k]
Abstract
Computer programs which can be used as an aid to diagnose multiple congenital anomaly syndromes have been used for many years, but up to now they have been evaluated very rarely. The diagnostic abilities of three of these systems [LDDB (London Dysmorphology Database), POSSUM (Pictures of Standard Syndromes and Undiagnosed Malformations), and SYNDROC] were analyzed. All three programs are based on an algorithm which defines a diagnosis by a set of phenotypic components all having the same weight (descriptive algorithm). A second algorithm is applied by SYNDROC to rank competing diagnoses in order of probability. This pseudo-Bayesian algorithm provides a coefficient of certitude (CC). For a test the clinical findings of 102 patients who had received a firm diagnosis were used. Two search strategies were tried: "novice's strategy" with all findings taken for a search and "expert's strategy" with a selected set of anomalies. Only those diagnoses that were suggested with the 1st rank, defined as the highest degree of agreement, or the highest CC were studied. The greatest resemblance between suggestions of the databases and the clinical diagnosis was obtained with the expert strategy. The highest number of matches were produced by SYNDROC (80 with expert strategy) and the lowest by POSSUM (54 with novice strategy). The overall agreement between the databases is about 40% for the 1st rank. This number reflects that different authors use different pivotal signs for the description of a syndrome. With the pseudo-Bayesian algorithm 59 cases obtained the highest CC value. Great difficulties exist with the subjective estimates for the calculation of these values; the absolute CC values seem to be meaningless. A small number of unusual cases with special combinations of anomalies provide serious problems for correct diagnosis.
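The equal-weight "descriptive algorithm" that the three databases share can be illustrated with a short sketch: every phenotypic component of a syndrome carries the same weight, so candidates are ranked by how many of the patient's findings they contain. The syndrome definitions below are invented examples, not LDDB, POSSUM, or SYNDROC content, and the match count stands in for each program's own agreement measure:

```python
# Illustrative sketch of the equal-weight descriptive algorithm.
# Syndrome definitions are invented, not database content.

def rank_syndromes(findings, syndrome_db):
    """Rank syndromes by the count of matching equal-weight components."""
    findings = set(findings)
    matches = {name: len(findings & set(components))
               for name, components in syndrome_db.items()}
    # Highest agreement first; the top entry is the "1st rank" diagnosis.
    return sorted(matches.items(), key=lambda kv: kv[1], reverse=True)

db = {
    "syndrome A": {"cleft palate", "polydactyly", "microcephaly"},
    "syndrome B": {"cleft palate", "coloboma"},
}
first_rank = rank_syndromes({"cleft palate", "polydactyly"}, db)[0]
```

SYNDROC's pseudo-Bayesian coefficient of certitude would replace the raw count in the ranking key; the descriptive matching step itself is what all three programs have in common.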
Affiliation(s)
- J. Pelz, Institut für Humangenetik, Virchow-Klinikum, Berlin, Germany.
13. Andreassen S, Rosenfalck A, Falck B, Olesen KG, Andersen SK. Evaluation of the diagnostic performance of the expert EMG assistant MUNIN. Electroencephalogr Clin Neurophysiol 1996; 101:129-44. [PMID: 8647018] [DOI: 10.1016/0924-980x(95)00252-g]
Abstract
The diagnostic performance of the medical expert system MUNIN for diagnosis of neuromuscular disorders was evaluated on a set of 30 test cases. The cases were provided by 7 experienced electromyographers who were subsequently invited to participate in the evaluation. To reasonably cover the range of disorders, the electromyographers were asked to provide cases from patients with different types of muscular dystrophy, with neuromuscular transmission disorders, with motor neurone disease, and with different types of polyneuropathies. In addition, patients with a range of local neuropathies were provided. Out of the 30 cases, 11 cases were evaluated by an "almost peer review" method and the remaining 19 cases were evaluated by a "silver standard" method. The number of cases evaluated by "almost peer review" was limited to 11 due to time constraints on the evaluation procedure. During the "almost peer review," each electromyographer was asked to diagnose patients, using a vocabulary that closely resembled MUNIN's vocabulary. Subsequently, we attempted to provide a consensus diagnosis for the patients based on discussion among the participating electromyographers. The electromyographers were also asked to assess how well MUNIN had performed in each case. The remaining 19 cases were evaluated by a "silver standard" procedure, where MUNIN's diagnosis was compared to the diagnosis of the expert who provided the case. The results indicated that MUNIN performed well, and the electromyographers considered "that MUNIN performed at the same level as an experienced neurophysiologist." In particular, it was noted that MUNIN handled cases with conflicting findings well, and that it was able to diagnose patients with multiple diseases.
Affiliation(s)
- S. Andreassen, Department of Medical Informatics and Image Analysis, Aalborg University, Denmark.
14. van der Loo RP, van Gennip EM, Bakker AR, Hasman A, Rutten FF. Evaluation of automated information systems in health care: an approach to classifying evaluative studies. Comput Methods Programs Biomed 1995; 48:45-52. [PMID: 8846711] [DOI: 10.1016/0169-2607(95)01659-h]
Abstract
In this paper we discuss an approach to classifying evaluative studies of automated information systems in health care. Selected literature (76 studies) is classified according to the type of automated information system (based on its relationship to the care process), the study design, the data collection methods, the effect(s) measured, and the type of evaluation (e.g. cost-benefit analysis). First results show that, judging by the number of studies selected, certain types of automated information systems have rarely been evaluated. Furthermore, certain study designs (time-series designs), data collection methods (modelling and simulation), and effect measures (job satisfaction) are rarely found in the literature. Only 10 of the 76 selected studies used a type of evaluation in which both consequences and costs are considered. Detailed investigation of the literature may provide information for the development of a general framework for the evaluation of different types of automated information systems.
Affiliation(s)
- R. P. van der Loo, BAZIS Central Development and Support Group Hospital Information System, Leiden, The Netherlands.
15. Ferns WJ, Mowshowitz A. Knowledge-intensive systems in the social service agency: anticipated impacts on the organisation. AI Soc 1995. [DOI: 10.1007/bf01210602]
16. Buchanan BG, Moore JD, Forsythe DE, Carenini G, Ohlsson S, Banks G. An intelligent interactive system for delivering individualized information to patients. Artif Intell Med 1995; 7:117-54. [PMID: 7647838] [DOI: 10.1016/0933-3657(94)00029-r]
Abstract
This paper is a report on the first phase of a long-term, interdisciplinary project whose goal is to increase the overall effectiveness of physicians' time, and thus the quality of health care, by improving the information exchange between physicians and patients in clinical settings. We are focusing on patients with long-term and chronic conditions, initially on migraine patients, who require periodic interaction with their physicians for effective management of their condition. We are using medical informatics to focus on the information needs of patients, as well as of physicians, and to address problems of information exchange. This requires understanding patients' concerns to design an appropriate system, and using state-of-the-art artificial intelligence techniques to build an interactive explanation system. In contrast to many other knowledge-based systems, our system's design is based on empirical data on actual information needs. We used ethnographic techniques to observe explanations actually given in clinic settings, and to conduct interviews with migraine sufferers and physicians. Our system has an extensive knowledge base that contains both general medical terminology and specific knowledge about migraine, such as common trigger factors and symptoms of migraine, the common therapies, and the most common effects and side effects of those therapies. The system consists of two main components: (a) an interactive history-taking module that collects information from patients prior to each visit, builds a patient model, and summarizes the patients' status for their physicians; and (b) an intelligent explanation module that produces an interactive information sheet containing explanations in everyday language that are tailored to individual patients, and responds intelligently to follow-up questions about topics covered in the information sheet.
Affiliation(s)
- B. G. Buchanan, Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260, USA.
17. Nøhr C. The evaluation of expert diagnostic systems: how to assess outcomes and quality parameters? Artif Intell Med 1994; 6:123-35. [PMID: 8049753] [DOI: 10.1016/0933-3657(94)90041-8]
Abstract
Improvement of quality in health care and assessment of the health outcomes of medical technologies have attracted increasing attention in the implementation phases. In this paper, 10 recent evaluation studies are reviewed to investigate to what extent they reflect the structure, process, and outcome dimensions of the conceptual framework. All of the evaluation studies are found to focus on structure measures. But if computer programs to support medical decision making are to be considered in the planning process of the health care system, evaluation studies must strive to evaluate process and outcome measures as well. A proposal for a framework for this kind of exploratory and evaluative research is outlined.
Affiliation(s)
- C. Nøhr, Department of Development and Planning, Aalborg University, Denmark.
18. Miller RA. Medical diagnostic decision support systems--past, present, and future: a threaded bibliography and brief commentary. J Am Med Inform Assoc 1994; 1:8-27. [PMID: 7719792] [PMCID: PMC116181] [DOI: 10.1136/jamia.1994.95236141]
Abstract
Articles about medical diagnostic decision support (MDDS) systems often begin with a disclaimer such as, "despite many years of research and millions of dollars of expenditures on medical diagnostic systems, none is in widespread use at the present time." While this statement remains true in the sense that no single diagnostic system is in widespread use, it is misleading with regard to the state of the art of these systems. Diagnostic systems, many simple and some complex, are now ubiquitous, and research on MDDS systems is growing. The nature of MDDS systems has diversified over time. The prospects for adoption of large-scale diagnostic systems are better now than ever before, due to enthusiasm for implementation of the electronic medical record in academic, commercial, and primary care settings. Diagnostic decision support systems have become an established component of medical technology. This paper provides a review and a threaded bibliography for some of the important work on MDDS systems over the years from 1954 to 1993.
Affiliation(s)
- R. A. Miller, Medical Informatics Section, University of Pittsburgh, Pittsburgh, PA 15261, USA.
19. Vingtoft S, Fuglsang-Frederiksen A, Rønager J, Petrera J, Stigsby B, Willison RG, Jarratt JA, Fawcett PR, Schofield IS, Otte G. KANDID--an EMG decision support system--evaluated in a European multicenter trial. Muscle Nerve 1993; 16:520-9. [PMID: 8515760] [DOI: 10.1002/mus.880160514]
Abstract
KANDID is an advanced EMG decision support system dedicated to the support of the clinical neurophysiologist during EMG examinations. It has facilities for test planning, automatized and structured data interpretation, EMG diagnosis, explanation, and reporting. In a prospective European multicenter field trial, the agreement levels between clinical neurophysiologists and KANDID's diagnostic statements were measured under ordinary clinical EMG practice. KANDID was assessed in 159 individual patient EMG examinations by nine clinical neurophysiologists at seven different EMG laboratories. The reasoning of KANDID was considered understandable for the examiners in 80-90% of cases. The agreement level for the electrophysiological states of muscles and nerves between KANDID and the individual examiners was, on average, 81%. The corresponding diagnostic agreement with KANDID was, on average, 61%. A pronounced interexaminer variation in the agreement level related to the different EMG centers was observed. All Danish and Belgian examiners agreed with KANDID in more than 50% of their cases with regard to the EMG diagnosis, while the English examiners were in agreement with KANDID in 50% or less of their cases. These differences were possibly due to differences in epidemiology, examination techniques, control material, and examination planning strategies. It is concluded that it is possible to transfer systems like KANDID out of their development sites and apply them successfully if they can be locally customized by the clinical end users via editors.
Affiliation(s)
- S. Vingtoft, Computer Resources International, Bregnerødvej, Birkerød, Denmark.
20. François P, Cremilleux B, Robert C, Demongeot J. MENINGE: a medical consulting system for child's meningitis. Study on a series of consecutive cases. Artif Intell Med 1992. [DOI: 10.1016/0933-3657(92)90042-n]
21. Stheeman SE, van der Stelt PF, Mileman PA. Expert systems in dentistry: past performance--future prospects. J Dent 1992; 20:68-73. [PMID: 1564183] [DOI: 10.1016/0300-5712(92)90105-l]
Abstract
Expert systems are knowledge-based computer programs designed to provide assistance in diagnosis and treatment planning; they assist the practitioner in decision making. A search of the literature on expert system design for medical and dental applications was carried out. It showed an increase in the number of articles on this subject: between 1984 and 1991, 608 articles were published in medical journals and two in dental journals. Because this development is likely to influence dental practice in the future, a critical review of the medical literature on the topic was also carried out. A number of general principles are described to give the dental practitioner some insight into how expert systems work. A set of criteria that expert systems should meet has been formulated from the medical literature. These requirements are also applicable to dentistry and may be used to judge dental expert systems. In the last part of the paper the features of several dental expert systems developed in the past decade are described in the light of these criteria. It is concluded that in the future more attention should be paid to the development and evaluation of expert systems in the clinical setting. Only well-designed and properly evaluated expert systems can be expected to earn a place in everyday practice.
Affiliation(s)
- S. E. Stheeman, Department of Oral Radiology, Academic Centre for Dentistry Amsterdam (ACTA), The Netherlands.
22.
Abstract
A review of the literature regarding computer-assisted diagnosis of rheumatic diseases is presented. After a general outline of the history and goals of computer programs intended to support physicians in the diagnostic process, 14 systems or projects are described. The scope of seven of these is general internal medicine, and the other seven are intended exclusively for rheumatic problems. The majority of these systems are prototypes. To date, none of them is widely used by physicians. Preliminary evaluation studies and/or independent reviews have been reported for all of the systems. The need for further evaluation studies is recognized, and strategies to carry these out are outlined. Furthermore, the potential usefulness for patient care and education is discussed. It is concluded that a new and interesting field is being developed that deserves more attention among rheumatologists.
Affiliation(s)
- H. J. Moens, Department of Rheumatology, Jan van Breemen Institute, Amsterdam, The Netherlands.
23
|
Abstract
Clinical information systems (CIS) are health care technologies that can assist clinicians and clinical managers to improve the performance of health care organizations. However, failure to consider scientific evidence of efficacy, effectiveness, and efficiency when selecting CISs is one factor explaining the adoption of systems that do not improve either the quality or efficiency of patient care. This paper discusses a technology assessment framework that can assist decision-makers to evaluate alternative CISs. Existing methodologies developed to evaluate diagnostic and therapeutic technologies can be used by researchers to provide evidence needed by decision-makers at each step of the framework. The rigorous evaluation of CISs prior to their implementation can help decision-makers to avoid adopting "white elephants."
Affiliation(s)
- R Wall
- Faculty of Medicine, University of Manitoba, Winnipeg, Canada
24
Piraino D, Richmond B, Schluchter M, Rockey D, Schils J. Radiology image interpretation system: modified observer performance study of an image interpretation expert system. J Digit Imaging 1991; 4:94-101. [PMID: 2070008 DOI: 10.1007/bf03170417] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1]
Abstract
Application of computer-based expert systems to diagnostic medical problems has been described in many areas, including clinical diagnosis and radiology. Expert systems are computer programs that contain encoded expert knowledge to provide expert advice. A modified observer-performance study was done comparing the efficacy of the Radiology Image Interpretation System (RIIS), an expert system that diagnoses focal bone abnormalities, with that of radiology residents on a known set of 44 abnormal and 10 normal cases. Modified receiver operating characteristic curves for four inexperienced residents, five experienced residents, and RIIS were generated using the set of known radiographs. The true-positive rates of RIIS and the residents at false-positive rates of 0.05, 0.15, and 0.20 were estimated using the modified receiver operating characteristic curves and compared using a paired t test. On average, RIIS was less accurate than both experienced and inexperienced residents, but the difference was significant only for experienced residents at a false-positive rate of 0.05. RIIS performed better than inexperienced residents when RIIS was used by experienced residents, but this difference was not significant.
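The core measurement in the study above is an ROC operating point: the true-positive rate a reader (or system) achieves without exceeding a chosen false-positive rate. The sketch below illustrates that estimation from confidence ratings; the scores and labels are hypothetical, not the study's data, and the study's paired t test on such estimates is not reproduced here.

```python
# Sketch: estimate an empirical ROC operating point - the true-positive rate
# (TPR) attainable at or below a target false-positive rate (FPR) - from
# per-case confidence scores. Higher score = more confident "abnormal".

def roc_points(scores, labels):
    """Return (fpr, tpr) pairs, one per decision threshold."""
    thresholds = sorted(set(scores), reverse=True)
    positives = sum(labels)                 # truly abnormal cases
    negatives = len(labels) - positives     # truly normal cases
    points = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / negatives, tp / positives))
    return points

def tpr_at_fpr(scores, labels, max_fpr):
    """Highest TPR attainable without exceeding the target FPR."""
    return max((tpr for fpr, tpr in roc_points(scores, labels)
                if fpr <= max_fpr), default=0.0)

# Hypothetical 1-5 confidence ratings for 10 cases; label 1 = abnormal.
scores = [5, 4, 4, 3, 5, 2, 1, 2, 3, 1]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(tpr_at_fpr(scores, labels, max_fpr=0.20))
```

In the study, such points were read off modified ROC curves at false-positive rates of 0.05, 0.15, and 0.20 for each reader and for RIIS, and the paired differences were then tested for significance.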
Affiliation(s)
- D Piraino
- Department of Radiology, Cleveland Clinic Foundation, OH
25
Wyatt J, Spiegelhalter D. Evaluating medical expert systems: what to test, and how? Knowledge Based Systems in Medicine: Methods, Applications and Evaluation 1991. [DOI: 10.1007/978-3-662-08131-0_22] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.2]
26
Fieschi M. Towards validation of expert systems as medical decision aids. International Journal of Bio-Medical Computing 1990; 26:93-108. [PMID: 2394500 DOI: 10.1016/0020-7101(90)90022-m] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.2]
Abstract
Expert system evaluation is an important step in knowledge engineering development and is clearly not a simple process. The aim of this paper is to introduce methodological aspects of medical knowledge base validation. We distinguish two components of the evaluation: verification and validation. In the first part, the difficulties of evaluation are analysed and problems with some techniques used in the evaluation process are discussed. In the second part, our experiment provides guidelines that illustrate these aspects and underline what to evaluate, and when.
Affiliation(s)
- M Fieschi
- Département de l'Information Médicale, Hôpital de la Conception, Marseille, France
27
Rossi-Mori A, Pisanelli DM, Ricci FL. Evaluation stages and design steps for knowledge-based systems in medicine. Medical Informatics = Medecine et Informatique 1990; 15:191-204. [PMID: 2232955 DOI: 10.3109/14639239009025267] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.3]
Abstract
After the early experiments in artificial intelligence, a methodology is emerging around advanced systems for the management of medical knowledge. The stress is moving away from the implementation of prototypes towards their evaluation. It is possible to adapt and apply to this field evaluation techniques already developed in similar contexts of knowledge management (books, drugs, epidemiology, consultants, etc.). The time is ripe for a further step: to envisage a methodology for the design of real systems that cope with the 'knowledge environment' of the user. Every stage of the evaluation process is re-examined here and considered as a framework to define goals and criteria for a corresponding step of design: (1) the impact of the system on the progress of health care provision (priorities, cost-benefit analysis, share of tasks among different media); (2) effectiveness in the end-user's environment and long-term effects on his behaviour (changes in people's roles and responsibilities, improvements in the quality of data, acceptance of the system); (3) the intrinsic efficiency of the system apart from the operational context (correctness of the knowledge base, appropriateness of the reasoning). The need to differentiate the test sample into three classes (obvious, typical, atypical) is emphasized, and its influence on both evaluation and design is discussed. In particular, the difficulty of obtaining 'gold standards' for atypical cases, due to disagreement among the experts, leads to the definition of two alternative attitudes: the 'standardization mode' and the 'brain-storming mode'.
28
Wyatt J, Spiegelhalter D. Evaluating medical expert systems: what to test and how? Medical Informatics = Medecine et Informatique 1990; 15:205-17. [PMID: 2232956 DOI: 10.3109/14639239009025268] [Citation(s) in RCA: 123] [Impact Index Per Article: 3.6]
Abstract
Many believe that medical expert systems have great potential to improve health care, but few of these systems have been rigorously evaluated, and even fewer are in routine use. We propose the evaluation of medical expert systems in two stages: laboratory and field testing. In the former, the perspectives of both prospective users and experts responsible for implementation are valuable. In the latter, the study must be designed to test, in an unbiased manner, whether the system is used in clinical practice, and if it is used, how it affects the structure, process and outcome of health care encounters. We conclude with proposals for encouraging the objective evaluation of these systems.
Affiliation(s)
- J Wyatt
- National Heart and Lung Institute, London, UK
29
Schwartz S, Griffin T, Fox J. Clinical expert systems versus linear models: do we really have to choose? Behavioral Science 1989; 34:305-11. [PMID: 2684137 DOI: 10.1002/bs.3830340408] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.2]
Abstract
This article deals with decision subsystems at the level of the organism. In recent years there has been debate as to whether linear models or clinical expert systems make clinical decisions more effectively. Previous articles in this journal have favored linear models. This article argues the opposite case. We show that expert systems are not necessarily more expensive or less accurate than linear models and that, in theory at least, they can perform many tasks that are beyond the scope of linear models. Indeed, while a linear model may serve as a subsystem of a human or computer expert system, an expert system cannot be seen as a subsystem of a linear model. We conclude that clinical expert systems and linear models are not interchangeable and users should not be forced to choose between them.
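The asymmetry argued in this abstract — a linear model can serve as a subsystem of an expert system, but not the reverse — can be made concrete with a small sketch. All weights, thresholds, findings, and rules below are hypothetical illustrations, not content from the paper.

```python
# Sketch: a rule-based expert system that embeds a linear (weighted-sum)
# model as one subsystem. The categorical exclusion rule has no clean
# expression as a single linear weight, illustrating why the embedding
# does not work in the opposite direction.

def linear_risk(findings, weights):
    """Linear model: weighted sum over the binary findings that are present."""
    return sum(weights[f] for f, present in findings.items()
               if present and f in weights)

def expert_system(findings):
    """Rule-based layer that consults the linear model as a subsystem."""
    # Rule 1: a categorical exclusion - refer regardless of any score.
    if findings.get("contraindication"):
        return "refer"
    # Rule 2: otherwise defer to the linear subscore against a threshold.
    weights = {"fever": 1.0, "elevated_wbc": 2.0, "local_pain": 1.5}
    return "treat" if linear_risk(findings, weights) >= 3.0 else "observe"

# Hypothetical case: two moderately weighted findings cross the threshold.
print(expert_system({"fever": True, "elevated_wbc": True, "local_pain": False}))
```

The design point matches the abstract's claim: the rule layer can call the linear scorer wherever a compensatory sum is appropriate, while non-compensatory rules (hard exclusions, sequenced questions) sit outside anything a single linear combination can represent.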
30
Berner ES, Brooks CM, Miller RA, Masarie FE, Jackson JR. Evaluation issues in the development of expert systems in medicine. Eval Health Prof 1989; 12:270-81. [PMID: 10294604 DOI: 10.1177/016327878901200303] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1]
Abstract
Evaluation of medical decision support software (MDSS)--computer programs to assist health professionals with diagnostic and/or therapeutic decisions--has not kept pace with the development of such programs. This article describes the following formative evaluation issues that must be addressed by developers of MDSS to evaluate these programs properly: (1) How can systematic feedback be obtained about an evolving program? (2) How can enough data to evaluate the program be obtained? (3) How much instruction is necessary? (4) What are the most important aspects for users to evaluate? and (5) How can the appropriate use of a developing MDSS be assured? Data from an ongoing evaluation of an existing MDSS, Quick Medical Reference, are used to illustrate the issues and to suggest recommendations for addressing them.