1. Lavanchy JL, Ramesh S, Dall'Alba D, Gonzalez C, Fiorini P, Müller-Stich BP, Nett PC, Marescaux J, Mutter D, Padoy N. Challenges in multi-centric generalization: phase and step recognition in Roux-en-Y gastric bypass surgery. Int J Comput Assist Radiol Surg 2024. [PMID: 38761319] [DOI: 10.1007/s11548-024-03166-3]
Abstract
PURPOSE Most studies on surgical activity recognition using artificial intelligence (AI) have focused on recognizing a single type of activity from small, mono-centric surgical video datasets. Whether such models generalize to other centers has remained unclear. METHODS In this work, we introduce a large multi-centric, multi-activity dataset of 140 laparoscopic Roux-en-Y gastric bypass (LRYGB) surgery videos (MultiBypass140) recorded at two medical centers: the University Hospital of Strasbourg, France (StrasBypass70) and Inselspital, Bern University Hospital, Switzerland (BernBypass70). The dataset has been fully annotated with phases and steps by two board-certified surgeons. Furthermore, we assess generalizability and benchmark different deep learning models for phase and step recognition in 7 experimental setups: (1) training and evaluation on BernBypass70; (2) training and evaluation on StrasBypass70; (3) training and evaluation on the joint MultiBypass140 dataset; (4) training on BernBypass70, evaluation on StrasBypass70; (5) training on StrasBypass70, evaluation on BernBypass70; and training on MultiBypass140 with (6) evaluation on BernBypass70 and (7) evaluation on StrasBypass70. RESULTS Model performance is markedly influenced by the training data. The worst results were obtained in experiments (4) and (5), confirming the limited generalization capabilities of models trained on mono-centric data. Multi-centric training data, experiments (6) and (7), improves generalization, bringing the models beyond the level of independent mono-centric training and validation (experiments (1) and (2)). CONCLUSION MultiBypass140 shows considerable variation in surgical technique and workflow of LRYGB procedures between centers; accordingly, the generalization experiments reveal marked differences in model performance. These results highlight the importance of multi-centric datasets for AI model generalization, to account for variance in surgical technique and workflow. The dataset and code are publicly available at https://github.com/CAMMA-public/MultiBypass140.
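The seven splits lend themselves to a compact evaluation harness. The Python sketch below (not from the paper's repository) simply enumerates them as (training set, evaluation set) pairs; `train_and_evaluate` is a hypothetical user-supplied callable.

```python
# Illustrative sketch of the seven train/eval splits described in the abstract.
# Dataset names mirror the paper; `train_and_evaluate` is hypothetical.

EXPERIMENTS = [
    ("BernBypass70",   "BernBypass70"),    # (1) mono-centric Bern
    ("StrasBypass70",  "StrasBypass70"),   # (2) mono-centric Strasbourg
    ("MultiBypass140", "MultiBypass140"),  # (3) joint training and evaluation
    ("BernBypass70",   "StrasBypass70"),   # (4) cross-center: Bern -> Strasbourg
    ("StrasBypass70",  "BernBypass70"),    # (5) cross-center: Strasbourg -> Bern
    ("MultiBypass140", "BernBypass70"),    # (6) multi-centric -> Bern
    ("MultiBypass140", "StrasBypass70"),   # (7) multi-centric -> Strasbourg
]

def run_all(train_and_evaluate):
    """Run every split with a user-supplied train/eval callable."""
    return {(tr, ev): train_and_evaluate(tr, ev) for tr, ev in EXPERIMENTS}
```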
Affiliation(s)
- Joël L Lavanchy: University Digestive Health Care Center - Clarunis, 4002 Basel, Switzerland; Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland; Institute of Image-Guided Surgery, IHU Strasbourg, 67000 Strasbourg, France
- Sanat Ramesh: Institute of Image-Guided Surgery, IHU Strasbourg, 67000 Strasbourg, France; ICube, University of Strasbourg, CNRS, 67000 Strasbourg, France; Altair Robotics Lab, University of Verona, 37134 Verona, Italy
- Diego Dall'Alba: Altair Robotics Lab, University of Verona, 37134 Verona, Italy
- Cristians Gonzalez: Institute of Image-Guided Surgery, IHU Strasbourg, 67000 Strasbourg, France; University Hospital of Strasbourg, 67000 Strasbourg, France
- Paolo Fiorini: Altair Robotics Lab, University of Verona, 37134 Verona, Italy
- Beat P Müller-Stich: University Digestive Health Care Center - Clarunis, 4002 Basel, Switzerland; Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Philipp C Nett: Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, 3010 Bern, Switzerland
- Didier Mutter: Institute of Image-Guided Surgery, IHU Strasbourg, 67000 Strasbourg, France; University Hospital of Strasbourg, 67000 Strasbourg, France
- Nicolas Padoy: Institute of Image-Guided Surgery, IHU Strasbourg, 67000 Strasbourg, France; ICube, University of Strasbourg, CNRS, 67000 Strasbourg, France

2. Eckhoff JA, Ban Y, Rosman G, Müller DT, Hashimoto DA, Witkowski E, Babic B, Rus D, Bruns C, Fuchs HF, Meireles O. TEsoNet: knowledge transfer in surgical phase recognition from laparoscopic sleeve gastrectomy to the laparoscopic part of Ivor-Lewis esophagectomy. Surg Endosc 2023; 37:4040-4053. [PMID: 36932188] [PMCID: PMC10156818] [DOI: 10.1007/s00464-023-09971-2]
Abstract
BACKGROUND Surgical phase recognition using computer vision is an essential requirement for artificial intelligence-assisted analysis of surgical workflow. Its performance depends heavily on large amounts of annotated video data, which remain a limited resource, especially for highly specialized procedures. Knowledge transfer from common to more complex procedures can improve data efficiency: phase recognition models trained on large, readily available datasets may be transferred to smaller datasets of different procedures to improve generalizability. The conditions under which such transfer learning is appropriate and feasible remain to be established. METHODS We defined ten operative phases for the laparoscopic part of Ivor-Lewis esophagectomy through expert consensus, and a dataset of 40 videos was annotated accordingly. An established phase recognition architecture (CNN + LSTM) was adapted into a "Transferal Esophagectomy Network" (TEsoNet) for co-training and transfer learning from laparoscopic sleeve gastrectomy to the laparoscopic part of Ivor-Lewis esophagectomy, exploring different training set compositions and training weights. RESULTS The explored architecture is capable of accurate phase detection in complex procedures such as esophagectomy, even with small quantities of training data. Knowledge transfer between two upper-gastrointestinal procedures is feasible and achieves reasonable accuracy on operative phases with high procedural overlap. CONCLUSION Robust phase recognition models can achieve reasonable, albeit phase-specific, accuracy through transfer learning and co-training between two related procedures, even when exposed to small amounts of training data for the target procedure. Further work is required to determine the appropriate data amounts, key characteristics of the training procedure, and temporal annotation methods required for successful transferal phase recognition. Transfer learning across procedures may increase data efficiency where only small datasets are available. Finally, to enable the surgical application of AI for intraoperative risk mitigation, coverage of rare, specialized procedures needs to be explored.
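For orientation, a minimal PyTorch sketch of a CNN + LSTM phase recognizer of the kind the abstract describes, with source-procedure weights reused for a target procedure. Layer sizes, the number of phases, and the checkpoint path are illustrative assumptions, not TEsoNet's actual configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PhaseRecognizer(nn.Module):
    def __init__(self, num_phases: int, hidden: int = 256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # 512-d per-frame features
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_phases)

    def forward(self, clips):                # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)            # temporal context over the clip
        return self.head(out)                # per-frame phase logits

# Transfer idea: start from weights trained on the source procedure, then
# fine-tune on the smaller target dataset with a new output layer.
model = PhaseRecognizer(num_phases=10)
# state = torch.load("sleeve_gastrectomy.pt")   # hypothetical checkpoint
# model.load_state_dict(state, strict=False)    # head shape differs -> strict=False
```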
Affiliation(s)
- J A Eckhoff: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA 02114, USA; Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937 Cologne, Germany
- Y Ban: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA 02114, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA 02139, USA
- G Rosman: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA 02114, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA 02139, USA
- D T Müller: Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937 Cologne, Germany
- D A Hashimoto: Department of Surgery, University Hospitals Cleveland Medical Center, Cleveland, OH 44106, USA; Department of Surgery, Case Western Reserve School of Medicine, Cleveland, OH 44106, USA
- E Witkowski: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA 02114, USA
- B Babic: Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937 Cologne, Germany
- D Rus: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA 02139, USA
- C Bruns: Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937 Cologne, Germany
- H F Fuchs: Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937 Cologne, Germany
- O Meireles: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA 02114, USA

3. Shinozuka K, Turuda S, Fujinaga A, Nakanuma H, Kawamura M, Matsunobu Y, Tanaka Y, Kamiyama T, Ebe K, Endo Y, Etoh T, Inomata M, Tokuyasu T. Artificial intelligence software available for medical devices: surgical phase recognition in laparoscopic cholecystectomy. Surg Endosc 2022; 36:7444-7452. [PMID: 35266049] [PMCID: PMC9485170] [DOI: 10.1007/s00464-022-09160-7]
Abstract
Background Surgical process modeling automatically identifies surgical phases, and further improvements in recognition accuracy are expected with deep learning. Surgical tool and time-series information have been used to improve recognition accuracy, but such information is difficult to collect continuously during an operation. The present study aimed to develop a deep convolutional neural network (CNN) model that correctly identifies the surgical phase during laparoscopic cholecystectomy (LC). Methods We divided LC into six surgical phases (P1-P6) and one redundant phase (P0). We prepared 115 LC videos and converted them to image frames at 3 fps. Three experienced doctors labeled the surgical phase in every image frame. Our deep CNN model was trained on 106 of the 115 annotated videos and evaluated on the remaining nine. By relying on both the prediction probability and the prediction frequency over a fixed time window, we aimed for highly accurate surgical phase recognition in the operating room. Results Nine full LC videos were converted into image frames and fed to the deep CNN model. The average accuracy, precision, and recall were 0.970, 0.855, and 0.863, respectively. Conclusion The deep CNN model in this study successfully identified the six surgical phases as well as the redundant phase P0, which may increase the versatility of surgical process recognition models for clinical use. We believe this model can be used in artificial intelligence for medical devices. Recognition accuracy is expected to improve further with advances in deep learning algorithms.
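The combination of prediction probability and frequency over a time window can be illustrated with a simple post-processing routine. The sketch below is an assumption about how such stabilization might look (window length and confidence threshold are invented), not the paper's implementation.

```python
import numpy as np

def smooth_phases(probs: np.ndarray, window: int = 30, min_conf: float = 0.5):
    """probs: (n_frames, n_phases) softmax outputs, e.g. at 3 fps."""
    raw = probs.argmax(axis=1)
    smoothed = raw.copy()
    for i in range(len(raw)):
        lo = max(0, i - window + 1)
        votes = np.bincount(raw[lo:i + 1], minlength=probs.shape[1])
        majority = int(votes.argmax())
        # keep the frame-level prediction only if it is confident AND agrees
        # with the recent majority; otherwise fall back to the majority phase
        if probs[i, raw[i]] < min_conf or raw[i] != majority:
            smoothed[i] = majority
    return smoothed
```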
Affiliation(s)
- Ken'ichi Shinozuka: Faculty of Information Engineering, Department of Information and Systems Engineering, Fukuoka Institute of Technology, 1-30-1 Wajiro-higashi, Higashi-ku, Fukuoka, Fukuoka 811-0295, Japan
- Sayaka Turuda: Faculty of Information Engineering, Department of Information and Systems Engineering, Fukuoka Institute of Technology, 1-30-1 Wajiro-higashi, Higashi-ku, Fukuoka, Fukuoka 811-0295, Japan
- Atsuro Fujinaga: Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, Oita, Japan
- Hiroaki Nakanuma: Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, Oita, Japan
- Masahiro Kawamura: Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, Oita, Japan
- Yusuke Matsunobu: Faculty of Information Engineering, Department of Information and Systems Engineering, Fukuoka Institute of Technology, 1-30-1 Wajiro-higashi, Higashi-ku, Fukuoka, Fukuoka 811-0295, Japan
- Yuki Tanaka: Customer Solutions Development, Platform Technology, Olympus Technologies Asia, Olympus Corporation, Tokyo, Japan
- Toshiya Kamiyama: Customer Solutions Development, Platform Technology, Olympus Technologies Asia, Olympus Corporation, Tokyo, Japan
- Kohei Ebe: Customer Solutions Development, Platform Technology, Olympus Technologies Asia, Olympus Corporation, Tokyo, Japan
- Yuichi Endo: Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, Oita, Japan
- Tsuyoshi Etoh: Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, Oita, Japan
- Masafumi Inomata: Faculty of Medicine, Department of Gastroenterological and Pediatric Surgery, Oita University, Oita, Japan
- Tatsushi Tokuyasu: Faculty of Information Engineering, Department of Information and Systems Engineering, Fukuoka Institute of Technology, 1-30-1 Wajiro-higashi, Higashi-ku, Fukuoka, Fukuoka 811-0295, Japan

4. Zhang Y, Marsic I, Burd RS. Real-time medical phase recognition using long-term video understanding and progress gate method. Med Image Anal 2021; 74:102224. [PMID: 34543914] [PMCID: PMC8560574] [DOI: 10.1016/j.media.2021.102224]
Abstract
We introduce a real-time system for recognizing five phases of the trauma resuscitation process, the initial management of injured patients in the emergency department. We used depth videos as input to preserve the privacy of patients and providers. The depth videos were recorded with a Kinect-v2 mounted on the sidewall of the room. Our dataset consisted of 183 depth videos of trauma resuscitations; the model was trained on 150 cases, each longer than 30 minutes, and tested on the remaining 33. We introduce a reduced long-term operation (RLO) method for extracting features from long video segments and combine it with a regular model that uses short-term information only. The model with RLO outperformed the regular short-term model by 5% in accuracy. We also introduce a progress gate (PG) method that uses video progress to distinguish visually similar phases. The final system achieved 91% accuracy and significantly outperformed previous phase recognition systems in this setting.
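A crude sketch of the progress-gate idea: suppress phases that are implausible at the current point of the video, using normalized progress. The per-phase progress windows and the hard gating below are invented for illustration; the paper's gate is part of the learned model rather than hard-coded.

```python
import numpy as np

# hypothetical plausible (start, end) progress range for each of five phases
PHASE_WINDOWS = [(0.0, 0.3), (0.1, 0.5), (0.3, 0.7), (0.5, 0.9), (0.7, 1.0)]

def progress_gate(probs: np.ndarray, progress: float) -> np.ndarray:
    """probs: (n_phases,) scores; progress: elapsed / expected duration in [0, 1]."""
    gate = np.array([lo <= progress <= hi for lo, hi in PHASE_WINDOWS], float)
    gated = probs * gate
    return gated / gated.sum() if gated.sum() > 0 else probs
```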
Affiliation(s)
- Yanyi Zhang: Department of Electrical and Computer Engineering, Rutgers University, Piscataway, NJ 08854, USA
- Ivan Marsic: Department of Electrical and Computer Engineering, Rutgers University, Piscataway, NJ 08854, USA
- Randall S Burd: Division of Trauma and Burn Surgery, Children's National Medical Center, Washington, DC 20010, USA

5. Lecuyer G, Ragot M, Martin N, Launay L, Jannin P. Assisted phase and step annotation for surgical videos. Int J Comput Assist Radiol Surg 2020; 15:673-680. [PMID: 32040704] [DOI: 10.1007/s11548-019-02108-8]
Abstract
PURPOSE Annotating surgical videos is a time-consuming task that requires specific knowledge. In this paper, we present and evaluate a deep learning-based method that pre-annotates the phases and steps in surgical videos and assists the user during the annotation process. METHODS We propose a classification function that automatically detects errors and infers temporal coherence in predictions made by a convolutional neural network. First, we trained three different neural network architectures to assess the method on two surgical procedures: cholecystectomy and cataract surgery. The method was then implemented in annotation software to test its ability to assist surgical video annotation. In a user study conducted to validate the approach, participants annotated the phases and steps of a cataract surgery video; annotation accuracy and completion time were recorded. RESULTS Participants who used the assistance system were 7% more accurate on step annotation and 10 min faster than participants who annotated manually. Questionnaire results showed that the assistance system neither disturbed the participants nor complicated the task. CONCLUSION Annotation is a difficult and time-consuming process essential for training deep learning algorithms. We propose a method to assist the annotation of surgical workflows, validated through a user study: the assistance system significantly reduced annotation time and improved accuracy.
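One way such error detection via temporal coherence can work is to flag predicted segments that are implausibly short or that violate an expected phase order, so the annotator reviews only those spans. The sketch below is a plausible reading, with a hypothetical transition graph and minimum duration, not the paper's classification function.

```python
# hypothetical allowed-transition graph and minimum segment length (frames)
ALLOWED_NEXT = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {3}}
MIN_FRAMES = 15

def flag_incoherent(preds: list[int]) -> list[int]:
    """Return frame indices whose predictions look temporally suspicious."""
    flagged, start = [], 0
    for i in range(1, len(preds) + 1):
        if i == len(preds) or preds[i] != preds[start]:
            if i - start < MIN_FRAMES:          # too-short, likely spurious segment
                flagged.extend(range(start, i))
            elif i < len(preds) and preds[i] not in ALLOWED_NEXT.get(preds[start], set()):
                flagged.append(i)               # transition violates expected order
            start = i
    return flagged
```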

6. Kitaguchi D, Takeshita N, Matsuzaki H, Takano H, Owada Y, Enomoto T, Oda T, Miura H, Yamanashi T, Watanabe M, Sato D, Sugomori Y, Hara S, Ito M. Real-time automatic surgical phase recognition in laparoscopic sigmoidectomy using the convolutional neural network-based deep learning approach. Surg Endosc 2020; 34:4924-4931. [PMID: 31797047] [DOI: 10.1007/s00464-019-07281-0]
Abstract
BACKGROUND Automatic surgical workflow recognition is a key component of context-aware computer-assisted surgery (CA-CAS) systems, but automatic surgical phase recognition for colorectal surgery has not been reported. We aimed to develop a deep learning model for automatic, real-time surgical phase recognition in laparoscopic sigmoidectomy (Lap-S) videos and to clarify the accuracy of automatic surgical phase and action recognition from visual information alone. METHODS The dataset contained 71 Lap-S cases. The videos were split into static image frames at 30 fps (one frame every 1/30 s). Every video was manually divided into 11 surgical phases (Phases 0-10) and manually annotated with the surgical action in every frame. A convolutional neural network (CNN)-based deep learning model was trained on the training data and validated on a set of unseen test data. RESULTS The average surgical time was 175 min (± 43 min SD), with large between-case variation in the duration of individual phases. Each surgery started in the first phase (Phase 0) and ended in the last phase (Phase 10), with phase transitions occurring 14 (± 2 SD) times per procedure on average. The accuracy of automatic surgical phase recognition was 91.9%, and the accuracies of automatic action recognition for extracorporeal action and irrigation were 89.4% and 82.5%, respectively. Moreover, the system performed real-time automatic surgical phase recognition at 32 fps. CONCLUSIONS The CNN-based deep learning approach enabled recognition of surgical phases and actions in 71 Lap-S cases based on manually annotated data, with high accuracy and at a frame rate sufficient for real-time use.
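The frame-splitting step described here is routine; for reference, a minimal OpenCV sketch that writes every frame of a video to disk. The output naming scheme is an assumption, and the paper's actual preprocessing pipeline is not published in this listing.

```python
import cv2

def extract_frames(video_path: str, out_pattern: str = "frame_{:06d}.png") -> int:
    """Write each frame of `video_path` as a still image; return frame count."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()   # one frame per 1/fps of source video
        if not ok:
            break
        cv2.imwrite(out_pattern.format(idx), frame)
        idx += 1
    cap.release()
    return idx
```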

7. Meeuwsen FC, van Luyn F, Blikkendaal MD, Jansen FW, van den Dobbelsteen JJ. Surgical phase modelling in minimal invasive surgery. Surg Endosc 2018; 33:1426-1432. [PMID: 30187202] [PMCID: PMC6484813] [DOI: 10.1007/s00464-018-6417-4]
Abstract
Background Surgical Process Modelling (SPM) offers the possibility to automatically gain insight into the surgical workflow, with the potential to improve OR logistics and surgical care. Most studies have focussed on phase recognition modelling of the laparoscopic cholecystectomy because of its standardised and frequent execution. To demonstrate the broad applicability of SPM, more diverse and complex procedures need to be studied. The aim of this study is to investigate the accuracy with which surgical phases can be recognised and extracted in laparoscopic hysterectomies (LHs), a procedure with inherent variability in procedure time. To show the applicability of the approach, the model was also used to automatically predict surgical end-times. Methods A dataset of 40 video-recorded LHs was manually annotated for instrument use and divided into ten surgical phases. Instrument use provided the feature input for a Random Forest model trained to automatically recognise the surgical phases. Tenfold cross-validation was performed to optimise the model for predicting the surgical end-time throughout the procedure. Results Average surgery time was 128 ± 27 min, with large variability within specific phases. Overall, the Random Forest model reached an accuracy of 77% in recognising the current phase of the procedure. Six of the phases were predicted accurately for over 80% of their duration. When predicting the surgical end-time, the average error throughout the procedure was 16 ± 13 min. Conclusions This study demonstrates an intra-operative approach to recognising surgical phases in 40 laparoscopic hysterectomy cases based on instrument usage data. The model automatically detects surgical phases and generates a solid prediction of the surgical end-time.
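A scikit-learn sketch of this kind of pipeline: a random forest mapping binary instrument-usage features per time step to a phase, plus a naive end-time estimate. The feature layout, synthetic data, and the remaining-time heuristic are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 12))   # stand-in: 12 instruments, on/off per step
y = rng.integers(0, 10, size=5000)        # stand-in: 10 surgical phases

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def predict_end_time(features, elapsed_min, mean_phase_end_fraction):
    """Crude end-time estimate: elapsed time / typical progress at the predicted phase."""
    phase = int(clf.predict(features.reshape(1, -1))[0])
    return elapsed_min / mean_phase_end_fraction[phase]
```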
Affiliation(s)
- F C Meeuwsen: Department of Biomechanical Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands
- F van Luyn: Department of Biomechanical Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands
- M D Blikkendaal: Department of Gynecology, Leiden University Medical Center (LUMC), Albinusdreef 2, 2333 ZA Leiden, The Netherlands
- F W Jansen: Department of Gynecology, Leiden University Medical Center (LUMC), Albinusdreef 2, 2333 ZA Leiden, The Netherlands
- J J van den Dobbelsteen: Department of Biomechanical Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands

8. Philipp P, Maleshkova M, Katic D, Weber C, Götz M, Rettinger A, Speidel S, Kämpgen B, Nolden M, Wekerle AL, Dillmann R, Kenngott H, Müller B, Studer R. Toward cognitive pipelines of medical assistance algorithms. Int J Comput Assist Radiol Surg 2016; 11:1743-1753. [PMID: 26646415] [DOI: 10.1007/s11548-015-1322-y]
Abstract
PURPOSE Assistance algorithms for medical tasks have great potential to support physicians in their daily work. However, medicine is also one of the most demanding domains for computer-based support systems, since medical assistance tasks are complex and the practical experience of the physician is crucial. Recent developments in cognitive computing appear well suited to tackle medicine as an application domain. METHODS We propose a system, based on the idea of cognitive computing, consisting of auto-configurable medical assistance algorithms and their self-adapting combination. The system enables the automatic execution of new algorithms, provided they are made available as Medical Cognitive Apps and registered in a central semantic repository. Learning components can be added to the system to optimize the results when numerous Medical Cognitive Apps are available for the same task. Our prototypical implementation is applied to surgical phase recognition based on sensor data and to image processing for tumor progression mapping. RESULTS Our results suggest that such assistance algorithms can be automatically configured into execution pipelines, that candidate results can be automatically scored and combined, and that the system can learn from experience. Furthermore, our evaluation shows that the Medical Cognitive Apps provide the same correct results as in local execution and run in a reasonable amount of time. CONCLUSION The proposed solution is applicable to a variety of medical use cases and effectively supports the automated, self-adaptive configuration of cognitive pipelines based on medical interpretation algorithms.
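The register-then-combine pattern can be illustrated in a few lines: apps register for a task in a central repository; the pipeline runs all candidates and keeps the best-scoring result. This is a minimal sketch of the idea only; the registry name, decorator, and scoring hook are invented, and the paper's system uses a semantic repository rather than an in-process dict.

```python
REGISTRY: dict[str, list] = {}

def register(task: str):
    """Decorator that registers an app (callable) for a named task."""
    def deco(fn):
        REGISTRY.setdefault(task, []).append(fn)
        return fn
    return deco

def run_task(task: str, data, score):
    """Execute every registered app for `task`; return the best-scoring result."""
    results = [app(data) for app in REGISTRY.get(task, [])]
    return max(results, key=score) if results else None

@register("phase_recognition")
def baseline_app(data):
    return {"phase": 0, "confidence": 0.5}   # stand-in app for demonstration
```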