1. Hulstaert L, Twick I, Sarsour K, Verstraete H. Enhancing site selection strategies in clinical trial recruitment using real-world data modeling. PLoS One 2024; 19:e0300109. PMID: 38466688; PMCID: PMC10927105; DOI: 10.1371/journal.pone.0300109.
Abstract
Slow patient enrollment, or failing to enroll the required number of patients, is a disruptor of clinical trial timelines. To meet the planned trial recruitment, site selection strategies are used during clinical trial planning to identify research sites that are most likely to recruit a sufficiently high number of subjects within trial timelines. We developed a machine learning approach that outperforms baseline methods in ranking research sites by their expected recruitment in future studies. Indication-level historical recruitment and real-world data are used in the machine learning approach to predict patient enrollment at site level. We define covariates based on published recruitment hypotheses and examine the effect of these covariates in predicting patient enrollment. We compare model performance of a linear and a non-linear machine learning model with common industry baselines that are constructed from historical recruitment data. Performance of the methodology is evaluated and reported for two disease indications, inflammatory bowel disease and multiple myeloma, both of which are actively being pursued in clinical development. We validate recruitment hypotheses by reviewing the covariates' relationship with patient recruitment. For both indications, the non-linear model significantly outperforms the baselines and the linear model on the test set. In this paper, we present a machine learning approach to site selection that incorporates site-level recruitment and real-world patient data. The model ranks research sites by predicting the number of recruited patients, and our results suggest that the model can improve site ranking compared to common industry baselines.
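The ranking step described in the abstract can be illustrated with a minimal sketch. Everything below (the covariate names, the weights, and the linear score standing in for the paper's non-linear model) is invented for illustration and is not the authors' implementation:

```python
# Hypothetical sketch: rank candidate research sites by predicted enrollment.
# Covariates and weights are illustrative stand-ins, not the paper's model.

def predict_enrollment(site, weights):
    """Linear score over site-level covariates (a simple stand-in for the
    paper's non-linear model)."""
    return sum(weights[k] * site[k] for k in weights)

def rank_sites(sites, weights):
    """Return site ids ordered by descending predicted enrollment."""
    return sorted(sites, key=lambda s: predict_enrollment(sites[s], weights),
                  reverse=True)

# Toy site-level data: historical recruitment rate and real-world patient pool.
sites = {
    "site_a": {"historical_rate": 2.1, "eligible_patients": 120},
    "site_b": {"historical_rate": 0.4, "eligible_patients": 300},
    "site_c": {"historical_rate": 1.5, "eligible_patients": 80},
}
weights = {"historical_rate": 1.0, "eligible_patients": 0.01}

print(rank_sites(sites, weights))  # best-ranked site first
```

In the real approach the scoring function is a fitted non-linear model rather than fixed weights, but the ranking interface is the same: score every candidate site, then sort.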
Affiliation(s)
- Lars Hulstaert
- R&D Data Science & Digital Health, Janssen-Cilag GmbH, Neuss, North Rhine-Westphalia, Germany
- Isabell Twick
- R&D Data Science & Digital Health, Janssen-Cilag GmbH, Neuss, North Rhine-Westphalia, Germany
- Khaled Sarsour
- R&D Data Science & Digital Health, Janssen Pharmaceuticals, Titusville, New Jersey, United States of America
- Hans Verstraete
- R&D Data Science & Digital Health, Janssen Pharmaceutica NV, Beerse, Antwerp, Belgium
2. Woods MS, Ekstrom V, Darer JD, Tonkel J, Twick I, Ramshaw B, Nissan A, Assaf D. A Practical Approach to Predicting Surgical Site Infection Risk Among Patients Before Leaving the Operating Room. Cureus 2023; 15:e42085. PMID: 37602114; PMCID: PMC10434973; DOI: 10.7759/cureus.42085.
Abstract
A surgical site infection (SSI) prediction model that identifies at-risk patients before leaving the operating room can support efforts to improve patient safety. In this study, eight pre-operative and five perioperative patient- and procedure-specific characteristics were tested with two scoring algorithms: 1) a count of positive factors (manual), and 2) a logistic regression model (automated). Models were developed and validated using data from 3,440 general and oncologic surgical patients. In the automated algorithm, two pre-operative risk factors (procedure urgency, odds ratio [OR]: 1.7; and antibiotic administration >2 hours before incision, OR: 1.6) and three intraoperative risk factors (open surgery [OR: 3.7], high-risk procedure [OR: 3.5], and operative time [OR: 2.6]) were associated with SSI risk. The manual score achieved an area under the curve (AUC) of 0.831 and the automated algorithm achieved an AUC of 0.868. Open surgery had the greatest impact on prediction, followed by procedure risk, operative time, and procedure urgency. At 80% sensitivity, the manual and automated scores achieved a positive predictive value of 16.3% and 22.0%, respectively. Both the manual and automated SSI risk prediction algorithms accurately identified at-risk populations. Use of either model before the patient leaves the operating room can provide the clinical team with evidence-based guidance to consider proactive intervention to prevent SSIs.
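The two scoring schemes can be sketched as follows. The coefficients reuse the reported odds ratios as log-odds weights, which is a simplification; the intercept, factor names, and patient record are hypothetical, not the study's fitted model:

```python
import math

# Illustrative only: factor names, the example patient, and the intercept
# are invented. Coefficients reuse the paper's reported ORs as log-odds
# weights, a simplification of the actual fitted model.

def manual_score(factors):
    """Manual algorithm: count of positive risk factors."""
    return sum(1 for present in factors.values() if present)

def logistic_risk(factors, coef, intercept):
    """Automated algorithm: logistic regression over the same factors."""
    z = intercept + sum(coef[k] for k, present in factors.items() if present)
    return 1.0 / (1.0 + math.exp(-z))

patient = {"open_surgery": True, "high_risk_procedure": True,
           "long_operative_time": False, "urgent_procedure": False,
           "early_antibiotic_administration": True}
coef = {"open_surgery": math.log(3.7), "high_risk_procedure": math.log(3.5),
        "long_operative_time": math.log(2.6), "urgent_procedure": math.log(1.7),
        "early_antibiotic_administration": math.log(1.6)}

print(manual_score(patient))  # 3 positive factors
print(round(logistic_risk(patient, coef, intercept=-4.0), 3))
```

The manual count is trivially computable at the bedside, while the logistic model weighs each factor by its association strength, which is why the automated score achieved the higher AUC.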
Affiliation(s)
- Jonathan D Darer
- Medical and Innovation Director, Health Analytics LLC, Maryland, USA
- Jacqueline Tonkel
- Senior Vice President, Client Engagement Clinical Transformation, Caresyntax Corp, Boston, USA
- Aviram Nissan
- Department of General and Oncological Surgery - Surgery C, Chaim Sheba Medical Center, Tel Aviv, Israel
- Dan Assaf
- Department of General and Oncological Surgery - Surgery C, Chaim Sheba Medical Center, Tel Aviv, Israel
3. Wagner M, Müller-Stich BP, Kisilenko A, Tran D, Heger P, Mündermann L, Lubotsky DM, Müller B, Davitashvili T, Capek M, Reinke A, Reid C, Yu T, Vardazaryan A, Nwoye CI, Padoy N, Liu X, Lee EJ, Disch C, Meine H, Xia T, Jia F, Kondo S, Reiter W, Jin Y, Long Y, Jiang M, Dou Q, Heng PA, Twick I, Kirtac K, Hosgor E, Bolmgren JL, Stenzel M, von Siemens B, Zhao L, Ge Z, Sun H, Xie D, Guo M, Liu D, Kenngott HG, Nickel F, Frankenberg MV, Mathis-Ullrich F, Kopp-Schneider A, Maier-Hein L, Speidel S, Bodenstedt S. Comparative validation of machine learning algorithms for surgical workflow and skill analysis with the HeiChole benchmark. Med Image Anal 2023; 86:102770. PMID: 36889206; DOI: 10.1016/j.media.2023.102770.
Abstract
PURPOSE Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve the training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open, single-center video dataset. In this work we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, sub-challenge for surgical workflow and skill analysis. Here, 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument and/or skill assessment. RESULTS F1-scores were achieved for phase recognition between 23.9% and 67.7% (n = 9 teams), for instrument presence detection between 38.5% and 63.8% (n = 8 teams), but for action recognition only between 21.8% and 23.3% (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.
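The framewise phase-recognition metric reported above (F1 per phase, averaged over phases) can be sketched minimally; the phase names and label sequences below are invented for illustration, not HeiChole data:

```python
# Minimal sketch of framewise phase-recognition scoring: per-phase F1 over
# frame labels, then the unweighted mean over phases. Labels are toy data.

def f1_per_phase(y_true, y_pred, phase):
    """F1 for one phase, treating it as the positive class per frame."""
    tp = sum(t == phase and p == phase for t, p in zip(y_true, y_pred))
    fp = sum(t != phase and p == phase for t, p in zip(y_true, y_pred))
    fn = sum(t == phase and p != phase for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mean_f1(y_true, y_pred):
    """Unweighted mean F1 over all phases present in the reference."""
    phases = sorted(set(y_true))
    return sum(f1_per_phase(y_true, y_pred, ph) for ph in phases) / len(phases)

# One toy "video": ground-truth and predicted phase per frame.
truth = ["prep", "prep", "dissect", "dissect", "dissect", "close"]
pred  = ["prep", "dissect", "dissect", "dissect", "close", "close"]
print(round(mean_f1(truth, pred), 3))
```

Errors concentrated at phase transitions, as in this toy sequence, are a typical failure mode that framewise averaging penalizes.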
Affiliation(s)
- Martin Wagner
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Beat-Peter Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Anna Kisilenko
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Duc Tran
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Patrick Heger
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Lars Mündermann
- Data Assisted Solutions, Corporate Research & Technology, KARL STORZ SE & Co. KG, Dr. Karl-Storz-Str. 34, 78332 Tuttlingen, Germany
- David M Lubotsky
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Benjamin Müller
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Tornike Davitashvili
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Manuela Capek
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Annika Reinke
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany; HIP Helmholtz Imaging Platform, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Im Neuenheimer Feld 205, 69120 Heidelberg, Germany
- Carissa Reid
- Division of Biostatistics, German Cancer Research Center, Im Neuenheimer Feld 280, Heidelberg, Germany
- Tong Yu
- ICube, University of Strasbourg, CNRS, 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, 1 Place de l'hôpital, 67000 Strasbourg, France
- Armine Vardazaryan
- ICube, University of Strasbourg, CNRS, 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, 1 Place de l'hôpital, 67000 Strasbourg, France
- Chinedu Innocent Nwoye
- ICube, University of Strasbourg, CNRS, 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, 1 Place de l'hôpital, 67000 Strasbourg, France
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, 1 Place de l'hôpital, 67000 Strasbourg, France
- Xinyang Liu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, 111 Michigan Ave NW, Washington, DC 20010, USA
- Eung-Joo Lee
- University of Maryland, College Park, 2405 A V Williams Building, College Park, MD 20742, USA
- Constantin Disch
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Str. 2, 28359 Bremen, Germany
- Hans Meine
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Str. 2, 28359 Bremen, Germany; University of Bremen, FB3, Medical Image Computing Group, c/o Fraunhofer MEVIS, Am Fallturm 1, 28359 Bremen, Germany
- Tong Xia
- Lab for Medical Imaging and Digital Surgery, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Fucang Jia
- Lab for Medical Imaging and Digital Surgery, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Satoshi Kondo
- Konica Minolta, Inc., 1-2, Sakura-machi, Takatsuki, Osaka 569-8503, Japan
- Wolfgang Reiter
- Wintegral GmbH, Ehrenbreitsteiner Str. 36, 80993 München, Germany
- Yueming Jin
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Yonghao Long
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Meirui Jiang
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Qi Dou
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Pheng Ann Heng
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Isabell Twick
- Caresyntax GmbH, Komturstr. 18A, 12099 Berlin, Germany
- Kadir Kirtac
- Caresyntax GmbH, Komturstr. 18A, 12099 Berlin, Germany
- Enes Hosgor
- Caresyntax GmbH, Komturstr. 18A, 12099 Berlin, Germany
- Long Zhao
- Hikvision Research Institute, Hangzhou, China
- Zhenxiao Ge
- Hikvision Research Institute, Hangzhou, China
- Haiming Sun
- Hikvision Research Institute, Hangzhou, China
- Di Xie
- Hikvision Research Institute, Hangzhou, China
- Mengqi Guo
- School of Computing, National University of Singapore, Computing 1, No.13 Computing Drive, 117417, Singapore
- Daochang Liu
- National Engineering Research Center of Visual Technology, School of Computer Science, Peking University, Beijing, China
- Hannes G Kenngott
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Felix Nickel
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Moritz von Frankenberg
- Department of Surgery, Salem Hospital of the Evangelische Stadtmission Heidelberg, Zeppelinstrasse 11-33, 69121 Heidelberg, Germany
- Franziska Mathis-Ullrich
- Health Robotics and Automation Laboratory, Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Geb. 40.28, KIT Campus Süd, Engler-Bunte-Ring 8, 76131 Karlsruhe, Germany
- Annette Kopp-Schneider
- Division of Biostatistics, German Cancer Research Center, Im Neuenheimer Feld 280, Heidelberg, Germany
- Lena Maier-Hein
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany; HIP Helmholtz Imaging Platform, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Im Neuenheimer Feld 205, 69120 Heidelberg, Germany; Medical Faculty, Heidelberg University, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
- Stefanie Speidel
- Div. Translational Surgical Oncology, National Center for Tumor Diseases Dresden, Fetscherstraße 74, 01307 Dresden, Germany; Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI) of Technische Universität Dresden, 01062 Dresden, Germany
- Sebastian Bodenstedt
- Div. Translational Surgical Oncology, National Center for Tumor Diseases Dresden, Fetscherstraße 74, 01307 Dresden, Germany; Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI) of Technische Universität Dresden, 01062 Dresden, Germany
4. Lavanchy JL, Zindel J, Kirtac K, Twick I, Hosgor E, Candinas D, Beldi G. Surgical skill assessment using machine learning algorithms. Br J Surg 2021. DOI: 10.1093/bjs/znab202.093.
Abstract
Objective
Surgical skill is correlated with clinical outcomes. The assessment of surgical skill is therefore of major importance for improving clinical outcomes and increasing patient safety. However, surgical skill assessment often lacks objectivity and reproducibility, and it is time-consuming and expensive. We therefore developed an automated surgical skill assessment using machine learning algorithms.
Methods
Surgical skills were assessed in videos of laparoscopic cholecystectomy using a three-step machine learning algorithm. First, a three-dimensional convolutional neural network was trained to localize and classify the instruments within the videos. Second, movement patterns of the instruments were recorded over time and extracted. Third, the movement patterns were correlated with human surgical skill ratings using a linear regression model to predict surgical skill ratings automatically. Machine ratings were compared against human ratings of four board-certified surgeons using a score ranging from 1 (poor skills) to 5 (excellent skills).
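Steps two and three of the described pipeline can be sketched as follows, with the step-one detection network stubbed out as precomputed instrument-tip positions. The feature choice (path length) and the toy data are illustrative assumptions, not the authors' code:

```python
# Sketch of steps 2-3 of the pipeline. Step 1 (the instrument-detecting
# CNN) is stubbed out as precomputed (x, y) tip positions per frame.
# Feature choice and data are invented for illustration.

def path_length(track):
    """Step 2: a motion feature - total distance travelled by the tip."""
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(track, track[1:]))

def fit_linear(xs, ys):
    """Step 3: ordinary least squares, rating ~ a * feature + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Shorter, smoother paths stand in for higher human skill ratings (1-5).
tracks = [[(0, 0), (1, 0), (2, 0)],           # efficient movement
          [(0, 0), (3, 4), (0, 0), (3, 4)]]   # erratic movement
ratings = [5.0, 2.0]

features = [path_length(t) for t in tracks]
a, b = fit_linear(features, ratings)
print(round(a * path_length(tracks[0]) + b, 2))  # predicted rating
```

With two training points the fit is exact; the real model regresses many motion features from many videos against the averaged human ratings.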
Results
Human raters and machine learning algorithms assessed surgical skills in 242 videos. Inter-rater reliability for human raters was excellent (79%, 95% CI 72-85%). Instrument detection showed an average precision of 78% and an average recall of 82%. Machine learning algorithms showed an 87% accuracy in predicting good or poor surgical skills when compared to human raters.
Conclusion
Machine learning algorithms can be trained to distinguish good and poor surgical skills with high accuracy.
This work was published in Sci Rep 11, 5197 (2021). https://doi.org/10.1038/s41598-021-84295-6
Affiliation(s)
- J L Lavanchy
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- J Zindel
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- K Kirtac
- Caresyntax GmbH, Berlin, Germany
- I Twick
- Caresyntax GmbH, Berlin, Germany
- E Hosgor
- Caresyntax GmbH, Berlin, Germany
- D Candinas
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- G Beldi
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
5. Lavanchy JL, Zindel J, Kirtac K, Twick I, Hosgor E, Candinas D, Beldi G. Automation of surgical skill assessment using a three-stage machine learning algorithm. Sci Rep 2021; 11:5197. PMID: 33664317; PMCID: PMC7933408; DOI: 10.1038/s41598-021-84295-6.
Abstract
Surgical skills are associated with clinical outcomes. To improve surgical skills and thereby reduce adverse outcomes, continuous surgical training and feedback are required. Currently, assessment of surgical skills is a manual and time-consuming process which is prone to subjective interpretation. This study aims to automate surgical skill assessment in laparoscopic cholecystectomy videos using machine learning algorithms. To address this, a three-stage machine learning method is proposed: first, a convolutional neural network was trained to identify and localize surgical instruments. Second, motion features were extracted from the detected instrument localizations over time. Third, a linear regression model was trained on the extracted motion features to predict surgical skills. This three-stage modeling approach achieved an accuracy of 87 ± 0.2% in distinguishing good versus poor surgical skill. While the technique cannot yet reliably quantify the degree of surgical skill, it represents an important advance towards automation of surgical skill assessment.
Affiliation(s)
- Joël L Lavanchy
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, 3010, Bern, Switzerland
- Joel Zindel
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, 3010, Bern, Switzerland
- Kadir Kirtac
- Caresyntax, Komturstr. 18A, 12099, Berlin, Germany
- Enes Hosgor
- Caresyntax, Komturstr. 18A, 12099, Berlin, Germany
- Daniel Candinas
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, 3010, Bern, Switzerland
- Guido Beldi
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, 3010, Bern, Switzerland
6. Roß T, Reinke A, Full PM, Wagner M, Kenngott H, Apitz M, Hempe H, Mindroc-Filimon D, Scholz P, Tran TN, Bruno P, Arbeláez P, Bian GB, Bodenstedt S, Bolmgren JL, Bravo-Sánchez L, Chen HB, González C, Guo D, Halvorsen P, Heng PA, Hosgor E, Hou ZG, Isensee F, Jha D, Jiang T, Jin Y, Kirtac K, Kletz S, Leger S, Li Z, Maier-Hein KH, Ni ZL, Riegler MA, Schoeffmann K, Shi R, Speidel S, Stenzel M, Twick I, Wang G, Wang J, Wang L, Wang L, Zhang Y, Zhou YJ, Zhu L, Wiesenfarth M, Kopp-Schneider A, Müller-Stich BP, Maier-Hein L. Comparative validation of multi-instance instrument segmentation in endoscopy: Results of the ROBUST-MIS 2019 challenge. Med Image Anal 2020; 70:101920. PMID: 33676097; DOI: 10.1016/j.media.2020.101920.
Abstract
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer- and robot-assisted interventions. While numerous methods for detecting, segmenting and tracking medical instruments based on endoscopic video images have been proposed in the literature, key limitations remain to be addressed. Firstly, robustness: the reliable performance of state-of-the-art methods when run on challenging images (e.g. in the presence of blood, smoke or motion artifacts). Secondly, generalization: algorithms trained for a specific intervention in a specific hospital should generalize to other interventions or institutions. In an effort to promote solutions for these limitations, we organized the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge as an international benchmarking competition with a specific focus on the robustness and generalization capabilities of algorithms. For the first time in the field of endoscopic image processing, our challenge included a task on binary segmentation and also addressed multi-instance detection and segmentation. The challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures from three different types of surgery. The validation of the competing methods for the three tasks (binary segmentation, multi-instance detection and multi-instance segmentation) was performed in three different stages with an increasing domain gap between the training and the test data. The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap. While the average detection and segmentation quality of the best-performing algorithms is high, future research should concentrate on detection and segmentation of small, crossing, moving and transparent instrument(s) (parts).
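The multi-instance part of such an evaluation can be illustrated with a toy sketch: pairing predicted instrument masks with reference masks by intersection-over-union (IoU). The masks, the threshold, and the greedy matching strategy below are assumptions for illustration, not the challenge's exact protocol:

```python
# Toy sketch of multi-instance matching: greedily pair predicted and
# reference instrument masks by IoU. Masks are sets of pixel coordinates;
# data and threshold are invented for illustration.

def iou(a, b):
    """Intersection over union of two pixel sets."""
    return len(a & b) / len(a | b)

def match_instances(preds, refs, thresh=0.3):
    """Greedy one-to-one matching of predictions to references by IoU."""
    matched, used = [], set()
    for p in preds:
        best, best_iou = None, thresh
        for i, r in enumerate(refs):
            if i not in used and iou(p, r) >= best_iou:
                best, best_iou = i, iou(p, r)
        if best is not None:
            used.add(best)
            matched.append((p, best, best_iou))
    return matched

# Two reference instruments; one prediction overlaps the first, the other
# is a spurious detection far from both.
ref_masks = [{(0, 0), (0, 1), (1, 0)}, {(5, 5), (5, 6)}]
pred_masks = [{(0, 0), (0, 1)}, {(9, 9)}]
matches = match_instances(pred_masks, ref_masks)
print(len(matches))  # one instrument matched, one reference missed
```

Unmatched references count as missed instruments and unmatched predictions as false detections, which is what drives detection scores down as the domain gap grows.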
Affiliation(s)
- Tobias Roß
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany; University of Heidelberg, Germany, Seminarstraße 2, 69117 Heidelberg, Germany.
| | - Annika Reinke
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany; University of Heidelberg, Germany, Seminarstraße 2, 69117 Heidelberg, Germany
| | - Peter M Full
- University of Heidelberg, Germany, Seminarstraße 2, 69117 Heidelberg, Germany; Division of Medical Image Computing (MIC), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
| | - Martin Wagner
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 110, 69120 Heidelberg, Germany
| | - Hannes Kenngott
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 110, 69120 Heidelberg, Germany
| | - Martin Apitz
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 110, 69120 Heidelberg, Germany
| | - Hellena Hempe
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
| | - Diana Mindroc-Filimon
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
| | - Patrick Scholz
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany; HIDSS4Health - Helmholtz Information and Data Science School for Health, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
| | - Thuy Nuong Tran
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
| | - Pierangela Bruno
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany; Department of Mathematics and Computer Science, University of Calabria, 87036 Rende, Italy
| | - Pablo Arbeláez
- Universidad de los Andes, Cra. 1 No 18A - 12, 111711 Bogotá, Colombia
| | - Gui-Bin Bian
- University of Chinese Academy Sciences, 52 Sanlihe Rd., Beijing, China; State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, 100864 Beijing, China
| | - Sebastian Bodenstedt
- National Center for Tumor Diseases (NCT), Partner Site Dresden, Germany: German Cancer Research Center, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; Helmholtz Association/Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Bautzner Landstraße 400, 01328 Dresden, Germany
| | | | | | - Hua-Bin Chen
- University of Chinese Academy Sciences, 52 Sanlihe Rd., Beijing, China; State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, 100864 Beijing, China
| | - Cristina González
- Universidad de los Andes, Cra. 1 No 18A - 12, 111711 Bogotá, Colombia
| | - Dong Guo
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Shahe Campus:No.4, Section 2, North Jianshe Road, 610054
- Qingshuihe Campus:No.2006, Xiyuan Ave, West Hi-Tech Zone, 611731, Chengdu, China
| | - Pål Halvorsen
- SimulaMet, Pilestredet 52, 0167 Oslo, Norway; Oslo Metropolitan University (OsloMet), Pilestredet 52, 0167 Oslo, Norway
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Chung Chi Rd, Ma Liu Shui, Hong Kong, China
| | - Enes Hosgor
- caresyntax, Komturstraße 18A, 12099 Berlin, Germany
| | - Zeng-Guang Hou
- University of Chinese Academy Sciences, 52 Sanlihe Rd., Beijing, China; State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, 100864 Beijing, China
| | - Fabian Isensee
- University of Heidelberg, Germany, Seminarstraße 2, 69117 Heidelberg, Germany; Division of Medical Image Computing (MIC), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
| | - Debesh Jha
- SimulaMet, Pilestredet 52, 0167 Oslo, Norway; Department of Informatics, UIT The Arctic University of Norway, Hansine Hansens vei 54, 9037 Tromsø, Norway
| | - Tingting Jiang
- Institute of Digital Media (NELVT), Peking University, 5 Yiheyuan Rd, Haidian District, 100871 Peking, China
| | - Yueming Jin
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Chung Chi Rd, Ma Liu Shui, Hong Kong, China
| | - Kadir Kirtac
- caresyntax, Komturstraße 18A, 12099 Berlin, Germany
- Sabrina Kletz
- Institute of Information Technology, Klagenfurt University, Universitätsstraße 65-67, 9020 Klagenfurt, Austria
- Stefan Leger
- National Center for Tumor Diseases (NCT), Partner Site Dresden, Germany: German Cancer Research Center, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; Helmholtz Association/Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Bautzner Landstraße 400, 01328 Dresden, Germany
- Zhixuan Li
- Institute of Digital Media (NELVT), Peking University, 5 Yiheyuan Rd, Haidian District, 100871 Peking, China
- Klaus H Maier-Hein
- Division of Medical Image Computing (MIC), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Zhen-Liang Ni
- University of Chinese Academy of Sciences, 52 Sanlihe Rd., Beijing, China; State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, 100864 Beijing, China
- Klaus Schoeffmann
- Institute of Information Technology, Klagenfurt University, Universitätsstraße 65-67, 9020 Klagenfurt, Austria
- Ruohua Shi
- Institute of Digital Media (NELVT), Peking University, 5 Yiheyuan Rd, Haidian District, 100871 Peking, China
- Stefanie Speidel
- National Center for Tumor Diseases (NCT), Partner Site Dresden, Germany: German Cancer Research Center, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; Helmholtz Association/Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Bautzner Landstraße 400, 01328 Dresden, Germany
- Gutai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Shahe Campus: No.4, Section 2, North Jianshe Road, 610054 Chengdu, China; Qingshuihe Campus: No.2006, Xiyuan Ave, West Hi-Tech Zone, 611731 Chengdu, China
- Jiacheng Wang
- Department of Computer Science, School of Informatics, Xiamen University, 422 Siming South Road, 361005 Xiamen, China
- Liansheng Wang
- Department of Computer Science, School of Informatics, Xiamen University, 422 Siming South Road, 361005 Xiamen, China
- Lu Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Shahe Campus: No.4, Section 2, North Jianshe Road, 610054 Chengdu, China; Qingshuihe Campus: No.2006, Xiyuan Ave, West Hi-Tech Zone, 611731 Chengdu, China
- Yujie Zhang
- Department of Computer Science, School of Informatics, Xiamen University, 422 Siming South Road, 361005 Xiamen, China
- Yan-Jie Zhou
- University of Chinese Academy of Sciences, 52 Sanlihe Rd., Beijing, China; State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, 100864 Beijing, China
- Lei Zhu
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Chung Chi Rd, Ma Liu Shui, Hong Kong, China
- Manuel Wiesenfarth
- Division of Biostatistics, German Cancer Research Center, Im Neuenheimer Feld 581, Heidelberg, Germany
- Annette Kopp-Schneider
- Division of Biostatistics, German Cancer Research Center, Im Neuenheimer Feld 581, Heidelberg, Germany
- Beat P Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 110, 69120 Heidelberg, Germany
- Lena Maier-Hein
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
7
|
Abstract
A ubiquitous feature of an animal's response to an odorant is that it declines when the odorant is frequently or continuously encountered. This decline in olfactory response, termed olfactory habituation, can take temporally and mechanistically distinct forms. The neural circuitry of the olfactory system of the fruit fly Drosophila melanogaster is well defined in terms of its component cells, which are readily accessible to functional studies and genetic manipulation. This makes it a particularly useful preparation for investigating olfactory habituation. In addition, the insect olfactory system shares many architectural and functional similarities with mammalian olfactory systems, suggesting that olfactory mechanisms in insects may be broadly relevant. In this chapter, we discuss the likely mechanisms of olfactory habituation in the context of the participating cell types, their connectivity, and their roles in sensory processing. We review the structure and function of key cell types, the mechanisms by which they are activated, and how they transduce and process odor signals. We then consider how each stage of olfactory processing could potentially contribute to behavioral habituation. After this, we review a variety of recent mechanistic studies that point to an important role for potentiation of inhibitory synapses in the primary olfactory processing center, the antennal lobe, in driving the reduced response to familiar odorants. Following this discussion of mechanisms for short- and long-term olfactory habituation, we end by considering how these mechanisms may be regulated by neuromodulators, which likely play key roles in the induction, gating, or suppression of habituated behavior, and we speculate on the relevance of these processes to other forms of learning and memory.
Affiliation(s)
- Isabell Twick
- School of Genetics and Microbiology and School of Natural Sciences, Smurfit Institute of Genetics, Trinity College Institute of Neuroscience, Trinity College Dublin, Ireland.
- John Anthony Lee
- School of Genetics and Microbiology and School of Natural Sciences, Smurfit Institute of Genetics, Trinity College Institute of Neuroscience, Trinity College Dublin, Ireland.
- Mani Ramaswami
- School of Genetics and Microbiology and School of Natural Sciences, Smurfit Institute of Genetics, Trinity College Institute of Neuroscience, Trinity College Dublin, Ireland; National Centre for Biological Sciences, Bangalore, India.
|