1
Reinke A, Tizabi MD, Baumgartner M, Eisenmann M, Heckmann-Nötzel D, Kavur AE, Rädsch T, Sudre CH, Acion L, Antonelli M, Arbel T, Bakas S, Benis A, Buettner F, Cardoso MJ, Cheplygina V, Chen J, Christodoulou E, Cimini BA, Farahani K, Ferrer L, Galdran A, van Ginneken B, Glocker B, Godau P, Hashimoto DA, Hoffman MM, Huisman M, Isensee F, Jannin P, Kahn CE, Kainmueller D, Kainz B, Karargyris A, Kleesiek J, Kofler F, Kooi T, Kopp-Schneider A, Kozubek M, Kreshuk A, Kurc T, Landman BA, Litjens G, Madani A, Maier-Hein K, Martel AL, Meijering E, Menze B, Moons KGM, Müller H, Nichyporuk B, Nickel F, Petersen J, Rafelski SM, Rajpoot N, Reyes M, Riegler MA, Rieke N, Saez-Rodriguez J, Sánchez CI, Shetty S, Summers RM, Taha AA, Tiulpin A, Tsaftaris SA, Van Calster B, Varoquaux G, Yaniv ZR, Jäger PF, Maier-Hein L. Understanding metric-related pitfalls in image analysis validation. Nat Methods 2024; 21:182-194. [PMID: 38347140] [DOI: 10.1038/s41592-023-02150-0]
Abstract
Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.
Affiliation(s)
- Annika Reinke
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany.
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany.
- Minu D Tizabi
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany.
- Michael Baumgartner
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Matthias Eisenmann
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Doreen Heckmann-Nötzel
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- A Emre Kavur
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
- Tim Rädsch
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany
- Carole H Sudre
- MRC Unit for Lifelong Health and Ageing at UCL and Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Laura Acion
- Instituto de Cálculo, CONICET - Universidad de Buenos Aires, Buenos Aires, Argentina
- Michela Antonelli
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Tal Arbel
- Centre for Intelligent Machines and MILA (Quebec Artificial Intelligence Institute), McGill University, Montréal, Quebec, Canada
- Spyridon Bakas
- Division of Computational Pathology, Dept of Pathology & Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, USA
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Arriel Benis
- Department of Digital Medical Technologies, Holon Institute of Technology, Holon, Israel
- European Federation for Medical Informatics, Le Mont-sur-Lausanne, Switzerland
- Florian Buettner
- German Cancer Consortium (DKTK), partner site Frankfurt/Mainz, a partnership between DKFZ and UCT Frankfurt-Marburg, Frankfurt am Main, Germany
- German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Goethe University Frankfurt, Department of Medicine, Frankfurt am Main, Germany
- Goethe University Frankfurt, Department of Informatics, Frankfurt am Main, Germany
- Frankfurt Cancer Institute, Frankfurt am Main, Germany
- M Jorge Cardoso
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Veronika Cheplygina
- Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Jianxu Chen
- Leibniz-Institut für Analytische Wissenschaften - ISAS - e.V., Dortmund, Germany
- Evangelia Christodoulou
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Beth A Cimini
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, MD, USA
- Luciana Ferrer
- Instituto de Investigación en Ciencias de la Computación (ICC), CONICET-UBA, Ciudad Autónoma de Buenos Aires, Buenos Aires, Argentina
- Adrian Galdran
- Universitat Pompeu Fabra, Barcelona, Spain
- University of Adelaide, Adelaide, South Australia, Australia
- Bram van Ginneken
- Fraunhofer MEVIS, Bremen, Germany
- Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, the Netherlands
- Ben Glocker
- Department of Computing, Imperial College London, South Kensington Campus, London, UK
- Patrick Godau
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- Daniel A Hashimoto
- Department of Surgery, Perelman School of Medicine, Philadelphia, PA, USA
- General Robotics Automation Sensing and Perception Laboratory, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Michael M Hoffman
- Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Merel Huisman
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Fabian Isensee
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
- Pierre Jannin
- Laboratoire Traitement du Signal et de l'Image - UMR_S 1099, Université de Rennes 1, Rennes, France
- INSERM, Paris, France
- Charles E Kahn
- Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
- Dagmar Kainmueller
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Biomedical Image Analysis and HI Helmholtz Imaging, Berlin, Germany
- University of Potsdam, Digital Engineering Faculty, Potsdam, Germany
- Bernhard Kainz
- Department of Computing, Faculty of Engineering, Imperial College London, London, UK
- Department AIBE, Friedrich-Alexander-Universität (FAU), Erlangen-Nürnberg, Germany
- Jens Kleesiek
- Translational Image-guided Oncology (TIO), Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Annette Kopp-Schneider
- German Cancer Research Center (DKFZ) Heidelberg, Division of Biostatistics, Heidelberg, Germany
- Michal Kozubek
- Centre for Biomedical Image Analysis and Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Anna Kreshuk
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
- Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Health Science Center, Stony Brook, NY, USA
- Geert Litjens
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Amin Madani
- Department of Surgery, University Health Network, Philadelphia, PA, USA
- Klaus Maier-Hein
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Anne L Martel
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, Canada
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, UNSW Sydney, Kensington, New South Wales, Australia
- Bjoern Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Karel G M Moons
- Julius Center for Health Sciences and Primary Care, UMC Utrecht, Utrecht University, Utrecht, the Netherlands
- Henning Müller
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
- Medical Faculty, University of Geneva, Geneva, Switzerland
- Brennan Nichyporuk
- MILA (Quebec Artificial Intelligence Institute), Montréal, Quebec, Canada
- Felix Nickel
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Jens Petersen
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Nasir Rajpoot
- Tissue Image Analytics Laboratory, Department of Computer Science, University of Warwick, Coventry, UK
- Mauricio Reyes
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Department of Radiation Oncology, University Hospital Bern, University of Bern, Bern, Switzerland
- Michael A Riegler
- Simula Metropolitan Center for Digital Engineering, Oslo, Norway
- UiT The Arctic University of Norway, Tromsø, Norway
- Julio Saez-Rodriguez
- Institute for Computational Biomedicine, Heidelberg University, Heidelberg, Germany
- Faculty of Medicine, Heidelberg University Hospital, Heidelberg, Germany
- Clara I Sánchez
- Informatics Institute, Faculty of Science, University of Amsterdam, Amsterdam, the Netherlands
- Ronald M Summers
- National Institutes of Health Clinical Center, Bethesda, MD, USA
- Abdel A Taha
- Institute of Information Systems Engineering, TU Wien, Vienna, Austria
- Aleksei Tiulpin
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
- Neurocenter Oulu, Oulu University Hospital, Oulu, Finland
- Ben Van Calster
- Department of Development and Regeneration and EPI-centre, KU Leuven, Leuven, Belgium
- Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, the Netherlands
- Gaël Varoquaux
- Parietal project team, INRIA Saclay-Île de France, Palaiseau, France
- Ziv R Yaniv
- National Institute of Allergy and Infectious Diseases, Bethesda, MD, USA
- Paul F Jäger
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group, Heidelberg, Germany.
- Lena Maier-Hein
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany.
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany.
- Faculty of Medicine, Heidelberg University Hospital, Heidelberg, Germany.
2
Giffard E, Jannin P, Baxter JSH. A preliminary exploration into top-down and bottom-up deep-learning approaches to localising neuro-interventional point targets in volumetric MRI. Int J Comput Assist Radiol Surg 2024; 19:283-296. [PMID: 37815676] [DOI: 10.1007/s11548-023-03023-9]
Abstract
PURPOSE Point localisation is a critical aspect of many interventional planning procedures, specifically representing anatomical regions of interest or landmarks as individual points. This could be seen as analogous to the problem of visual search in cognitive psychology, in which this search is performed either: bottom-up, constructing increasingly abstract and coarse-resolution features over the entire image; or top-down, using contextual cues from the entire image to refine the scope of the region being investigated. Traditional convolutional neural networks use the former, but it is not clear if this is optimal. This article is a preliminary investigation as to how this motivation affects 3D point localisation in neuro-interventional planning. METHODS Two neuro-imaging datasets were collected: one for cortical point localisation for repetitive transcranial magnetic stimulation and the other for sub-cortical anatomy localisation for deep brain stimulation. Four different frameworks were developed using top-down versus bottom-up paradigms as well as representing points as co-ordinates or heatmaps. These networks were applied to point localisation for transcranial magnetic stimulation and subcortical anatomy localisation. These networks were evaluated using cross-validation and a varying number of training datasets to analyse their sensitivity to quantity of training data. RESULTS Each network shows increasing performance as the amount of available training data increases, with the co-ordinate-based top-down network consistently outperforming the others. Specifically, the top-down architectures tend to outperform the bottom-up ones. An analysis of their memory consumption also encourages the top-down co-ordinate based architecture as it requires significantly less memory than either bottom-up architectures or those representing their predictions via heatmaps. 
CONCLUSION This paper is a preliminary foray into a fundamental aspect of machine learning architectural design: that of the top-down/bottom-up divide from cognitive psychology. Although there are additional considerations within the particular architectures investigated that could affect these results and the number of architectures investigated is limited, our results do indicate that the less commonly used top-down paradigm could lead to more efficient and effective architectures in the future.
Affiliation(s)
- Enora Giffard
- LTSI - INSERM UMR 1099, Université de Rennes, 35000, Rennes, France
- Pierre Jannin
- LTSI - INSERM UMR 1099, Université de Rennes, 35000, Rennes, France
- John S H Baxter
- LTSI - INSERM UMR 1099, Université de Rennes, 35000, Rennes, France.
3
Maier-Hein L, Reinke A, Godau P, Tizabi MD, Buettner F, Christodoulou E, Glocker B, Isensee F, Kleesiek J, Kozubek M, Reyes M, Riegler MA, Wiesenfarth M, Kavur AE, Sudre CH, Baumgartner M, Eisenmann M, Heckmann-Nötzel D, Rädsch T, Acion L, Antonelli M, Arbel T, Bakas S, Benis A, Blaschko MB, Cardoso MJ, Cheplygina V, Cimini BA, Collins GS, Farahani K, Ferrer L, Galdran A, van Ginneken B, Haase R, Hashimoto DA, Hoffman MM, Huisman M, Jannin P, Kahn CE, Kainmueller D, Kainz B, Karargyris A, Karthikesalingam A, Kofler F, Kopp-Schneider A, Kreshuk A, Kurc T, Landman BA, Litjens G, Madani A, Maier-Hein K, Martel AL, Mattson P, Meijering E, Menze B, Moons KGM, Müller H, Nichyporuk B, Nickel F, Petersen J, Rajpoot N, Rieke N, Saez-Rodriguez J, Sánchez CI, Shetty S, van Smeden M, Summers RM, Taha AA, Tiulpin A, Tsaftaris SA, Van Calster B, Varoquaux G, Jäger PF. Metrics reloaded: recommendations for image analysis validation. Nat Methods 2024; 21:195-212. [PMID: 38347141] [DOI: 10.1038/s41592-023-02151-z]
Abstract
Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, chosen performance metrics often do not reflect the domain interest, and thus fail to adequately measure scientific progress and hinder translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint-a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.
Affiliation(s)
- Lena Maier-Hein
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany.
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany.
- Medical Faculty, Heidelberg University, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany.
- Annika Reinke
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany.
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany.
- Patrick Godau
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- Minu D Tizabi
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- Florian Buettner
- German Cancer Consortium (DKTK), partner site Frankfurt/Mainz, a partnership between DKFZ and UCT Frankfurt-Marburg, Frankfurt am Main, Germany
- German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Department of Medicine, Goethe University Frankfurt, Frankfurt am Main, Germany
- Department of Informatics, Goethe University Frankfurt, Frankfurt am Main, Germany
- Frankfurt Cancer Institute, Frankfurt am Main, Germany
- Evangelia Christodoulou
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Ben Glocker
- Department of Computing, Imperial College London, South Kensington Campus, London, UK
- Fabian Isensee
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
- Jens Kleesiek
- Institute for AI in Medicine, University Medicine Essen, Essen, Germany
- Michal Kozubek
- Centre for Biomedical Image Analysis and Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Mauricio Reyes
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Department of Radiation Oncology, University Hospital Bern, University of Bern, Bern, Switzerland
- Michael A Riegler
- Simula Metropolitan Center for Digital Engineering, Oslo, Norway
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Manuel Wiesenfarth
- German Cancer Research Center (DKFZ) Heidelberg, Division of Biostatistics, Heidelberg, Germany
- A Emre Kavur
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
- Carole H Sudre
- MRC Unit for Lifelong Health and Ageing at UCL and Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Michael Baumgartner
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Matthias Eisenmann
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Doreen Heckmann-Nötzel
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- Tim Rädsch
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany
- Laura Acion
- Instituto de Cálculo, CONICET - Universidad de Buenos Aires, Buenos Aires, Argentina
- Michela Antonelli
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Tal Arbel
- Centre for Intelligent Machines and MILA (Québec Artificial Intelligence Institute), McGill University, Montréal, Quebec, Canada
- Spyridon Bakas
- Division of Computational Pathology, Department of Pathology & Laboratory Medicine, Indiana University School of Medicine, IU Health Information and Translational Sciences Building, Indianapolis, IN, USA
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Arriel Benis
- Department of Digital Medical Technologies, Holon Institute of Technology, Holon, Israel
- European Federation for Medical Informatics, Le Mont-sur-Lausanne, Switzerland
- Matthew B Blaschko
- Center for Processing Speech and Images, Department of Electrical Engineering, KU Leuven, Leuven, Belgium
- M Jorge Cardoso
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Veronika Cheplygina
- Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Beth A Cimini
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Gary S Collins
- Centre for Statistics in Medicine, University of Oxford, Nuffield Orthopaedic Centre, Oxford, UK
- Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, MD, USA
- Luciana Ferrer
- Instituto de Investigación en Ciencias de la Computación (ICC), CONICET-UBA, Ciudad Autónoma de Buenos Aires, Buenos Aires, Argentina
- Adrian Galdran
- BCN Medtech, Universitat Pompeu Fabra, Barcelona, Spain
- Australian Institute for Machine Learning AIML, University of Adelaide, Adelaide, South Australia, Australia
- Bram van Ginneken
- Fraunhofer MEVIS, Bremen, Germany
- Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, the Netherlands
- Robert Haase
- Technische Universität (TU) Dresden, DFG Cluster of Excellence 'Physics of Life', Dresden, Germany
- Center for Systems Biology, Dresden, Germany
- Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), Leipzig University, Leipzig, Germany
- Daniel A Hashimoto
- Department of Surgery, Perelman School of Medicine, Philadelphia, PA, USA
- General Robotics Automation Sensing and Perception Laboratory, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Michael M Hoffman
- Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Merel Huisman
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Pierre Jannin
- Laboratoire Traitement du Signal et de l'Image - UMR_S 1099, Université de Rennes 1, Rennes, France
- INSERM, Paris, France
- Charles E Kahn
- Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
- Dagmar Kainmueller
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Biomedical Image Analysis and HI Helmholtz Imaging, Berlin, Germany
- Digital Engineering Faculty, University of Potsdam, Potsdam, Germany
- Bernhard Kainz
- Department of Computing, Faculty of Engineering, Imperial College London, London, UK
- Department AIBE, Friedrich-Alexander-Universität (FAU), Erlangen-Nürnberg, Germany
- Annette Kopp-Schneider
- German Cancer Research Center (DKFZ) Heidelberg, Division of Biostatistics, Heidelberg, Germany
- Anna Kreshuk
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
- Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Health Science Center, Stony Brook, NY, USA
- Geert Litjens
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Amin Madani
- Department of Surgery, University Health Network, Philadelphia, PA, USA
- Klaus Maier-Hein
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Anne L Martel
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, Canada
- Peter Mattson
- Google, 1600 Amphitheatre Pkwy, Mountain View, CA, USA
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, UNSW Sydney, Kensington, New South Wales, Australia
- Bjoern Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Karel G M Moons
- Julius Center for Health Sciences and Primary Care, UMC Utrecht, Utrecht University, Utrecht, the Netherlands
- Henning Müller
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
- Medical Faculty, University of Geneva, Geneva, Switzerland
- Brennan Nichyporuk
- MILA (Québec Artificial Intelligence Institute), Montréal, Quebec, Canada
- Felix Nickel
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Jens Petersen
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Nasir Rajpoot
- Tissue Image Analytics Laboratory, Department of Computer Science, University of Warwick, Coventry, UK
- Julio Saez-Rodriguez
- Institute for Computational Biomedicine, Heidelberg University, Heidelberg, Germany
- Faculty of Medicine, Heidelberg University Hospital, Heidelberg, Germany
- Clara I Sánchez
- Informatics Institute, Faculty of Science, University of Amsterdam, Amsterdam, the Netherlands
- Maarten van Smeden
- Julius Center for Health Sciences and Primary Care, UMC Utrecht, Utrecht University, Utrecht, the Netherlands
- Ronald M Summers
- National Institutes of Health Clinical Center, Bethesda, MD, USA
- Abdel A Taha
- Institute of Information Systems Engineering, TU Wien, Vienna, Austria
- Aleksei Tiulpin
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
- Neurocenter Oulu, Oulu University Hospital, Oulu, Finland
- Ben Van Calster
- Department of Development and Regeneration and EPI-centre, KU Leuven, Leuven, Belgium
- Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, the Netherlands
- Gaël Varoquaux
- Parietal project team, INRIA Saclay-Île de France, Palaiseau, France
- Paul F Jäger
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group, Heidelberg, Germany.
| |
Collapse
|
4
|
Tronchot A, Casy T, Vallee N, Common H, Thomazeau H, Jannin P, Huaulmé A. Virtual reality simulation training improve diagnostic knee arthroscopy and meniscectomy skills: a prospective transfer validity study. J Exp Orthop 2023; 10:138. [PMID: 38095746 PMCID: PMC10721743 DOI: 10.1186/s40634-023-00688-8] [Received: 07/19/2023] [Accepted: 11/13/2023] [Indexed: 12/17/2023]
Abstract
PURPOSE Limited data exist on the actual transfer of skills learned using a virtual reality (VR) simulator for arthroscopy training because studies have mainly focused on VR performance improvement and not on transfer to the real world (transfer validity). The purpose of this single-blinded, controlled trial was to objectively investigate transfer validity in the context of initial knee arthroscopy training. METHODS For this study, 36 junior resident orthopaedic surgeons (postgraduate year one and year two) without prior experience in arthroscopic surgery were enrolled to receive standard knee arthroscopy surgery training (NON-VR group) or standard training plus training on a hybrid virtual reality knee arthroscopy simulator (1 h/month) (VR group). At inclusion, all participants completed a questionnaire on their current arthroscopic technical skills. After 6 months of training, both groups performed three exercises that were evaluated independently by two blinded trainers: i) arthroscopic partial meniscectomy on a bench-top knee simulator; ii) supervised diagnostic knee arthroscopy on a cadaveric knee; and iii) supervised knee partial meniscectomy on a cadaveric knee. Training level was determined with the Arthroscopic Surgical Skill Evaluation Tool (ASSET) score. RESULTS Overall, performance (ASSET scores) was better in the VR group than in the NON-VR group (difference in the global scores: p < 0.001, in bench-top meniscectomy scores: p = 0.03, in diagnostic knee arthroscopy on a cadaveric knee scores: p = 0.04, and in partial meniscectomy on a cadaveric knee scores: p = 0.02). Subgroup analysis by postgraduate year showed that the year-one NON-VR subgroup performed worse than the other subgroups, regardless of the exercise. CONCLUSION This study showed the transferability of the technical skills acquired by novice residents on a hybrid virtual reality simulator to the bench-top and cadaveric models.
Surgical skills acquired with a VR arthroscopy simulator might safely improve arthroscopy competencies in the operating room, while also helping to standardise resident training and track residents' progress. LEVEL OF EVIDENCE: 2
Affiliation(s)
- Alexandre Tronchot
- University Rennes, CHU Rennes, Inserm, LTSI, Equipe MediCIS - UMR 1099, 35000, Rennes, France
- Orthopaedics and Trauma Department, Rennes University Hospital, 2 Rue Henri Le Guilloux, 35000, Rennes, France
- Tiphaine Casy
- University Rennes, CHU Rennes, Inserm, LTSI, Equipe MediCIS - UMR 1099, 35000, Rennes, France
- Nicolas Vallee
- University Rennes, CHU Rennes, Inserm, LTSI, Equipe MediCIS - UMR 1099, 35000, Rennes, France
- Orthopaedics and Trauma Department, Rennes University Hospital, 2 Rue Henri Le Guilloux, 35000, Rennes, France
- Harold Common
- Orthopaedics and Trauma Department, Rennes University Hospital, 2 Rue Henri Le Guilloux, 35000, Rennes, France
- Hervé Thomazeau
- University Rennes, CHU Rennes, Inserm, LTSI, Equipe MediCIS - UMR 1099, 35000, Rennes, France
- Orthopaedics and Trauma Department, Rennes University Hospital, 2 Rue Henri Le Guilloux, 35000, Rennes, France
- Pierre Jannin
- University Rennes, CHU Rennes, Inserm, LTSI, Equipe MediCIS - UMR 1099, 35000, Rennes, France
- Arnaud Huaulmé
- University Rennes, CHU Rennes, Inserm, LTSI, Equipe MediCIS - UMR 1099, 35000, Rennes, France
5
Eckhoff JA, Rosman G, Altieri MS, Speidel S, Stoyanov D, Anvari M, Maier-Hein L, März K, Jannin P, Pugh C, Wagner M, Witkowski E, Shaw P, Madani A, Ban Y, Ward T, Filicori F, Padoy N, Talamini M, Meireles OR. SAGES consensus recommendations on surgical video data use, structure, and exploration (for research in artificial intelligence, clinical quality improvement, and surgical education). Surg Endosc 2023; 37:8690-8707. [PMID: 37516693 PMCID: PMC10616217 DOI: 10.1007/s00464-023-10288-3] [Received: 05/04/2023] [Accepted: 07/05/2023] [Indexed: 07/31/2023]
Abstract
BACKGROUND Surgery generates a vast amount of data from each procedure. Video data in particular provides significant value for surgical research, clinical outcome assessment, quality control, and education. The data lifecycle is influenced by various factors, including data structure, acquisition, storage, and sharing; data use and exploration; and finally data governance, which encompasses all ethical and legal regulations associated with the data. There is a universal need among stakeholders in surgical data science to establish standardized frameworks that address all aspects of this lifecycle to ensure data quality and purpose. METHODS Working groups were formed among 48 representatives from academia and industry, including clinicians, computer scientists, and industry representatives. These working groups focused on: Data Use, Data Structure, Data Exploration, and Data Governance. After working group and panel discussions, a modified Delphi process was conducted. RESULTS The resulting Delphi consensus provides conceptualized and structured recommendations for each domain related to surgical video data. We identified the key stakeholders within the data lifecycle and formulated comprehensive, easily understandable, and widely applicable guidelines for data utilization. Standardization of data structure should encompass format and quality, data sources, documentation, metadata, and account for biases within the data. To foster scientific data exploration, datasets should reflect diversity and remain adaptable to future applications. Data governance must be transparent to all stakeholders, addressing legal and ethical considerations surrounding the data. CONCLUSION This consensus presents essential recommendations around the generation of standardized and diverse surgical video databanks, accounting for multiple stakeholders involved in data generation and use throughout its lifecycle.
Following the SAGES annotation framework, we lay the foundation for standardization of data use, structure, and exploration. A detailed exploration of requirements for adequate data governance will follow.
Affiliation(s)
- Jennifer A Eckhoff
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937, Cologne, Germany
- Guy Rosman
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
- Maria S Altieri
- Stony Brook University Hospital, Washington University in St. Louis, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
- Stefanie Speidel
- National Center for Tumor Diseases (NCT), Fiedlerstraße 23, 01307, Dresden, Germany
- Danail Stoyanov
- University College London, 43-45 Foley Street, London, W1W 7TY, UK
- Mehran Anvari
- Center for Surgical Invention and Innovation, Department of Surgery, McMaster University, Hamilton, ON, Canada
- Lena Maier-Hein
- German Cancer Research Center, Deutsches Krebsforschungszentrum (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Keno März
- German Cancer Research Center, Deutsches Krebsforschungszentrum (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Pierre Jannin
- MediCIS, University of Rennes - Campus Beaulieu, 2 Av. du Professeur Léon Bernard, 35043, Rennes, France
- Carla Pugh
- Department of Surgery, Stanford School of Medicine, 291 Campus Drive, Stanford, CA, 94305, USA
- Martin Wagner
- Department of Surgery, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Elan Witkowski
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- Paresh Shaw
- New York University Langone, 530 1st Ave, Floor 12, New York, NY, 10016, USA
- Amin Madani
- Surgical Artificial Intelligence Research Academy, Department of Surgery, University Health Network, Toronto, ON, Canada
- Yutong Ban
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
- Thomas Ward
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- Filippo Filicori
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, New York, NY, USA
- Nicolas Padoy
- IHU Strasbourg - Institute of Image-Guided Surgery, 1 Pl. de L'Hôpital, 67000, Strasbourg, France
- Mark Talamini
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Ozanan R Meireles
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
6
Le Lous M, Beridot C, Baxter JSH, Huaulme A, Vasconcelos F, Stoyanov D, Siassakos D, Jannin P. Physical environment of the operating room during cesarean section: A systematic review. Eur J Obstet Gynecol Reprod Biol 2023; 288:1-6. [PMID: 37406465 DOI: 10.1016/j.ejogrb.2023.06.029] [Received: 05/30/2023] [Accepted: 06/27/2023] [Indexed: 07/07/2023]
Abstract
INTRODUCTION Environmental factors in the operating room during cesarean sections are likely important for both women/birthing people and their babies, but there is currently a lack of rigorous literature about their evaluation. The principal aim of this study was to systematically examine studies published on the physical environment in the obstetrical operating room during c-sections and its impact on maternal and neonatal outcomes. The secondary objective was to identify the sensors used to investigate the operating room environment during cesarean sections. METHODS In this literature review, we searched the MEDLINE database using the following keywords: Cesarean section AND (operating room environment OR Noise OR Music OR Video recording OR Light level OR Gentle OR Temperature OR Motion Data). Eligible studies had to be published in English or French within the past 10 years and had to investigate the operating room environment during cesarean sections in women. For each study, we reported which aspects of the physical environment were investigated in the OR (i.e., noise, music, movement, light or temperature) and the sensors involved. RESULTS Of a total of 105 studies screened, we selected 8 articles from title and abstract in PubMed. This small number shows that the field is poorly investigated. The most evaluated environmental factors to date are operating room noise and temperature, and the presence of music. Few studies used advanced sensors in the operating room to evaluate environmental factors in a more nuanced and complete way. Two studies concern the sound level, four concern music, one concerns temperature and one analyzed the number of entrances/exits into the OR. No study analyzed light level or more fine-grained movement data.
CONCLUSIONS Main findings include increase of noise and motion at specific time-points, for example during delivery or anaesthesia; the positive impact of music on parents and staff alike; and that a warmer theatre is better for babies but more uncomfortable for surgeons.
Affiliation(s)
- Maela Le Lous
- Department of Obstetrics and Gynecology, University Hospital of Rennes, France; LTSI - INSERM UMR 1099, University of Rennes 1, F35000 Rennes, France; Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
- Caroline Beridot
- Department of Obstetrics and Gynecology, University Hospital of Rennes, France
- John S H Baxter
- LTSI - INSERM UMR 1099, University of Rennes 1, F35000 Rennes, France
- Arnaud Huaulme
- LTSI - INSERM UMR 1099, University of Rennes 1, F35000 Rennes, France
- Francisco Vasconcelos
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
- Danail Stoyanov
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom
- Dimitrios Siassakos
- Department of Computer Science, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, United Kingdom; EGA Institute for Women's Health, University College London, London, United Kingdom
- Pierre Jannin
- LTSI - INSERM UMR 1099, University of Rennes 1, F35000 Rennes, France
7
Galuret S, Vallée N, Tronchot A, Thomazeau H, Jannin P, Huaulmé A. Gaze behavior is related to objective technical skills assessment during virtual reality simulator-based surgical training: a proof of concept. Int J Comput Assist Radiol Surg 2023; 18:1697-1705. [PMID: 37286642 DOI: 10.1007/s11548-023-02961-8] [Received: 01/20/2023] [Accepted: 05/16/2023] [Indexed: 06/09/2023]
Abstract
PURPOSE Simulation-based training allows surgical skills to be learned safely. Most virtual reality-based surgical simulators address technical skills without considering non-technical skills, such as gaze use. In this study, we investigated surgeons' visual behavior during virtual reality-based surgical training where visual guidance is provided. Our hypothesis was that the gaze distribution in the environment is correlated with the simulator's technical skills assessment. METHODS We recorded 25 surgical training sessions on an arthroscopic simulator. Trainees were equipped with a head-mounted eye-tracking device. A U-net was trained on two sessions to segment three simulator-specific areas of interest (AoI) and the background, to quantify gaze distribution. We tested whether the percentage of gazes in those areas was correlated with the simulator's scores. RESULTS The neural network was able to segment all AoI with a mean Intersection over Union greater than 94% for each area. The gaze percentage in the AoI differed among trainees. Despite several sources of data loss, we found significant correlations between gaze position and the simulator scores. For instance, trainees obtained better procedural scores when their gaze focused on the virtual assistance (Spearman correlation test, N = 7, r = 0.800, p = 0.031). CONCLUSION Our findings suggest that visual behavior should be quantified for assessing surgical expertise in simulation-based training environments, especially when visual guidance is provided. Ultimately, visual behavior could be used to quantitatively assess surgeons' learning curve and expertise while training on VR simulators, in a way that complements existing metrics.
Affiliation(s)
- Soline Galuret
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Nicolas Vallée
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Orthopedics and Trauma Department, Rennes University Hospital, 35000, Rennes, France
- Alexandre Tronchot
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Orthopedics and Trauma Department, Rennes University Hospital, 35000, Rennes, France
- Hervé Thomazeau
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Orthopedics and Trauma Department, Rennes University Hospital, 35000, Rennes, France
- Pierre Jannin
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Arnaud Huaulmé
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
8
Li C, Liu C, Huaulme A, Zemiti N, Jannin P, Poignet P. sEMG-based Motion Recognition for Robotic Surgery Training - A Preliminary Study. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083107 DOI: 10.1109/embc40787.2023.10340047] [Indexed: 12/18/2023]
Abstract
Robotic surgery represents a major breakthrough in the evolution of medical technology. Accordingly, efficient skill training and assessment methods should be developed to meet surgeons' need to acquire such robotic skills safely over a relatively short learning curve. Different from conventional training and assessment methods, we aim to explore the surface electromyography (sEMG) signal during the training process in order to obtain semantic and interpretable information to help the trainee better understand and improve his/her training performance. As a preliminary study, motion primitive recognition based on the sEMG signal is studied in this work. Using machine learning (ML) techniques, it is shown that the sEMG-based motion recognition method is feasible and promising for hand motions along 3 Cartesian axes in the virtual reality (VR) environment of a commercial robotic surgery training platform, which will hence serve as the basis for new robotic surgical skill assessment criteria and training guidance based on muscle activity information. Considering that certain motion patterns were less accurately recognized than others, more data collection and deep learning-based analysis will be carried out to further improve the recognition accuracy in future research.
9
Huaulmé A, Harada K, Nguyen QM, Park B, Hong S, Choi MK, Peven M, Li Y, Long Y, Dou Q, Kumar S, Lalithkumar S, Hongliang R, Matsuzaki H, Ishikawa Y, Harai Y, Kondo S, Mitsuishi M, Jannin P. PEg TRAnsfer Workflow recognition challenge report: Do multimodal data improve recognition? Comput Methods Programs Biomed 2023; 236:107561. [PMID: 37119774 DOI: 10.1016/j.cmpb.2023.107561] [Received: 04/19/2022] [Revised: 04/06/2023] [Accepted: 04/18/2023] [Indexed: 05/21/2023]
Abstract
BACKGROUND AND OBJECTIVE In order to be context-aware, computer-assisted surgical systems require accurate, real-time automatic surgical workflow recognition. In the past several years, surgical video has been the most commonly used modality for surgical workflow recognition. But with the democratization of robot-assisted surgery, new modalities, such as kinematics, are now accessible. Some previous methods use these new modalities as input for their models, but their added value has rarely been studied. This paper presents the design and results of the "PEg TRAnsfer Workflow recognition" (PETRAW) challenge with the objective of developing surgical workflow recognition methods based on one or more modalities and studying their added value. METHODS The PETRAW challenge included a data set of 150 peg transfer sequences performed on a virtual simulator. This data set included videos, kinematic data, semantic segmentation data, and annotations, which described the workflow at three levels of granularity: phase, step, and activity. Five tasks were proposed to the participants: three were related to the recognition at all granularities simultaneously using a single modality, and two addressed the recognition using multiple modalities. The mean application-dependent balanced accuracy (AD-Accuracy) was used as the evaluation metric because it accounts for class imbalance and is more clinically relevant than a frame-by-frame score. RESULTS Seven teams participated in at least one task, with four participating in every task. The best results were obtained by combining video and kinematic data (AD-Accuracy of between 90% and 93% for the four teams that participated in all tasks). CONCLUSION The improvement of surgical workflow recognition methods using multiple modalities compared with unimodal methods was significant for all teams. However, the longer execution time required for video/kinematic-based methods (compared with kinematic-only methods) must be considered.
Indeed, one must ask if it is wise to increase computing time by 2000 to 20,000% only to increase accuracy by 3%. The PETRAW data set is publicly available at www.synapse.org/PETRAW to encourage further research in surgical workflow recognition.
Affiliation(s)
- Arnaud Huaulmé
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France
- Kanako Harada
- Department of Mechanical Engineering, the University of Tokyo, Tokyo 113-8656, Japan
- Bogyu Park
- VisionAI hutom, Seoul, Republic of Korea
- Yonghao Long
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong
- Qi Dou
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong
- Ren Hongliang
- National University of Singapore, Singapore, Singapore; The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Hiroki Matsuzaki
- National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Yuto Ishikawa
- National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Yuriko Harai
- National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Mamoru Mitsuishi
- Department of Mechanical Engineering, the University of Tokyo, Tokyo 113-8656, Japan
- Pierre Jannin
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France
10
Baxter JSH, Croci S, Delmas A, Bredoux L, Lefaucheur JP, Jannin P. Reference-free Bayesian model for pointing errors of type in neurosurgical planning. Int J Comput Assist Radiol Surg 2023:10.1007/s11548-023-02943-w. [PMID: 37249748 DOI: 10.1007/s11548-023-02943-w] [Received: 03/07/2023] [Accepted: 04/27/2023] [Indexed: 05/31/2023]
Abstract
PURPOSE Many neurosurgical planning tasks rely on identifying points of interest in volumetric images. Often, these points require significant expertise to identify correctly as, in some cases, they are not visible but instead inferred by the clinician. This leads to a high degree of variability between annotators selecting these points. In particular, errors of type are when the experts fundamentally select different points rather than the same point with some inaccuracy. This complicates research as their mean may not reflect any of the experts' intentions nor the ground truth. METHODS We present a regularised Bayesian model for measuring errors of type in pointing tasks. This model is reference-free, in that it does not require a priori knowledge of the ground truth point but instead works on the basis of the level of consensus between multiple annotators. We apply this model to simulated data and clinical data from transcranial magnetic stimulation for chronic pain. RESULTS Our model estimates the probabilities of selecting the correct point in the range of 82.6–88.6% with uncertainties in the range of 2.8–4.0%. This agrees with the literature where ground truth points are known. The uncertainty has not previously been explored in the literature and gives an indication of the dataset's strength. CONCLUSIONS Our reference-free Bayesian framework easily models errors of type in pointing tasks. It allows for clinical studies to be performed with a limited number of annotators where the ground truth is not immediately known, which can be applied widely for better understanding human errors in neurosurgical planning.
Affiliation(s)
- John S H Baxter
- Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Université de Rennes 1, Rennes, France
- Jean-Pascal Lefaucheur
- ENT Team, EA4391, Faculty of Medicine, Paris Est Créteil University, Créteil, France
- Clinical Neurophysiology Unit, Department of Physiology, Henri Mondor Hospital, Hôpitaux de Paris, Créteil, France
- Pierre Jannin
- Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Université de Rennes 1, Rennes, France
11
Nyangoh Timoh K, Huaulme A, Cleary K, Zaheer MA, Lavoué V, Donoho D, Jannin P. A systematic review of annotation for surgical process model analysis in minimally invasive surgery based on video. Surg Endosc 2023:10.1007/s00464-023-10041-w. [PMID: 37157035 DOI: 10.1007/s00464-023-10041-w] [Received: 10/16/2022] [Accepted: 03/25/2023] [Indexed: 05/10/2023]
Abstract
BACKGROUND Annotated data are foundational to applications of supervised machine learning. However, there seems to be a lack of common language used in the field of surgical data science. The aim of this study is to review the process of annotation and semantics used in the creation of surgical process models (SPM) for minimally invasive surgery videos. METHODS For this systematic review, we reviewed articles indexed in the MEDLINE database from January 2000 until March 2022. We selected articles using surgical video annotations to describe a surgical process model in the field of minimally invasive surgery. We excluded studies focusing on instrument detection or recognition of anatomical areas only. The risk of bias was evaluated with the Newcastle-Ottawa Quality Assessment tool. Data from the studies were visually presented in a table using the SPIDER tool. RESULTS Of the 2806 articles identified, 34 were selected for review. Twenty-two were in the field of digestive surgery, six in ophthalmologic surgery only, one in neurosurgery, three in gynecologic surgery, and two in mixed fields. Thirty-one studies (88.2%) were dedicated to phase, step, or action recognition and mainly relied on a very simple formalization (29, 85.2%). Clinical information in the datasets was lacking for studies using available public datasets. The process of annotation for surgical process models was lacking and poorly described, and description of the surgical procedures was highly variable between studies. CONCLUSION Surgical video annotation lacks a rigorous and reproducible framework. This leads to difficulties in sharing videos between institutions and hospitals because of the different languages used. There is a need to develop and use a common ontology to improve libraries of annotated surgical videos.
Affiliation(s)
- Krystel Nyangoh Timoh
- Department of Gynecology and Obstetrics and Human Reproduction, CHU Rennes, Rennes, France
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France
- Laboratoire d'Anatomie et d'Organogenèse, Faculté de Médecine, Centre Hospitalier Universitaire de Rennes, 2 Avenue du Professeur Léon Bernard, 35043, Rennes Cedex, France
- Department of Obstetrics and Gynecology, Rennes Hospital, Rennes, France
- Arnaud Huaulme
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France
- Kevin Cleary
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, 20010, USA
- Myra A Zaheer
- George Washington University School of Medicine and Health Sciences, Washington, DC, USA
- Vincent Lavoué
- Department of Gynecology and Obstetrics and Human Reproduction, CHU Rennes, Rennes, France
- Dan Donoho
- Division of Neurosurgery, Center for Neuroscience, Children's National Hospital, Washington, DC, 20010, USA
- Pierre Jannin
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France
12
Tanguy D, Rametti-Lacroux A, Bouzigues A, Saracino D, Le Ber I, Godefroy V, Morandi X, Jannin P, Levy R, Batrancourt B, Migliaccio R, Azuar C, Dubois B, Lecouturier K, Araujo CM, Janvier E, Jourdain A, Rametti-Lacroux A, Coriou S, Brochard VB, Gaudebout C, Ferrand-Verdejo J, Bonnefous L, Pochan-Leva F, Jeanne L, Joulié M, Provost M, Renaud R, Hachemi S, Guillemot V, Bendetowicz D, Carle G, Socha J, Pineau F, Marin F, Liu Y, Mullot P, Mousli A, Blossier A, Visentin G, Tanguy D, Godefroy V, Sezer I, Boucly M, Cabrol-Douat B, Odobez R, Marque C, Tessereau-Barbot D, Raud A, Funkiewiez A, Chamayou C, Cognat E, Le Bozec M, Bouzigues A, Le Du V, Bombois S, Simard C, Fulcheri P, Guitton H, Peltier C, Lejeune FX, Jorgensen L, Mariani LL, Corvol JC, Valero-Cabre A, Garcin B, Volle E, Le Ber I, Migliaccio R, Levy R. Behavioural disinhibition in frontotemporal dementia investigated within an ecological framework. Cortex 2023; 160:152-166. [PMID: 36658040 DOI: 10.1016/j.cortex.2022.11.013] [Received: 02/02/2022] [Revised: 09/29/2022] [Accepted: 11/09/2022] [Indexed: 12/29/2022]
Abstract
Disinhibition is a core symptom in behavioural variant frontotemporal dementia (bvFTD), particularly affecting the daily lives of both patients and caregivers. Yet, characterisation of inhibition disorders is still unclear and management options for these disorders are limited. Questionnaires currently used to investigate behavioural disinhibition do not differentiate between subtypes of disinhibition, are subject to observation biases, and lack ecological validity. In the present work, we explored disinhibition in an original semi-ecological situation, by distinguishing three categories of disinhibition: compulsivity, impulsivity and social disinhibition. First, we measured prevalence and frequency of these disorders in 23 bvFTD patients and 24 healthy controls (HC) in order to identify the phenotypical heterogeneity of disinhibition. Then, we examined the relationships between these metrics, the neuropsychological scores and the behavioural states to propose a more comprehensive view of these neuropsychiatric manifestations. Finally, we studied the context of occurrence of these disorders by investigating environmental factors potentially promoting or reducing them. As expected, we found that patients were more compulsive, impulsive and socially disinhibited than HC. We found that 48% of patients presented compulsivity (e.g., repetitive actions), 48% impulsivity (e.g., oral production) and 100% of the patient group showed social disinhibition (e.g., disregard for rules or the investigator). Compulsivity was negatively related to emotion recognition. bvFTD patients were less active if not encouraged in an activity, and their social disinhibition decreased as activity increased. Finally, impulsivity and social disinhibition decreased when patients were asked to focus on a task.
Summarising, this study underlines the importance to differentiate subtypes of disinhibition as well as the setting in which they are exhibited, and points to stimulating area for non-pharmacological management.
Affiliation(s)
- Delphine Tanguy
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, FrontLab, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France; Univ Rennes, CHU Rennes, Inserm, LTSI - UMR 1099, Rennes, France
- Armelle Rametti-Lacroux
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, FrontLab, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France
- Arabella Bouzigues
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, FrontLab, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France
- Dario Saracino
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, FrontLab, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France; AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Department of Neurology, IM2A, Paris, France
- Isabelle Le Ber
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, FrontLab, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France; AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Department of Neurology, IM2A, Paris, France
- Valérie Godefroy
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, FrontLab, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France
- Xavier Morandi
- Univ Rennes, CHU Rennes, Inserm, LTSI - UMR 1099, Rennes, France
- Pierre Jannin
- Univ Rennes, CHU Rennes, Inserm, LTSI - UMR 1099, Rennes, France
- Richard Levy
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, FrontLab, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France; Univ Rennes, CHU Rennes, Inserm, LTSI - UMR 1099, Rennes, France; AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Department of Neurology, IM2A, Paris, France
- Bénédicte Batrancourt
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, FrontLab, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France
- Raffaella Migliaccio
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, FrontLab, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France; AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Department of Neurology, IM2A, Paris, France
13
Le Duff M, Michinov E, Bracq MS, Mukae N, Eto M, Descamps J, Hashizume M, Jannin P. Virtual reality environments to train soft skills in medical and nursing education: a technical feasibility study between France and Japan. Int J Comput Assist Radiol Surg 2023. [PMID: 36689148] [DOI: 10.1007/s11548-023-02834-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/29/2022] [Accepted: 01/06/2023] [Indexed: 01/24/2023]
Abstract
PURPOSE To meet the urgent and massive training needs of healthcare professionals, digital technologies are proving increasingly relevant, and the rise of digital training platforms demonstrates their usefulness and potential. However, despite the impact of these platforms on medical skills learning, cultural differences are rarely factored into the implementation of these training environments. METHODS Using the Scrub Nurse Non-Technical Skills Training System (SunSet), we developed a methodology for adapting a virtual reality-based environment and its scenarios from French to Japanese cultural and medical practices. We then conducted a technical feasibility study between France and Japan to assess the acceptance of virtual reality simulations among scrub nurses. RESULTS Acceptance did not differ markedly between the two populations; the only significant between-group difference was in Behavioral Intention, which was higher for the French scrub nurses. In both cases, participants had a positive outlook. CONCLUSION The findings suggest that the methodology we implemented can be used for the cultural adaptation of non-technical skills learning scenarios in virtual environments for the training and assessment of healthcare personnel.
Affiliation(s)
- Marie Le Duff
- Inserm, LTSI - UMR 1099, Université de Rennes, 35000, Rennes, France
- Marie-Stéphanie Bracq
- Inserm, LTSI - UMR 1099, Université de Rennes, 35000, Rennes, France; LP3C (EA 1285), Université de Rennes, 35000, Rennes, France
- Nobutaka Mukae
- Department of Neurosurgery, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Masatoshi Eto
- Department of Urology, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan; Department of Advanced Medicine and Innovative Technology, Kyushu University Hospital, Fukuoka, Japan
- Jeanne Descamps
- Ecole d'Infirmier(e)s de Bloc Opératoire - Pôle de formation des professionnels de santé du CHU de Rennes, Rennes, France
- Makoto Hashizume
- Department of Advanced Medicine and Innovative Technology, Kyushu University Hospital, Fukuoka, Japan
- Pierre Jannin
- Inserm, LTSI - UMR 1099, Université de Rennes, 35000, Rennes, France
14
Lam K, Abràmoff MD, Balibrea JM, Bishop SM, Brady RR, Callcut RA, Chand M, Collins JW, Diener MK, Eisenmann M, Fermont K, Neto MG, Hager GD, Hinchliffe RJ, Horgan A, Jannin P, Langerman A, Logishetty K, Mahadik A, Maier-Hein L, Antona EM, Mascagni P, Mathew RK, Müller-Stich BP, Neumuth T, Nickel F, Park A, Pellino G, Rudzicz F, Shah S, Slack M, Smith MJ, Soomro N, Speidel S, Stoyanov D, Tilney HS, Wagner M, Darzi A, Kinross JM, Purkayastha S. A Delphi consensus statement for digital surgery. NPJ Digit Med 2022; 5:100. [PMID: 35854145] [PMCID: PMC9296639] [DOI: 10.1038/s41746-022-00641-6] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Received: 10/18/2021] [Accepted: 06/24/2022] [Indexed: 12/13/2022]
Abstract
The use of digital technology is increasing rapidly across surgical specialities, yet there is no consensus on the term 'digital surgery'. This matters because digital health technologies present technical, governance and legal challenges that are unique to the surgeon and surgical patient. We aim to define the term digital surgery and the ethical issues surrounding its clinical application, and to identify barriers and research goals for future practice. Thirty-eight international experts, across the fields of surgery, AI, industry, law, ethics and policy, participated in a four-round Delphi exercise. Issues were generated by an expert panel and a public panel through a scoping questionnaire around key themes identified from the literature, and were voted upon in two subsequent questionnaire rounds. Consensus was defined as >70% of the panel deeming a statement important and <30% deeming it unimportant. A final online meeting was held to discuss the consensus statements. The definition of digital surgery as the use of technology for the enhancement of preoperative planning, surgical performance, therapeutic support, or training, to improve outcomes and reduce harm, achieved 100% consensus agreement. We highlight key ethical issues concerning data, privacy, confidentiality and public trust, consent, law, litigation and liability, and commercial partnerships within digital surgery, and identify barriers and research goals for future practice. Developers and users of digital surgery must be aware not only of the ethical issues surrounding digital applications in healthcare but also of the ethical considerations unique to digital surgery. Future research into these issues must involve all digital surgery stakeholders, including patients.
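As a concrete reading of the voting rule in the abstract, the helper below checks the two thresholds. It is our hypothetical illustration (the function name and vote encoding are assumptions), not the panel's actual tooling.

```python
def reaches_consensus(votes):
    """votes: iterable of 'important' / 'unimportant' / 'neutral' ratings.

    Consensus as defined above: >70% of the panel rates the statement
    important AND <30% rates it unimportant.
    """
    votes = list(votes)
    n = len(votes)
    important = votes.count("important") / n
    unimportant = votes.count("unimportant") / n
    return important > 0.70 and unimportant < 0.30

# 38 panellists, as in the Delphi exercise described above
votes = ["important"] * 29 + ["neutral"] * 5 + ["unimportant"] * 4
print(reaches_consensus(votes))  # 29/38 ≈ 76% important, 4/38 ≈ 11% unimportant → True
```

Note that both thresholds are strict inequalities, so a statement rated important by exactly 70% of panellists would not reach consensus under this reading.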
Affiliation(s)
- Kyle Lam
- Department of Surgery and Cancer, Imperial College, London, UK; Institute of Global Health Innovation, Imperial College London, London, UK
- Michael D Abràmoff
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, USA; Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
- José M Balibrea
- Department of Gastrointestinal Surgery, Hospital Clínic de Barcelona, Barcelona, Spain; Universitat de Barcelona, Barcelona, Spain
- Richard R Brady
- Newcastle Centre for Bowel Disease Research Hub, Newcastle University, Newcastle, UK; Department of Colorectal Surgery, Newcastle Hospitals, Newcastle, UK
- Manish Chand
- Department of Surgery and Interventional Sciences, University College London, London, UK
- Justin W Collins
- CMR Surgical Limited, Cambridge, UK; Department of Surgery and Interventional Sciences, University College London, London, UK
- Markus K Diener
- Department of General and Visceral Surgery, University of Freiburg, Freiburg im Breisgau, Germany; Faculty of Medicine, University of Freiburg, Freiburg im Breisgau, Germany
- Matthias Eisenmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Kelly Fermont
- Solicitor of the Senior Courts of England and Wales, Independent Researcher, Bristol, UK
- Manoel Galvao Neto
- Endovitta Institute, Sao Paulo, Brazil; FMABC Medical School, Santo Andre, Brazil
- Gregory D Hager
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, MD, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA
- Alan Horgan
- Department of Colorectal Surgery, Newcastle Hospitals, Newcastle, UK
- Pierre Jannin
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Alexander Langerman
- Otolaryngology, Head & Neck Surgery and Radiology & Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; International Centre for Surgical Safety, Li Ka Shing Knowledge Institute, St. Michael's Hospital, University of Toronto, Toronto, ON, Canada
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany; LKSK Institute of St. Michael's Hospital, Toronto, ON, Canada
- Pietro Mascagni
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France; ICube, University of Strasbourg, Strasbourg, France
- Ryan K Mathew
- School of Medicine, University of Leeds, Leeds, UK; Department of Neurosurgery, Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Beat P Müller-Stich
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany; National Center for Tumor Diseases, Heidelberg, Germany
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), Universität Leipzig, Leipzig, Germany
- Felix Nickel
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Adrian Park
- Department of Surgery, Anne Arundel Medical Center, School of Medicine, Johns Hopkins University, Annapolis, MD, USA
- Gianluca Pellino
- Department of Advanced Medical and Surgical Sciences, Università degli Studi della Campania "Luigi Vanvitelli", Naples, Italy; Colorectal Surgery, Vall d'Hebron University Hospital, Barcelona, Spain
- Frank Rudzicz
- Department of Computer Science, University of Toronto, Toronto, ON, Canada; Vector Institute for Artificial Intelligence, Toronto, ON, Canada; Unity Health Toronto, Toronto, ON, Canada; Surgical Safety Technologies Inc, Toronto, ON, Canada
- Sam Shah
- Faculty of Future Health, College of Medicine and Dentistry, Ulster University, Birmingham, UK
- Mark Slack
- CMR Surgical Limited, Cambridge, UK; Department of Urogynaecology, Addenbrooke's Hospital, Cambridge, UK; University of Cambridge, Cambridge, UK
- Myles J Smith
- The Royal Marsden Hospital, London, UK; Institute of Cancer Research, London, UK
- Naeem Soomro
- Department of Urology, Newcastle Upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Henry S Tilney
- Department of Surgery and Cancer, Imperial College, London, UK; Department of Colorectal Surgery, Frimley Health NHS Foundation Trust, Frimley, UK
- Martin Wagner
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany; National Center for Tumor Diseases, Heidelberg, Germany
- Ara Darzi
- Department of Surgery and Cancer, Imperial College, London, UK; Institute of Global Health Innovation, Imperial College London, London, UK
- James M Kinross
- Department of Surgery and Cancer, Imperial College, London, UK
15
Estudillo-Romero A, Haegelen C, Jannin P, Baxter JSH. Voxel-based diktiometry: Combining convolutional neural networks with voxel-based analysis and its application in diffusion tensor imaging for Parkinson's disease. Hum Brain Mapp 2022; 43:4835-4851. [PMID: 35841274] [PMCID: PMC9582380] [DOI: 10.1002/hbm.26009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/23/2021] [Revised: 06/10/2022] [Accepted: 06/22/2022] [Indexed: 11/11/2022]
Abstract
Extracting population-wise information from medical images, specifically in the neurological domain, is crucial to better understanding disease processes and progression. This is frequently done in a whole-brain voxel-wise manner, in which a population of patients and healthy controls is registered to a common coordinate space and a statistical test is performed on the distribution of image intensities at each location. Although this method has yielded a number of scientific insights, it remains far from clinical applicability, as the differences are often small and do not, in aggregate, permit a high-performing classifier. In this article, we take the opposite approach of using a high-performing classifier, specifically a traditional convolutional neural network, and then extracting insights from it that can be applied in a population-wise manner, a method we call voxel-based diktiometry. We have applied this method to diffusion tensor imaging (DTI) analysis for Parkinson's disease (PD), using the Parkinson's Progression Markers Initiative database. By using the network sensitivity information, we can decompose which elements of the DTI contribute the most to the network's performance, drawing conclusions about diffusion biomarkers for PD based on metrics that are not readily expressed in the voxel-wise approach.
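The sensitivity idea described above can be illustrated with a toy stand-in (our sketch, not the authors' code): treat the magnitude of d(classifier output)/d(voxel) as a voxel-wise contribution map. Here a weighted sum passed through a sigmoid replaces the convolutional network, and finite differences replace automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4, 4))          # fixed "network weights"

def classifier(vol):
    # stand-in classifier: sigmoid of a weighted sum of voxel intensities
    return 1.0 / (1.0 + np.exp(-(vol * w).sum()))

vol = rng.normal(size=(4, 4, 4))        # toy DTI-derived volume
eps = 1e-5
sensitivity = np.zeros_like(vol)
for idx in np.ndindex(vol.shape):
    bumped = vol.copy()
    bumped[idx] += eps
    # forward difference: how much this voxel moves the classifier's output
    sensitivity[idx] = (classifier(bumped) - classifier(vol)) / eps
```

Voxels with large `sensitivity` values are those the classifier relies on most; in the paper this decomposition is computed for a trained CNN over a patient population rather than for a single toy volume.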
Affiliation(s)
- Claire Haegelen
- LTSI-INSERM UMR 1099, Université de Rennes 1, Rennes, France; Département de Neurochirurgie, CHU Rennes, Rennes, France
- Pierre Jannin
- LTSI-INSERM UMR 1099, Université de Rennes 1, Rennes, France
- John S H Baxter
- LTSI-INSERM UMR 1099, Université de Rennes 1, Rennes, France
16
Baxter JSH, Jannin P. Combining simple interactivity and machine learning: a separable deep learning approach to subthalamic nucleus localization and segmentation in MRI for deep brain stimulation surgical planning. J Med Imaging (Bellingham) 2022; 9:045001. [PMID: 35836671] [DOI: 10.1117/1.jmi.9.4.045001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/30/2021] [Accepted: 06/16/2022] [Indexed: 11/14/2022]
Abstract
Purpose: Deep brain stimulation (DBS) is an interventional treatment for some neurological and neurodegenerative diseases. In Parkinson's disease, for example, DBS electrodes are positioned at particular locations within the basal ganglia to alleviate the patient's motor symptoms. These interventions depend greatly on a preoperative planning stage in which potential targets and electrode trajectories are identified in a preoperative MRI. Due to the small size and low contrast of targets such as the subthalamic nucleus (STN), their segmentation is a difficult task. Machine learning provides a potential avenue for development, but it struggles to segment such small structures in volumetric images due to additional problems such as segmentation class imbalance. Approach: We present a two-stage separable learning workflow for STN segmentation consisting of a localization step that detects the STN and crops the image to a small region, and a segmentation step that delineates the structure within that region. The goal of this decoupling is to improve accuracy and efficiency and to provide an intermediate representation that can be easily corrected by a clinical user. This correction capability was then studied through a human-computer interaction experiment with seven novice participants and one expert neurosurgeon. Results: Our two-step segmentation significantly outperforms the comparative registration-based method currently used in the clinic and approaches the fundamental limit on variability due to the image resolution. In addition, the human-computer interaction experiment shows that the interaction mechanism enabled by separating STN segmentation into two steps significantly improves users' ability to correct errors and further improves performance. Conclusions: Our method shows that separable learning is not only feasible for fully automatic STN segmentation but also leads to improved interactivity that can ease its translation into clinical use.
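The localization-then-segmentation decoupling in the Approach can be sketched with toy stand-ins (our illustration: `localize` here is an intensity centre-of-mass heuristic, whereas the paper uses a learned localizer, and the second-stage segmentation network is omitted):

```python
import numpy as np

def localize(volume):
    # Stage 1 stand-in: centre of mass of intensity (a CNN in the real method)
    coords = np.indices(volume.shape).reshape(3, -1)
    weights = volume.ravel() / volume.sum()
    return np.round((coords * weights).sum(axis=1)).astype(int)

def crop(volume, centre, size=8):
    # Stage 2 then only has to delineate the structure inside this small patch,
    # sidestepping the extreme foreground/background imbalance of the full volume
    h = size // 2
    return volume[tuple(slice(max(c - h, 0), c + h) for c in centre)]

vol = np.zeros((32, 32, 32))
vol[14:18, 14:18, 14:18] = 1.0          # toy bright blob standing in for the STN
centre = localize(vol)                   # -> array([16, 16, 16])
patch = crop(vol, centre)                # (8, 8, 8) region around the target
```

The intermediate `centre` is also the representation a clinical user can inspect and correct before the second stage runs, which is the interaction mechanism the paper's experiment evaluates.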
Affiliation(s)
- John S H Baxter
- Université de Rennes 1, Laboratoire Traitement du Signal et de l'Image (INSERM UMR 1099), Rennes, France
- Pierre Jannin
- Université de Rennes 1, Laboratoire Traitement du Signal et de l'Image (INSERM UMR 1099), Rennes, France
17
Tanguy D, Batrancourt B, Estudillo-Romero A, Baxter JSH, Le Ber I, Bouzigues A, Godefroy V, Funkiewiez A, Chamayou C, Volle E, Saracino D, Rametti-Lacroux A, Morandi X, Jannin P, Levy R, Migliaccio R. An ecological approach to identify distinct neural correlates of disinhibition in frontotemporal dementia. Neuroimage Clin 2022; 35:103079. [PMID: 35700600] [PMCID: PMC9194654] [DOI: 10.1016/j.nicl.2022.103079] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/30/2021] [Revised: 05/24/2022] [Accepted: 06/03/2022] [Indexed: 11/27/2022]
Abstract
Disinhibition is a core symptom of many neurodegenerative diseases, particularly frontotemporal dementia, and is a major cause of stress for caregivers. While a distinction between behavioural and cognitive disinhibition is common, an operational definition of behavioural disinhibition is still missing. Furthermore, conventional assessment of behavioural disinhibition, based on questionnaires completed by caregivers, often lacks ecological validity; as a result, its neuroanatomical correlates remain equivocal. In the present work, we used an original behavioural approach in a semi-ecological situation to assess two specific dimensions of behavioural disinhibition: compulsivity and social disinhibition. First, we investigated the disinhibition profile in patients compared to controls. Then, to validate our approach, compulsivity and social disinhibition scores were correlated with classic cognitive tests measuring disinhibition (Hayling Test) and social cognition (mini-Social cognition & Emotional Assessment). Finally, we disentangled the anatomical networks underlying these two subtypes of behavioural disinhibition, taking into account grey matter (voxel-based morphometry) and white matter (diffusion tensor imaging tractography). We included 17 behavioural variant frontotemporal dementia patients and 18 healthy controls. We identified patients as more compulsive and socially disinhibited than controls. We found that behavioural metrics in the semi-ecological task were related to cognitive performance: compulsivity correlated with the Hayling test, and both compulsivity and social disinhibition were associated with the emotion recognition test.
Based on voxel-based morphometry and tractography, compulsivity correlated with atrophy in the bilateral orbitofrontal cortex, the right temporal region and subcortical structures, as well as with alterations of the bilateral cingulum and uncinate fasciculus, the right inferior longitudinal fasciculus and the right arcuate fasciculus. Thus, the network of regions related to compulsivity matched the "semantic appraisal" network. Social disinhibition was associated with bilateral frontal atrophy and impairments in the forceps minor, the bilateral cingulum and the left uncinate fasciculus, regions corresponding to the frontal component of the "salience" network. In summary, this study validates our semi-ecological approach, through the identification of two subtypes of behavioural disinhibition, and highlights different neural networks underlying compulsivity and social disinhibition. Taken together, these findings are promising for clinical practice by providing a better characterisation of inhibition disorders, promoting their detection and, consequently, better-adapted patient management.
Affiliation(s)
- Delphine Tanguy
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France; Univ Rennes, CHU Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- Bénédicte Batrancourt
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France
- John S H Baxter
- Univ Rennes, CHU Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- Isabelle Le Ber
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France; AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Department of Neurology, IM2A, Paris, France
- Arabella Bouzigues
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France
- Valérie Godefroy
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France
- Aurélie Funkiewiez
- AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Department of Neurology, IM2A, Paris, France
- Céline Chamayou
- AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Department of Neurology, IM2A, Paris, France
- Emmanuelle Volle
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France
- Dario Saracino
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France; AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Department of Neurology, IM2A, Paris, France
- Armelle Rametti-Lacroux
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France
- Xavier Morandi
- Univ Rennes, CHU Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- Pierre Jannin
- Univ Rennes, CHU Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- Richard Levy
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France; AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Department of Neurology, IM2A, Paris, France
- Raffaella Migliaccio
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France; AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Department of Neurology, IM2A, Paris, France
18
Affiliation(s)
- John S H Baxter
- Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Universite de Rennes 1, Rennes, France
- Pierre Jannin
- Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Universite de Rennes 1, Rennes, France
19
Maier-Hein L, Eisenmann M, Sarikaya D, März K, Collins T, Malpani A, Fallert J, Feussner H, Giannarou S, Mascagni P, Nakawala H, Park A, Pugh C, Stoyanov D, Vedula SS, Cleary K, Fichtinger G, Forestier G, Gibaud B, Grantcharov T, Hashizume M, Heckmann-Nötzel D, Kenngott HG, Kikinis R, Mündermann L, Navab N, Onogur S, Roß T, Sznitman R, Taylor RH, Tizabi MD, Wagner M, Hager GD, Neumuth T, Padoy N, Collins J, Gockel I, Goedeke J, Hashimoto DA, Joyeux L, Lam K, Leff DR, Madani A, Marcus HJ, Meireles O, Seitel A, Teber D, Ückert F, Müller-Stich BP, Jannin P, Speidel S. Surgical data science - from concepts toward clinical translation. Med Image Anal 2022; 76:102306. [PMID: 34879287] [PMCID: PMC9135051] [DOI: 10.1016/j.media.2021.102306] [Citation(s) in RCA: 69] [Impact Index Per Article: 34.5] [Received: 11/10/2020] [Revised: 11/03/2021] [Accepted: 11/08/2021] [Indexed: 02/06/2023]
Abstract
Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.
Affiliation(s)
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany.
| | - Matthias Eisenmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Duygu Sarikaya
- Department of Computer Engineering, Faculty of Engineering, Gazi University, Ankara, Turkey; LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
| | - Keno März
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | | | - Anand Malpani
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
| | | | - Hubertus Feussner
- Department of Surgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Stamatia Giannarou
- The Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom
| | - Pietro Mascagni
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
| | | | - Adrian Park
- Department of Surgery, Anne Arundel Health System, Annapolis, Maryland, USA; Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
| | - Carla Pugh
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
| | - Swaroop S Vedula
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
| | - Kevin Cleary
- The Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, D.C., USA
- Germain Forestier
- L'Institut de Recherche en Informatique, Mathématiques, Automatique et Signal (IRIMAS), University of Haute-Alsace, Mulhouse, France; Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- Bernard Gibaud
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Teodor Grantcharov
- University of Toronto, Toronto, Ontario, Canada; The Li Ka Shing Knowledge Institute of St. Michael's Hospital, Toronto, Ontario, Canada
- Makoto Hashizume
- Kyushu University, Fukuoka, Japan; Kitakyushu Koga Hospital, Fukuoka, Japan
- Doreen Heckmann-Nötzel
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Hannes G Kenngott
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Ron Kikinis
- Department of Radiology, Brigham and Women's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Sinan Onogur
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tobias Roß
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Russell H Taylor
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Minu D Tizabi
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Wagner
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Gregory D Hager
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
- Justin Collins
- Division of Surgery and Interventional Science, University College London, London, United Kingdom
- Ines Gockel
- Department of Visceral, Transplant, Thoracic and Vascular Surgery, Leipzig University Hospital, Leipzig, Germany
- Jan Goedeke
- Pediatric Surgery, Dr. von Hauner Children's Hospital, Ludwig-Maximilians-University, Munich, Germany
- Daniel A Hashimoto
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA; Surgical AI and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Luc Joyeux
- My FetUZ Fetal Research Center, Department of Development and Regeneration, Biomedical Sciences, KU Leuven, Leuven, Belgium; Center for Surgical Technologies, Faculty of Medicine, KU Leuven, Leuven, Belgium; Department of Obstetrics and Gynecology, Division Woman and Child, Fetal Medicine Unit, University Hospitals Leuven, Leuven, Belgium; Michael E. DeBakey Department of Surgery, Texas Children's Hospital and Baylor College of Medicine, Houston, Texas, USA
- Kyle Lam
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Daniel R Leff
- Department of BioSurgery and Surgical Technology, Imperial College London, London, United Kingdom; Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Breast Unit, Imperial Healthcare NHS Trust, London, United Kingdom
- Amin Madani
- Department of Surgery, University Health Network, Toronto, Ontario, Canada
- Hani J Marcus
- National Hospital for Neurology and Neurosurgery, and UCL Queen Square Institute of Neurology, London, United Kingdom
- Ozanan Meireles
- Massachusetts General Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Alexander Seitel
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Dogu Teber
- Department of Urology, City Hospital Karlsruhe, Karlsruhe, Germany
- Frank Ückert
- Institute for Applied Medical Informatics, Hamburg University Hospital, Hamburg, Germany
- Beat P Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Pierre Jannin
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC) Dresden, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
20
Tanguy D, Batrancourt B, Bouzigues A, Godefroy V, Bendetowicz D, Rametti‐Lacroux A, Bombois S, Cognat E, Le Ber I, Morandi X, Jannin P, Levy R, Migliaccio R. Reduction of behavioural inhibition disorders in behavioural variant frontotemporal dementia patients observed under semi‐ecological conditions. Alzheimers Dement 2021. [DOI: 10.1002/alz.051639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/11/2022]
Affiliation(s)
- Delphine Tanguy
- Univ Rennes, CHU Rennes, Inserm, LTSI – UMR 1099 Rennes France
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, UPMC Univ Paris 06, ICM Paris France
- Bénédicte Batrancourt
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, UPMC Univ Paris 06, ICM Paris France
- Arabella Bouzigues
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, UPMC Univ Paris 06, ICM Paris France
- Valérie Godefroy
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, UPMC Univ Paris 06, ICM Paris France
- David Bendetowicz
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, UPMC Univ Paris 06, ICM Paris France
- AP‐HP, Groupe Hospitalier Pitié‐Salpêtrière, Department of Neurology, IM2A Paris France
- Stéphanie Bombois
- AP‐HP, Groupe Hospitalier Pitié‐Salpêtrière, Department of Neurology, IM2A Paris France
- Emmanuel Cognat
- Cognitive Neurology Center, GH Saint‐Louis ‐ Lariboisière ‐ Fernand‐Widal, APHP Paris France
- Isabelle Le Ber
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, UPMC Univ Paris 06, ICM Paris France
- AP‐HP, Groupe Hospitalier Pitié‐Salpêtrière, Department of Neurology, IM2A Paris France
- Xavier Morandi
- Univ Rennes, CHU Rennes, Inserm, LTSI – UMR 1099 Rennes France
- Pierre Jannin
- Univ Rennes, CHU Rennes, Inserm, LTSI – UMR 1099 Rennes France
- Richard Levy
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, UPMC Univ Paris 06, ICM Paris France
- AP‐HP, Groupe Hospitalier Pitié‐Salpêtrière, Department of Neurology, IM2A Paris France
- Raffaella Migliaccio
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, UPMC Univ Paris 06, ICM Paris France
- AP‐HP, Groupe Hospitalier Pitié‐Salpêtrière, Department of Neurology, IM2A Paris France
21
Tronchot A, Berthelemy J, Thomazeau H, Huaulmé A, Walbron P, Sirveaux F, Jannin P. Validation of virtual reality arthroscopy simulator relevance in characterising experienced surgeons. Orthop Traumatol Surg Res 2021; 107:103079. [PMID: 34597826 DOI: 10.1016/j.otsr.2021.103079] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 03/03/2021] [Revised: 04/28/2021] [Accepted: 05/17/2021] [Indexed: 02/03/2023]
Abstract
BACKGROUND Virtual reality (VR) simulation is particularly suitable for learning arthroscopy skills. Despite significant research, one drawback often outlined is the difficulty of distinguishing performance levels (construct validity) among experienced surgeons. It therefore seems appropriate to search for new methods of performance measurement based on probe trajectories rather than commonly used metrics. HYPOTHESIS It was hypothesized that greater experience in surgical shoulder arthroscopy would correlate with better performance on a VR shoulder arthroscopy simulator and that experienced operators would share similar probe trajectories. MATERIALS & METHODS After the participants answered standardized questionnaires, 104 trajectories from 52 surgeons divided into two cohorts (26 intermediates and 26 experts) were recorded on a shoulder arthroscopy simulator. The procedure analysed was "loose body removal" in a right shoulder joint. Ten metrics were computed on the trajectories, including procedure duration, overall path length, economy of motion and smoothness. Additionally, Dynamic Time Warping (DTW) was computed on the trajectories for unsupervised hierarchical clustering of the surgeons. RESULTS Experts were significantly faster (median 70.9 s, interquartile range [56.4-86.3], vs. 116.1 s [82.8-154.2], p<0.01), more fluid (4.6×10^5 mm·s^-3 [3.1×10^5-7.2×10^5] vs. 1.5×10^6 mm·s^-3 [2.6×10^6-3.5×10^6], p=0.05) and more economical in their motion (19.3 mm^2 [9.1-25.9] vs. 33.8 mm^2 [14.8-50.5], p<0.01), but there was no significant difference in path length (671.4 mm [503.8-846.1] vs. 694.6 mm [467.0-1090.1], p=0.62). The DTW clustering differentiated two expertise-related groups of trajectories with performance similarities, comprising 48 expert trajectories in the first group and 52 intermediate and 4 expert trajectories in the second (sensitivity 92%, specificity 100%). Hierarchical clustering with DTW thus separated expert from intermediate operators and found trajectory similarities among 24/26 experts. CONCLUSION This study demonstrated the construct validity of the VR shoulder arthroscopy simulator within groups of experienced surgeons. With new types of metrics based simply on the simulator's raw trajectories, it was possible to significantly distinguish levels of expertise. Clustering analysis with Dynamic Time Warping reliably discriminated between expert and intermediate operators. CLINICAL RELEVANCE The results have implications for the future of arthroscopic surgical training and post-graduate accreditation programs using virtual reality simulation. LEVEL OF EVIDENCE III; prospective comparative study.
Affiliation(s)
- Alexandre Tronchot
- University Rennes, Inserm, LTSI-UMR 1099, 35000 Rennes, France; Orthopaedics and Trauma Department, Rennes University Hospital, 2 rue Henri Le Guilloux, 35000 Rennes, France.
- Hervé Thomazeau
- University Rennes, Inserm, LTSI-UMR 1099, 35000 Rennes, France; Orthopaedics and Trauma Department, Rennes University Hospital, 2 rue Henri Le Guilloux, 35000 Rennes, France
- Arnaud Huaulmé
- University Rennes, Inserm, LTSI-UMR 1099, 35000 Rennes, France
- Paul Walbron
- Orthopaedics Department, Nancy University Hospital, Centre Chirurgical Emile Gallé, 49 rue Hermite, 54000 Nancy, France
- François Sirveaux
- Orthopaedics Department, Nancy University Hospital, Centre Chirurgical Emile Gallé, 49 rue Hermite, 54000 Nancy, France
- Pierre Jannin
- University Rennes, Inserm, LTSI-UMR 1099, 35000 Rennes, France
22
Peralta M, Jannin P, Baxter JSH. Machine learning in deep brain stimulation: A systematic review. Artif Intell Med 2021; 122:102198. [PMID: 34823832 DOI: 10.1016/j.artmed.2021.102198] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 04/22/2021] [Revised: 09/23/2021] [Accepted: 10/12/2021] [Indexed: 11/16/2022]
Abstract
Deep Brain Stimulation (DBS) is an increasingly common therapy for a large range of neurological disorders, such as abnormal movement disorders. The effectiveness of DBS in controlling patient symptomatology has made the procedure increasingly used over the past few decades. Concurrently, the popularity of Machine Learning (ML), a subfield of artificial intelligence, has skyrocketed, and its influence has more recently extended to medical domains such as neurosurgery. Despite this growing research interest, there has not yet been a literature review specifically on the use of ML in DBS. We followed a fully systematic methodology to obtain a corpus of 73 papers. In each paper, we identified the clinical application, the type and amount of data used, the method employed, and the validation strategy, further decomposed into 12 different sub-categories. The papers illustrate existing trends in how ML is used in the context of DBS, including the breadth of the problem domain and evolving techniques, as well as common frameworks and limitations. This systematic review analyzes at a broad level how ML has recently been used to address clinical problems in DBS, giving insight into how these new computational methods are helping to push the state of the art in functional neurosurgery. The DBS clinical workflow is complex, involves many specialists, and raises several clinical issues that have been partly addressed with artificial intelligence. However, several areas remain open, and those that have recently been addressed with ML are by no means considered "solved" by the community, nor are they closed to new and evolving methods.
Affiliation(s)
- Maxime Peralta
- Univ Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- Pierre Jannin
- Univ Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- John S H Baxter
- Univ Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France.
23
Huaulmé A, Sarikaya D, Le Mut K, Despinoy F, Long Y, Dou Q, Chng CB, Lin W, Kondo S, Bravo-Sánchez L, Arbeláez P, Reiter W, Mitsuishi M, Harada K, Jannin P. MIcro-surgical anastomose workflow recognition challenge report. Comput Methods Programs Biomed 2021; 212:106452. [PMID: 34688174 DOI: 10.1016/j.cmpb.2021.106452] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 06/28/2021] [Accepted: 09/28/2021] [Indexed: 05/22/2023]
Abstract
BACKGROUND AND OBJECTIVE Automatic surgical workflow recognition is an essential step in developing context-aware computer-assisted surgical systems. Video recordings of surgeries are becoming widely accessible, as the operative field view is captured during laparoscopic surgeries. Head- and ceiling-mounted cameras are also increasingly being used to record videos in open surgeries. This makes video a common choice for surgical workflow recognition. Additional modalities, such as kinematic data captured during robot-assisted surgeries, could also improve workflow recognition. This paper presents the design and results of the MIcro-Surgical Anastomose Workflow recognition on training sessions (MISAW) challenge, whose objective was to develop workflow recognition models based on kinematic data and/or videos. METHODS The MISAW challenge provided a data set of 27 sequences of micro-surgical anastomosis on artificial blood vessels, composed of videos, kinematics, and workflow annotations. The annotations described the sequences at three granularity levels: phase, step, and activity. Four tasks were proposed to the participants: three addressed the recognition of surgical workflow at a single granularity level, while the last addressed the recognition of all granularity levels in the same model. We used the average application-dependent balanced accuracy (AD-Accuracy) as the evaluation metric, which takes unbalanced classes into account and is more clinically relevant than a frame-by-frame score. RESULTS Six teams participated in at least one task. All submissions employed deep learning models, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), or a combination of both. The best models achieved accuracy above 95%, 80%, 60%, and 75% for recognition of phases, steps, activities, and multi-granularity, respectively. The RNN-based models outperformed the CNN-based ones, and the dedicated single-granularity models outperformed the multi-granularity model, except for activity recognition. CONCLUSION For high granularity levels, the best models had a recognition rate that may be sufficient for applications such as predicting remaining surgical time. However, for activities, the recognition rate was still too low for clinical applications. The MISAW data set is publicly available at http://www.synapse.org/MISAW to encourage further research in surgical workflow recognition.
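The rationale for a class-balanced metric on unbalanced workflow labels can be seen with plain balanced accuracy, the core of the challenge's AD-Accuracy. The application-dependent weighting itself is not reproduced here, and the toy phase labels are invented for illustration only.

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Per-class recall averaged over classes, so that rare phases
    weigh as much as frequent ones (frame counts per class cancel out)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

# Eight frames of a majority phase and two of a rare phase (made-up labels):
# plain frame accuracy rewards always predicting the majority class,
# balanced accuracy does not.
y_true = ["suturing"] * 8 + ["knot"] * 2
y_pred = ["suturing"] * 10
# frame accuracy = 0.8, balanced accuracy = (1.0 + 0.0) / 2 = 0.5
```

A degenerate predictor that ignores the rare phase thus scores 0.5 rather than 0.8, which is why a frame-by-frame score alone can be misleading for workflow recognition.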
Affiliation(s)
- Arnaud Huaulmé
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France.
- Duygu Sarikaya
- Gazi University, Faculty of Engineering; Department of Computer Engineering, Ankara, Turkey
- Kévin Le Mut
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France
- Yonghao Long
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, China; T Stone Robotics Institute, The Chinese University of Hong Kong, China
- Qi Dou
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, China; T Stone Robotics Institute, The Chinese University of Hong Kong, China
- Chin-Boon Chng
- National University of Singapore (NUS), Singapore, Singapore; Southern University of Science and Technology (SUSTech), Shenzhen, China
- Wenjun Lin
- National University of Singapore (NUS), Singapore, Singapore; Southern University of Science and Technology (SUSTech), Shenzhen, China
- Laura Bravo-Sánchez
- Center for Research and Formation in Artificial Intelligence, Department of Biomedical Engineering, Universidad de los Andes, Bogotá, Colombia
- Pablo Arbeláez
- Center for Research and Formation in Artificial Intelligence, Department of Biomedical Engineering, Universidad de los Andes, Bogotá, Colombia
- Mamoru Mitsuishi
- Department of Mechanical Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Kanako Harada
- Department of Mechanical Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Pierre Jannin
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France.
24
Peralta M, Haegelen C, Jannin P, Baxter JSH. PassFlow: a multimodal workflow for predicting deep brain stimulation outcomes. Int J Comput Assist Radiol Surg 2021; 16:1361-1370. [PMID: 34216319 DOI: 10.1007/s11548-021-02435-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 01/12/2021] [Accepted: 06/17/2021] [Indexed: 10/20/2022]
Abstract
PURPOSE Deep Brain Stimulation (DBS) is a proven therapy for Parkinson's Disease (PD), frequently resulting in an enhancement of motor function. Nonetheless, several undesirable side effects can occur after DBS, which can worsen the patient's quality of life. Thus, the clinical team has to carefully select patients on whom to perform DBS. Over the past decade, there have been some attempts to relate pre-operative data to DBS clinical outcomes, most focused on motor symptomatology. In this paper, we propose a machine learning-based method able to predict a large number of DBS clinical outcomes for PD. METHODS We propose a multimodal pipeline, referred to as PassFlow, which predicts 84 post-operative clinical scores. PassFlow is composed of an artificial neural network to compress clinical information, a state-of-the-art image processing method to extract morphological biomarkers out of T1 imaging, and an SVM to perform the regressions. We validated PassFlow on 196 PD patients who underwent DBS. RESULTS PassFlow showed correlation coefficients as high as 0.71 and significantly predicted 63 of the 84 scores, outperforming a comparative linear method. The number of scores that could be predicted from this pre-operative information was also found to be correlated with the number of patients with this information available, indicating that the PassFlow method is still actively learning. CONCLUSION We presented a novel, machine learning-based pipeline to predict a variety of post-operative clinical outcomes of DBS for PD patients. PassFlow took into account various biomarkers arising from different data modalities, showing high correlation coefficients for some scores from pre-operative data only. This indicates that many clinical outcomes of DBS can be predicted agnostic to the specific stimulation parameters, as PassFlow was validated without such stimulation-related information.
Affiliation(s)
- Maxime Peralta
- Université de Rennes 1, INSERM, LTSI - UMR 1099, 35000, Rennes, France
- Claire Haegelen
- Department of Neurosurgery, Centre Hospitalier Universitaire de Rennes, Rennes, France
- Pierre Jannin
- Université de Rennes 1, INSERM, LTSI - UMR 1099, 35000, Rennes, France
- John S H Baxter
- Université de Rennes 1, INSERM, LTSI - UMR 1099, 35000, Rennes, France.
25
Baxter JSH, Bui QA, Maguet E, Croci S, Delmas A, Lefaucheur JP, Bredoux L, Jannin P. Automatic cortical target point localisation in MRI for transcranial magnetic stimulation via a multi-resolution convolutional neural network. Int J Comput Assist Radiol Surg 2021; 16:1077-1087. [PMID: 34089439 DOI: 10.1007/s11548-021-02386-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/19/2021] [Accepted: 04/23/2021] [Indexed: 10/21/2022]
Abstract
PURPOSE Transcranial magnetic stimulation (TMS) is a growing therapy for a variety of psychiatric and neurological disorders that arise from, or are modulated by, cortical regions of the brain represented by singular 3D target points. These target points are often determined manually with assistance from a pre-operative T1-weighted MRI, although there is growing interest in automatic target point localisation using an atlas. However, both approaches can be time-consuming, which affects the clinical workflow, and the latter does not take into account patient variability, such as the varying number of cortical gyri where these targets are located. METHODS This paper proposes a multi-resolution convolutional neural network that localises a priori defined target points in increasingly finely resolved versions of the input MR image. This approach is both fast and highly memory-efficient, allowing it to run in high-throughput centres, and it has the capability of distinguishing between patients with high levels of anatomical variability. RESULTS Preliminary experiments found the accuracy of this network to be [Formula: see text] mm, compared to [Formula: see text] mm for deformable registration and [Formula: see text] mm for a human expert. For most treatment points, the human expert and the proposed CNN statistically significantly outperform registration, but neither statistically significantly outperforms the other, suggesting that the proposed network has human-level performance. CONCLUSIONS The human-level performance of this network indicates that it can improve TMS planning by automatically localising target points in seconds, avoiding more time-consuming registration or manual point localisation processes. This is particularly beneficial for out-of-hospital centres with limited computational resources, where TMS is increasingly being administered.
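The coarse-to-fine idea, committing to a location at the coarsest resolution and letting each finer level refine it within a small window, can be illustrated without any learned network. The sketch below replaces the CNN's per-level predictions with a simple argmax on an image pyramid, so it is only an analogy to the paper's multi-resolution scheme, not its implementation; the window size and number of levels are arbitrary choices.

```python
import numpy as np

def coarse_to_fine_peak(img, levels=3):
    """Locate the maximum of a 2-D map by searching a 2x-downsampled
    pyramid first, then refining only inside a small window at each
    finer level. This is the memory-saving pattern behind multi-
    resolution point localisation: most of the image is never
    examined at full resolution."""
    pyramid = [img]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        # 2x2 average pooling.
        pyramid.append(prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    # Coarsest guess...
    y, x = np.unravel_index(np.argmax(pyramid[-1]), pyramid[-1].shape)
    # ...then refine within a 5x5 window at each finer level.
    for level in reversed(pyramid[:-1]):
        y, x = 2 * y, 2 * x
        y0, x0 = max(y - 2, 0), max(x - 2, 0)
        win = level[y0:y0 + 5, x0:x0 + 5]
        dy, dx = np.unravel_index(np.argmax(win), win.shape)
        y, x = y0 + dy, x0 + dx
    return y, x
```

In the paper's network the per-level argmax would be a learned prediction, but the control flow, coarse commitment followed by local refinement, is the same.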
Affiliation(s)
- John S H Baxter
- Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Université de Rennes 1, Rennes, France.
- Quoc Anh Bui
- Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Université de Rennes 1, Rennes, France
- Ehouarn Maguet
- Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Université de Rennes 1, Rennes, France
- Jean-Pascal Lefaucheur
- ENT Team, EA4391, Faculty of Medicine, Paris Est Créteil University, Créteil, France; Clinical Neurophysiology Unit, Department of Physiology, Henri Mondor Hospital, Assistance Publique - Hôpitaux de Paris, Créteil, France
- Pierre Jannin
- Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Université de Rennes 1, Rennes, France
26
Bracq MS, Michinov E, Duff ML, Arnaldi B, Gouranton V, Jannin P. “Doctor, please”: Educating Nurses to Speak Up With Interactive Digital Simulation Tablets. Clin Simul Nurs 2021. [DOI: 10.1016/j.ecns.2021.01.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/21/2022]
27
Martin T, Peralta M, Gilmore G, Sauleau P, Haegelen C, Jannin P, Baxter JS. Extending convolutional neural networks for localizing the subthalamic nucleus from micro-electrode recordings in Parkinson’s disease. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102529] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/21/2022]
28
Collins JW, Marcus HJ, Ghazi A, Sridhar A, Hashimoto D, Hager G, Arezzo A, Jannin P, Maier-Hein L, Marz K, Valdastri P, Mori K, Elson D, Giannarou S, Slack M, Hares L, Beaulieu Y, Levy J, Laplante G, Ramadorai A, Jarc A, Andrews B, Garcia P, Neemuchwala H, Andrusaite A, Kimpe T, Hawkes D, Kelly JD, Stoyanov D. Ethical implications of AI in robotic surgical training: A Delphi consensus statement. Eur Urol Focus 2021; 8:613-622. [PMID: 33941503 DOI: 10.1016/j.euf.2021.04.006] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Received: 01/25/2021] [Revised: 03/02/2021] [Accepted: 04/08/2021] [Indexed: 12/12/2022]
Abstract
CONTEXT As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. OBJECTIVES To provide ethical guidance on developing narrow AI applications for surgical training curricula. We define standardised approaches to developing AI-driven applications in surgical training that address currently recognised ethical implications of utilising AI on surgical data. We aim to describe an ethical approach based on the current evidence, understanding of AI and available technologies, by seeking consensus from an expert committee. EVIDENCE ACQUISITION The project was carried out in three phases: (1) a steering group was formed to review the literature and summarise current evidence; (2) a larger expert panel convened and discussed the ethical implications of AI application based on the current evidence, and a survey was created with input from panel members; (3) panel-based consensus findings were determined using an online Delphi process to formulate guidance. Thirty experts in AI implementation and/or training, including clinicians, academics and industry representatives, contributed. The Delphi process comprised three rounds. Additions to the second- and third-round surveys were formulated based on the answers and comments from previous rounds. Consensus opinion was defined as ≥80% agreement. EVIDENCE SYNTHESIS There was a 100% response rate in all three rounds. The resulting guidance showed good internal consistency, with a Cronbach alpha of >0.8. There was 100% consensus that there is currently a lack of guidance on the utilisation of AI in the setting of robotic surgical training. Consensus was reached in multiple areas, including: (1) data protection and privacy; (2) reproducibility and transparency; (3) predictive analytics; (4) inherent biases; (5) areas of training most likely to benefit from AI. CONCLUSIONS Using the Delphi methodology, we achieved international consensus among experts to develop and reach content validation for guidance on the ethical implications of AI in surgical training, providing an ethical foundation for launching narrow AI applications in surgical training. This guidance will require further validation. PATIENT SUMMARY As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. In this paper, we provide guidance on the ethical implications of AI in surgical training.
Affiliation(s)
- Justin W Collins
- University College London, Division of Surgery and Interventional Science, Research Department of Targeted Intervention; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London; University College London Hospital, Division of Uro-oncology.
- Hani J Marcus
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London
- Ahmed Ghazi
- Simulation Innovation Laboratory, University of Rochester, USA
- Ashwin Sridhar
- University College London, Division of Surgery and Interventional Science, Research Department of Targeted Intervention; University College London Hospital, Division of Uro-oncology
- Daniel Hashimoto
- Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, USA
- Gregory Hager
- Malone Center for Engineering in Healthcare, Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Alberto Arezzo
- Department of Surgical Sciences, University of Torino, Italy
- Lena Maier-Hein
- Deutsches Krebsforschungszentrum, Division of Computer Assisted Medical Interventions, Heidelberg, Germany
- Keno Marz
- Deutsches Krebsforschungszentrum, Division of Computer Assisted Medical Interventions, Heidelberg, Germany
- Pietro Valdastri
- STORM Lab, School of Electronic and Electrical Engineering, University of Leeds, Leeds, UK
- Kensaku Mori
- Director of Information Technology Center, Nagoya University, Japan
- Daniel Elson
- Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, UK
- Stamatia Giannarou
- Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, UK
- Mark Slack
- Honorary Senior Lecturer, University of Cambridge, Cambridge, UK; CMO, CMR Surgical, Cambridge, UK
- Luke Hares
- Chief Technology Director, CMR Surgical, Cambridge, UK
- Yanick Beaulieu
- Division of Cardiology and Critical Care, Sacré-Coeur Hospital, University of Montreal, Montreal, Canada
- Jeff Levy
- Institute for Surgical Excellence, Philadelphia, USA
- Guy Laplante
- Director, Global Medical Affairs at Medtronic Minimally Invasive Therapies, Brampton, Canada
- Arvind Ramadorai
- Director, Digital-Assisted Surgery (DAS), Medtronic Surgical Robotics, North Haven, CT, USA
- Anthony Jarc
- Applied Research, Intuitive Surgical, Inc., Sunnyvale, CA, USA
- Ben Andrews
- Strategy, Intuitive Surgical, Inc., Sunnyvale, CA, USA
- Tom Kimpe
- BARCO NV - Healthcare division, Kortrijk, Belgium
- David Hawkes
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London
- John D Kelly
- University College London, Division of Surgery and Interventional Science, Research Department of Targeted Intervention; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London; University College London Hospital, Division of Uro-oncology
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London
29
Bracq MS, Michinov E, Le Duff M, Arnaldi B, Gouranton V, Jannin P. Training situational awareness for scrub nurses: Error recognition in a virtual operating room. Nurse Educ Pract 2021; 53:103056. [PMID: 33930750 DOI: 10.1016/j.nepr.2021.103056] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 03/11/2020] [Revised: 02/12/2021] [Accepted: 04/12/2021] [Indexed: 11/29/2022]
Abstract
Virtual reality simulation provides interesting opportunities to train nurses in a safe environment. While the virtual operating room has proven to be a useful training tool for technical skills, it has been less studied for non-technical skills. This study aimed to assess "error recognition in a virtual operating room", using a simulation scenario designed to improve situation awareness. Eighteen scrub-nurse students and eight expert scrub nurses took part in the experiment. They were immersed in a virtual operating room and reported any errors they observed. There were nineteen errors with various degrees of severity. Measures were retrieved from logs (number of errors, time to detection, movements) and from questionnaires (situation awareness, subjective workload, anxiety and user experience). The results showed that the participants who detected the most errors had a higher level of situation awareness, detected high-risk errors faster and felt more immersed in the virtual operating room than those who detected fewer errors. They also felt the workload was lighter and experienced more satisfaction. Students explored the operating room more than experts did and detected more errors, especially those of moderate risk. Debriefings confirmed that virtual simulation is acceptable to trainees and motivates them. It also provides useful and original material for debriefings.
Affiliation(s)
- Marie-Stéphanie Bracq
- Univ Rennes, LP3C (EA 1285), F-35000 Rennes, France; Univ Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France.
- Marie Le Duff
- Univ Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France.
- Bruno Arnaldi
- Univ Rennes, INSA Rennes, Inria, CNRS, IRISA, F-35000 Rennes, France.
- Valérie Gouranton
- Univ Rennes, INSA Rennes, Inria, CNRS, IRISA, F-35000 Rennes, France.
- Pierre Jannin
- Univ Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France.
30
Godefroy V, Tanguy D, Bouzigues A, Sezer I, Ferrand‐Verdejo J, Azuar C, Bendetowicz D, Carle G, Rametti‐Lacroux A, Bombois S, Cognat E, Jannin P, Morandi X, Le Ber I, Levy R, Batrancourt B, Migliaccio R. Frontotemporal dementia subtypes based on behavioral inhibition deficits. Alzheimers Dement (Amst) 2021; 13:e12178. [PMID: 33851004] [PMCID: PMC8022767] [DOI: 10.1002/dad2.12178]
Abstract
INTRODUCTION We aimed to investigate phenotypic heterogeneity in the behavioral variant of frontotemporal dementia (bvFTD) through assessment of inhibition deficits. METHODS We assessed occurrences of 16 behavioral inhibition deficits from video recordings of 15 bvFTD patients (early stage) and 15 healthy controls (HC) in an ecological setting. We extracted dimensions of inhibition deficit and analyzed their correlations with cognitive and clinical measures. Using these dimensions, we isolated patient clusters whose atrophy patterns were explored. RESULTS After identifying two patterns of inhibition deficit (compulsive automatic behaviors and socially unconventional behaviors), we isolated three behavioral clusters with distinct atrophy patterns. BvFTD-G0 (N = 3), an outlier group, showed severe behavioral disturbances and more severe ventromedial prefrontal cortex/orbitofrontal cortex atrophy. Compared to bvFTD-G1 (N = 6), bvFTD-G2 (N = 6) presented higher anxiety and depression along with less diffuse atrophy especially in midline regions. DISCUSSION Identifying clinico-anatomical profiles through behavior observation could help to stratify bvFTD patients for adapted treatments.
Affiliation(s)
- Delphine Tanguy
- Paris Brain Institute, Sorbonne Universités, Paris, France
- CHU Rennes, Université Rennes, Rennes, France
- Idil Sezer
- Paris Brain Institute, Sorbonne Universités, Paris, France
- Carole Azuar
- Department of Neurology, Groupe Hospitalier Pitié‐Salpêtrière, Paris, France
- David Bendetowicz
- Paris Brain Institute, Sorbonne Universités, Paris, France
- Department of Neurology, Groupe Hospitalier Pitié‐Salpêtrière, Paris, France
- Behavioural Neuropsychiatry Unit, Hôpital de la Salpêtrière, Paris, France
- Guilhem Carle
- Behavioural Neuropsychiatry Unit, Hôpital de la Salpêtrière, Paris, France
- Stéphanie Bombois
- Department of Neurology, Groupe Hospitalier Pitié‐Salpêtrière, Paris, France
- Emmanuel Cognat
- Université de Paris, Paris, France
- Centre de Neurologie Cognitive, Hôpital Lariboisière Fernand‐Widal, Paris, France
- Isabelle Le Ber
- Paris Brain Institute, Sorbonne Universités, Paris, France
- Department of Neurology, Groupe Hospitalier Pitié‐Salpêtrière, Paris, France
- Richard Levy
- Paris Brain Institute, Sorbonne Universités, Paris, France
- Department of Neurology, Groupe Hospitalier Pitié‐Salpêtrière, Paris, France
- Behavioural Neuropsychiatry Unit, Hôpital de la Salpêtrière, Paris, France
- Raffaella Migliaccio
- Paris Brain Institute, Sorbonne Universités, Paris, France
- Department of Neurology, Groupe Hospitalier Pitié‐Salpêtrière, Paris, France
- Behavioural Neuropsychiatry Unit, Hôpital de la Salpêtrière, Paris, France
31
Peralta M, Jannin P, Haegelen C, Baxter JSH. Data imputation and compression for Parkinson's disease clinical questionnaires. Artif Intell Med 2021; 114:102051. [PMID: 33875162] [DOI: 10.1016/j.artmed.2021.102051]
Abstract
Medical questionnaires are a valuable source of information but are often difficult to analyse due to both their size and the high possibility of them having missing values. This is a problematic issue in biomedical data science as it may complicate how individual questionnaire data is represented for statistical or machine learning analysis. In this paper, we propose a deeply-learnt residual autoencoder to simultaneously perform non-linear data imputation and dimensionality reduction. We present an extensive analysis of the dynamics of the performance of this autoencoder regarding the compression rate and the proportion of missing values. This method is evaluated on motor and non-motor clinical questionnaires of the Parkinson's Progression Markers Initiative (PPMI) database and consistently outperforms linear coupled imputation and reduction approaches.
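The paper's deep residual autoencoder is compared against "linear coupled imputation and reduction approaches". The sketch below is an illustrative numpy reconstruction of that kind of linear baseline — an iterative truncated-SVD imputer that fills missing entries from a low-rank reconstruction — and is not the authors' code; the function name and defaults are assumptions for the example.

```python
import numpy as np

def pca_impute(X, n_components=2, n_iters=50):
    """Linear coupled imputation + dimensionality reduction baseline:
    iteratively replace missing entries (NaN) with their reconstruction
    from a rank-k SVD of the current estimate."""
    X = np.asarray(X, dtype=float)
    missing = np.isnan(X)
    # initialize missing entries with column means
    col_means = np.nanmean(X, axis=0)
    X_hat = np.where(missing, col_means, X)
    for _ in range(n_iters):
        mu = X_hat.mean(axis=0)
        U, s, Vt = np.linalg.svd(X_hat - mu, full_matrices=False)
        # rank-k reconstruction = compressed representation decoded back
        recon = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mu
        X_hat[missing] = recon[missing]  # only missing entries are updated
    # low-dimensional codes for each questionnaire (the "compressed" view)
    codes = (X_hat - X_hat.mean(axis=0)) @ Vt[:n_components].T
    return X_hat, codes
```

On synthetic low-rank data this recovers masked entries far better than plain mean imputation, which is exactly the kind of linear reference the deep autoencoder is reported to outperform.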
Affiliation(s)
- Maxime Peralta
- Laboratoire Traitement du Signal et de l'Image - INSERM UMR 1099, Université de Rennes 1, F-35000 Rennes, France
- Pierre Jannin
- Laboratoire Traitement du Signal et de l'Image - INSERM UMR 1099, Université de Rennes 1, F-35000 Rennes, France
- Claire Haegelen
- Laboratoire Traitement du Signal et de l'Image - INSERM UMR 1099, Université de Rennes 1, F-35000 Rennes, France; Neurosurgery Department, Centre Hospitalier Universitaire de Rennes, F-35000 Rennes, France
- John S H Baxter
- Laboratoire Traitement du Signal et de l'Image - INSERM UMR 1099, Université de Rennes 1, F-35000 Rennes, France.
32
Le Lous M, Despinoy F, Klein M, Fustec E, Lavoue V, Jannin P. Impact of Physician Expertise on Probe Trajectory During Obstetric Ultrasound: A Quantitative Approach for Skill Assessment. Simul Healthc 2021; 16:67-72. [PMID: 32502122] [DOI: 10.1097/sih.0000000000000465]
Abstract
INTRODUCTION The objective of the study was to identify objective metrics to evaluate the significance of a sonographer's expertise on trajectories of ultrasound probe during obstetric ultrasound training procedures. METHODS This prospective observational study was conducted at Rennes University Hospital, Department of Obstetrics and Gynecology. We evaluated a panel of sonographers (expert, intermediate, and novice) in performing 3 tasks (brain, heart, and spine) with an obstetric ultrasound simulator (Scantrainer; Medaphor, Cardiff, UK). The trajectories of the probe were logged and recorded by a custom data acquisition software. We computed metrics on the trajectories (duration, path length, average velocity, average acceleration, jerk, working volume) to compare the 3 groups and identify discriminating metrics. RESULTS A total of 33 participants were enrolled: 5 experts, 12 intermediates, and 16 novices. Discriminatory metrics were observed among the 3 levels of expertise for duration, velocity, acceleration, and jerk for brain and spine tasks. Working volume was discriminatory for the brain and the heart task. Path length was discriminatory for the brain task. CONCLUSIONS Our results suggest a relationship between the sonographer's level of expertise and probe trajectory metrics. Such measurements could be used as an indicator of sonographer proficiency and contribute to automatic analysis of probe trajectory to evaluate the quality of sonography and the sonographer.
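The trajectory metrics named in the abstract (duration, path length, average velocity, average acceleration, jerk, working volume) can all be derived from a timestamped probe trajectory by finite differences. The sketch below shows plausible definitions; the bounding-box working volume and the function name are assumptions, since the paper's exact computations are not reproduced here.

```python
import numpy as np

def trajectory_metrics(t, pos):
    """Finite-difference sketch of probe-trajectory metrics from
    timestamps t (seconds) and positions pos (N x 3, e.g. in mm)."""
    t = np.asarray(t, dtype=float)
    pos = np.asarray(pos, dtype=float)
    dt = np.diff(t)
    step = np.linalg.norm(np.diff(pos, axis=0), axis=1)  # segment lengths
    vel = step / dt                    # speed on each segment
    acc = np.diff(vel) / dt[1:]        # rate of change of speed
    jerk = np.diff(acc) / dt[2:]       # rate of change of acceleration
    bbox = pos.max(axis=0) - pos.min(axis=0)  # axis-aligned extent
    return {
        "duration": t[-1] - t[0],
        "path_length": step.sum(),
        "avg_velocity": vel.mean(),
        "avg_acceleration": np.abs(acc).mean(),
        "avg_jerk": np.abs(jerk).mean(),
        "working_volume": float(np.prod(bbox)),  # bounding-box proxy
    }
```

A smooth, economical expert trajectory would show lower jerk and a smaller working volume than a hesitant novice one, which is the discrimination the study reports.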
Affiliation(s)
- Maela Le Lous
- From the University of Rennes 1 (M.L.L., F.D., P.J.), INSERM, LTSI - UMR 1099; Department of Obstetrics and Gynecology (M.L.L., M.K., E.F., V.L.), University Hospital of Rennes; and INSERM 1242 (V.L.), Chemistry, Oncogenesis, Stress and Signaling, Rennes, France
33
Le Lous M, Klein M, Tesson C, Berthelemy J, Lavoue V, Jannin P. Metrics used to evaluate obstetric ultrasound skills on simulators: A systematic review. Eur J Obstet Gynecol Reprod Biol 2020; 258:16-22. [PMID: 33387982] [DOI: 10.1016/j.ejogrb.2020.12.034]
Abstract
Obstetric ultrasound simulators are now used for training and evaluating OB/GYN students, but there is a lack of literature about evaluation metrics in this setting. In this literature review, we searched MEDLINE and the Cochrane database using the keywords (Obstetric OR Fetal) AND (Sonography OR Ultrasound) AND Simulation. Of the 263 studies screened, nine articles from the past 5 years were selected on the basis of title and abstract in PubMed, and two more were added from their bibliographies, giving a total of 11 included articles. For each study, data on the type of simulation and on the metrics (qualitative or quantitative) used for assessment were collected. The selected studies show that evaluation criteria for ultrasound training were qualitative metrics (binary success/fail exercise; dexterity rated by an external observer; Objective Structured Assessment of Ultrasound Skills (OSAUS) score; quality of images according to Salomon's score) or quantitative criteria (accuracy of biometry; simulator-generated metrics). Most studies used a combination of both. To date, the simulator metrics used to discriminate ultrasound skills are performance scores rated by external observers and image quality scoring. Whether probe trajectory metrics can be used to discriminate skills is unknown.
Affiliation(s)
- Maela Le Lous
- Univ Rennes, INSERM, LTSI - UMR 1099, F35000, Rennes, France; Department of Obstetrics and Gynecology, University Hospital of Rennes, France; CIC Inserm 1414, University Hospital of Rennes, University of Rennes 1, Rennes, France.
- Margaux Klein
- Department of Obstetrics and Gynecology, University Hospital of Rennes, France
- Caroline Tesson
- Department of Obstetrics and Gynecology, University Hospital of Rennes, France
- Vincent Lavoue
- Department of Obstetrics and Gynecology, University Hospital of Rennes, France; CIC Inserm 1414, University Hospital of Rennes, University of Rennes 1, Rennes, France
- Pierre Jannin
- Univ Rennes, INSERM, LTSI - UMR 1099, F35000, Rennes, France
34
Maier-Hein L, Reinke A, Kozubek M, Martel AL, Arbel T, Eisenmann M, Hanbury A, Jannin P, Müller H, Onogur S, Saez-Rodriguez J, van Ginneken B, Kopp-Schneider A, Landman BA. BIAS: Transparent reporting of biomedical image analysis challenges. Med Image Anal 2020; 66:101796. [PMID: 32911207] [PMCID: PMC7441980] [DOI: 10.1016/j.media.2020.101796]
Abstract
The number of biomedical image analysis challenges organized per year is steadily increasing. These international competitions have the purpose of benchmarking algorithms on common data sets, typically to identify the best method for a given problem. Recent research, however, revealed that common practice related to challenge reporting does not allow for adequate interpretation and reproducibility of results. To address the discrepancy between the impact of challenges and their quality control, the Biomedical Image Analysis ChallengeS (BIAS) initiative developed a set of recommendations for the reporting of challenges. The BIAS statement aims to improve the transparency of the reporting of a biomedical image analysis challenge regardless of field of application, image modality or task category assessed. This article describes how the BIAS statement was developed and presents a checklist which authors of biomedical image analysis challenges are encouraged to include when submitting a paper on a challenge for review. The purpose of the checklist is to standardize and facilitate the review process and to increase the interpretability and reproducibility of challenge results by making relevant information explicit.
Affiliation(s)
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, Heidelberg 69120, Germany.
- Annika Reinke
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, Heidelberg 69120, Germany
- Michal Kozubek
- Centre for Biomedical Image Analysis, Masaryk University, Botanická 68a, Brno 60200, Czech Republic
- Anne L Martel
- Physical Sciences, Sunnybrook Research Institute, 2075 Bayview Avenue, Rm M6-609, Toronto ON M4N 3M5, Canada; Department of Medical Biophysics, University of Toronto, 101 College St Suite 15-701, Toronto, ON M5G 1L7, Canada
- Tal Arbel
- Centre for Intelligent Machines, McGill University, 3480 University Street, McConnell Engineering Building, Room 425, Montreal QC H3A 0E9, Canada
- Matthias Eisenmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, Heidelberg 69120, Germany
- Allan Hanbury
- Institute of Information Systems Engineering, Technische Universität (TU) Wien, Favoritenstraße 9-11/194-04, Vienna 1040, Austria; Complexity Science Hub Vienna, Josefstädter Straße 39, Vienna 1080, Austria
- Pierre Jannin
- Laboratoire Traitement du Signal et de l'Image (LTSI) - UMR_S 1099, Université de Rennes 1, Inserm, Rennes, Cedex 35043, France
- Henning Müller
- University of Applied Sciences Western Switzerland (HES-SO), Rue du Technopole 3, Sierre 3960, Switzerland; Medical Faculty, University of Geneva, Rue Gabrielle-Perret-Gentil 4, Geneva 1211, Switzerland
- Sinan Onogur
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, Heidelberg 69120, Germany
- Julio Saez-Rodriguez
- Institute of Computational Biomedicine, Heidelberg University, Faculty of Medicine, Im Neuenheimer Feld 267, Heidelberg 69120, Germany; Heidelberg University Hospital, Im Neuenheimer Feld 267, Heidelberg 69120, Germany; Joint Research Centre for Computational Biomedicine, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Faculty of Medicine, Aachen 52074, Germany
- Bram van Ginneken
- Department of Radiology and Nuclear Medicine, Medical Image Analysis, Radboud University Center, Nijmegen 6525 GA, The Netherlands
- Annette Kopp-Schneider
- Division of Biostatistics, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 581, Heidelberg 69120, Germany
- Bennett A Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN 37235-1679, USA
35
Peralta M, Bui QA, Ackaouy A, Martin T, Gilmore G, Haegelen C, Sauleau P, Baxter JSH, Jannin P. SepaConvNet for Localizing the Subthalamic Nucleus Using One Second Micro-electrode Recordings. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:888-893. [PMID: 33018127] [DOI: 10.1109/embc44109.2020.9175294]
Abstract
Micro-electrode recording (MER) is a powerful way of localizing target structures during neurosurgical procedures such as the implantation of deep brain stimulation electrodes, a common treatment for Parkinson's disease and other neurological disorders. While MER provides adjunctive information to guidance assisted by pre-operative imaging, it is not unanimously used in the operating room. The lack of standard use of MER may be due in part to its long duration, which can lead to complications during the operation, or to the high degree of expertise required for its interpretation. Over the past decade, various approaches to automating MER analysis for target localization have been proposed, mainly focused on feature engineering. While the accuracies obtained are acceptable in certain configurations, one issue with handcrafted MER features is that they do not necessarily capture more subtle differences in MER that could be detected auditorily by an expert neurophysiologist. In this paper, we propose and validate a deep learning-based pipeline for subthalamic nucleus (STN) localization from micro-electrode recordings, motivated by the human auditory system. Our proposed convolutional neural network (CNN), referred to as SepaConvNet, shows improved accuracy over two comparative networks for locating the STN from one-second MER samples.
36
Peralta M, Baxter JSH, Khan AR, Haegelen C, Jannin P. Striatal shape alteration as a staging biomarker for Parkinson's Disease. Neuroimage Clin 2020; 27:102272. [PMID: 32473544] [PMCID: PMC7260673] [DOI: 10.1016/j.nicl.2020.102272]
Abstract
Parkinson's Disease provokes alterations of subcortical deep gray matter, leading to subtle changes in the shape of several subcortical structures even before the manifestation of motor and non-motor clinical symptoms. We used an automated registration and segmentation pipeline to measure this structural alteration in one early and one advanced Parkinson's Disease (PD) cohort, one prodromal-stage cohort and one healthy control cohort. These structural alterations were then passed to a machine learning pipeline to classify the populations. Our workflow is able to distinguish different stages of PD based solely on shape analysis of the bilateral caudate nucleus and putamen, with balanced accuracies in the range of 59% to 85%. Furthermore, we compared the contribution of each of these subcortical structures and the performance of different classifiers on this task, thus quantifying the informativeness of striatal shape alteration as a staging biomarker for PD.
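Balanced accuracy, the figure of merit reported above, is the mean of the per-class recalls; unlike plain accuracy, it is not inflated when one disease stage dominates the sample. A minimal implementation matching the standard definition (not code from the study):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls: for each true class c, the fraction of
    class-c samples predicted as c, averaged over classes."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))
```

For example, with 90 controls and 10 patients, a classifier that always predicts "control" scores 90% plain accuracy but only 50% balanced accuracy, which is why the 59-85% range quoted above is meaningful despite unequal cohort sizes.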
Affiliation(s)
- Maxime Peralta
- INSERM, LTSI - UMR 1099, University of Rennes, Rennes, France
- John S H Baxter
- INSERM, LTSI - UMR 1099, University of Rennes, Rennes, France
- Ali R Khan
- Imaging Research Laboratories, Robarts Research Institute, Western University, London, Canada
- Claire Haegelen
- INSERM, LTSI - UMR 1099, University of Rennes, Rennes, France; CHU Rennes, Rennes, France
- Pierre Jannin
- INSERM, LTSI - UMR 1099, University of Rennes, Rennes, France.
37
Bretonnier M, Michinov E, Le Pabic E, Hénaux PL, Jannin P, Morandi X, Riffaud L. Impact of the complexity of surgical procedures and intraoperative interruptions on neurosurgical team workload. Neurochirurgie 2020; 66:203-211. [PMID: 32416100] [DOI: 10.1016/j.neuchi.2020.02.003]
Abstract
BACKGROUND Neurosurgical teams are exposed to various stressors: complexity of surgical procedures, environment, time pressure and interruptions all contribute to increasing the perceived workload. OBJECTIVE This study aimed to evaluate the impact of interruptions and surgical complexity on neurosurgical team workload. METHODS A prospective observational study was conducted on thirty surgical procedures of graduated complexity recorded in our Department of Neurosurgery. A scale was created and used by neurosurgeons to evaluate the perceived complexity of the surgical procedure. Interruptions and their severity were noted. The workloads of the neurosurgeon, surgical assistant, scrub nurse and circulating nurse were measured with the Surgery Task Load Index (SURG-TLX) at the end of the procedure. RESULTS A mean of 24.6 interruptions per hour was recorded. The mean interference level of the interruptions was 3.5/7. Mean surgical complexity was 4.3/10. Mean sterile team workload was 43.4/100. The multiple linear regression model showed that sterile team workload increased with surgical complexity (β=6.692, P=.0002) but decreased as the number of interruptions per hour increased (β=-0.855, P=.027). Neurosurgeon and surgical assistant workload increased with surgical complexity (β=11.53, P<0.0001 and β=7.42, P=0.0007, respectively). Scrub nurse workload decreased as the number of interruptions per hour increased (β=-1.11, P=.026). CONCLUSION Our study suggests positive effects of some interruptions during elective neurosurgical procedures with strong team familiarity.
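The β coefficients quoted above come from a multiple linear regression of workload on surgical complexity and interruption rate. The ordinary-least-squares sketch below illustrates the form of such a model; the function name and variables are illustrative, not the study's analysis code.

```python
import numpy as np

def fit_workload_model(complexity, interruptions, workload):
    """OLS fit of: workload ~ intercept + b1*complexity + b2*interruptions.
    Returns the coefficient vector (intercept, b1, b2)."""
    complexity = np.asarray(complexity, dtype=float)
    interruptions = np.asarray(interruptions, dtype=float)
    workload = np.asarray(workload, dtype=float)
    # design matrix with an intercept column
    X = np.column_stack([np.ones_like(complexity), complexity, interruptions])
    coef, *_ = np.linalg.lstsq(X, workload, rcond=None)
    return coef
```

In the study's result, the sign of b1 is positive (workload rises with complexity) while b2 is negative (workload falls as interruptions per hour increase), which is the surprising finding discussed in the conclusion.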
Affiliation(s)
- M Bretonnier
- Univ Rennes, INSERM, LTSI - UMR 1099, 35000 Rennes, France; Department of Neurosurgery, Pontchaillou University Hospital, 2 Rue Henri Le Guilloux, 35033 Rennes Cedex 9, France.
- E Michinov
- Univ Rennes, LP3C (Laboratoire de Psychologie : Cognition, Comportement, Communication) - EA 1285, 35000 Rennes, France
- E Le Pabic
- Clinical Data Center, Pontchaillou University Hospital, Rennes, France; INSERM, CIC 1414, 35000 Rennes, France
- P-L Hénaux
- Univ Rennes, INSERM, LTSI - UMR 1099, 35000 Rennes, France; Department of Neurosurgery, Pontchaillou University Hospital, 2 Rue Henri Le Guilloux, 35033 Rennes Cedex 9, France
- P Jannin
- Univ Rennes, INSERM, LTSI - UMR 1099, 35000 Rennes, France
- X Morandi
- Univ Rennes, INSERM, LTSI - UMR 1099, 35000 Rennes, France; Department of Neurosurgery, Pontchaillou University Hospital, 2 Rue Henri Le Guilloux, 35033 Rennes Cedex 9, France
- L Riffaud
- Univ Rennes, INSERM, LTSI - UMR 1099, 35000 Rennes, France; Department of Neurosurgery, Pontchaillou University Hospital, 2 Rue Henri Le Guilloux, 35033 Rennes Cedex 9, France
38
Derathé A, Reche F, Moreau-Gaudry A, Jannin P, Gibaud B, Voros S. Predicting the quality of surgical exposure using spatial and procedural features from laparoscopic videos. Int J Comput Assist Radiol Surg 2019; 15:59-67. [PMID: 31673963] [DOI: 10.1007/s11548-019-02072-3]
Abstract
PURPOSE Evaluating the quality of surgical procedures is a major concern in minimally invasive surgery. We propose a bottom-up approach based on the study of Sleeve Gastrectomy procedures, for which we analyze what we assume to be an important indicator of surgical expertise: the exposure of the surgical scene. We first aim to predict this indicator with features extracted from the laparoscopic video feed, and second to analyze how the extracted features describing the surgical practice influence this indicator. METHODS Twenty-nine patients underwent Sleeve Gastrectomy performed by two confirmed surgeons in a monocentric study. Features were extracted from spatial and procedural annotations of the videos, and an expert surgeon evaluated the quality of the surgical exposure at specific instants. The features were used as input to a classifier (linear discriminant analysis followed by a support vector machine) to predict the expertise indicator. Features selected in different configurations of the algorithm were compared to understand their relationship with the surgical exposure and the surgeon's practice. RESULTS The optimized algorithm giving the best performance used spatial features as input ([Formula: see text]). It also predicted the two classes of the indicator equally well, despite their strong imbalance. Analyzing the selection of input features in the algorithm allowed a comparison of different configurations and showed a link between the surgical exposure and the surgeon's practice. CONCLUSION This preliminary study validates that a prediction of the surgical exposure from spatial features is possible. The analysis of the clusters of features selected by the algorithm also shows encouraging results and potential clinical interpretations.
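The classifier pipeline described (linear discriminant analysis followed by a support vector machine) begins with a Fisher discriminant projection of the features. The numpy sketch below implements the two-class LDA step only, substituting a simple midpoint threshold for the SVM stage to keep the example short; it is not the authors' implementation, and the regularization term is an assumption for numerical stability.

```python
import numpy as np

def fisher_lda_direction(X, y):
    """Two-class Fisher LDA: direction w maximizing between-class scatter
    relative to within-class scatter, w = Sw^{-1} (m1 - m0)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    # small ridge term (assumed) guards against a singular Sw
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    # midpoint of the projected class means stands in for the SVM boundary
    thresh = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, thresh

def lda_predict(X, w, thresh):
    """Classify by which side of the threshold the projection falls on."""
    return (X @ w > thresh).astype(int)
```

In the paper this projection feeds an SVM, which can learn a better boundary than the midpoint rule used here, particularly under the strong class imbalance the authors report.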
Affiliation(s)
- Arthur Derathé
- Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC-IMAG, 38000, Grenoble, France
- Fabian Reche
- Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC-IMAG, 38000, Grenoble, France; Department of Digestive Surgery, CHU de Grenoble, 38000, Grenoble, France
- Alexandre Moreau-Gaudry
- Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC-IMAG, 38000, Grenoble, France; Clinical Investigation Centre - Innovative Technology, INSERM & CHUGA & UGA, 38000, Grenoble, France
- Pierre Jannin
- LTSI - UMR_S 1099, Université de Rennes, 35000, Rennes, France; INSERM, 35000, Rennes, France
- Bernard Gibaud
- LTSI - UMR_S 1099, Université de Rennes, 35000, Rennes, France; INSERM, 35000, Rennes, France
- Sandrine Voros
- Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC-IMAG, 38000, Grenoble, France.
39
Hénaux PL, Jannin P, Riffaud L. Nontechnical Skills in Neurosurgery: A Systematic Review of the Literature. World Neurosurg 2019; 130:e726-e736. [DOI: 10.1016/j.wneu.2019.06.204]
40
Duprez J, Houvenaghel JF, Dondaine T, Péron J, Haegelen C, Drapier S, Modolo J, Jannin P, Vérin M, Sauleau P. Subthalamic nucleus local field potentials recordings reveal subtle effects of promised reward during conflict resolution in Parkinson's disease. Neuroimage 2019; 197:232-242. [DOI: 10.1016/j.neuroimage.2019.04.071]
41
Bracq MS, Michinov E, Arnaldi B, Caillaud B, Gibaud B, Gouranton V, Jannin P. Learning procedural skills with a virtual reality simulator: An acceptability study. Nurse Educ Today 2019; 79:153-160. [PMID: 31132727] [DOI: 10.1016/j.nedt.2019.05.026]
Abstract
BACKGROUND Virtual Reality (VR) simulation has recently been developed and has improved surgical training. Most VR simulators focus on learning technical skills and few on procedural skills. Studies that evaluated VR simulators focused on feasibility, reliability or easiness of use, but few of them used a specific acceptability measurement tool. OBJECTIVES The aim of the study was to assess acceptability and usability of a new VR simulator for procedural skill training among scrub nurses, based on the Unified Theory of Acceptance and Use of Technology (UTAUT) model. PARTICIPANTS The simulator training system was tested with a convenience sample of 16 non-expert users and 13 expert scrub nurses from the neurosurgery department of a French University Hospital. METHODS The scenario was designed to train scrub nurses in the preparation of the instrumentation table for a craniotomy in the operating room (OR). RESULTS Acceptability of the VR simulator was demonstrated with no significant difference between expert scrub nurses and non-experts. There was no effect of age, gender or expertise. Workload, immersion and simulator sickness were also rated equally by all participants. Most participants stressed its pedagogical interest, fun and realism, but some of them also regretted its lack of visual comfort. CONCLUSION This VR simulator designed to teach surgical procedures can be widely used as a tool in initial or vocational training.
Affiliation(s)
- Marie-Stéphanie Bracq
- Univ Rennes, LP3C (EA 1285), F-35000 Rennes, France; Univ Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France.
- Bernard Gibaud
- Univ Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France.
- Pierre Jannin
- Univ Rennes, Inserm, LTSI - UMR 1099, F-35000 Rennes, France.
42
Ahrweiller K, Houvenaghel JF, Riou A, Drapier S, Sauleau P, Haegelen C, Jannin P, Vérin M, Palard X, Le Jeune F. Postural instability and gait disorders after subthalamic nucleus deep brain stimulation in Parkinson's disease: a PET study. J Neurol 2019; 266:2764-2771. [PMID: 31350641] [DOI: 10.1007/s00415-019-09482-y]
Abstract
INTRODUCTION Patients with Parkinson's disease sometimes report postural instability and gait disorders (PIGD) after subthalamic nucleus deep brain stimulation (STN-DBS). Whether this is the direct consequence of DBS or the result of natural disease progression is still subject to debate. OBJECTIVE To compare changes in brain metabolism during STN-DBS between patients with and without PIGD after surgery. METHODS We extracted consecutive patients from a database where all Rennes Hospital patients undergoing STN-DBS are registered, with regular prospective updates of their clinical data. Patients were divided into two groups (PIGD and No PIGD) according to changes after surgery, as measured with a composite score based on selected Unified Parkinson's Disease Rating Scale items. All patients underwent positron emission tomography with 18[F]-fluorodeoxyglucose 3 months before and after surgery. We ran an ANOVA with two factors (group: PIGD vs. No PIGD; and phase: preoperative vs. postoperative) on SPM8 to compare changes in brain metabolism between the two groups. RESULTS Participants were 56 patients, including 10 in the PIGD group. The two groups had similar baseline (i.e., before surgery) characteristics. We found two clusters of increased metabolism in the PIGD group relative to the No PIGD group: dorsal midbrain/pons, including the mesencephalic locomotor region and pontine reticular formation, and right motor cerebellum. CONCLUSION We found different metabolic changes during STN-DBS among patients with PIGD, concerning brain regions already known to be involved in gait disorders in Parkinson's disease, suggesting that DBS is responsible for the appearance of PIGD.
Affiliation(s)
- Kévin Ahrweiller
- "Behavior and Basal Ganglia" Research Unit, University of Rennes 1, Rennes, France; Department of Neurology, University Hospital of Rennes, Rennes, France
- J F Houvenaghel
- "Behavior and Basal Ganglia" Research Unit, University of Rennes 1, Rennes, France; Department of Neurology, University Hospital of Rennes, Rennes, France
- A Riou
- "Behavior and Basal Ganglia" Research Unit, University of Rennes 1, Rennes, France; Department of Neurology, University Hospital of Rennes, Rennes, France
- S Drapier
- "Behavior and Basal Ganglia" Research Unit, University of Rennes 1, Rennes, France; Department of Neurology, University Hospital of Rennes, Rennes, France
- P Sauleau
- "Behavior and Basal Ganglia" Research Unit, University of Rennes 1, Rennes, France; Department of Neurophysiology, University Hospital of Rennes, Rennes, France
- C Haegelen
- Department of Neurosurgery, University Hospital of Rennes, Rennes, France; "MediCIS" Laboratory, INSERM/University of Rennes 1, Rennes, France
- P Jannin
- "MediCIS" Laboratory, INSERM/University of Rennes 1, Rennes, France
- M Vérin
- "Behavior and Basal Ganglia" Research Unit, University of Rennes 1, Rennes, France; Department of Neurology, University Hospital of Rennes, Rennes, France
- X Palard
- "Behavior and Basal Ganglia" Research Unit, University of Rennes 1, Rennes, France; Department of Nuclear Medicine, Eugene Marquis Hospital Centre, University Hospital of Rennes, Rennes, France
- F Le Jeune
- "Behavior and Basal Ganglia" Research Unit, University of Rennes 1, Rennes, France; Department of Nuclear Medicine, Eugene Marquis Hospital Centre, University Hospital of Rennes, Rennes, France
43
Huaulmé A, Despinoy F, Perez SAH, Harada K, Mitsuishi M, Jannin P. Automatic annotation of surgical activities using virtual reality environments. Int J Comput Assist Radiol Surg 2019; 14:1663-1671. [PMID: 31177422] [DOI: 10.1007/s11548-019-02008-x]
Abstract
PURPOSE Annotation of surgical activities is becoming increasingly important for many recent applications, such as surgical workflow analysis, surgical situation awareness, and the design of the operating room of the future, especially for training machine learning methods to develop intelligent assistance. Currently, annotation is mostly performed by observers with a medical background and is incredibly costly and time-consuming, creating a major bottleneck for the above-mentioned technologies. In this paper, we propose a way to eliminate, or at least limit, human intervention in the annotation process. METHODS Meaningful information about interactions between objects is inherently available in virtual reality environments. We propose a strategy to automatically convert this information into annotations, producing individual surgical process models as output. VALIDATION We implemented our approach in a peg-transfer task simulator and compared it to manual annotations. To assess the impact of our contribution, we studied both intra- and inter-observer variability. RESULTS AND CONCLUSION On average, manual annotation took more than 12 min per 1 min of video to achieve low-level physical activity annotation, whereas automatic annotation is achieved in less than a second for the same video period. We also demonstrated that manual annotation introduced mistakes as well as intra- and inter-observer variability, which our method suppresses thanks to its high precision and reproducibility.
Affiliation(s)
- Arnaud Huaulmé
- INSERM, LTSI - UMR 1099, Univ Rennes, 35000, Rennes, France
- Saul Alexis Heredia Perez
- Department of Mechanical Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Kanako Harada
- Department of Mechanical Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Mamoru Mitsuishi
- Department of Mechanical Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Pierre Jannin
- INSERM, LTSI - UMR 1099, Univ Rennes, 35000, Rennes, France
44
Descoteaux M, Maier-Hein L, Franz A, Jannin P, Collins DL, Duchesne S. Special Issue on MICCAI 2017. Med Image Anal 2019; 48:259. [PMID: 30072019] [DOI: 10.1016/j.media.2018.07.005]
45
Kobayashi S, Cho B, Huaulmé A, Tatsugami K, Honda H, Jannin P, Hashizume M, Eto M. Assessment of surgical skills by using surgical navigation in robot-assisted partial nephrectomy. Int J Comput Assist Radiol Surg 2019; 14:1449-1459. [PMID: 31119486] [DOI: 10.1007/s11548-019-01980-8]
Abstract
PURPOSE To assess surgical skills in robot-assisted partial nephrectomy (RAPN) with and without surgical navigation (SN). METHODS We employed an SN system that synchronizes the real-time endoscopic image with a virtual reality three-dimensional (3D) model for RAPN, and evaluated the skills of two expert surgeons with regard to the identification and dissection of the renal artery (non-SN group, n = 21 [first surgeon n = 9, second surgeon n = 12]; SN group, n = 32 [first surgeon n = 11, second surgeon n = 21]). We converted all movements of the robotic forceps during RAPN into a dedicated vocabulary. Using RAPN videos, we classified all movements of the robotic forceps into direct actions (defined as movements of the robotic forceps that directly affect tissues) and connected motions (defined as movements that link actions). In addition, we analyzed the frequency, duration, and occupancy rate of the connected motions. RESULTS In the SN group, the R.E.N.A.L. nephrometry score was lower (7 vs. 6, P = 0.019) and the time to identify and dissect the renal artery was significantly shorter (16 vs. 9 min, P = 0.008). SN significantly improved the inefficient "insert," "pull," and "rotate" connected motions, and it significantly improved the frequency, duration, and occupancy rate of the connected motions of the first surgeon's right hand and of both of the second surgeon's hands. The improvements in connected motions were positively associated with SN for both surgeons. CONCLUSION This is the first study to investigate SN for nephron-sparing surgery. SN with 3D models might help improve the connected motions of expert surgeons to ensure efficient RAPN.
Affiliation(s)
- Satoshi Kobayashi
- Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan; Department of Urology, Kyushu University, Fukuoka, Japan
- Byunghyun Cho
- Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
- Arnaud Huaulmé
- Faculty of Medicine, National Institute of Health and Scientific Research, University of Rennes 1, Rennes, France
- Hiroshi Honda
- Department of Radiology, Kyushu University, Fukuoka, Japan
- Pierre Jannin
- Faculty of Medicine, National Institute of Health and Scientific Research, University of Rennes 1, Rennes, France
- Makoto Hashizume
- Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
- Masatoshi Eto
- Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan; Department of Urology, Kyushu University, Fukuoka, Japan
46
Maier-Hein L, Eisenmann M, Reinke A, Onogur S, Stankovic M, Scholz P, Arbel T, Bogunovic H, Bradley AP, Carass A, Feldmann C, Frangi AF, Full PM, van Ginneken B, Hanbury A, Honauer K, Kozubek M, Landman BA, März K, Maier O, Maier-Hein K, Menze BH, Müller H, Neher PF, Niessen W, Rajpoot N, Sharp GC, Sirinukunwattana K, Speidel S, Stock C, Stoyanov D, Taha AA, van der Sommen F, Wang CW, Weber MA, Zheng G, Jannin P, Kopp-Schneider A. Why rankings of biomedical image analysis competitions should be interpreted with care. Nat Commun 2018; 9:5217. [PMID: 30523263] [PMCID: PMC6284017] [DOI: 10.1038/s41467-018-07619-7]
Abstract
International challenges have become the standard for the validation of biomedical image analysis methods. Given their scientific impact, it is surprising that a critical analysis of common practices related to the organization of challenges has not yet been performed. In this paper, we present a comprehensive analysis of biomedical image analysis challenges conducted up to now. We demonstrate the importance of challenges and show that the lack of quality control has critical consequences. First, reproducibility and interpretation of the results are often hampered, as only a fraction of the relevant information is typically provided. Second, the rank of an algorithm is generally not robust to a number of variables, such as the test data used for validation, the ranking scheme applied, and the observers who make the reference annotations. To overcome these problems, we recommend best practice guidelines and define open research questions to be addressed in the future.
Affiliation(s)
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Matthias Eisenmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Annika Reinke
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Sinan Onogur
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Marko Stankovic
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Patrick Scholz
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Tal Arbel
- Centre for Intelligent Machines, McGill University, Montreal, QC, H3A0G4, Canada
- Hrvoje Bogunovic
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University Vienna, 1090, Vienna, Austria
- Andrew P Bradley
- Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD, 4001, Australia
- Aaron Carass
- Department of Electrical and Computer Engineering, Department of Computer Science, Johns Hopkins University, Baltimore, MD, 21218, USA
- Carolin Feldmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Alejandro F Frangi
- CISTIB - Center for Computational Imaging & Simulation Technologies in Biomedicine, The University of Leeds, Leeds, Yorkshire, LS2 9JT, UK
- Peter M Full
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Bram van Ginneken
- Department of Radiology and Nuclear Medicine, Medical Image Analysis, Radboud University Center, 6525 GA, Nijmegen, The Netherlands
- Allan Hanbury
- Institute of Information Systems Engineering, TU Wien, 1040, Vienna, Austria; Complexity Science Hub Vienna, 1080, Vienna, Austria
- Katrin Honauer
- Heidelberg Collaboratory for Image Processing (HCI), Heidelberg University, 69120, Heidelberg, Germany
- Michal Kozubek
- Centre for Biomedical Image Analysis, Masaryk University, 60200, Brno, Czech Republic
- Bennett A Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN, 37235-1679, USA
- Keno März
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Oskar Maier
- Institute of Medical Informatics, Universität zu Lübeck, 23562, Lübeck, Germany
- Klaus Maier-Hein
- Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Bjoern H Menze
- Institute for Advanced Studies, Department of Informatics, Technical University of Munich, 80333, Munich, Germany
- Henning Müller
- Information System Institute, HES-SO, Sierre, 3960, Switzerland
- Peter F Neher
- Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Wiro Niessen
- Departments of Radiology, Nuclear Medicine and Medical Informatics, Erasmus MC, 3015 GD, Rotterdam, The Netherlands
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK
- Gregory C Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, 02114, USA
- Stefanie Speidel
- Division of Translational Surgical Oncology (TCO), National Center for Tumor Diseases Dresden, 01307, Dresden, Germany
- Christian Stock
- Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Danail Stoyanov
- Centre for Medical Image Computing (CMIC) & Department of Computer Science, University College London, London, W1W 7TS, UK
- Abdel Aziz Taha
- Data Science Studio, Research Studios Austria FG, 1090, Vienna, Austria
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven, The Netherlands
- Ching-Wei Wang
- AIExplore, NTUST Center of Computer Vision and Medical Imaging, Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, 106, Taiwan
- Marc-André Weber
- Institute of Diagnostic and Interventional Radiology, University Medical Center Rostock, 18051, Rostock, Germany
- Guoyan Zheng
- Institute for Surgical Technology and Biomechanics, University of Bern, Bern, 3014, Switzerland
- Pierre Jannin
- Univ Rennes, Inserm, LTSI (Laboratoire Traitement du Signal et de l'Image) - UMR_S 1099, Rennes, 35043, Cedex, France
- Annette Kopp-Schneider
- Division of Biostatistics, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
47
Forestier G, Petitjean F, Senin P, Despinoy F, Huaulmé A, Fawaz HI, Weber J, Idoumghar L, Muller PA, Jannin P. Surgical motion analysis using discriminative interpretable patterns. Artif Intell Med 2018; 91:3-11. [PMID: 30172445] [DOI: 10.1016/j.artmed.2018.08.002]
Abstract
OBJECTIVE The analysis of surgical motion has received growing interest with the development of devices allowing its automatic capture. In this context, the use of advanced surgical training systems makes automated assessment of surgical trainees possible. Automatic and quantitative evaluation of surgical skills is a very important step in improving surgical patient care. MATERIAL AND METHOD In this paper, we present an approach for the discovery and ranking of discriminative and interpretable patterns of surgical practice from recordings of surgical motions. A pattern is defined as a series of actions or events in the kinematic data that together are distinctive of a specific gesture or skill level. Our approach is based on the decomposition of continuous kinematic data into a set of overlapping gestures represented as strings (bag of words), for which we compute a comparative numerical statistic (tf-idf) enabling discriminative gesture discovery via relative occurrence frequency. RESULTS We carried out experiments on three surgical motion datasets. The results show that the patterns identified by the proposed method can be used to accurately classify individual gestures, skill levels and surgical interfaces. We also show how the patterns provide detailed feedback on trainee skill assessment. CONCLUSIONS The proposed approach is an interesting addition to existing learning tools for surgery, as it provides a way to obtain feedback on which parts of an exercise were used to classify the attempt as correct or incorrect.
Affiliation(s)
- Germain Forestier
- IRIMAS, Université de Haute-Alsace, Mulhouse, France; Faculty of Information Technology, Monash University, Melbourne, Australia
- François Petitjean
- Faculty of Information Technology, Monash University, Melbourne, Australia
- Pavel Senin
- Los Alamos National Laboratory, University of Hawai'i at Mānoa, United States
- Fabien Despinoy
- Univ Rennes, Inserm, LTSI - UMR_S 1099, F35000 Rennes, France
- Arnaud Huaulmé
- Univ Rennes, Inserm, LTSI - UMR_S 1099, F35000 Rennes, France
- Pierre Jannin
- Univ Rennes, Inserm, LTSI - UMR_S 1099, F35000 Rennes, France
48
Descoteaux M, Maier-Hein L, Franz A, Jannin P, Collins DL, Duchesne S. Guest editorial for the IJCARS special issue on MICCAI 2017. Int J Comput Assist Radiol Surg 2018; 13:1309-1310. [PMID: 30120692] [DOI: 10.1007/s11548-018-1847-y]
Affiliation(s)
- Maxime Descoteaux
- Computer Science, Université de Sherbrooke, 2500 Boul. Université, Sherbrooke, QC, J1K2R1, Canada
- Lena Maier-Hein
- Computer-Assisted Medical Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 581, 69120, Heidelberg, Germany
- Alfred Franz
- Department of Computer Science, Ulm University of Applied Sciences, Albert-Einstein-Allee 55, 89081, Ulm, Germany
- Pierre Jannin
- Modeling Surgical Knowledge and Processes, U1099 INSERM Rennes, Faculté de Médecine, Université de Rennes 1, CS34317, 35043, Rennes cedex, France
- Louis D Collins
- Neurology and Neurosurgery, Biomedical Engineering, McGill University, 3801 University Street, Montreal, QC, H3A 2B4, Canada
- Simon Duchesne
- Radiology Department, Université Laval, 2601 Chemin de la Canardière, Québec, QC, G1J 2G3, Canada
49
Baumgarten C, Haegelen C, Zhao Y, Sauleau P, Jannin P. Data-Driven Prediction of the Therapeutic Window during Subthalamic Deep Brain Stimulation Surgery. Stereotact Funct Neurosurg 2018; 96:142-150. [DOI: 10.1159/000488683]
50
Haegelen C, Baumgarten C, Houvenaghel JF, Zhao Y, Péron J, Drapier S, Jannin P, Morandi X. Functional atlases for analysis of motor and neuropsychological outcomes after medial globus pallidus and subthalamic stimulation. PLoS One 2018; 13:e0200262. [PMID: 30005077] [PMCID: PMC6044526] [DOI: 10.1371/journal.pone.0200262]
Abstract
Anatomical atlases have been developed to improve the targeting of the basal ganglia in deep brain stimulation. However, anatomy alone cannot predict the functional outcome of this surgery. Deep brain stimulation is often a compromise between several functional outcomes: motor, fluency and neuropsychological outcomes in particular. In this study, we developed anatomo-clinical atlases for the targeting of subthalamic and medial globus pallidus deep brain stimulation. The activated electrode coordinates of 42 patients implanted in the subthalamic nucleus and 29 patients implanted in the medial globus pallidus were studied. The atlases were built from a representation of the volume of tissue theoretically activated by the stimulation. The UPDRS score was used to represent the motor outcome; the Stroop test and semantic and phonemic fluencies were also represented. For the subthalamic nucleus, the best motor outcomes were obtained when the superolateral part of the nucleus was stimulated, whereas semantic fluency was impaired in this same region. For the medial globus pallidus, the best outcomes were obtained when the posteroventral part of the nucleus was stimulated, whereas phonemic fluency was impaired in this same region. There was no significant neuropsychological impairment. We propose new anatomo-clinical atlases for visualizing the motor and neuropsychological consequences, at 6 months, of subthalamic nucleus and pallidal stimulation in patients with Parkinson's disease.
Affiliation(s)
- Claire Haegelen
- Department of Neurosurgery, CHU Pontchaillou, Rennes, France; INSERM, LTSI U1099, Faculté de Médecine, Rennes, France; University of Rennes I, Rennes, France
- Clément Baumgarten
- INSERM, LTSI U1099, Faculté de Médecine, Rennes, France; University of Rennes I, Rennes, France
- Jean-François Houvenaghel
- Department of Neurology, CHU Pontchaillou, Rennes, France; Behavior and Basal Ganglia host team 4712, University of Rennes 1, Rennes, France
- Yulong Zhao
- INSERM, LTSI U1099, Faculté de Médecine, Rennes, France; University of Rennes I, Rennes, France
- Julie Péron
- Swiss Centre for Affective Sciences, Geneva, Switzerland
- Sophie Drapier
- Department of Neurology, CHU Pontchaillou, Rennes, France; Behavior and Basal Ganglia host team 4712, University of Rennes 1, Rennes, France
- Pierre Jannin
- INSERM, LTSI U1099, Faculté de Médecine, Rennes, France; University of Rennes I, Rennes, France
- Xavier Morandi
- Department of Neurosurgery, CHU Pontchaillou, Rennes, France; INSERM, LTSI U1099, Faculté de Médecine, Rennes, France; University of Rennes I, Rennes, France