1
Li J, Zhou Z, Yang J, Pepe A, Gsaxner C, Luijten G, Qu C, Zhang T, Chen X, Li W, Wodzinski M, Friedrich P, Xie K, Jin Y, Ambigapathy N, Nasca E, Solak N, Melito GM, Vu VD, Memon AR, Schlachta C, De Ribaupierre S, Patel R, Eagleson R, Chen X, Mächler H, Kirschke JS, de la Rosa E, Christ PF, Li HB, Ellis DG, Aizenberg MR, Gatidis S, Küstner T, Shusharina N, Heller N, Andrearczyk V, Depeursinge A, Hatt M, Sekuboyina A, Löffler MT, Liebl H, Dorent R, Vercauteren T, Shapey J, Kujawa A, Cornelissen S, Langenhuizen P, Ben-Hamadou A, Rekik A, Pujades S, Boyer E, Bolelli F, Grana C, Lumetti L, Salehi H, Ma J, Zhang Y, Gharleghi R, Beier S, Sowmya A, Garza-Villarreal EA, Balducci T, Angeles-Valdez D, Souza R, Rittner L, Frayne R, Ji Y, Ferrari V, Chatterjee S, Dubost F, Schreiber S, Mattern H, Speck O, Haehn D, John C, Nürnberger A, Pedrosa J, Ferreira C, Aresta G, Cunha A, Campilho A, Suter Y, Garcia J, Lalande A, Vandenbossche V, Van Oevelen A, Duquesne K, Mekhzoum H, Vandemeulebroucke J, Audenaert E, Krebs C, van Leeuwen T, Vereecke E, Heidemeyer H, Röhrig R, Hölzle F, Badeli V, Krieger K, Gunzer M, Chen J, van Meegdenburg T, Dada A, Balzer M, Fragemann J, Jonske F, Rempe M, Malorodov S, Bahnsen FH, Seibold C, Jaus A, Marinov Z, Jaeger PF, Stiefelhagen R, Santos AS, Lindo M, Ferreira A, Alves V, Kamp M, Abourayya A, Nensa F, Hörst F, Brehmer A, Heine L, Hanusrichter Y, Weßling M, Dudda M, Podleska LE, Fink MA, Keyl J, Tserpes K, Kim MS, Elhabian S, Lamecker H, Zukić D, Paniagua B, Wachinger C, Urschler M, Duong L, Wasserthal J, Hoyer PF, Basu O, Maal T, Witjes MJH, Schiele G, Chang TC, Ahmadi SA, Luo P, Menze B, Reyes M, Deserno TM, Davatzikos C, Puladi B, Fua P, Yuille AL, Kleesiek J, Egger J. MedShapeNet - a large-scale dataset of 3D medical shapes for computer vision. BIOMED ENG-BIOMED TE 2025; 70:71-90. 
[PMID: 39733351 DOI: 10.1515/bmt-2024-0396] [Received: 08/22/2024] [Accepted: 09/21/2024] [Indexed: 12/31/2024]
Abstract
OBJECTIVES Shape is commonly used to describe objects. State-of-the-art algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is evident from the growing popularity of ShapeNet (51,300 models) and Princeton ModelNet (127,915 models). However, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments has been missing. METHODS We present MedShapeNet to translate data-driven vision algorithms to medical applications and to adapt state-of-the-art vision algorithms to medical problems. As a unique feature, the majority of shapes are modeled directly on the imaging data of real patients. We present use cases in brain tumor classification, skull reconstruction, multi-class anatomy completion, education, and 3D printing. RESULTS To date, MedShapeNet includes 23 datasets with more than 100,000 shapes paired with annotations (ground truth). The data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks, as well as various applications in virtual, augmented, or mixed reality and 3D printing. CONCLUSIONS MedShapeNet contains medical shapes of anatomy and surgical instruments and will continue to collect data for benchmarks and applications. The project page is https://medshapenet.ikim.nrw/.
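The abstract contrasts the shape representations common in computer vision: voxel grids, meshes, and point clouds. As a minimal, self-contained illustration of how these representations relate (not MedShapeNet's actual API), the sketch below samples a point cloud from a toy triangle mesh and bins it into a coarse voxel occupancy grid; the unit-cube mesh and the grid resolution are arbitrary choices for the example.

```python
import numpy as np

def sample_surface(vertices, faces, n=2000, seed=0):
    """Area-weighted uniform sampling of a point cloud from a triangle mesh."""
    rng = np.random.default_rng(seed)
    tri = vertices[faces]                                  # (F, 3, 3)
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    # random barycentric coordinates inside each chosen triangle
    u, v = rng.random((2, n))
    flip = u + v > 1
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    t = tri[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) + v[:, None] * (t[:, 2] - t[:, 0])

def voxelize(points, resolution=8):
    """Bin a point cloud lying in [0,1]^3 into a binary occupancy grid."""
    grid = np.zeros((resolution,) * 3, dtype=bool)
    ijk = np.clip((points * resolution).astype(int), 0, resolution - 1)
    grid[ijk[:, 0], ijk[:, 1], ijk[:, 2]] = True
    return grid

# toy mesh: unit cube (8 vertices, 12 triangles)
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
faces = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5],
                  [0, 4, 5], [0, 5, 1], [2, 3, 7], [2, 7, 6],
                  [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
cloud = sample_surface(verts, faces)
occupancy = voxelize(cloud)
```

Since only the cube's surface is sampled, interior voxels of the grid stay empty; segmentation-derived shapes such as those in MedShapeNet are typically distributed as such surface meshes.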
Affiliation(s)
- Jianning Li
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Zongwei Zhou
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Jiancheng Yang
- Computer Vision Laboratory, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- Antonio Pepe
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Gijs Luijten
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, University Medicine Essen, Essen, Germany
- Chongyu Qu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Tiezheng Zhang
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Xiaoxi Chen
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Wenxuan Li
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Marek Wodzinski
- Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Paul Friedrich
- Center for Medical Image Analysis & Navigation (CIAN), Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
- Kangxian Xie
- Department of Computer Science and Engineering, University at Buffalo, SUNY, NY, 14260, USA
- Yuan Jin
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Research Center for Connected Healthcare Big Data, ZhejiangLab, Hangzhou, Zhejiang, China
- Narmada Ambigapathy
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Enrico Nasca
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Naida Solak
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Gian Marco Melito
- Institute of Mechanics, Graz University of Technology, Graz, Austria
- Viet Duc Vu
- Department of Diagnostic and Interventional Radiology, University Hospital Giessen, Justus-Liebig-University Giessen, Giessen, Germany
- Afaque R Memon
- Department of Mechanical Engineering, Mehran University of Engineering and Technology, Jamshoro, Sindh, Pakistan
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Christopher Schlachta
- Canadian Surgical Technologies & Advanced Robotics (CSTAR), University Hospital, London, Canada
- Sandrine De Ribaupierre
- Canadian Surgical Technologies & Advanced Robotics (CSTAR), University Hospital, London, Canada
- Rajnikant Patel
- Canadian Surgical Technologies & Advanced Robotics (CSTAR), University Hospital, London, Canada
- Roy Eagleson
- Canadian Surgical Technologies & Advanced Robotics (CSTAR), University Hospital, London, Canada
- Xiaojun Chen
- State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Institute of Biomedical Manufacturing and Life Quality Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Heinrich Mächler
- Department of Cardiac Surgery, Medical University Graz, Graz, Austria
- Jan Stefan Kirschke
- Executive Senior Physician, Department of Interventional and Diagnostic Neuroradiology, University Hospital of the Technical University of Munich, Munich, Germany
- Ezequiel de la Rosa
- icometrix, Leuven, Belgium
- Department of Informatics, Technical University of Munich, Garching bei München, Germany
- Hongwei Bran Li
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- David G Ellis
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, USA
- Michele R Aizenberg
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, USA
- Sergios Gatidis
- University Hospital of Tuebingen, Diagnostic and Interventional Radiology, Medical Image and Data Analysis (MIDAS.lab), Tuebingen, Germany
- Thomas Küstner
- University Hospital of Tuebingen, Diagnostic and Interventional Radiology, Medical Image and Data Analysis (MIDAS.lab), Tuebingen, Germany
- Nadya Shusharina
- Division of Radiation Biophysics, Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Vincent Andrearczyk
- Institute of Informatics, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland
- Adrien Depeursinge
- Institute of Informatics, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital (CHUV), Lausanne, Switzerland
- Mathieu Hatt
- LaTIM, INSERM UMR 1101, Univ Brest, Brest, France
- Anjany Sekuboyina
- Department of Informatics, Technical University of Munich, Garching bei München, Germany
- Hans Liebl
- Department of Neuroradiology, Klinikum Rechts der Isar, Munich, Germany
- Reuben Dorent
- King's College London, Strand, London, UK
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Stefan Cornelissen
- Elisabeth-TweeSteden Hospital, Tilburg, Netherlands
- Video Coding & Architectures Research Group, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Patrick Langenhuizen
- Elisabeth-TweeSteden Hospital, Tilburg, Netherlands
- Video Coding & Architectures Research Group, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Achraf Ben-Hamadou
- Centre de Recherche en Numérique de Sfax, Laboratory of Signals, Systems, Artificial Intelligence and Networks, Sfax, Tunisia
- Udini, Aix-en-Provence, France
- Ahmed Rekik
- Centre de Recherche en Numérique de Sfax, Laboratory of Signals, Systems, Artificial Intelligence and Networks, Sfax, Tunisia
- Udini, Aix-en-Provence, France
- Sergi Pujades
- Inria, Université Grenoble Alpes, CNRS, Grenoble, France
- Edmond Boyer
- Inria, Université Grenoble Alpes, CNRS, Grenoble, France
- Federico Bolelli
- "Enzo Ferrari" Department of Engineering, University of Modena and Reggio Emilia, Modena, Italy
- Costantino Grana
- "Enzo Ferrari" Department of Engineering, University of Modena and Reggio Emilia, Modena, Italy
- Luca Lumetti
- "Enzo Ferrari" Department of Engineering, University of Modena and Reggio Emilia, Modena, Italy
- Hamidreza Salehi
- Department of Artificial Intelligence in Medical Sciences, Faculty of Advanced Technologies in Medicine, Iran University of Medical Sciences, Tehran, Iran
- Jun Ma
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON, Canada
- Peter Munk Cardiac Centre, University Health Network, Toronto, ON, Canada
- Vector Institute, Toronto, ON, Canada
- Yao Zhang
- Shanghai AI Laboratory, Shanghai, People's Republic of China
- Ramtin Gharleghi
- School of Mechanical and Manufacturing Engineering, UNSW, Sydney, NSW, Australia
- Susann Beier
- School of Mechanical and Manufacturing Engineering, UNSW, Sydney, NSW, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, UNSW, Sydney, NSW, Australia
- Thania Balducci
- Institute of Neurobiology, Universidad Nacional Autónoma de México Campus Juriquilla, Querétaro, Mexico
- Diego Angeles-Valdez
- Institute of Neurobiology, Universidad Nacional Autónoma de México Campus Juriquilla, Querétaro, Mexico
- Department of Biomedical Sciences of Cells and Systems, Cognitive Neuroscience Center, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Roberto Souza
- Advanced Imaging and Artificial Intelligence Lab, Electrical and Software Engineering Department, The Hotchkiss Brain Institute, University of Calgary, Calgary, Canada
- Leticia Rittner
- Medical Image Computing Lab, School of Electrical and Computer Engineering (FEEC), University of Campinas, Campinas, Brazil
- Richard Frayne
- Radiology and Clinical Neurosciences Departments, The Hotchkiss Brain Institute, University of Calgary, Calgary, Canada
- Seaman Family MR Research Centre, Foothills Medical Center, Calgary, Canada
- Yuanfeng Ji
- University of Hong Kong, Pok Fu Lam, Hong Kong, People's Republic of China
- Vincenzo Ferrari
- Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- EndoCAS Center, Department of Translational Research and of New Surgical and Medical Technologies, University of Pisa, Pisa, Italy
- Soumick Chatterjee
- Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Genomics Research Centre, Human Technopole, Milan, Italy
- Stefanie Schreiber
- German Centre for Neurodegenerative Disease, Magdeburg, Germany
- Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Department of Neurology, Medical Faculty, University Hospital of Magdeburg, Magdeburg, Germany
- Hendrik Mattern
- German Centre for Neurodegenerative Disease, Magdeburg, Germany
- Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Department of Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Oliver Speck
- German Centre for Neurodegenerative Disease, Magdeburg, Germany
- Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Department of Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Daniel Haehn
- University of Massachusetts Boston, Boston, MA, USA
- Andreas Nürnberger
- Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
- João Pedrosa
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Faculty of Engineering, University of Porto (FEUP), Porto, Portugal
- Carlos Ferreira
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Faculty of Engineering, University of Porto (FEUP), Porto, Portugal
- Guilherme Aresta
- Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- António Cunha
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Universidade of Trás-os-Montes and Alto Douro (UTAD), Vila Real, Portugal
- Aurélio Campilho
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Faculty of Engineering, University of Porto (FEUP), Porto, Portugal
- Yannick Suter
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Jose Garcia
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA
- Alain Lalande
- ICMUB Laboratory, Faculty of Medicine, CNRS UMR 6302, University of Burgundy, Dijon, France
- Medical Imaging Department, University Hospital of Dijon, Dijon, France
- Aline Van Oevelen
- Department of Human Structure and Repair, Ghent University, Ghent, Belgium
- Kate Duquesne
- Department of Human Structure and Repair, Ghent University, Ghent, Belgium
- Hamza Mekhzoum
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels, Belgium
- Jef Vandemeulebroucke
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels, Belgium
- Emmanuel Audenaert
- Department of Human Structure and Repair, Ghent University, Ghent, Belgium
- Claudia Krebs
- Department of Cellular and Physiological Sciences, Life Sciences Centre, University of British Columbia, Vancouver, BC, Canada
- Timo van Leeuwen
- Department of Development & Regeneration, KU Leuven Campus Kulak, Kortrijk, Belgium
- Evie Vereecke
- Department of Development & Regeneration, KU Leuven Campus Kulak, Kortrijk, Belgium
- Hauke Heidemeyer
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Rainer Röhrig
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Frank Hölzle
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Vahid Badeli
- Institute of Fundamentals and Theory in Electrical Engineering, Graz University of Technology, Graz, Austria
- Kathrin Krieger
- Leibniz-Institut für Analytische Wissenschaften-ISAS-e.V., Dortmund, Germany
- Matthias Gunzer
- Leibniz-Institut für Analytische Wissenschaften-ISAS-e.V., Dortmund, Germany
- Institute for Experimental Immunology and Imaging, University Hospital, University Duisburg-Essen, Essen, Germany
- Jianxu Chen
- Leibniz-Institut für Analytische Wissenschaften-ISAS-e.V., Dortmund, Germany
- Timo van Meegdenburg
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Faculty of Statistics, Technical University Dortmund, Dortmund, Germany
- Amin Dada
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Miriam Balzer
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Jana Fragemann
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Frederic Jonske
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Moritz Rempe
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Stanislav Malorodov
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Fin H Bahnsen
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Constantin Seibold
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Alexander Jaus
- Computer Vision for Human-Computer Interaction Lab, Department of Informatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Zdravko Marinov
- Computer Vision for Human-Computer Interaction Lab, Department of Informatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Paul F Jaeger
- German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group, Heidelberg, Germany
- Helmholtz Imaging, DKFZ Heidelberg, Heidelberg, Germany
- Rainer Stiefelhagen
- Computer Vision for Human-Computer Interaction Lab, Department of Informatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Ana Sofia Santos
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Center Algoritmi, LASI, University of Minho, Braga, Portugal
- Mariana Lindo
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Center Algoritmi, LASI, University of Minho, Braga, Portugal
- André Ferreira
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Center Algoritmi, LASI, University of Minho, Braga, Portugal
- Victor Alves
- Center Algoritmi, LASI, University of Minho, Braga, Portugal
- Michael Kamp
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Institute for Neuroinformatics, Ruhr University Bochum, Bochum, Germany
- Department of Data Science & AI, Monash University, Clayton, VIC, Australia
- Amr Abourayya
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute for Neuroinformatics, Ruhr University Bochum, Bochum, Germany
- Felix Nensa
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany
- Fabian Hörst
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Alexander Brehmer
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Lukas Heine
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Yannik Hanusrichter
- Department of Tumour Orthopaedics and Revision Arthroplasty, Orthopaedic Hospital Volmarstein, Wetter, Germany
- Center for Musculoskeletal Surgery, University Hospital of Essen, Essen, Germany
- Martin Weßling
- Department of Tumour Orthopaedics and Revision Arthroplasty, Orthopaedic Hospital Volmarstein, Wetter, Germany
- Center for Musculoskeletal Surgery, University Hospital of Essen, Essen, Germany
- Marcel Dudda
- Department of Trauma, Hand and Reconstructive Surgery, University Hospital Essen, Essen, Germany
- Department of Orthopaedics and Trauma Surgery, BG-Klinikum Duisburg, University of Duisburg-Essen, Essen, Germany
- Lars E Podleska
- Department of Tumor Orthopedics and Sarcoma Surgery, University Hospital Essen (AöR), Essen, Germany
- Matthias A Fink
- Clinic for Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg, Germany
- Julius Keyl
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Konstantinos Tserpes
- Department of Informatics and Telematics, Harokopio University of Athens, Tavros, Greece
- Moon-Sung Kim
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Shireen Elhabian
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, USA
- Dženan Zukić
- Medical Computing, Kitware Inc., Carrboro, NC, USA
- Christian Wachinger
- Lab for Artificial Intelligence in Medical Imaging, Department of Radiology, Technical University Munich, Munich, Germany
- Martin Urschler
- Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Graz, Austria
- Luc Duong
- Department of Software and IT Engineering, Ecole de Technologie Superieure, Montreal, Quebec, Canada
- Jakob Wasserthal
- Clinic of Radiology & Nuclear Medicine, University Hospital Basel, Basel, Switzerland
- Peter F Hoyer
- Pediatric Clinic II, University Children's Hospital Essen, University Duisburg-Essen, Essen, Germany
- Oliver Basu
- Pediatric Clinic III, University Children's Hospital Essen, University Duisburg-Essen, Essen, Germany
- Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, University Medicine Essen, Essen, Germany
- Thomas Maal
- Radboudumc 3D-Lab, Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
- Max J H Witjes
- 3D Lab, Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, Groningen, the Netherlands
- Gregor Schiele
- Intelligent Embedded Systems Lab, University of Duisburg-Essen, Bismarckstraße 90, 47057 Duisburg, Germany
- Ping Luo
- University of Hong Kong, Pok Fu Lam, Hong Kong, People's Republic of China
- Bjoern Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Mauricio Reyes
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Department of Radiation Oncology, University Hospital Bern, University of Bern, Bern, Switzerland
- Thomas M Deserno
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Braunschweig, Germany
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics, Penn Neurodegeneration Genomics Center, University of Pennsylvania, Philadelphia, PA, USA; and Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, PA, USA
- Behrus Puladi
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Pascal Fua
- Computer Vision Laboratory, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- Alan L Yuille
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Essen, Germany
- Department of Physics, TU Dortmund University, Dortmund, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Jan Egger
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, University Medicine Essen, Essen, Germany
2
Xu J, Wei Y, Jiang S, Zhou H, Li Y, Chen X. Intelligent surgical planning for automatic reconstruction of orbital blowout fracture using a prior adversarial generative network. Med Image Anal 2025; 99:103332. [PMID: 39321669 DOI: 10.1016/j.media.2024.103332] [Received: 01/25/2024] [Revised: 08/27/2024] [Accepted: 08/29/2024] [Indexed: 09/27/2024]
Abstract
Orbital blowout fracture (OBF) is a disease that can result in herniation of orbital soft tissue, enophthalmos, and even severe visual dysfunction. Given the complex and diverse types of orbital wall fractures, reconstructing the orbital wall presents a significant challenge in OBF repair surgery. Accurate surgical planning is crucial in addressing this issue. However, there is currently a lack of efficient and precise surgical planning methods. Therefore, we propose an intelligent surgical planning method for automatic OBF reconstruction based on a prior adversarial generative network (GAN). Firstly, an automatic generation method of symmetric prior anatomical knowledge (SPAK) based on spatial transformation is proposed to guide the reconstruction of the fractured orbital wall. Secondly, a reconstruction network based on a SPAK-guided GAN is proposed to achieve accurate and automatic reconstruction of the fractured orbital wall. Building upon this, a new surgical planning workflow based on the proposed reconstruction network and 3D Slicer software is developed to simplify the operational steps. Finally, the proposed surgical planning method is successfully applied in OBF repair surgery, verifying its reliability. Experimental results demonstrate that the proposed reconstruction network achieves accurate automatic reconstruction of the orbital wall, with an average DSC of 92.35 ± 2.13% and a 95% Hausdorff distance of 0.59 ± 0.23 mm, markedly outperforming the compared state-of-the-art networks. Additionally, the proposed surgical planning workflow reduces the planning time from an average of 25 min 17.8 s with traditional planning to just 1 min 35.1 s, greatly enhancing planning efficiency. The proposed surgical planning method therefore holds strong promise for application in OBF repair surgery.
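The two evaluation metrics reported here, the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95), can be sketched in a few lines of numpy. This is a brute-force illustration on toy binary volumes (pairwise surface distances, in voxel units), not the authors' evaluation code; production pipelines typically use distance transforms instead.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def surface_points(mask):
    """Mask voxels that touch the background (6-neighbourhood).

    Uses np.roll, which wraps around; fine as long as the object
    does not touch the volume border.
    """
    m = mask.astype(bool)
    interior = m.copy()
    for axis in range(m.ndim):
        for shift in (1, -1):
            interior &= np.roll(m, shift, axis=axis)
    return np.argwhere(m & ~interior)

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance, in voxels."""
    pa, pb = surface_points(a), surface_points(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

# toy example: two 10-voxel cubes, shifted by one voxel along x
a = np.zeros((20, 20, 20), bool); a[5:15, 5:15, 5:15] = True
b = np.zeros((20, 20, 20), bool); b[6:16, 5:15, 5:15] = True
```

For this pair, `dice(a, b)` is 0.9 and `hd95(a, b)` is 1.0 voxel, matching the intuition that a one-voxel shift costs 10% overlap and a one-voxel surface distance.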
Affiliation(s)
- Jiangchang Xu
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200241, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yining Wei
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai 200011, China
- Shuanglin Jiang
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200241, China
- Huifang Zhou
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai 200011, China
- Yinwei Li
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai 200011, China
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200241, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai 200241, China
3
Wodzinski M, Kwarciak K, Daniol M, Hemmerling D. Improving deep learning-based automatic cranial defect reconstruction by heavy data augmentation: From image registration to latent diffusion models. Comput Biol Med 2024; 182:109129. [PMID: 39265478 DOI: 10.1016/j.compbiomed.2024.109129] [Received: 06/18/2024] [Revised: 08/28/2024] [Accepted: 09/06/2024] [Indexed: 09/14/2024]
Abstract
Modeling and manufacturing of personalized cranial implants are important research areas that may decrease the waiting time for patients suffering from cranial damage. The modeling of personalized implants may be partially automated by deep learning-based methods. However, this task suffers from poor generalizability to data from previously unseen distributions, which makes it difficult to use the research outcomes in real clinical settings. Because ground-truth annotations are difficult to acquire, techniques that improve the heterogeneity of the datasets used for training the deep networks have to be considered and introduced. In this work, we present a large-scale study of several augmentation techniques, ranging from classical geometric transformations, image registration, variational autoencoders, and generative adversarial networks to the most recent advances in latent diffusion models. We show that heavy data augmentation significantly improves both the quantitative and qualitative outcomes, resulting in an average Dice score above 0.94 for the SkullBreak dataset and above 0.96 for the SkullFix dataset. The results show that latent diffusion models combined with a vector-quantized variational autoencoder outperform the other generative augmentation strategies. Moreover, we show that the synthetically augmented network successfully reconstructs real clinical defects without the need to acquire costly and time-consuming annotations. The findings of the work will lead to easier, faster, and less expensive modeling of personalized cranial implants, which is beneficial to the many people suffering from cranial injuries. The work constitutes a considerable contribution to the field of artificial intelligence in the automatic modeling of personalized cranial implants.
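Of the augmentation families compared in this study, classical geometric transformations are the simplest to reproduce. The sketch below applies random axis flips and a random 90° rotation to a toy binary volume; it is an illustrative minimal example only, since the paper's registration-, VAE-, GAN-, and diffusion-based augmentations are far more involved. Flips and 90° rotations involve no interpolation, so a binary defect mask stays strictly binary.

```python
import numpy as np

def random_geometric_augment(volume, rng):
    """Random axis flips plus a random 90-degree rotation of a 3D volume."""
    out = volume.copy()
    # flip along each axis independently with probability 0.5
    for axis in range(3):
        if rng.random() < 0.5:
            out = np.flip(out, axis=axis)
    # rotate k * 90 degrees in a randomly chosen pair of axes
    axes = tuple(rng.choice(3, size=2, replace=False))
    out = np.rot90(out, k=int(rng.integers(0, 4)), axes=axes)
    return out

rng = np.random.default_rng(42)
vol = np.zeros((16, 16, 16), dtype=np.uint8)
vol[4:12, 4:12, 4:12] = 1          # toy stand-in for a skull/defect mask
aug = random_geometric_augment(vol, rng)
```

Because every operation is a permutation of voxels, the augmented volume has exactly the same voxel count and value set as the input, which is the label-preserving property that makes such transforms safe for segmentation-style training data.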
Affiliation(s)
- Marek Wodzinski
- AGH University of Krakow, Department of Measurement and Electronics, Kraków, al. Mickiewicza 30, PL32059, Poland; University of Applied Sciences Western Switzerland (HES-SO Valais), Information Systems Institute, Sierre, Rue de Technopôle 3, 3960, Switzerland
- Kamil Kwarciak
- AGH University of Krakow, Department of Measurement and Electronics, Kraków, al. Mickiewicza 30, PL32059, Poland
- Mateusz Daniol
- AGH University of Krakow, Department of Measurement and Electronics, Kraków, al. Mickiewicza 30, PL32059, Poland
- Daria Hemmerling
- AGH University of Krakow, Department of Measurement and Electronics, Kraków, al. Mickiewicza 30, PL32059, Poland
4
Zhong C, Xiong Y, Tang W, Guo J. A Stage-Wise Residual Attention Generation Adversarial Network for Mandibular Defect Repairing and Reconstruction. Int J Neural Syst 2024; 34:2450033. [PMID: 38623651 DOI: 10.1142/s0129065724500333] [Indexed: 04/17/2024]
Abstract
Surgical reconstruction of mandibular defects is a routine clinical approach for the rehabilitation of patients with deformities. The mandible plays a crucial role in maintaining the facial contour and ensuring the speech and mastication functions. The repair and reconstruction of mandibular defects is a significant yet challenging task in oral-maxillofacial surgery. Currently, the main available methods are traditional digital design workflows, which suffer from substantial manual operations, limited applicability, and high reconstruction error rates. An automated, precise, and individualized method is imperative for maxillofacial surgeons. In this paper, we propose a Stage-wise Residual Attention Generative Adversarial Network (SRA-GAN) for mandibular defect reconstruction. Specifically, we design a stage-wise residual attention mechanism for the generator to enhance its ability to extract long-range spatial information from the mandible, making it adaptable to various defects. For the discriminator, we propose a multi-field perceptual network, consisting of two parallel discriminators with different perceptual fields, to reduce the cumulative reconstruction errors. Furthermore, we design a self-encoder perceptual loss function to ensure the correctness of the mandibular anatomical structures. The experimental results on a novel custom-built mandibular defect dataset demonstrate that our method has a promising prospect in clinical application, achieving the best Dice Similarity Coefficient (DSC) of 94.238% and 95% Hausdorff Distance (HD95) of 4.787.
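For orientation, the two figures of merit reported above, DSC and HD95, can be computed from binary voxel masks roughly as follows. This is an illustrative sketch using SciPy distance transforms, not the authors' evaluation code; the function names and isotropic default spacing are assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient (DSC) between two binary voxel masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return float(2.0 * np.logical_and(pred, gt).sum() / denom) if denom else 1.0

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance (HD95).

    distance_transform_edt(~mask) gives, for every voxel, the distance to the
    nearest foreground voxel of `mask`; indexing it with the other mask yields
    the samples over which the 95th percentile is taken.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    d_pred_to_gt = distance_transform_edt(~gt, sampling=spacing)[pred]
    d_gt_to_pred = distance_transform_edt(~pred, sampling=spacing)[gt]
    return float(np.percentile(np.concatenate([d_pred_to_gt, d_gt_to_pred]), 95))
```

The same two metrics recur in most of the reconstruction papers listed here, so a shared implementation along these lines makes results comparable across entries.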
Affiliation(s)
- Chenglan Zhong
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, P. R. China
- Yutao Xiong
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, P. R. China
- Wei Tang
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, P. R. China
- Jixiang Guo
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, P. R. China
5
Noel L, Fat SC, Causey JL, Dong W, Stubblefield J, Szymanski K, Chang JH, Wang PZ, Moore JH, Ray E, Huang X. Sex classification of 3D skull images using deep neural networks. Sci Rep 2024; 14:13707. [PMID: 38877045 PMCID: PMC11178899 DOI: 10.1038/s41598-024-61879-6] [Received: 04/08/2023] [Accepted: 05/10/2024] [Indexed: 06/16/2024]
Abstract
Determining the fundamental characteristics that define a face as "feminine" or "masculine" has long fascinated anatomists and plastic surgeons, particularly those involved in aesthetic and gender-affirming surgery. Previous studies in this area have relied on manual measurements, comparative anatomy, and heuristic landmark-based feature extraction. In this study, we retrospectively collected a dataset of 98 skull samples at Cedars Sinai Medical Center (CSMC), the first 3D medical imaging dataset of its kind. We then evaluated the accuracy of multiple deep neural network architectures on sex classification with this dataset. Specifically, we evaluated methods representing three different 3D data modeling approaches: ResNet3D, PointNet++, and MeshNet. Despite the limited number of imaging samples, our testing results show that all three approaches achieve AUC scores above 0.9 after convergence. PointNet++ exhibits the highest accuracy, while MeshNet has the lowest. Our findings suggest that accuracy depends not solely on the sparsity of the data representation but also on the architecture design, with MeshNet's lower accuracy likely due to its lack of a hierarchical structure for progressive data abstraction. Furthermore, we studied a problem related to sex determination: the analysis of the various morphological features that affect sex classification. We proposed and developed a new method based on morphological gradients to visualize the features that influence model decision making. This morphological-gradient method is an alternative to the standard saliency map and provides better visualization of feature importance. Our study is the first to develop and evaluate deep learning models for analyzing 3D facial skull images to identify imaging feature differences between individuals assigned male or female at birth. These findings may be useful for planning and evaluating craniofacial surgery, particularly gender-affirming procedures such as facial feminization surgery.
Affiliation(s)
- Lake Noel
- Department of Computational Biomedicine, Cedars Sinai Medical Center, Los Angeles, CA, USA
- Shelby Chun Fat
- Department of Surgery, Cedars Sinai Medical Center, Los Angeles, CA, USA
- Jason L Causey
- Center for No-Boundary Thinking (CNBT), Arkansas State University, Jonesboro, AR, USA
- Department of Computer Science, Arkansas State University, Jonesboro, AR, USA
- Wei Dong
- Ann Arbor Algorithms, Ann Arbor, MI, USA
- Jonathan Stubblefield
- Center for No-Boundary Thinking (CNBT), Arkansas State University, Jonesboro, AR, USA
- Department of Computer Science, Arkansas State University, Jonesboro, AR, USA
- Jui-Hsuan Chang
- Department of Computational Biomedicine, Cedars Sinai Medical Center, Los Angeles, CA, USA
- Paul Zhiping Wang
- Department of Computational Biomedicine, Cedars Sinai Medical Center, Los Angeles, CA, USA
- Jason H Moore
- Department of Computational Biomedicine, Cedars Sinai Medical Center, Los Angeles, CA, USA
- Edward Ray
- Department of Surgery, Cedars Sinai Medical Center, Los Angeles, CA, USA
- Xiuzhen Huang
- Department of Computational Biomedicine, Cedars Sinai Medical Center, Los Angeles, CA, USA
6
Li J, Ellis DG, Pepe A, Gsaxner C, Aizenberg MR, Kleesiek J, Egger J. Back to the Roots: Reconstructing Large and Complex Cranial Defects using an Image-based Statistical Shape Model. J Med Syst 2024; 48:55. [PMID: 38780820 PMCID: PMC11116219 DOI: 10.1007/s10916-024-02066-y] [Received: 07/28/2023] [Accepted: 04/11/2024] [Indexed: 05/25/2024]
Abstract
Designing implants for large and complex cranial defects is a challenging task, even for professional designers. Current efforts to automate the design process have focused mainly on convolutional neural networks (CNNs), which have produced state-of-the-art results on reconstructing synthetic defects. However, existing CNN-based methods have been difficult to translate to clinical practice in cranioplasty, as their performance on large and complex cranial defects remains unsatisfactory. In this paper, we present a statistical shape model (SSM) built directly on the segmentation masks of skulls represented as binary voxel occupancy grids, and evaluate it on several cranial implant design datasets. Results show that, while CNN-based approaches outperform the SSM on synthetic defects, they are inferior to the SSM on large, complex, and real-world defects. Experienced neurosurgeons evaluated the implants generated by the SSM as feasible for clinical use after minor manual corrections. Datasets and the SSM model are publicly available at https://github.com/Jianningli/ssm.
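Conceptually, an image-based SSM of this kind can be viewed as PCA over flattened, pre-aligned occupancy grids, with defect completion posed as a least-squares fit of the mode weights on the observed voxels only. The following minimal sketch illustrates the idea; the class name and API are invented here, and the released model is considerably more elaborate:

```python
import numpy as np

class VoxelSSM:
    """Minimal PCA shape model over aligned, flattened binary occupancy grids.

    Illustrative only: real SSMs need rigid pre-alignment, many more modes,
    and regularization of the mode weights.
    """

    def __init__(self, n_modes: int = 5):
        self.n_modes = n_modes

    def fit(self, shapes: np.ndarray) -> "VoxelSSM":
        # shapes: (n_samples, n_voxels) flattened binary grids, pre-aligned.
        self.mean_ = shapes.mean(axis=0)
        _, _, vt = np.linalg.svd(shapes - self.mean_, full_matrices=False)
        self.modes_ = vt[: self.n_modes]          # principal shape variations
        return self

    def complete(self, defective: np.ndarray, observed: np.ndarray) -> np.ndarray:
        """Fit mode weights on observed voxels only, then fill in the rest."""
        A = self.modes_[:, observed].T            # (n_observed, n_modes)
        b = defective[observed] - self.mean_[observed]
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return (self.mean_ + w @ self.modes_ > 0.5).astype(np.uint8)
```

Because the completion step only ever projects onto the learned modes, the output stays within the population's shape statistics, which is the property that makes SSMs robust on large, out-of-distribution defects.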
Affiliation(s)
- Jianning Li
- Institute for Artificial Intelligence in Medicine (IKIM), Essen University Hospital, Girardetstraße 2, 45131, Essen, Germany
- David G Ellis
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz, 8010, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz, 8010, Austria
- Michele R Aizenberg
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), Essen University Hospital, Girardetstraße 2, 45131, Essen, Germany
- Jan Egger
- Institute for Artificial Intelligence in Medicine (IKIM), Essen University Hospital, Girardetstraße 2, 45131, Essen, Germany
7
Amiranashvili T, Lüdke D, Li HB, Zachow S, Menze BH. Learning continuous shape priors from sparse data with neural implicit functions. Med Image Anal 2024; 94:103099. [PMID: 38395009 DOI: 10.1016/j.media.2024.103099] [Received: 12/16/2022] [Revised: 10/31/2023] [Accepted: 01/30/2024] [Indexed: 02/25/2024]
Abstract
Statistical shape models are an essential tool for various tasks in medical image analysis, including shape generation, reconstruction, and classification. Shape models are learned from a population of example shapes, which are typically obtained through segmentation of volumetric medical images. In clinical practice, highly anisotropic volumetric scans with large slice distances are prevalent, e.g., to reduce radiation exposure in CT or image acquisition time in MR imaging. For existing shape modeling approaches, the resolution of the emerging model is limited to the resolution of the training shapes; any missing information between slices therefore prevents existing methods from learning a high-resolution shape prior. We propose a novel shape modeling approach that can be trained on sparse, binary segmentation masks with large slice distances. This is achieved by employing continuous shape representations based on neural implicit functions. After training, our model can reconstruct shapes from various sparse inputs at high target resolutions beyond the resolution of individual training examples. We successfully reconstruct high-resolution shapes from as few as three orthogonal slices. Furthermore, our shape model allows us to embed various sparse segmentation masks into a common, low-dimensional latent space, independent of the acquisition direction, resolution, spacing, and field of view. We show that the emerging latent representation discriminates between healthy and pathological shapes, even when provided with sparse segmentation masks. Lastly, we qualitatively demonstrate that the emerging latent space is smooth and captures characteristic modes of shape variation. We evaluate our shape model on two anatomical structures, the lumbar vertebra and the distal femur, both from publicly available datasets.
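The resolution-independence argument above is easiest to see with a toy continuous shape representation: any implicit function defined on continuous coordinates can be rasterized at an arbitrary target resolution. In this sketch an analytic sphere signed-distance function stands in for the learned neural implicit network; all names and parameters are invented for illustration:

```python
import numpy as np

def sphere_sdf(pts: np.ndarray, center=(0.5, 0.5, 0.5), radius=0.3) -> np.ndarray:
    """Signed distance to a sphere: a stand-in for a learned implicit network."""
    return np.linalg.norm(pts - np.asarray(center), axis=-1) - radius

def rasterize(sdf, resolution: int) -> np.ndarray:
    """Sample the continuous shape on a voxel grid of any target resolution."""
    axis = (np.arange(resolution) + 0.5) / resolution   # voxel centers in [0, 1]
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    return sdf(grid.reshape(-1, 3)).reshape((resolution,) * 3) <= 0.0

# The same continuous shape yields consistent occupancy at any resolution,
# which is why the prior is not tied to the resolution of the training slices.
low, high = rasterize(sphere_sdf, 16), rasterize(sphere_sdf, 64)
```

In the paper's setting the analytic function is replaced by a network conditioned on a latent shape code, but the sampling logic is the same.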
Affiliation(s)
- Tamaz Amiranashvili
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland; Department of Computer Science, Technical University of Munich, Munich, Germany
- David Lüdke
- Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany; Department of Computer Science, Technical University of Munich, Munich, Germany
- Hongwei Bran Li
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland; Department of Computer Science, Technical University of Munich, Munich, Germany
- Stefan Zachow
- Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany
- Bjoern H Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland; Department of Computer Science, Technical University of Munich, Munich, Germany
8
Fishman Z, Mainprize JG, Edwards G, Antonyshyn O, Hardisty M, Whyne CM. Thickness and design features of clinical cranial implants-what should automated methods strive to replicate? Int J Comput Assist Radiol Surg 2024:10.1007/s11548-024-03068-4. [PMID: 38430381 DOI: 10.1007/s11548-024-03068-4] [Received: 11/02/2023] [Accepted: 01/24/2024] [Indexed: 03/03/2024]
Abstract
PURPOSE New deep learning and statistical shape modelling approaches aim to automate the design process for patient-specific cranial implants, as highlighted by the MICCAI AutoImplant Challenges. To ensure applicability, it is important to determine whether the training data used in developing these algorithms represent the geometry of implants designed for clinical use. METHODS Calavera Surgical Design provided a dataset of 206 post-craniectomy skull geometries and their clinically used implants. The MUG500+ dataset includes 29 post-craniectomy skull geometries and implants designed for automating design. For both implant and skull shapes, the inner and outer cortical surfaces were segmented, and the thickness between them was measured. For the implants, a 'rim' was defined that transitions from the repaired defect to the surrounding skull. For unilateral defect cases, skull implants were mirrored to the contra-lateral side and thickness differences were quantified. RESULTS The average thickness of the clinically used implants was 6.0 ± 0.5 mm, which approximates the thickness on the contra-lateral side of the skull (relative difference of -0.3 ± 1.4 mm). The average thickness of the MUG500+ implants was 2.9 ± 1.0 mm, significantly thinner than the intact skull thickness (relative difference of 2.9 ± 1.2 mm). Rim transitions in the clinical implants (average width of 8.3 ± 3.4 mm) were used to cap and create a smooth boundary with the skull. CONCLUSIONS For implant modelers and manufacturers, this shape analysis quantified differences among cranial implants (thickness, rim width, surface area, and volume) to help guide future automated design algorithms. After skull completion, a thicker implant can be more versatile for cases involving muscle hollowing or thin skulls, and wider rims can smooth over the defect margins to provide more stability. For clinicians, the differing measurements and implant designs can help inform the options available for patient-specific treatment.
Affiliation(s)
- Z Fishman
- Orthopaedic Biomechanics Laboratory, Sunnybrook Research Institute, Toronto, ON, Canada
- Oleh Antonyshyn
- Calavera Surgical Design Inc., Toronto, ON, Canada
- Division of Plastic Surgery, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Michael Hardisty
- Orthopaedic Biomechanics Laboratory, Sunnybrook Research Institute, Toronto, ON, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada
- C M Whyne
- Orthopaedic Biomechanics Laboratory, Sunnybrook Research Institute, Toronto, ON, Canada
- Department of Surgery, University of Toronto, Toronto, ON, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
9
Wu CT, Yang YH, Chang YZ. Creating high-resolution 3D cranial implant geometry using deep learning techniques. Front Bioeng Biotechnol 2023; 11:1297933. [PMID: 38149174 PMCID: PMC10750412 DOI: 10.3389/fbioe.2023.1297933] [Received: 09/20/2023] [Accepted: 11/22/2023] [Indexed: 12/28/2023]
Abstract
Creating a personalized implant for cranioplasty can be costly and aesthetically challenging, particularly for comminuted fractures that affect a wide area. Despite significant advances in deep learning techniques for 2D image completion, 3D shape inpainting remains challenging due to the higher dimensionality and computational demands of 3D skull models. Here, we present a practical deep learning approach to generate implant geometry from defective 3D skull models created from CT scans. Our proposed 3D reconstruction system comprises two neural networks that produce high-quality implant models suitable for clinical use while reducing training time. The first network repairs low-resolution defective models, while the second enhances the volumetric resolution of the repaired model. We have tested our method in simulations and real-life surgical practice, producing implants that fit naturally and precisely match defect boundaries, particularly for skull defects above the Frankfort horizontal plane.
Affiliation(s)
- Chieh-Tsai Wu
- Department of Neurosurgery, Linkou Chang Gung Memorial Hospital, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Yau-Zen Chang
- Department of Neurosurgery, Linkou Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Department of Mechanical Engineering, Chang Gung University, Taoyuan, Taiwan
- Department of Mechanical Engineering, Ming Chi University of Technology, New Taipei City, Taiwan
10
Li J, Gsaxner C, Pepe A, Schmalstieg D, Kleesiek J, Egger J. Sparse convolutional neural network for high-resolution skull shape completion and shape super-resolution. Sci Rep 2023; 13:20229. [PMID: 37981641 PMCID: PMC10658170 DOI: 10.1038/s41598-023-47437-6] [Received: 04/12/2023] [Accepted: 11/14/2023] [Indexed: 11/21/2023]
Abstract
Traditional convolutional neural network (CNN) methods rely on dense tensors, which makes them suboptimal for spatially sparse data. In this paper, we propose a CNN model based on sparse tensors for efficient processing of high-resolution shapes represented as binary voxel occupancy grids. In contrast to a dense CNN that takes the entire voxel grid as input, a sparse CNN processes only the non-empty voxels, thus reducing the memory and computation overhead caused by the sparse input data. We evaluate our method on two clinically relevant skull reconstruction tasks: (1) given a defective skull, reconstruct the complete skull (i.e., skull shape completion), and (2) given a coarse skull, reconstruct a high-resolution skull with fine geometric details (shape super-resolution). Our method outperforms its dense CNN-based counterparts on the skull reconstruction tasks quantitatively and qualitatively, while requiring substantially less memory for training and inference. We observed that, on the 3D skull data, the overall memory consumption of the sparse CNN grows approximately linearly during inference with respect to the image resolution. During training, the memory usage grows clearly more slowly than the image resolution: an [Formula: see text] increase in voxel number leads to less than an [Formula: see text] increase in memory requirements. Our study demonstrates the effectiveness of using a sparse CNN for skull reconstruction tasks, and our findings can be applied to other spatially sparse problems, as we show with additional experimental results on other sparse medical datasets, such as the aorta and the heart. Project page: https://github.com/Jianningli/SparseCNN.
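The memory argument above comes down to storing only the occupied voxel coordinates rather than the full grid. A minimal coordinate-list (COO) sketch of that storage idea, assuming plain NumPy (real sparse CNNs rely on specialized sparse-tensor libraries, which this does not reproduce):

```python
import numpy as np

def to_sparse(volume: np.ndarray):
    """Dense binary occupancy grid -> coordinate list (COO) plus grid shape.

    Memory then scales with the number of occupied voxels rather than with
    the full resolution, which is the core idea behind sparse CNNs on skulls.
    """
    coords = np.argwhere(volume != 0)      # (n_occupied, 3) integer indices
    return coords, volume.shape

def to_dense(coords: np.ndarray, shape) -> np.ndarray:
    """Coordinate list -> dense grid (lossless for binary occupancy)."""
    dense = np.zeros(shape, dtype=np.uint8)
    dense[tuple(coords.T)] = 1
    return dense
```

A skull surface typically occupies only a small fraction of a high-resolution grid, so the coordinate list is orders of magnitude smaller than the dense tensor it represents.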
Affiliation(s)
- Jianning Li
- Institute for AI in Medicine (IKIM), University Medicine Essen (AöR), Girardetstraße 2, 45131, Essen, Germany
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen (AöR), Girardetstraße 2, 45131, Essen, Germany
- Jan Egger
- Institute for AI in Medicine (IKIM), University Medicine Essen (AöR), Girardetstraße 2, 45131, Essen, Germany
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
11
Haque F, Luscher AF, Mitchell KAS, Sutradhar A. Optimization of Fixations for Additively Manufactured Cranial Implants: Insights from Finite Element Analysis. Biomimetics (Basel) 2023; 8:498. [PMID: 37887630 PMCID: PMC10603949 DOI: 10.3390/biomimetics8060498] [Received: 08/27/2023] [Revised: 10/01/2023] [Accepted: 10/10/2023] [Indexed: 10/28/2023]
Abstract
With the emergence of additive manufacturing technology, patient-specific cranial implants produced by 3D printing have had a major influence on the field. These implants offer improved surgical outcomes and aesthetic preservation. However, as additive manufacturing of cranial implants is still emerging, ongoing research is investigating their reliability and sustainability. The long-term biomechanical performance of these implants is critically influenced by factors such as implant material, anticipated loads, implant-skull interface geometry, and structural constraints, among others. The efficacy of cranial implants involves an intricate interplay of these factors, with fixation playing a pivotal role. This study addresses two critical concerns: determining the ideal number of fixation points for cranial implants and the optimal curvilinear distance between those points, thereby establishing a minimum threshold. Employing finite element analysis, the study incorporates variables such as implant shape, size, material, the number of fixation points, and their relative positions. The study reveals that the optimal number of fixation points ranges from four to five, accounting for defect size and shape. Moreover, the optimal curvilinear distance between two screws is approximately 40 mm for smaller implants and 60 mm for larger implants. Optimal fixation placement away from the center mitigates higher deflection due to overhangs. Notably, a symmetric screw orientation reduces deflection, enhancing implant stability. The findings offer crucial insights into optimizing fixation strategies for cranial implants, thereby aiding surgical decision-making.
Affiliation(s)
- Fariha Haque
- Department of Mechanical and Aerospace Engineering, The Ohio State University, Columbus, OH 43210, USA
- Anthony F. Luscher
- Department of Mechanical and Aerospace Engineering, The Ohio State University, Columbus, OH 43210, USA
- Kerry-Ann S. Mitchell
- Department of Plastic Surgery, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Alok Sutradhar
- Department of Mechanical and Aerospace Engineering, The Ohio State University, Columbus, OH 43210, USA
12
Thimukonda Jegadeesan J, Baldia M, Basu B. Next-generation personalized cranioplasty treatment. Acta Biomater 2022; 154:63-82. [PMID: 36272686 DOI: 10.1016/j.actbio.2022.10.030] [Received: 07/07/2022] [Revised: 10/10/2022] [Accepted: 10/13/2022] [Indexed: 12/14/2022]
Abstract
Decompressive craniectomy (DC) is a surgical procedure that is followed by cranioplasty surgery. DC is usually performed to treat patients with traumatic brain injury, intracranial hemorrhage, cerebral infarction, brain edema, skull fractures, etc. In many published clinical case studies and systematic reviews, cranioplasty surgery is reported to restore cranial symmetry with good cosmetic outcomes and neurophysiologically relevant functional outcomes in hundreds of patients. In this review article, we present a number of key issues related to the manufacturing of patient-specific implants, clinical complications, cosmetic outcomes, and newer alternative therapies. In discussing alternative therapeutic treatments for cranioplasty, biomolecule- and cell-based approaches are emphasized. Current clinical practice in the restoration of cranial defects involves 3D printing to produce patient-specific prefabricated cranial implants, which provide better cosmetic outcomes. Despite the advancements in image processing and 3D printing, the complete clinical procedure is time-consuming and entails significant costs. To reduce manual intervention and to address unmet clinical demands, it has been highlighted that automated implant fabrication by data-driven methods can accelerate the design and manufacturing of patient-specific cranial implants. Data-driven approaches, encompassing artificial intelligence (machine learning/deep learning) and e-platforms such as publicly accessible clinical databases, will lead to the next generation of patient-specific cranial implants, which can provide predictable clinical outcomes. STATEMENT OF SIGNIFICANCE: Cranioplasty is performed to reconstruct cranial defects of patients who have undergone decompressive craniectomy, and it improves their aesthetic and functional outcomes. To meet the clinical demands of cranioplasty surgery, accelerated design and manufacturing of 3D cranial implants are required. This review provides an overview of biomaterial implants and bone-flap manufacturing methods for cranioplasty surgery. In addition, tissue engineering and regenerative medicine-based approaches to reduce clinical complications are highlighted. The potential of data-driven computer applications and data-driven artificial intelligence-based approaches is emphasized, to accelerate clinical protocols of cranioplasty treatment with less manual intervention and shorter intraoperative time.
Affiliation(s)
- Manish Baldia
- Department of Neurosurgery, Jaslok Hospital and Research Centre, Mumbai, Maharashtra 400026, India
- Bikramjit Basu
- Materials Research Centre, Indian Institute of Science, CV Raman Road, Bangalore, Karnataka 560012, India; Centre for Biosystems Science and Engineering, Indian Institute of Science, Bangalore, Karnataka 560012, India
13
Wodzinski M, Daniol M, Socha M, Hemmerling D, Stanuch M, Skalski A. Deep learning-based framework for automatic cranial defect reconstruction and implant modeling. Comput Methods Programs Biomed 2022; 226:107173. [PMID: 36257198 DOI: 10.1016/j.cmpb.2022.107173] [Received: 04/13/2022] [Revised: 08/19/2022] [Accepted: 10/02/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND AND OBJECTIVE This article presents a robust, fast, and fully automatic method for personalized cranial defect reconstruction and implant modeling. METHODS We propose a two-step deep learning-based method using a modified U-Net architecture to perform the defect reconstruction, followed by a dedicated iterative procedure to improve the implant geometry and an automatic generation of models ready for 3D printing. We propose a cross-case augmentation based on imperfect image registration that combines cases from different datasets. Additional ablation studies compare different augmentation strategies and other state-of-the-art methods. RESULTS We evaluate the method on the three datasets introduced during the AutoImplant 2021 challenge, organized jointly with the MICCAI conference. The quantitative evaluation uses the Dice coefficient, the boundary Dice coefficient, and the 95th percentile of the Hausdorff distance; averaged across all test sets, these are 0.91, 0.94, and 1.53 mm, respectively. We perform an additional qualitative evaluation by 3D printing and visualization in mixed reality to confirm the implants' usefulness. CONCLUSION The article proposes a complete pipeline for creating cranial implant models ready for 3D printing. The described method is a greatly extended version of the method that scored first place in all AutoImplant 2021 challenge tasks. We freely release the source code, which, together with the open datasets, makes the results fully reproducible. Automatic reconstruction of cranial defects may enable manufacturing personalized implants in a significantly shorter time, possibly allowing the 3D printing process to be performed directly during a given intervention. Moreover, we show the usability of the defect reconstruction in mixed reality, which may further reduce the surgery time.
Affiliation(s)
- Marek Wodzinski
- Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland; MedApp S.A., Krakow, Poland; Information Systems Institute, University of Applied Sciences Western Switzerland, Sierre, Switzerland
- Mateusz Daniol
- Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland; MedApp S.A., Krakow, Poland
- Miroslaw Socha
- Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland
- Daria Hemmerling
- Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland
- Maciej Stanuch
- Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland; MedApp S.A., Krakow, Poland
- Andrzej Skalski
- Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland; MedApp S.A., Krakow, Poland
14
Sulakhe H, Li J, Egger J, Goyal P. CranGAN: Adversarial Point Cloud Reconstruction for patient-specific Cranial Implant Design. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:603-608. [PMID: 36085744 DOI: 10.1109/embc48229.2022.9871069] [Indexed: 06/15/2023]
Abstract
Automating cranial implant design has become an increasingly important avenue of biomedical research. Benefits in terms of financial resources, time, and patient safety necessitate the formulation of an efficient and accurate procedure. This paper attempts to open a new research direction for this problem through an adversarial deep learning solution. Specifically, we present CranGAN, a 3D Conditional Generative Adversarial Network designed to reconstruct a 3D representation of a complete skull given its defective counterpart. We propose the novel solution of employing point cloud representations instead of conventional 3D meshes and voxel grids. We provide both qualitative and quantitative analysis of our experiments with three separate GAN objectives, and compare the utility of two 3D reconstruction loss functions, namely the Hausdorff distance and the Chamfer distance. We hope that our work inspires further research in this direction. Clinical relevance: this paper establishes a new research direction to assist automated implant design for cranioplasty.
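To make the comparison of the two reconstruction losses concrete, here is a minimal NumPy sketch of the symmetric Chamfer and Hausdorff distances on point clouds. This is a brute-force illustration of the standard definitions, not the CranGAN training code; it builds the full pairwise-distance matrix, so it suits small clouds only.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3).

    For every point, take the squared distance to its nearest neighbour in
    the other cloud, then average both directions and sum them.
    """
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def hausdorff_distance(p, q):
    """Symmetric Hausdorff distance: the worst-case nearest-neighbour gap."""
    d = np.sqrt(np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The Chamfer loss averages over all points and therefore gives smoother gradients, while the Hausdorff distance penalizes only the single worst outlier, which is one reason the two objectives behave differently as reconstruction losses.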
15
Egger J, Wild D, Weber M, Bedoya CAR, Karner F, Prutsch A, Schmied M, Dionysio C, Krobath D, Jin Y, Gsaxner C, Li J, Pepe A. Studierfenster: an Open Science Cloud-Based Medical Imaging Analysis Platform. J Digit Imaging 2022; 35:340-355. [PMID: 35064372 PMCID: PMC8782222 DOI: 10.1007/s10278-021-00574-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 12/14/2021] [Accepted: 12/16/2021] [Indexed: 02/06/2023] Open
Abstract
Imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are widely used in diagnostics, clinical studies, and treatment planning. Automatic algorithms for image analysis have thus become an invaluable tool in medicine. Examples include two- and three-dimensional visualization, image segmentation, and the registration of all types of anatomical structures and pathologies. In this context, we introduce Studierfenster ( www.studierfenster.at ): a free, non-commercial open science client-server framework for (bio-)medical image analysis. Studierfenster offers a wide range of capabilities, including the visualization of medical data (CT, MRI, etc.) in two-dimensional (2D) and three-dimensional (3D) space in common web browsers, such as Google Chrome, Mozilla Firefox, Safari, or Microsoft Edge. Other functionalities are the calculation of medical metrics (Dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images, manual placement of (anatomical) landmarks in medical imaging data, visualization of medical data in virtual reality (VR), and facial reconstruction and registration of medical data for augmented reality (AR). More sophisticated features include automatic cranial implant design with a convolutional neural network (CNN), the inpainting of aortic dissections with a generative adversarial network, and a CNN for automatic aortic landmark detection in CT angiography images. A user study with medical and non-medical experts in medical image analysis was performed to evaluate the usability and the manual functionalities of Studierfenster. When participants were asked about their overall impression of Studierfenster in an ISO standard (ISO-Norm) questionnaire, a mean of 6.3 out of 7.0 possible points was achieved.
The evaluation also provided insights into the results achievable with Studierfenster in practice, by comparing them with two ground-truth segmentations performed by a physician of the Medical University of Graz in Austria. In this contribution, we presented an online environment for (bio-)medical image analysis. In doing so, we established a client-server architecture that is able to process medical data, especially 3D volumes. Our online environment is not limited to medical applications for humans. Rather, its underlying concept could be interesting for researchers from other fields, whether by applying the existing functionalities or through future implementations of further image processing applications. An example could be the processing of medical acquisitions such as CT or MRI scans of animals [Clinical Pharmacology & Therapeutics, 84(4):448-456, 68], which are becoming more common as veterinary clinics and centers are increasingly equipped with such imaging devices. Furthermore, applications in entirely non-medical research in which images or volumes need to be processed are also conceivable, such as those in optical measuring techniques, astronomy, or archaeology.
Collapse
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Artificial Intelligence in Medicine, AI-guided Therapies, University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Daniel Wild
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Maximilian Weber
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christopher A Ramirez Bedoya
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Florian Karner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Alexander Prutsch
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Michael Schmied
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Dionysio
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Dominik Krobath
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Research Center for Connected Healthcare Big Data, ZhejiangLab, 311121, Hangzhou, Zhejiang, China
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Artificial Intelligence in Medicine, AI-guided Therapies, University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
16
Li J, Krall M, Trummer F, Memon AR, Pepe A, Gsaxner C, Jin Y, Chen X, Deutschmann H, Zefferer U, Schäfer U, Campe GV, Egger J. MUG500+: Database of 500 high-resolution healthy human skulls and 29 craniotomy skulls and implants. Data Brief 2021; 39:107524. [PMID: 34815988 PMCID: PMC8591340 DOI: 10.1016/j.dib.2021.107524] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2021] [Accepted: 10/25/2021] [Indexed: 11/19/2022] Open
Abstract
In this article, we present a skull database containing 500 healthy skulls segmented from high-resolution head computed-tomography (CT) scans and 29 defective skulls segmented from craniotomy head CTs. Each healthy skull contains the complete anatomical structures of the human skull, including the cranial bones, facial bones, and other subtle structures. In each craniotomy skull, a part of the cranial bone is missing, leaving a defect. The defects have various sizes, shapes, and positions, depending on the specific pathological condition of each patient. Each craniotomy skull is accompanied by a cranial implant, designed manually by an expert to fit the defect. Given the large volume of the healthy skull collection, the dataset can be used to study the geometric/shape variability of human skulls and to create a robust statistical shape model of the human skull, which can in turn support tasks such as cranial implant design. The craniotomy collection can serve as an evaluation set for automatic cranial implant design algorithms.
Affiliation(s)
- Jianning Li
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Corresponding authors.
- Marcell Krall
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Florian Trummer
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Afaque Rafique Memon
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Antonio Pepe
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Christina Gsaxner
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Yuan Jin
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Research Center for Connected Healthcare Big Data, ZhejiangLab, Hangzhou, Zhejiang, China
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ulrike Zefferer
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Ute Schäfer
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Gord von Campe
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Jan Egger
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Corresponding authors.
17
Egger J, Pepe A, Gsaxner C, Jin Y, Li J, Kern R. Deep learning-a first meta-survey of selected reviews across scientific disciplines, their commonalities, challenges and research impact. PeerJ Comput Sci 2021; 7:e773. [PMID: 34901429 PMCID: PMC8627237 DOI: 10.7717/peerj-cs.773] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 10/15/2021] [Indexed: 05/07/2023]
Abstract
Deep learning belongs to the field of artificial intelligence, in which machines perform tasks that typically require some form of human intelligence. Deep learning tries to achieve this by drawing inspiration from how the human brain learns. Similar to the basic structure of a brain, which consists of (billions of) neurons and the connections between them, a deep learning algorithm consists of an artificial neural network that resembles this biological structure. Mimicking the human learning process and its senses, deep learning networks are fed with (sensory) data such as texts, images, videos, or sounds. These networks outperform state-of-the-art methods on various tasks and, because of this, the whole field has grown exponentially in recent years, resulting in well over 10,000 publications per year. For example, the search engine PubMed alone, which covers only a subset of all publications in the medical field, already returned over 11,000 results in Q3 2020 for the search term 'deep learning', and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain, and in the near future it will potentially become difficult to obtain an overview of even a subfield. However, there are several review articles about deep learning that focus on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks such as object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines and to outline the research impact they have already had during a short period of time.
The categories (computer vision, language processing, medical informatics, and additional works) were chosen according to the underlying data sources (image, language, medical, mixed). In addition, we review the common architectures, methods, pros, cons, evaluations, challenges, and future directions for every sub-category.
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, Zhejiang, China
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University of Graz, Graz, Austria
- Roman Kern
- Knowledge Discovery, Know-Center, Graz, Austria
- Institute of Interactive Systems and Data Science, Graz University of Technology, Graz, Austria