1
He D, Liu Z, Yin X, Liu H, Gao W, Fu Y. Synthesized colonoscopy dataset from high-fidelity virtual colon with abnormal simulation. Comput Biol Med 2025; 186:109672. PMID: 39826299. DOI: 10.1016/j.compbiomed.2025.109672.
Abstract
With the advent of deep learning-based colonoscopy systems, large volumes of high-quality colonoscopy images are needed for training. However, the generalization ability of deep learning models is challenged by the limited availability of colonoscopy images due to regulatory restrictions and privacy concerns. In this paper, we propose a method for rendering high-fidelity 3D colon models and synthesizing diversified colonoscopy images with abnormalities such as polyps, bleeding, and ulcers, which can be used to train deep learning models. The geometric model of the colon is derived from CT images. We employed dedicated surface mesh deformation to mimic the shapes of polyps and ulcers and applied texture mapping techniques to generate realistic, lifelike appearances. The generated polyp models were then attached to the inner surface of the colon model, while the ulcers were created directly on the inner surface of the colon model. To realistically model blood behavior, we developed a simulation of the blood diffusion process on the colon's inner surface and colored vertices in the traversed region to reflect blood flow. Ultimately, we generated a comprehensive dataset comprising high-fidelity rendered colonoscopy images with these abnormalities. To validate the effectiveness of the synthesized colonoscopy dataset, we trained state-of-the-art deep learning models on it and on other publicly available datasets and assessed their performance in abnormality classification, detection, and segmentation. The models trained on the synthesized dataset exhibited enhanced performance in these tasks.
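The abstract does not detail the blood simulation; one plausible ingredient is a simple front propagation of a "blood" region over the mesh's vertex graph, tinting the vertices it reaches. The sketch below is illustrative only and rests on assumptions not stated in the abstract (faces given as an index array, per-vertex RGB colors); the authors' fluid model, seeding, and rendering are not shown.

```python
# Illustrative sketch only (not the authors' pipeline): spread a "blood" region
# over a triangle mesh by breadth-first expansion from a seed vertex and tint
# the vertices it reaches. Assumes faces as an (F, 3) index array and colors as
# an (N, 3) float RGB array; mesh loading and rendering are out of scope.
from collections import deque

import numpy as np


def vertex_adjacency(faces: np.ndarray) -> dict:
    """Build a vertex adjacency map from triangle face indices."""
    adj: dict = {}
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            adj.setdefault(int(u), set()).add(int(v))
            adj.setdefault(int(v), set()).add(int(u))
    return adj


def diffuse_blood(faces: np.ndarray, colors: np.ndarray, seed: int, steps: int) -> np.ndarray:
    """Tint every vertex within `steps` BFS rings of `seed` with a dark red."""
    adj = vertex_adjacency(faces)
    visited = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        v, depth = frontier.popleft()
        colors[v] = (0.55, 0.05, 0.05)  # traversed region gets a blood tint
        if depth == steps:
            continue
        for nb in adj.get(v, ()):
            if nb not in visited:
                visited.add(nb)
                frontier.append((nb, depth + 1))
    return colors
```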
Affiliation(s)
- Dongdong He
- School of Life Science and Technology, Harbin Institute of Technology, Harbin, 150080, China
- Ziteng Liu
- School of Life Science and Technology, Harbin Institute of Technology, Harbin, 150080, China
- Xunhai Yin
- Department of Gastroenterology, The First Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Hao Liu
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016, China
- Wenpeng Gao
- School of Life Science and Technology, Harbin Institute of Technology, Harbin, 150080, China; State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, 150080, China
- Yili Fu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, 150080, China
2
Ye Z, Shao C, Zhu K. Unsupervised neural network-based image stitching method for bladder endoscopy. PLoS One 2025; 20:e0311637. PMID: 39964991. PMCID: PMC11835325. DOI: 10.1371/journal.pone.0311637.
Abstract
Bladder endoscopy enables the observation of intravesical lesion characteristics, making it an essential tool in urology. Image stitching techniques are commonly employed to expand the field of view of bladder endoscopy. Traditional image stitching methods rely on feature matching. In recent years, deep-learning techniques have garnered significant attention in the field of computer vision. However, the commonly employed supervised learning approaches often require a substantial amount of labeled data, which can be challenging to acquire, especially in the context of medical data. To address this limitation, this study proposes an unsupervised neural network-based image stitching method for bladder endoscopy, which eliminates the need for labeled datasets. The method comprises two modules: an unsupervised alignment network and an unsupervised fusion network. In the unsupervised alignment network, we employed feature convolution, regression networks, and linear transformations to align images. In the unsupervised fusion network, we achieved image fusion from features to pixels by simultaneously eliminating artifacts and enhancing the resolution. Experiments demonstrated our method's consistent stitching success rate of 98.11% and robust image stitching accuracy at various resolutions. Our method eliminates sutures and flocculent debris from cystoscopy images, presenting good image smoothness while preserving rich textural features. Moreover, our method could successfully stitch challenging scenes such as dim and blurry scenes. Our application of unsupervised deep learning methods in the field of cystoscopy image stitching was successfully validated, laying the foundation for real-time panoramic stitching of bladder endoscopic video images. This advancement provides opportunities for the future development of computer-vision-assisted diagnostic systems for bladder cavities.
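The abstract does not spell out the training objective, but unsupervised alignment of this kind is typically driven by a photometric loss: the moving frame is warped by the regressed transform and compared with the reference frame, so no labels are required. The sketch below illustrates that generic idea under assumptions of its own (a homography warp in normalized coordinates via grid_sample); it is not the authors' network or loss.

```python
# Illustrative sketch under assumed design choices (homography warp in
# normalized coordinates); not the authors' network or loss. An alignment
# module can be trained without labels by warping the moving frame with the
# regressed transform and penalizing its difference from the reference.
import torch
import torch.nn.functional as F


def warp_homography(img: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
    """Sample a (B, C, H, W) image at locations H @ [x, y, 1] of the output grid.

    H is a (B, 3, 3) homography mapping reference coordinates to moving-image
    coordinates, both in the [-1, 1] range used by grid_sample. Assumes the
    projective denominator stays positive (true for warps near the identity).
    """
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=img.device),
        torch.linspace(-1.0, 1.0, w, device=img.device),
        indexing="ij",
    )
    grid = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).reshape(1, -1, 3)
    src = (H @ grid.transpose(1, 2)).transpose(1, 2)          # (B, H*W, 3)
    src = src[..., :2] / src[..., 2:].clamp(min=1e-6)
    return F.grid_sample(img, src.reshape(b, h, w, 2), align_corners=True)


def photometric_loss(ref: torch.Tensor, mov: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
    """Self-supervised objective: L1 error between the reference frame and the
    moving frame warped by the predicted homography."""
    return (ref - warp_homography(mov, H)).abs().mean()
```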
Affiliation(s)
- Zixing Ye
- Department of Urology, Peking Union Medical College Hospital, Beijing, China
- Chenyu Shao
- National Elite Institute of Engineering, Northwestern Polytechnical University, Xi’an, China
- Kelei Zhu
- School of Software, Northwestern Polytechnical University, Xi’an, China
3
Ahn BY, Lee J, Seol J, Kim JY, Chung H. Evaluation of an artificial intelligence-based system for real-time high-quality photodocumentation during esophagogastroduodenoscopy. Sci Rep 2025; 15:4693. PMID: 39920187. PMCID: PMC11806067. DOI: 10.1038/s41598-024-83721-9.
Abstract
Complete and high-quality photodocumentation in esophagogastroduodenoscopy (EGD) is essential for accurately diagnosing upper gastrointestinal diseases by reducing blind spot rates. Automated Photodocumentation Task (APT), an artificial intelligence-based system for real-time photodocumentation during EGD, was developed to assist endoscopists in focusing more on observation rather than on repetitive capturing tasks. This study aimed to evaluate the completeness and quality of APT's photodocumentation compared with endoscopists. The dataset comprised 37 EGD videos recorded at Seoul National University Hospital between March and June 2023. Virtual endoscopy was conducted by seven endoscopists and APT, capturing 11 anatomical landmarks from the videos. The primary endpoints were the completeness of capturing landmarks and the quality of the images. APT achieved an average accuracy of 98.16% in capturing landmarks. Compared with endoscopists, APT demonstrated similar completeness in photodocumentation (87.72% vs. 85.75%, P = 0.258), and the combined photodocumentation of endoscopists and APT reached higher completeness (91.89% vs. 85.75%, P < 0.001). APT captured images with higher mean opinion scores than those of endoscopists (3.88 vs. 3.41, P < 0.001). In conclusion, APT provides clear, high-quality endoscopic images while minimizing blind spots during EGD in real time.
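The abstract does not describe APT's internals; one plausible way to turn per-frame landmark classifications into a photodocument is to keep, for each landmark, the sharpest high-confidence frame seen so far. The sketch below illustrates that assumed design with a Laplacian-variance blur measure; the landmark classifier itself and APT's actual selection logic are not shown.

```python
# Illustrative sketch of an assumed selection rule, not APT's internals: for
# each of the 11 landmarks keep the frame with the best combined classifier
# confidence and sharpness seen so far during the procedure.
import cv2
import numpy as np

LANDMARKS = 11
best_score = [-1.0] * LANDMARKS
best_frame = [None] * LANDMARKS


def sharpness(frame: np.ndarray) -> float:
    """Variance of the Laplacian as a simple blur measure (higher = sharper)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())


def update(frame: np.ndarray, probs: np.ndarray) -> None:
    """probs: per-landmark confidences for this frame from some classifier, shape (11,)."""
    lm = int(np.argmax(probs))
    score = float(probs[lm]) * sharpness(frame)
    if score > best_score[lm]:
        best_score[lm] = score
        best_frame[lm] = frame.copy()  # current best capture for this landmark
```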
Affiliation(s)
- Byeong Yun Ahn
- Department of Internal Medicine and Liver Research Institute, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Ji Yoon Kim
- Department of Internal Medicine and Liver Research Institute, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Hyunsoo Chung
- Department of Internal Medicine and Liver Research Institute, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
4
Huber T, Weber J, von Bechtolsheim F, Flemming S, Fuchs HF, Grade M, Hummel R, Krautz C, Stockheim J, Thomaschewski M, Wilhelm D, Kalff JC, Nickel F, Matthaei H. Modified Delphi Procedure to Achieve Consensus for the Concept of a National Curriculum for Minimally Invasive and Robot-assisted Surgery in Germany (GeRMIQ). Zentralbl Chir 2025; 150:35-49. PMID: 39667398. PMCID: PMC11798644. DOI: 10.1055/a-2386-9463.
Abstract
The rapid development of minimally invasive surgery (MIS) and robot-assisted surgery (RAS) requires standardized training to ensure high-quality patient care. In Germany, there is currently no standardized curriculum that teaches these specialized skills. The aim of this study was to reach consensus on the development of a nationwide curriculum for MIS and RAS, with subsequent implementation of the consented content. A modified Delphi process was used to reach consensus among national experts in MIS and RAS. The process included a literature review, an online survey, and an expert conference. All 12 invited experts participated in the survey. Consensus was reached on 73% of items in the initial survey and, after the expert conference, on 95 of 122 questions (77.9%). The preference for a basic curriculum as a foundation on which specialized modules can build was particularly clear. The results support the development of an integrated curriculum for MIS and RAS that includes step-by-step training, from theoretical knowledge via e-learning modules to practical skills in dry-lab simulations and in the OR. Emphasis was placed on the need to promote clinical judgment and decision making through targeted assessment during the learning curve to ensure effective application of learned skills in clinical practice. There was also consensus that training content must be aligned with learners' skill acquisition using objective performance assessments, in line with the principle of proficiency-based progression (PBP). Continuous updating of the curriculum to keep it abreast of the latest technology was considered essential. The study underlines the urgent need for a standardized training curriculum for MIS and RAS in Germany in order to increase patient safety and improve the quality of surgical care. There is broad expert consensus for the implementation of such a curriculum. It aims to ensure a contemporary, internationally competitive, and uniform quality of training and to increase the attractiveness of surgical training.
Affiliation(s)
- Tobias Huber
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsmedizin der Johannes Gutenberg-Universität Mainz, Mainz, Germany
- Julia Weber
- Klinik und Poliklinik für Allgemein-, Viszeral-, Thorax- und Gefäßchirurgie, Universitätsklinikum Bonn, Bonn, Germany
- Felix von Bechtolsheim
- Klinik und Poliklinik für Viszeral-, Thorax- und Gefäßchirurgie, Medizinische Fakultät und Universitätsklinikum Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Sven Flemming
- Klinik und Poliklinik für Allgemein-, Viszeral-, Transplantations-, Gefäß- und Kinderchirurgie, Universitätsklinikum Würzburg, Würzburg, Germany
- Hans Friedrich Fuchs
- Klinik für Allgemein-, Viszeral- und Tumorchirurgie, Universitätsklinikum Köln, Köln, Germany
- Marian Grade
- Klinik für Allgemein-, Viszeral- und Kinderchirurgie, Universitätsmedizin Göttingen, Göttingen, Germany
- Richard Hummel
- Klinik für Allgemeine Chirurgie, Viszeral-, Thorax- und Gefäßchirurgie, Universitätsmedizin Greifswald, Greifswald, Germany
- Christian Krautz
- Klinik für Allgemein- und Viszeralchirurgie, Universitätsklinikum Erlangen, Erlangen, Germany
- Jessica Stockheim
- Universitätsklinik für Allgemein-, Viszeral-, Gefäß- und Transplantationschirurgie, Universitätsklinikum Magdeburg, Magdeburg, Germany
- Michael Thomaschewski
- Klinik für Chirurgie, Universitätsklinikum Schleswig-Holstein, Campus Lübeck, Kiel, Germany
- Dirk Wilhelm
- Klinik und Poliklinik für Chirurgie, Technische Universität München, School of Medicine and Health, München, Germany
- Jörg C. Kalff
- Klinik und Poliklinik für Allgemein-, Viszeral-, Thorax- und Gefäßchirurgie, Universitätsklinikum Bonn, Bonn, Germany
- Felix Nickel
- Klinik und Poliklinik für Allgemein-, Viszeral- und Thoraxchirurgie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany
- Hanno Matthaei
- Klinik und Poliklinik für Allgemein-, Viszeral-, Thorax- und Gefäßchirurgie, Universitätsklinikum Bonn, Bonn, Germany
5
Shkolyar E, Zhou SR, Carlson CJ, Chang S, Laurie MA, Xing L, Bowden AK, Liao JC. Optimizing cystoscopy and TURBT: enhanced imaging and artificial intelligence. Nat Rev Urol 2025; 22:46-54. PMID: 38982304. DOI: 10.1038/s41585-024-00904-9.
Abstract
Diagnostic cystoscopy in combination with transurethral resection of the bladder tumour is the standard for the diagnosis, surgical treatment and surveillance of bladder cancer. The ability to inspect the bladder in its current form stems from a long chain of advances in imaging science and endoscopy. Despite these advances, bladder cancer recurrence and progression rates remain high after endoscopic resection. This stagnation is a result of the heterogeneity of cancer biology as well as limitations in surgical techniques and tools, as incomplete resection and provider-specific differences affect cancer persistence and early recurrence. An unmet clinical need remains for solutions that can improve tumour delineation and resection. Translational advances in enhanced cystoscopy technologies and artificial intelligence offer promising avenues to overcoming this progress plateau.
Affiliation(s)
- Eugene Shkolyar
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, USA
- Steve R Zhou
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Camella J Carlson
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Shuang Chang
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Mark A Laurie
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, USA
- Audrey K Bowden
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Joseph C Liao
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, USA
6
Yang HY, Hong SS, Yoon J, Park B, Yoon Y, Han DH, Choi GH, Choi MK, Kim SH. Deep learning-based surgical phase recognition in laparoscopic cholecystectomy. Ann Hepatobiliary Pancreat Surg 2024; 28:466-473. PMID: 39069309. PMCID: PMC11599821. DOI: 10.14701/ahbps.24-091.
Abstract
Backgrounds/Aims: Artificial intelligence (AI) technology has been used to assess surgical quality and to educate and evaluate surgeons using video recordings in the minimally invasive surgery era. Much attention has been paid to automating surgical workflow analysis from surgical videos to enable effective evaluation. This study aimed to design a deep learning model that automatically identifies surgical phases in laparoscopic cholecystectomy videos and to assess its phase-recognition accuracy. Methods: A total of 120 cholecystectomy videos were collected: 80 from a public dataset (Cholec80) and 40 laparoscopic cholecystectomy videos recorded between July 2022 and December 2022 at a single institution. These videos were split into training and testing datasets at a 2:1 ratio. Test scenarios were constructed according to the structural characteristics of the trained model. No pre- or post-processing of input data or inference output was performed, in order to isolate the effect of the labels on model training. Results: A total of 98,234 frames were extracted from the 40 test cases. The overall accuracy of the model was 91.2%. The most accurately recognized phase was Calot's triangle dissection (F1 score: 0.9421), whereas the least accurate was clipping and cutting (F1 score: 0.7761). Conclusions: Our AI model identified the phases of laparoscopic cholecystectomy with high accuracy.
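Frame-level phase recognition is usually scored by overall accuracy and per-phase F1, the two figures quoted above. The sketch below shows one way such an evaluation could be computed with scikit-learn, assuming the seven standard Cholec80 phase labels; it is not the authors' code.

```python
# Assumed evaluation sketch (not the authors' code): overall frame-level
# accuracy and per-phase F1 for a phase classifier, using the seven standard
# Cholec80 phase labels.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

PHASES = [
    "preparation", "calot-triangle-dissection", "clipping-and-cutting",
    "gallbladder-dissection", "gallbladder-packaging",
    "cleaning-and-coagulation", "gallbladder-retraction",
]


def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """y_true / y_pred: integer phase labels, one per extracted video frame."""
    report = {"overall_accuracy": float(accuracy_score(y_true, y_pred))}
    per_phase = f1_score(y_true, y_pred, labels=list(range(len(PHASES))), average=None)
    report.update({name: float(f1) for name, f1 in zip(PHASES, per_phase)})
    return report
```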
Affiliation(s)
- Hye Yeon Yang
- Department of Liver Transplantation and Hepatobiliary and Pancreatic Surgery, Ajou University School of Medicine, Suwon, Korea
- Seung Soo Hong
- Department of Hepatobiliary and Pancreatic Surgery, Yonsei University College of Medicine, Seoul, Korea
- Dai Hoon Han
- Department of Hepatobiliary and Pancreatic Surgery, Yonsei University College of Medicine, Seoul, Korea
- Gi Hong Choi
- Department of Hepatobiliary and Pancreatic Surgery, Yonsei University College of Medicine, Seoul, Korea
- Sung Hyun Kim
- Department of Hepatobiliary and Pancreatic Surgery, Yonsei University College of Medicine, Seoul, Korea
7
Chen S, Xu L, Yan L, Zhang J, Zhou X, Wang J, Yan T, Wang J, He X, Ma H, Zhang X, Zhu S, Zhang Y, Xu C, Gao J, Ji X, Bai D, Chen Y, Chen H, Ke Y, Li L, Yu C, Mao X, Li T, Chen Y. A novel endoscopic artificial intelligence system to assist in the diagnosis of autoimmune gastritis: a multicenter study. Endoscopy 2024. PMID: 39447610. DOI: 10.1055/a-2451-3071.
Abstract
BACKGROUND: Autoimmune gastritis (AIG), distinct from Helicobacter pylori-associated atrophic gastritis (HpAG), is underdiagnosed due to limited awareness. This multicenter study aimed to develop a novel endoscopic artificial intelligence (AI) system for assisting in AIG diagnosis. METHODS: Patients diagnosed with AIG, HpAG, or nonatrophic gastritis (NAG) were retrospectively enrolled from six centers. Endoscopic images with relevant demographic and medical data were collected for development of the AI-assisted system based on a multi-site feature fusion model. The diagnostic performance of the AI model was evaluated in internal and external datasets. Endoscopists' performance with and without AI support was tested and compared using the Mann-Whitney U test. Heatmap analysis was performed to interpret AI model outputs. RESULTS: 18,828 endoscopy images from 1070 patients (294 AIG, 386 HpAG, 390 NAG) were collected. On the testing datasets, AI identified AIG with 96.9% sensitivity, 92.2% specificity, and an area under the receiver operating characteristic curve (AUROC) of 0.990 (internal), and 90.3% sensitivity, 93.1% specificity, and an AUROC of 0.973 (external). The performance of AI (sensitivity 91.3%) was comparable to that of experts (87.3%) and significantly outperformed nonexperts (70.0%; P = 0.01). With AI support, the overall performance of endoscopists improved (sensitivity 90.3% [95%CI 86.0%-93.2%] vs. 78.7% [95%CI 73.6%-83.2%]; P = 0.008). Heatmap analysis revealed a consistent focus of the AI on atrophic areas. CONCLUSIONS: This novel AI system demonstrated expert-level performance in identifying AIG and enhanced the diagnostic ability of endoscopists. Its application could be useful in guiding biopsy sampling and improving early detection of AIG.
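The reported figures (sensitivity, specificity, AUROC) can be reproduced from per-case predictions with standard tooling. The sketch below is an assumed evaluation routine using scikit-learn, treating AIG versus HpAG/NAG as a binary problem at a 0.5 threshold; it is not the study's code, and the actual threshold and aggregation are not stated in the abstract.

```python
# Assumed evaluation sketch (not the study's code): sensitivity, specificity,
# and AUROC for a binary AIG-vs-other classifier, thresholded at 0.5.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score


def evaluate(y_true: np.ndarray, y_prob: np.ndarray, threshold: float = 0.5) -> dict:
    """y_true: 1 = AIG, 0 = HpAG/NAG; y_prob: predicted probability of AIG."""
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "auroc": float(roc_auc_score(y_true, y_prob)),
    }
```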
Affiliation(s)
- Shurong Chen
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Louzhe Xu
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences & Peking Union Medical College, Tianjin, China
- Lingling Yan
- Department of Gastroenterology, Taizhou Hospital, Taizhou, China
- Jie Zhang
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Xuefeng Zhou
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Department of Gastroenterology, the Second Hospital of Jiaxing, Jiaxing, China
- Jiayi Wang
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Department of Gastroenterology, CHC International Hospital, Ningbo, China
- Tianlian Yan
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Jinghua Wang
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Xinjue He
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Han Ma
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Xuequn Zhang
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Shenghua Zhu
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Yizhen Zhang
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Chengfu Xu
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Jianguo Gao
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Xia Ji
- Department of Gastroenterology, the Second Hospital of Jiaxing, Jiaxing, China
- Dezhi Bai
- Department of Gastroenterology, the First People's Hospital of Yuhang, Hangzhou, China
- Yuan Chen
- Department of Gastroenterology, the Third People's Hospital of Zhoushan, Zhoushan, China
- Hongda Chen
- Department of Gastroenterology, the Third Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
- Yini Ke
- Department of Rheumatology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Lan Li
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Chaohui Yu
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
- Xinli Mao
- Department of Gastroenterology, Taizhou Hospital, Taizhou, China
- Ting Li
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences & Peking Union Medical College, Tianjin, China
- Yi Chen
- Department of Gastroenterology, the First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China
8
Loper MR, Makary MS. Evolving and Novel Applications of Artificial Intelligence in Abdominal Imaging. Tomography 2024; 10:1814-1831. PMID: 39590942. PMCID: PMC11598375. DOI: 10.3390/tomography10110133.
Abstract
Advancements in artificial intelligence (AI) have significantly transformed the field of abdominal radiology, leading to an improvement in diagnostic and disease management capabilities. This narrative review seeks to evaluate the current standing of AI in abdominal imaging, with a focus on recent literature contributions. This work explores the diagnosis and characterization of hepatobiliary, pancreatic, gastric, colonic, and other pathologies. In addition, the role of AI has been observed to help differentiate renal, adrenal, and splenic disorders. Furthermore, workflow optimization strategies and quantitative imaging techniques used for the measurement and characterization of tissue properties, including radiomics and deep learning, are highlighted. An assessment of how these advancements enable more precise diagnosis, tumor description, and body composition evaluation is presented, which ultimately advances the clinical effectiveness and productivity of radiology. Despite the advancements of AI in abdominal imaging, technical, ethical, and legal challenges persist, and these challenges, as well as opportunities for future development, are highlighted.
Affiliation(s)
- Mina S. Makary
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
9
Zhou Q, Hao L. Critical insights on improving risk evaluation for metachronous colorectal cancer after serrated polypectomy. Gastrointest Endosc 2024; 100:962. PMID: 39515926. DOI: 10.1016/j.gie.2024.06.016.
Affiliation(s)
- Qing Zhou
- Central Laboratory, The People's Hospital of Baoan Shenzhen
- Lu Hao
- Department of Science and Education, Shenzhen Baoan Shiyan People's Hospital, Shenzhen, China
10
Kapila AK, Georgiou L, Hamdi M. Decoding the Impact of AI on Microsurgery: Systematic Review and Classification of Six Subdomains for Future Development. Plast Reconstr Surg Glob Open 2024; 12:e6323. PMID: 39568680. PMCID: PMC11578208. DOI: 10.1097/gox.0000000000006323.
Abstract
Background: The advent of artificial intelligence (AI) in microsurgery holds tremendous potential for plastic and reconstructive surgery, with the possibility of elevating surgical precision, planning, and patient outcomes. This systematic review seeks to summarize available studies on the implementation of AI in microsurgery and classify these into subdomains where AI can revolutionize our field. Methods: Adhering to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, a meticulous search strategy was used across multiple databases. The inclusion criteria encompassed articles that explicitly discussed AI's integration into microsurgical practices. Our aim was to analyze and classify these studies across subdomains for future development. Results: The search yielded 2377 articles, with 571 abstracts eligible for screening. After shortlisting and reviewing 86 full-text articles, 29 studies met the inclusion criteria. Detailed analysis led to the classification of 6 subdomains within AI applications in microsurgery: information and knowledge delivery, microsurgical skills training, preoperative planning, intraoperative navigational aids and automated surgical tool control, flap monitoring, and postoperative predictive analytics for patient outcomes. Each subdomain showcased the multifaceted impact of AI on enhancing microsurgical procedures, from preoperative planning to postoperative recovery. Conclusions: The integration of AI into microsurgery signals a new dawn of surgical innovation, albeit with the caution warranted by its nascent stage and diversity of applications. The authors present a systematic review and 6 clear subdomains across which AI will likely play a role within microsurgery. Continuous research, ethical diligence, and cross-disciplinary cooperation are necessary for its successful integration within our specialty.
Affiliation(s)
- Ayush K Kapila
- Department of Plastic, Reconstructive and Aesthetic Surgery, Brussels University Hospital (UZ Brussel), Brussels, Belgium
- Letizia Georgiou
- Department of Plastic, Reconstructive and Aesthetic Surgery, Brussels University Hospital (UZ Brussel), Brussels, Belgium
- Moustapha Hamdi
- Department of Plastic, Reconstructive and Aesthetic Surgery, Brussels University Hospital (UZ Brussel), Brussels, Belgium
11
Li Q, Wang W, Yin H, Zou K, Jiao Y, Zhang Y. One-Dimensional Implantable Sensors for Accurately Monitoring Physiological and Biochemical Signals. Research (Wash D C) 2024; 7:0507. PMID: 39417041. PMCID: PMC11480832. DOI: 10.34133/research.0507.
Abstract
In recent years, one-dimensional (1D) implantable sensors have received considerable attention and undergone rapid development in the biomedical field due to their unique structural characteristics and high integration capability. These sensors can be implanted into the human body with minimal invasiveness, facilitating real-time and accurate monitoring of various physiological and pathological parameters. This review examines the latest advancements in 1D implantable sensors, focusing on sensor material design, device integration, implantation methods, and the construction of a stable sensor-tissue interface. Furthermore, a comprehensive overview is provided of the applications and future research directions of 1D implantable sensors, with the ultimate aim of promoting their utilization in personalized healthcare and precision medicine.
Affiliation(s)
- Kuangyi Zou
- National Laboratory of Solid State Microstructures, Jiangsu Key Laboratory of Artificial Functional Materials, Chemistry and Biomedicine Innovation Center, Collaborative Innovation Center of Advanced Microstructures, College of Engineering and Applied Sciences, Nanjing University, Nanjing 210023, China
- Yiding Jiao
- National Laboratory of Solid State Microstructures, Jiangsu Key Laboratory of Artificial Functional Materials, Chemistry and Biomedicine Innovation Center, Collaborative Innovation Center of Advanced Microstructures, College of Engineering and Applied Sciences, Nanjing University, Nanjing 210023, China
- Ye Zhang
- National Laboratory of Solid State Microstructures, Jiangsu Key Laboratory of Artificial Functional Materials, Chemistry and Biomedicine Innovation Center, Collaborative Innovation Center of Advanced Microstructures, College of Engineering and Applied Sciences, Nanjing University, Nanjing 210023, China
12
Lin J, Zhu S, Gao X, Liu X, Xu C, Xu Z, Zhu J. Evaluation of super resolution technology for digestive endoscopic images. Heliyon 2024; 10:e38920. PMID: 39430485. PMCID: PMC11489312. DOI: 10.1016/j.heliyon.2024.e38920.
Abstract
Objective: This study aims to evaluate the value of super resolution (SR) technology in augmenting the quality of digestive endoscopic images. Methods: In this retrospective study, we employed two advanced SR models, SwinIR and ESRGAN. Two discrete datasets were utilized: training was conducted using the dataset of the First Affiliated Hospital of Soochow University (12,212 high-resolution images) and evaluation using the HyperKvasir dataset (2,566 low-resolution images). Furthermore, endoscopists assessed the impact of enhancement on low-resolution images using a 5-point Likert scale. Finally, two endoscopic image classification tasks were employed to evaluate the effect of SR technology on computer vision (CV). Results: SwinIR demonstrated superior performance, achieving a PSNR of 32.60, an SSIM of 0.90, and a VIF of 0.47 on the test set. 90% of endoscopists agreed that SR preprocessing moderately improved the readability of endoscopic images. For CV, enhanced images bolstered the performance of convolutional neural networks in both the Barrett's esophagus classification task (F1-score improvement: 0.04) and the Mayo endoscopic score classification task (F1-score improvement: 0.04). Conclusions: SR technology demonstrates the capacity to produce high-resolution endoscopic images. The approach enhanced the clinical readability of low-resolution endoscopic images and the performance of CV models.
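PSNR and SSIM, two of the three fidelity metrics quoted for SwinIR, can be computed directly with scikit-image; VIF requires a separate implementation and is omitted. The sketch below is an assumed evaluation helper, not the study's code.

```python
# Assumed evaluation sketch (not the study's code): PSNR and SSIM between a
# super-resolved endoscopic frame and its high-resolution reference. VIF is
# not available in scikit-image and is omitted here.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def fidelity(reference: np.ndarray, restored: np.ndarray) -> dict:
    """Both inputs: uint8 RGB arrays of identical shape (H, W, 3)."""
    return {
        "psnr": peak_signal_noise_ratio(reference, restored, data_range=255),
        "ssim": structural_similarity(reference, restored, channel_axis=-1, data_range=255),
    }
```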
Affiliation(s)
- Jiaxi Lin
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatosplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Shiqi Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatosplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Xin Gao
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatosplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Xiaolin Liu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatosplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Chunfang Xu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Zhonghua Xu
- Department of Orthopedics, Jintan Affiliated Hospital to Jiangsu University, Changzhou, China
- Jinzhou Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Key Laboratory of Hepatosplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin, China
13
He Q, Feng G, Bano S, Stoyanov D, Zuo S. MonoLoT: Self-Supervised Monocular Depth Estimation in Low-Texture Scenes for Automatic Robotic Endoscopy. IEEE J Biomed Health Inform 2024; 28:6078-6091. PMID: 38968011. DOI: 10.1109/jbhi.2024.3423791.
Abstract
The self-supervised monocular depth estimation framework is well-suited for medical images that lack ground-truth depth, such as those from digestive endoscopes, facilitating navigation and 3D reconstruction in the gastrointestinal tract. However, this framework faces several limitations, including poor performance in low-texture environments, limited generalisation to real-world datasets, and unclear applicability in downstream tasks like visual servoing. To tackle these challenges, we propose MonoLoT, a self-supervised monocular depth estimation framework featuring two key innovations: point matching loss and batch image shuffle. Extensive ablation studies on two publicly available datasets, namely C3VD and SimCol, have shown that methods enabled by MonoLoT achieve substantial improvements, with accuracies of 0.944 on C3VD and 0.959 on SimCol, surpassing both depth-supervised and self-supervised baselines on C3VD. Qualitative evaluations on real-world endoscopic data underscore the generalisation capabilities of our methods, outperforming both depth-supervised and self-supervised baselines. To demonstrate the feasibility of using monocular depth estimation for visual servoing, we have successfully integrated our method into a proof-of-concept robotic platform, enabling real-time automatic intervention and control in digestive endoscopy. In summary, our method represents a significant advancement in monocular depth estimation for digestive endoscopy, overcoming key challenges and opening promising avenues for medical applications.
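MonoLoT's point matching loss and batch image shuffle are not reproduced here. The sketch below only illustrates the standard appearance term that self-supervised monocular depth methods minimize: a weighted SSIM + L1 error between a target frame and a source frame re-projected into the target view (the re-projection with predicted depth and pose is assumed to have been done elsewhere).

```python
# Standard self-supervised ingredient, shown for illustration only; MonoLoT's
# novel components are not reproduced. The weighted SSIM + L1 photometric error
# compares a target frame with a source frame already re-projected into the
# target view using predicted depth and camera pose.
import torch
import torch.nn.functional as F


def ssim_map(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Simplified per-pixel SSIM distance on (B, C, H, W) tensors (3x3 averaging)."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sig_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sig_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sig_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    num = (2 * mu_x * mu_y + c1) * (2 * sig_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sig_x + sig_y + c2)
    return ((1 - num / den) / 2).clamp(0, 1)


def photometric_loss(target: torch.Tensor, reprojected: torch.Tensor, alpha: float = 0.85) -> torch.Tensor:
    """Weighted SSIM + L1 appearance error, the usual self-supervised depth term."""
    l1 = (target - reprojected).abs().mean(1, keepdim=True)
    ssim = ssim_map(target, reprojected).mean(1, keepdim=True)
    return (alpha * ssim + (1 - alpha) * l1).mean()
```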
14
Han K, Zhao P, Chen S, Bao Y, Li B, Du J, Wu J, Li H, Chai N, Du X, Linghu E, Liu M. Systematic analysis of levels of evidence supporting Chinese clinical practice guidelines for gastrointestinal disease. Med 2024; 5:1112-1122.e3. PMID: 38889718. DOI: 10.1016/j.medj.2024.05.006.
Abstract
BACKGROUND: Clinical practice guidelines (CPGs) inform healthcare decisions and improve patient care. However, an evaluation of guidelines on gastrointestinal diseases (GIDs) is lacking. This study aimed to systematically analyze the level of evidence (LOE) supporting Chinese CPGs for GIDs. METHODS: CPGs for GIDs were identified by systematically searching major databases. Data on LOEs and classes of recommendations (CORs) were extracted. According to the Grading of Recommendations, Assessment, Development, and Evaluation system, LOEs were categorized as high, moderate, low, or very low, whereas CORs were classified as strong or weak. Statistical analyses were conducted to determine the distribution of LOEs and CORs across different subtopics and to assess changes in evidence quality over time. FINDINGS: Only 27.9% of the recommendations were supported by a high LOE, whereas approximately 70% were strong recommendations. There was a significant disparity among subtopics in the proportion of strong recommendations supported by a high LOE. The number of guidelines has increased in the past 5 years, but there has been a concomitant decline in the proportion of recommendations supported by a high LOE. CONCLUSIONS: There is a general lack of high-quality evidence supporting Chinese CPGs for GIDs, and inconsistencies in strong recommendations have not improved. This study identified areas requiring further research, emphasizing the need to bridge these gaps and promote the conduct of high-quality clinical trials. FUNDING: This study was supported by the National Key R&D Program of China (2022YFC2503604 and 2022YFC2503605) and Special Topics in Military Health Care (22BJZ25).
Affiliation(s)
- Ke Han
- Department of Gastroenterology and Hepatology, First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Pengyue Zhao
- Department of General Surgery, First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Shimin Chen
- Institute of Geriatrics, Beijing Key Laboratory of Aging and Geriatrics, National Clinical Research Center for Geriatrics Diseases, Second Medical Center, Chinese PLA General Hospital, Beijing, China
- Yinghui Bao
- Institute of Geriatrics, Beijing Key Laboratory of Aging and Geriatrics, National Clinical Research Center for Geriatrics Diseases, Second Medical Center, Chinese PLA General Hospital, Beijing, China
- Boyan Li
- Institute of Geriatrics, Beijing Key Laboratory of Aging and Geriatrics, National Clinical Research Center for Geriatrics Diseases, Second Medical Center, Chinese PLA General Hospital, Beijing, China
- Jiajun Du
- Library of Graduate School, Chinese People's Liberation Army General Hospital, Beijing, China
- Junwei Wu
- Library of Graduate School, Chinese People's Liberation Army General Hospital, Beijing, China
- Huikai Li
- Department of Gastroenterology and Hepatology, First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Ningli Chai
- Department of Gastroenterology and Hepatology, First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Xiaohui Du
- Department of General Surgery, First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Enqiang Linghu
- Department of Gastroenterology and Hepatology, First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Miao Liu
- Department of Anti-NBC Medicine, Graduate School, Chinese PLA General Hospital, Beijing, China
15
Younis R, Yamlahi A, Bodenstedt S, Scheikl PM, Kisilenko A, Daum M, Schulze A, Wise PA, Nickel F, Mathis-Ullrich F, Maier-Hein L, Müller-Stich BP, Speidel S, Distler M, Weitz J, Wagner M. A surgical activity model of laparoscopic cholecystectomy for co-operation with collaborative robots. Surg Endosc 2024; 38:4316-4328. PMID: 38872018. PMCID: PMC11289174. DOI: 10.1007/s00464-024-10958-w.
Abstract
BACKGROUND: Laparoscopic cholecystectomy is a very frequent surgical procedure. However, in an ageing society, fewer surgical staff will need to perform surgery on patients. Collaborative surgical robots (cobots) could address surgical staff shortages and workload. To achieve context-awareness for surgeon-robot collaboration, intraoperative action workflow recognition is a key challenge. METHODS: A surgical process model was developed for intraoperative surgical activities, including actor, instrument, action and target, in laparoscopic cholecystectomy (excluding camera guidance). These activities, as well as instrument presence and surgical phases, were annotated in videos of laparoscopic cholecystectomy performed on human patients (n = 10) and on explanted porcine livers (n = 10). The machine learning algorithm Distilled-Swin was trained on our own annotated dataset and the CholecT45 dataset. The model was validated using a fivefold cross-validation approach. RESULTS: In total, 22,351 activities were annotated, with a cumulative duration of 24.9 h of video segments. The machine learning algorithm trained and validated on our own dataset scored a mean average precision (mAP) of 25.7% and a top K = 5 accuracy of 85.3%. With training and validation on our dataset and CholecT45, the algorithm scored a mAP of 37.9%. CONCLUSIONS: An activity model was developed and applied for the fine-granular annotation of laparoscopic cholecystectomies in two surgical settings. A recognition algorithm trained on our own annotated dataset and CholecT45 achieved higher performance than training on CholecT45 alone and can recognize frequently occurring activities well, but not infrequent activities. The analysis of the annotated dataset allowed quantification of the potential of collaborative surgical robots to address the workload of surgical staff. If collaborative surgical robots could grasp and hold tissue, up to 83.5% of the assistant's tissue-interacting tasks (i.e. excluding camera guidance) could be performed by robots.
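Mean average precision over activity classes and top-K accuracy, the two figures quoted above, can be computed from per-frame class scores with scikit-learn. The sketch below is an assumed evaluation routine, not the study's code; classes absent from an evaluated split are skipped when averaging.

```python
# Assumed evaluation sketch (not the study's code): mean average precision over
# activity classes and top-K accuracy from per-frame class scores.
import numpy as np
from sklearn.metrics import average_precision_score, top_k_accuracy_score


def evaluate(y_true: np.ndarray, y_score: np.ndarray, k: int = 5) -> dict:
    """y_true: integer activity labels (N,); y_score: class scores (N, C)."""
    n_classes = y_score.shape[1]
    y_onehot = np.eye(n_classes)[y_true]
    ap = [
        average_precision_score(y_onehot[:, c], y_score[:, c])
        for c in range(n_classes)
        if y_onehot[:, c].any()  # skip classes missing from this split
    ]
    return {
        "mAP": float(np.mean(ap)),
        f"top{k}_accuracy": float(
            top_k_accuracy_score(y_true, y_score, k=k, labels=list(range(n_classes)))
        ),
    }
```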
Affiliation(s)
- R Younis
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- A Yamlahi
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- S Bodenstedt
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- P M Scheikl
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- A Kisilenko
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- M Daum
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- A Schulze
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- P A Wise
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- F Nickel
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- F Mathis-Ullrich
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- L Maier-Hein
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- B P Müller-Stich
- Department for Abdominal Surgery, University Center for Gastrointestinal and Liver Diseases, Basel, Switzerland
- S Speidel
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- M Distler
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- J Weitz
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- M Wagner
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
16
Rau A, Bano S, Jin Y, Azagra P, Morlana J, Kader R, Sanderson E, Matuszewski BJ, Lee JY, Lee DJ, Posner E, Frank N, Elangovan V, Raviteja S, Li Z, Liu J, Lalithkumar S, Islam M, Ren H, Lovat LB, Montiel JMM, Stoyanov D. SimCol3D - 3D reconstruction during colonoscopy challenge. Med Image Anal 2024; 96:103195. PMID: 38815359. DOI: 10.1016/j.media.2024.103195.
Abstract
Colorectal cancer is one of the most common cancers in the world. While colonoscopy is an effective screening technique, navigating an endoscope through the colon to detect polyps is challenging. A 3D map of the observed surfaces could enhance the identification of unscreened colon tissue and serve as a training platform. However, reconstructing the colon from video footage remains difficult. Learning-based approaches hold promise as robust alternatives, but necessitate extensive datasets. By establishing a benchmark dataset, the 2022 EndoVis sub-challenge SimCol3D aimed to facilitate data-driven depth and pose prediction during colonoscopy. The challenge was hosted as part of MICCAI 2022 in Singapore. Six teams from around the world, representing academia and industry, participated in the three sub-challenges: synthetic depth prediction, synthetic pose prediction, and real pose prediction. This paper describes the challenge, the submitted methods, and their results. We show that depth prediction from synthetic colonoscopy images is robustly solvable, while pose estimation remains an open research question.
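For the pose sub-challenges, submissions are typically compared by the rotation and translation error of the predicted relative camera pose for each frame pair. The sketch below shows those two standard errors for a single pair; it is illustrative and not the challenge's official evaluation script.

```python
# Illustrative sketch (not the challenge's official evaluation): rotation and
# translation errors between a predicted and a ground-truth relative camera
# pose for one frame pair.
import numpy as np


def pose_errors(R_gt: np.ndarray, t_gt: np.ndarray, R_pred: np.ndarray, t_pred: np.ndarray):
    """R_*: (3, 3) rotation matrices; t_*: (3,) translation vectors."""
    # Geodesic rotation error: angle of the residual rotation R_gt^T @ R_pred.
    cos_angle = np.clip((np.trace(R_gt.T @ R_pred) - 1.0) / 2.0, -1.0, 1.0)
    rotation_error_deg = float(np.degrees(np.arccos(cos_angle)))
    translation_error = float(np.linalg.norm(t_gt - t_pred))  # Euclidean distance
    return rotation_error_deg, translation_error
```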
Affiliation(s)
- Anita Rau
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK; Stanford University, Stanford, CA, USA.
- Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Yueming Jin
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK; National University of Singapore, Singapore
- Rawen Kader
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Edward Sanderson
- Computer Vision and Machine Learning (CVML) Group, University of Central Lancashire, Preston, UK
- Bogdan J Matuszewski
- Computer Vision and Machine Learning (CVML) Group, University of Central Lancashire, Preston, UK
- Jae Young Lee
- Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Dong-Jae Lee
- Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Sista Raviteja
- Indian Institute of Technology Kharagpur, Kharagpur, India
- Zhengwen Li
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, China
- Jiquan Liu
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, China
- Seenivasan Lalithkumar
- National University of Singapore, Singapore; The Chinese University of Hong Kong, Hong Kong, China
- Hongliang Ren
- National University of Singapore, Singapore; The Chinese University of Hong Kong, Hong Kong, China
- Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK
17
Wang J, Wang B, Liu YY, Luo YL, Wu YY, Xiang L, Yang XM, Qu YL, Tian TR, Man Y. Recent Advances in Digital Technology in Implant Dentistry. J Dent Res 2024; 103:787-799. PMID: 38822563. DOI: 10.1177/00220345241253794.
Abstract
Digital technology has emerged as a transformative tool in dental implantation, profoundly enhancing accuracy and effectiveness across multiple facets, such as diagnosis, preoperative treatment planning, surgical procedures, and restoration delivery. The integration of radiographic data and intraoral data, sometimes together with facial scan data or an electronic facebow, through virtual planning software enables comprehensive 3-dimensional visualization of the hard and soft tissue and the position of the future restoration, resulting in heightened diagnostic precision. In virtual surgery design, the incorporation of both the prosthetic arrangement and individual anatomical details enables the virtual execution of critical procedures (e.g., implant placement, extended applications, etc.) through analysis of cross-sectional images and the reconstruction of 3-dimensional surface models. After verification, the use of digital technology, including templates, navigation, combined techniques, and implant robots, achieves seamless transfer of the virtual treatment plan to the actual surgical site, ultimately leading to enhanced surgical outcomes with greatly improved accuracy. In restoration delivery, digital techniques for impression, shade matching, and prosthesis fabrication have advanced, enabling seamless digital data conversion and efficient communication among clinicians and technicians. Compared with clinical medicine, artificial intelligence (AI) technology in dental implantology primarily focuses on diagnosis and prediction. AI-supported preoperative planning and surgery remain in developmental phases, impeded by the complexity of clinical cases and ethical considerations, thereby constraining widespread adoption.
Affiliation(s)
- J Wang
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Department of Oral Implantology, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- B Wang
- Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Y Y Liu
- Department of Oral Implantology, The Affiliated Stomatological Hospital of Kunming Medical University, Kunming, Yunnan, China
- Y L Luo
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Department of Oral Implantology, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Y Y Wu
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Department of Oral Implantology, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- L Xiang
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Department of Oral Implantology, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- X M Yang
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Department of Oral Implantology, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Y L Qu
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Department of Prosthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- T R Tian
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Department of Oral Implantology, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Y Man
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Department of Oral Implantology, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
18
|
Laleman W, Peiffer KH, Tischendorf M, Ullerich HJ, Praktiknjo M, Trebicka J. Role of endoscopy in hepatology. Dig Liver Dis 2024; 56:1185-1195. [PMID: 38151452 DOI: 10.1016/j.dld.2023.11.032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/20/2023] [Accepted: 11/27/2023] [Indexed: 12/29/2023]
Abstract
The growing and evolving field of EUS and advanced hepatobiliary endoscopy has expanded traditional upper gastrointestinal endoscopy and unveiled novel diagnostic and therapeutic options for unresolved hepatobiliary issues. This conceptually appealing integration of endoscopy within the practice of hepatology is referred to as 'endo-hepatology'. Endo-hepatology addresses disorders of the liver parenchyma and liver vasculature on the one hand and of the hepatobiliary tract on the other. Applications under the umbrella of endo-hepatology include, amongst others, EUS-guided liver biopsy, EUS-guided portal pressure measurement, EUS-guided portal venous blood sampling, EUS-guided coil and glue embolization of gastric varices and spontaneous portosystemic shunts, ERCP in the challenging context of (decompensated) cirrhosis, and intraductal cholangioscopy for primary sclerosing cholangitis. Endoscopic proficiency alone, however, does not necessarily translate into a straightforward solution for complex, persisting hepatobiliary problems. Endo-hepatology therefore continues to generate high-quality data to validate and standardize procedures against the currently considered (best available) "gold standards", while seeking novel minimally invasive solutions for persisting hepatological stalemates. In the current review, we aim to critically appraise the status and potential future directions of endo-hepatology.
Collapse
Affiliation(s)
- Wim Laleman
- Department of Gastroenterology and Hepatology, Section of Liver and Biliopancreatic disorders, University Hospitals Leuven, KU Leuven, Leuven, Belgium; Department of Medicine B (Gastroenterology, Hepatology, Endocrinology, Clinical Infectiology), University Hospital Muenster, Muenster, Germany.
| | - Kai-Henrik Peiffer
- Department of Medicine B (Gastroenterology, Hepatology, Endocrinology, Clinical Infectiology), University Hospital Muenster, Muenster, Germany
| | - Michael Tischendorf
- Department of Medicine B (Gastroenterology, Hepatology, Endocrinology, Clinical Infectiology), University Hospital Muenster, Muenster, Germany
| | - Hans-Joerg Ullerich
- Department of Medicine B (Gastroenterology, Hepatology, Endocrinology, Clinical Infectiology), University Hospital Muenster, Muenster, Germany
| | - Michael Praktiknjo
- Department of Medicine B (Gastroenterology, Hepatology, Endocrinology, Clinical Infectiology), University Hospital Muenster, Muenster, Germany
| | - Jonel Trebicka
- Department of Medicine B (Gastroenterology, Hepatology, Endocrinology, Clinical Infectiology), University Hospital Muenster, Muenster, Germany; European Foundation of Chronic Liver Failure, EFCLIF, Barcelona, Spain
| |
Collapse
|
19
|
Schmidt A, Mohareri O, DiMaio S, Yip MC, Salcudean SE. Tracking and mapping in medical computer vision: A review. Med Image Anal 2024; 94:103131. [PMID: 38442528 DOI: 10.1016/j.media.2024.103131] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2023] [Revised: 02/08/2024] [Accepted: 02/29/2024] [Indexed: 03/07/2024]
Abstract
As computer vision algorithms increase in capability, their applications in clinical systems will become more pervasive. These applications include: diagnostics, such as colonoscopy and bronchoscopy; guiding biopsies, minimally invasive interventions, and surgery; automating instrument motion; and providing image guidance using pre-operative scans. Many of these applications depend on the specific visual nature of medical scenes and require designing algorithms to perform in this environment. In this review, we provide an update to the field of camera-based tracking and scene mapping in surgery and diagnostics in medical computer vision. We begin by describing our review process, which results in a final list of 515 papers that we cover. We then give a high-level summary of the state of the art and provide relevant background for those who need tracking and mapping for their clinical applications. We then review datasets provided in the field and the clinical needs that motivate their design, before delving into the algorithmic side and summarizing recent developments. This summary should be especially useful for algorithm designers and for those looking to understand the capability of off-the-shelf methods. We maintain focus on algorithms for deformable environments while also reviewing the essential building blocks in rigid tracking and mapping since there is a large amount of crossover in methods. With the field summarized, we discuss the current state of the tracking and mapping methods along with needs for future algorithms, needs for quantification, and the viability of clinical applications. We then provide some research directions and questions. We conclude that new methods need to be designed or combined to support clinical applications in deformable environments, and more focus needs to be put into collecting datasets for training and evaluation.
Collapse
Affiliation(s)
- Adam Schmidt
- Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada.
| | - Omid Mohareri
- Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
| | - Simon DiMaio
- Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
| | - Michael C Yip
- Department of Electrical and Computer Engineering, University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
| | - Septimiu E Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada
| |
Collapse
|
20
|
Selvaraj V, Sudhakar S, Sekaran S, Rajamani Sekar SK, Warrier S. Enhancing precision and efficiency: harnessing robotics and artificial intelligence for endoscopic and surgical advancements. Int J Surg 2024; 110:1315-1316. [PMID: 38016128 PMCID: PMC10871611 DOI: 10.1097/js9.0000000000000936] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2023] [Accepted: 11/09/2023] [Indexed: 11/30/2023]
Affiliation(s)
- Vimalraj Selvaraj
- Department of Applied Mechanics and Biomedical Engineering, Indian Institute of Technology – Madras
| | - Swathi Sudhakar
- Department of Applied Mechanics and Biomedical Engineering, Indian Institute of Technology – Madras
| | - Saravanan Sekaran
- Department of Prosthodontics, Saveetha Dental College and Hospital, Saveetha Institute of Medical and Technical Sciences (SIMATS), Saveetha University
| | | | - Sudha Warrier
- Department of Biotechnology, Faculty of Biomedical Sciences and Technology, Sri Ramachandra Institute of Higher Education and Research, Chennai, Tamil Nadu, India
| |
Collapse
|
21
|
Mahoney LB, Huang JS, Lightdale JR, Walsh CM. Pediatric endoscopy: how can we improve patient outcomes and ensure best practices? Expert Rev Gastroenterol Hepatol 2024; 18:89-102. [PMID: 38465446 DOI: 10.1080/17474124.2024.2328229] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/25/2023] [Accepted: 03/05/2024] [Indexed: 03/12/2024]
Abstract
INTRODUCTION Strategies to promote high-quality endoscopy in children require consensus around pediatric-specific quality standards and indicators. Using a rigorous guideline development process, the international Pediatric Endoscopy Quality Improvement Network (PEnQuIN) was developed to support continuous quality improvement efforts within and across pediatric endoscopy services. AREAS COVERED This review presents a framework, informed by the PEnQuIN guidelines, for assessing endoscopist competence, granting procedural privileges, audit and feedback, and skill remediation, when required. As is critical for promoting quality, PEnQuIN indicators can be benchmarked at the individual endoscopist, endoscopy facility, and endoscopy community levels. Furthermore, efforts to incorporate technologies, including electronic medical records and artificial intelligence, into endoscopic quality improvement processes can aid in the creation of large-scale networks to facilitate comparison and standardization of quality indicator reporting across sites. EXPERT OPINION PEnQuIN quality standards and indicators provide a framework for continuous quality improvement in pediatric endoscopy, benefiting individual endoscopists, endoscopy facilities, and the broader endoscopy community. Routine and reliable measurement of data, facilitated by technology, is required to identify and drive improvements in care. Engaging all stakeholders in endoscopy quality improvement processes is crucial to enhancing patient outcomes and establishing best practices for safe, efficient, and effective pediatric endoscopic care.
Collapse
Affiliation(s)
- Lisa B Mahoney
- Division of Gastroenterology, Hepatology and Nutrition, Boston Children's Hospital, Boston, MA, USA
| | - Jeannie S Huang
- Rady Children's Hospital, San Diego, CA and University of California San Diego, La Jolla, CA, USA
| | - Jenifer R Lightdale
- Division of Gastroenterology, Hepatology and Nutrition, Boston Children's Hospital, Boston, MA, USA
| | - Catharine M Walsh
- Division of Gastroenterology, Hepatology and Nutrition and the Research and Learning Institutes, The Hospital for Sick Children, Department of Paediatrics and the Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
22
|
Laleman W, Vanderschueren E, Mehdi ZS, Wiest R, Cardenas A, Trebicka J. Endoscopic procedures in hepatology: Current trends and new developments. J Hepatol 2024; 80:124-139. [PMID: 37730125 DOI: 10.1016/j.jhep.2023.08.032] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/09/2023] [Revised: 08/10/2023] [Accepted: 08/28/2023] [Indexed: 09/22/2023]
Abstract
Gastrointestinal endoscopy has long been a reliable backbone in the diagnosis and management of hepatobiliary disorders and their complications. However, with evolving non-invasive testing, personalised medicine has reframed the utility and necessity of endoscopic screening. Conversely, the growing interest and use of endoscopic ultrasound (EUS) and advanced endoscopy within gastrointestinal units has also opened novel diagnostic and therapeutic avenues for patients with various hepatobiliary diseases. The integration of "advanced endoscopy" within the practice of hepatology is nowadays referred to as "endo-hepatology". In essence, endo-hepatology consists of two pillars: one focusing primarily on disorders of the liver parenchyma, vascular disorders, and portal hypertension, which is mainly captured via EUS, while the other targets the hepatobiliary tract via endoscopic retrograde cholangiopancreatography and advanced imaging. Applications under the umbrella of endo-hepatology include, amongst others, EUS-guided liver biopsy, EUS-guided portal pressure gradient measurement, coil and glue embolisation of gastric varices as well as cholangioscopy. As such, endo-hepatology could become an attractive concept wherein advanced endoscopy might reinforce the medical management of patients with hepatobiliary disorders and their complications after initial basic work-up. In this review, we discuss current trends and future developments within endo-hepatology and the remaining hurdles to overcome.
Collapse
Affiliation(s)
- Wim Laleman
- Department of Gastroenterology and Hepatology, Section of Liver and Biliopancreatic Disorders, University Hospitals Leuven, KU LEUVEN, Leuven, Belgium; Medizinische Klinik B, Universitätsklinikum Münster, Münster University, Münster, Germany.
| | - Emma Vanderschueren
- Department of Gastroenterology and Hepatology, Section of Liver and Biliopancreatic Disorders, University Hospitals Leuven, KU LEUVEN, Leuven, Belgium
| | - Zain Seyad Mehdi
- Department of Mechanical Engineering, KU LEUVEN, Leuven, Belgium
| | - Reiner Wiest
- Department of Visceral Surgery and Medicine, University Inselspital, Bern, Switzerland
| | - Andres Cardenas
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Centro de Investigacion Biomedica en Red Enfermedades Hepaticas y Digestivas (CIBERehd), Madrid, Spain; Institute of Digestive Disease and Metabolism, Hospital Clinic de Barcelona, Barcelona, Catalunya, Spain
| | - Jonel Trebicka
- Medizinische Klinik B, Universitätsklinikum Münster, Münster University, Münster, Germany; European Foundation of Chronic Liver Failure, EFCLIF, Barcelona, Spain
| |
Collapse
|
23
|
Chu CH, Chia YH, Hsu HC, Vyas S, Tsai CM, Yamaguchi T, Tanaka T, Chen HW, Luo Y, Yang PC, Tsai DP. Intelligent Phase Contrast Meta-Microscope System. NANO LETTERS 2023; 23:11630-11637. [PMID: 38038680 DOI: 10.1021/acs.nanolett.3c03484] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/02/2023]
Abstract
Phase contrast imaging techniques enable the visualization of disparities in the refractive index among various materials. However, these techniques usually come with a cost: the need for bulky, inflexible, and complicated configurations. Here, we propose and experimentally demonstrate an ultracompact meta-microscope, a novel imaging platform designed to accomplish both optical and digital phase contrast imaging. The optical phase contrast imaging system is composed of a pair of metalenses and an intermediate spiral phase metasurface located at the Fourier plane. The performance of the system in generating edge-enhanced images is validated by imaging a variety of human cells, including lung cell lines BEAS-2B, CLY1, and H1299 and other types. Additionally, we integrate the ResNet deep learning model into the meta-microscope to transform bright-field images into edge-enhanced images with high contrast accuracy. This technology promises to aid in the development of innovative miniature optical systems for biomedical and clinical applications.
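For readers unfamiliar with spiral phase contrast, the sketch below is a minimal numerical illustration of the principle the abstract describes: multiplying the image spectrum by a vortex phase at the Fourier plane yields an isotropically edge-enhanced image. It is not the authors' metasurface implementation; the function name and the use of NumPy FFTs are illustrative assumptions.
```python
import numpy as np

def spiral_phase_edge_enhance(img):
    """Simulate edge enhancement by a spiral (vortex) phase filter placed
    at the Fourier plane of a 4f phase-contrast system (sketch only)."""
    F = np.fft.fftshift(np.fft.fft2(img))        # centered image spectrum
    ny, nx = img.shape
    y, x = np.mgrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    vortex = np.exp(1j * np.arctan2(y, x))       # l = 1 spiral phase plate
    out = np.fft.ifft2(np.fft.ifftshift(F * vortex))
    return np.abs(out)                           # edge-enhanced intensity
```
Applied to a phase-like object (e.g., an unstained cell image), the returned magnitude is bright wherever the input has gradients, which is the qualitative behaviour the optical system exploits.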
Collapse
Affiliation(s)
- Cheng Hung Chu
- YongLin Institute of Health, National Taiwan University, Taipei 10672, Taiwan
| | - Yu-Hsin Chia
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- Department of Biomedical Engineering, National Taiwan University, Taipei 10051, Taiwan
| | - Hung-Chuan Hsu
- Department of Mechanical Engineering, National Taiwan University, Taipei 10617, Taiwan
| | - Sunil Vyas
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
| | - Chen-Ming Tsai
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
| | - Takeshi Yamaguchi
- Innovative Photon Manipulation Research Team, RIKEN Center for Advanced Photonics, Saitama 351-0198, Japan
| | - Takuo Tanaka
- Innovative Photon Manipulation Research Team, RIKEN Center for Advanced Photonics, Saitama 351-0198, Japan
| | - Huei-Wen Chen
- Graduate Institute of Toxicology, College of Medicine, National Taiwan University, Taipei 100, Taiwan
- Genome and Systems Biology Degree Program, National Taiwan University and Academia Sinica, Taipei 100, Taiwan
| | - Yuan Luo
- YongLin Institute of Health, National Taiwan University, Taipei 10672, Taiwan
- Institute of Medical Device and Imaging, National Taiwan University, Taipei 10051, Taiwan
- Department of Biomedical Engineering, National Taiwan University, Taipei 10051, Taiwan
- Program for Precision Health and Intelligent Medicine, National Taiwan University, Taipei 106319, Taiwan, R.O.C
| | - Pan-Chyr Yang
- YongLin Institute of Health, National Taiwan University, Taipei 10672, Taiwan
- Program for Precision Health and Intelligent Medicine, National Taiwan University, Taipei 106319, Taiwan, R.O.C
- Department of Internal Medicine, National Taiwan University Hospital, National Taiwan University, Taipei 10002, Taiwan
- Institute of Biomedical Sciences, Academia Sinica, Taipei 11529, Taiwan
| | - Din Ping Tsai
- Department of Electrical Engineering, City University of Hong Kong, Kowloon 999077, Hong Kong
- Centre for Biosystems, Neuroscience, and Nanotechnology, City University of Hong Kong, Kowloon 999077, Hong Kong
- The State Key Laboratory of Terahertz and Millimeter Waves, City University of Hong Kong, Kowloon 999077, Hong Kong
| |
Collapse
|
24
|
Daher R, Vasconcelos F, Stoyanov D. A Temporal Learning Approach to Inpainting Endoscopic Specularities and Its Effect on Image Correspondence. Med Image Anal 2023; 90:102994. [PMID: 37812856 PMCID: PMC10958122 DOI: 10.1016/j.media.2023.102994] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2021] [Revised: 08/31/2023] [Accepted: 10/02/2023] [Indexed: 10/11/2023]
Abstract
Video streams are utilised to guide minimally-invasive surgery and diagnosis in a wide range of procedures, and many computer-assisted techniques have been developed to automatically analyse them. These approaches can provide additional information to the surgeon such as lesion detection, instrument navigation, or anatomy 3D shape modelling. However, the necessary image features to recognise these patterns are not always reliably detected due to the presence of irregular light patterns such as specular highlight reflections. In this paper, we aim to remove specular highlights from endoscopic videos using machine learning. We propose using a temporal generative adversarial network (GAN) to inpaint the hidden anatomy under specularities, inferring its appearance spatially and from neighbouring frames, where they are not present in the same location. This is achieved using in-vivo data from gastric endoscopy (Hyper Kvasir) in a fully unsupervised manner that relies on the automatic detection of specular highlights. System evaluations show significant improvements over other methods through direct comparison and ablation studies that demonstrate the importance of the network's temporal and transfer learning components. The generalisability of our system to different surgical setups and procedures was also evaluated qualitatively on in-vivo data of gastric endoscopy and ex-vivo porcine data (SERV-CT, SCARED). We also assess the effect of our method in comparison to other methods on computer vision tasks that underpin 3D reconstruction and camera motion estimation, namely stereo disparity, optical flow, and sparse point feature matching. These are evaluated quantitatively and qualitatively, and the results show a positive effect of our specular inpainting method on these tasks in a novel comprehensive analysis. Our code and dataset are made available at https://github.com/endomapper/Endo-STTN.
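As a companion to the automatic highlight detection that the inpainting pipeline relies on, here is a minimal sketch of one common heuristic: flag bright, low-saturation pixels in HSV space and dilate the mask to cover highlight fringes. The thresholds and dilation radius are illustrative assumptions, not values taken from the paper.
```python
import cv2
import numpy as np

def specular_mask(frame_bgr, v_thresh=230, s_thresh=40, dilate_px=3):
    """Crude specular-highlight mask for an endoscopic frame (sketch only)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    s, v = hsv[..., 1], hsv[..., 2]
    mask = ((v >= v_thresh) & (s <= s_thresh)).astype(np.uint8) * 255
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    return cv2.dilate(mask, kernel)   # grow the mask over highlight edges
```
The resulting binary mask marks the regions an inpainting network would be asked to fill from spatial context and neighbouring frames.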
Collapse
Affiliation(s)
- Rema Daher
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, Gower Street, London, WC1E 6BT, UK.
| | - Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, Gower Street, London, WC1E 6BT, UK.
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, Gower Street, London, WC1E 6BT, UK.
| |
Collapse
|
25
|
Bobrow TL, Golhar M, Vijayan R, Akshintala VS, Garcia JR, Durr NJ. Colonoscopy 3D video dataset with paired depth from 2D-3D registration. Med Image Anal 2023; 90:102956. [PMID: 37713764 PMCID: PMC10591895 DOI: 10.1016/j.media.2023.102956] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 06/29/2023] [Accepted: 09/04/2023] [Indexed: 09/17/2023]
Abstract
Screening colonoscopy is an important clinical application for several 3D computer vision techniques, including depth estimation, surface reconstruction, and missing region detection. However, the development, evaluation, and comparison of these techniques in real colonoscopy videos remain largely qualitative due to the difficulty of acquiring ground truth data. In this work, we present a Colonoscopy 3D Video Dataset (C3VD) acquired with a high definition clinical colonoscope and high-fidelity colon models for benchmarking computer vision methods in colonoscopy. We introduce a novel multimodal 2D-3D registration technique to register optical video sequences with ground truth rendered views of a known 3D model. The different modalities are registered by transforming optical images to depth maps with a Generative Adversarial Network and aligning edge features with an evolutionary optimizer. This registration method achieves an average translation error of 0.321 millimeters and an average rotation error of 0.159 degrees in simulation experiments where error-free ground truth is available. The method also leverages video information, improving registration accuracy by 55.6% for translation and 60.4% for rotation compared to single frame registration. 22 short video sequences were registered to generate 10,015 total frames with paired ground truth depth, surface normals, optical flow, occlusion, six degree-of-freedom pose, coverage maps, and 3D models. The dataset also includes screening videos acquired by a gastroenterologist with paired ground truth pose and 3D surface models. The dataset and registration source code are available at https://durr.jhu.edu/C3VD.
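The translation and rotation errors quoted above are standard 6-DoF pose metrics; the sketch below shows one conventional way to compute them from 4x4 homogeneous pose matrices. It is a generic illustration, not the dataset's evaluation code.
```python
import numpy as np

def pose_errors(T_est, T_gt):
    """Translation error (in the poses' units) and rotation error (degrees)
    between an estimated and a ground-truth 4x4 camera pose."""
    t_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]               # relative rotation
    cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_a))                 # geodesic angle
    return t_err, r_err
```
Averaging these two quantities over registered frames yields summary figures of the same kind as the reported 0.321 mm translation and 0.159 degree rotation errors.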
Collapse
Affiliation(s)
- Taylor L Bobrow
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
| | - Mayank Golhar
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
| | - Rohan Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
| | - Venkata S Akshintala
- Division of Gastroenterology and Hepatology, Johns Hopkins Medicine, Baltimore, MD 21287, USA
| | - Juan R Garcia
- Department of Art as Applied to Medicine, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
| | - Nicholas J Durr
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA.
| |
Collapse
|
26
|
Qi C, Hu L, Zhang C, Wang K, Qiu B, Yi J, Shen Y. Role of surgery in T4N0-3M0 esophageal cancer. World J Surg Oncol 2023; 21:369. [PMID: 38008742 PMCID: PMC10680323 DOI: 10.1186/s12957-023-03239-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Accepted: 11/13/2023] [Indexed: 11/28/2023] Open
Abstract
BACKGROUND This study aimed to investigate the unsettled issue of whether patients with T4 esophageal cancer benefit from surgery. METHODS Patients with T4N0-3M0 esophageal cancer diagnosed between 2004 and 2015 were identified from the Surveillance, Epidemiology, and End Results (SEER) database. The Kaplan-Meier method, Cox proportional hazards regression, and propensity score matching (PSM) were used to compare overall survival (OS) between the surgery and no-surgery groups. RESULTS A total of 1822 patients were analyzed. Multivariable Cox regression showed that the HR (95% CI) for surgery vs. no surgery was 0.492 (0.427-0.567) (P < 0.001) in the T4N0-3M0 cohort, 0.471 (0.354-0.627) (P < 0.001) in the T4aN0-3M0 cohort, and 0.480 (0.335-0.689) (P < 0.001) in the T4bN0-3M0 cohort. The HRs (95% CI) for neoadjuvant therapy plus surgery vs. no surgery and for surgery without neoadjuvant therapy vs. no surgery were 0.548 (0.461-0.650) (P < 0.001) and 0.464 (0.375-0.574) (P < 0.001), respectively. No significant OS difference was observed between neoadjuvant therapy plus surgery and surgery without neoadjuvant therapy: HR 0.966 (0.686-1.360) (P = 0.843). Subgroup analyses and PSM-adjusted analyses showed consistent results. CONCLUSION Surgery may improve OS for patients with T4N0-3M0 esophageal cancer, in both T4a and T4b disease, and surgery with or without neoadjuvant therapy may both achieve better OS than no surgery.
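As an illustration of the kind of multivariable Cox model behind the reported hazard ratios, the sketch below uses the lifelines package on a hypothetical SEER-style extract; the file name and column names are assumptions, not the study's actual variables.
```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort extract: survival time in months, death indicator,
# surgery flag, and a couple of covariates (names are illustrative only).
df = pd.read_csv("t4_esophageal_cohort.csv")   # assumed file layout

cph = CoxPHFitter()
cph.fit(df[["months", "died", "surgery", "age", "neoadjuvant"]],
        duration_col="months", event_col="died")

print(cph.hazard_ratios_["surgery"])    # an HR near 0.49 would mirror the abstract
print(cph.confidence_intervals_)        # 95% CIs for each covariate
```
Propensity score matching, as used in the study, would typically be applied before fitting such a model to balance surgery and no-surgery groups on baseline covariates.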
Collapse
Affiliation(s)
- Chen Qi
- Department of Cardiothoracic Surgery, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China
| | - Liwen Hu
- Department of Cardiothoracic Surgery, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China
- Department of Cardiothoracic Surgery, Jinling Hospital, Jinling Clinical Medical School, Nanjing Medical University, Nanjing, 210002, China
| | - Chi Zhang
- Department of Cardiothoracic Surgery, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China
| | - Kang Wang
- Department of Cardiothoracic Surgery, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China
- Department of Cardiothoracic Surgery, Jinling Hospital, Jinling Clinical Medical School, Nanjing Medical University, Nanjing, 210002, China
| | - Bingmei Qiu
- Department of Cardiothoracic Surgery, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China
- Department of Anesthesiology, Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Nanjing, 210004, China
| | - Jun Yi
- Department of Cardiothoracic Surgery, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China.
- Department of Cardiothoracic Surgery, Jinling Hospital, Jinling Clinical Medical School, Nanjing Medical University, Nanjing, 210002, China.
| | - Yi Shen
- Department of Cardiothoracic Surgery, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China.
- Department of Cardiothoracic Surgery, Jinling Hospital, Jinling Clinical Medical School, Nanjing Medical University, Nanjing, 210002, China.
| |
Collapse
|
27
|
Brandenburg JM, Jenke AC, Stern A, Daum MTJ, Schulze A, Younis R, Petrynowski P, Davitashvili T, Vanat V, Bhasker N, Schneider S, Mündermann L, Reinke A, Kolbinger FR, Jörns V, Fritz-Kebede F, Dugas M, Maier-Hein L, Klotz R, Distler M, Weitz J, Müller-Stich BP, Speidel S, Bodenstedt S, Wagner M. Active learning for extracting surgomic features in robot-assisted minimally invasive esophagectomy: a prospective annotation study. Surg Endosc 2023; 37:8577-8593. [PMID: 37833509 PMCID: PMC10615926 DOI: 10.1007/s00464-023-10447-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Accepted: 09/02/2023] [Indexed: 10/15/2023]
Abstract
BACKGROUND With Surgomics, we aim for personalized prediction of the patient's surgical outcome using machine learning (ML) on multimodal intraoperative data to extract surgomic features as surgical process characteristics. As high-quality annotations by medical experts are crucial, but still a bottleneck, we prospectively investigate active learning (AL) to reduce annotation effort and present automatic recognition of surgomic features. METHODS To establish a process for the development of surgomic features, ten video-based features related to bleeding, as a highly relevant intraoperative complication, were chosen. They comprise the amount of blood and smoke in the surgical field, six instruments, and two anatomic structures. Annotation of selected frames from robot-assisted minimally invasive esophagectomies was performed by at least three independent medical experts. To test whether AL reduces annotation effort, we performed a prospective annotation study comparing AL with equidistant sampling (EQS) for frame selection. Multiple Bayesian ResNet18 architectures were trained on a multicentric dataset, consisting of 22 videos from two centers. RESULTS In total, 14,004 frames were tag-annotated. A mean F1-score of 0.75 ± 0.16 was achieved for all features. The highest F1-score was achieved for the instruments (mean 0.80 ± 0.17). This result is also reflected in the inter-rater agreement (1-rater-kappa > 0.82). Compared to EQS, AL showed better recognition results for the instruments, with a significant difference in the McNemar test comparing correctness of predictions. Moreover, in contrast to EQS, AL selected more frames of the four less common instruments (1512 vs. 607 frames) and achieved higher F1-scores for common instruments while requiring fewer training frames. CONCLUSION We presented ten surgomic features relevant for bleeding events in esophageal surgery, automatically extracted from surgical video using ML. AL showed the potential to reduce annotation effort while keeping ML performance high for selected features. The source code and the trained models are published open source.
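To make the AL-versus-EQS comparison concrete, here is a minimal sketch of the two frame-selection strategies, assuming class probabilities averaged over several stochastic forward passes of a Bayesian classifier are already available; it is a generic uncertainty-sampling illustration, not the study's published code.
```python
import numpy as np

def equidistant_sample(n_frames, k):
    """EQS baseline: pick k evenly spaced frame indices from a video."""
    return np.linspace(0, n_frames - 1, k).astype(int)

def active_sample(probs, k):
    """Uncertainty-based AL: probs has shape (n_frames, n_classes), e.g. the
    mean softmax over Monte Carlo forward passes of a Bayesian ResNet.
    Select the k frames with the highest predictive entropy for annotation."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-k:]
```
In an AL loop, the selected frames are sent to the annotators, the model is retrained, and the selection step is repeated, which is how rarer instruments end up over-represented relative to equidistant sampling.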
Collapse
Affiliation(s)
- Johanna M Brandenburg
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Alexander C Jenke
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
| | - Antonia Stern
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
| | - Marie T J Daum
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - André Schulze
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Rayan Younis
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Philipp Petrynowski
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
| | - Tornike Davitashvili
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
| | - Vincent Vanat
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
| | - Nithya Bhasker
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
| | - Sophia Schneider
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
| | - Lars Mündermann
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
| | - Annika Reinke
- Department of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Fiona R Kolbinger
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Else Kröner-Fresenius Center for Digital Health, Technische Universität Dresden, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
| | - Vanessa Jörns
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
| | - Fleur Fritz-Kebede
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
| | - Martin Dugas
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
| | - Lena Maier-Hein
- Department of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Rosa Klotz
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- The Study Center of the German Surgical Society (SDGC), Heidelberg University Hospital, Heidelberg, Germany
| | - Marius Distler
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
| | - Jürgen Weitz
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Centre for Tactile Internet With Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
| | - Beat P Müller-Stich
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- University Center for Gastrointestinal and Liver Diseases, St. Clara Hospital and University Hospital Basel, Basel, Switzerland
| | - Stefanie Speidel
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Centre for Tactile Internet With Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
| | - Sebastian Bodenstedt
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Centre for Tactile Internet With Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
| | - Martin Wagner
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT), Heidelberg, Germany.
- German Cancer Research Center (DKFZ), Heidelberg, Germany.
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany.
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany.
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany.
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany.
- Centre for Tactile Internet With Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062, Dresden, Germany.
| |
Collapse
|
28
|
Li S, Huang L, Guo Y, Wang L, Xie RJ. A super-high brightness and excellent colour quality laser-driven white light source enables miniaturized endoscopy. MATERIALS HORIZONS 2023; 10:4581-4588. [PMID: 37584153 DOI: 10.1039/d3mh01170d] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/17/2023]
Abstract
A laser-driven white light source promises intrinsic advantages for miniaturized endoscopic illumination. However, it remains a great challenge to simultaneously achieve high brightness and excellent colour rendition due to the shortage of highly efficient and thermally robust red-emitting laser phosphor converters. Here, we designed CaAlSiN3:Eu@Al (CASN@Al) converters with negligible efficiency loss by tightly bonding all-inorganic phosphor films on an aluminium substrate. A layer-by-layer phosphor converter (LuAG/CASN@Al), i.e., stacking a green-emitting Lu3Al5O12:Ce (LuAG) layer on CASN@Al, was constructed to enhance light conversion efficiency and reduce reabsorption loss under blue laser excitation, thus producing an excellent white light source with a luminous efficacy of 258 lm W-1 and a colour rendering index of 91. A miniaturized endoscope with a coupling efficiency twice that of commercial white LEDs was demonstrated using the laser-driven white light source, showing a central illuminance as high as 52 730 lx, more vivid images, and long-term reliability.
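For context on the 258 lm W-1 figure, the luminous efficacy of radiation can be estimated from a measured spectral power distribution as sketched below. The Gaussian approximation of the CIE photopic sensitivity curve is an assumption made for brevity; in practice the tabulated V(lambda) values would be used.
```python
import numpy as np

def luminous_efficacy(wavelength_nm, spd):
    """Estimate luminous efficacy of radiation (lm/W) from a spectral power
    distribution, using a common Gaussian approximation of the photopic
    sensitivity V(lambda) peaked near 555 nm (sketch only)."""
    lam_um = wavelength_nm / 1000.0
    V = 1.019 * np.exp(-285.4 * (lam_um - 0.559) ** 2)   # approximate V(lambda)
    return 683.0 * np.trapz(V * spd, wavelength_nm) / np.trapz(spd, wavelength_nm)
```
Feeding in the measured emission spectrum of a phosphor-converted source gives the lm/W figure that, together with the colour rendering index, summarizes its suitability for endoscopic illumination.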
Collapse
Affiliation(s)
- Shuxing Li
- Fujian Provincial Key Laboratory of Surface and Interface Engineering for High Performance Materials, College of Materials, Xiamen University, Xiamen 361005, China
| | - Linhui Huang
- Fujian Provincial Key Laboratory of Surface and Interface Engineering for High Performance Materials, College of Materials, Xiamen University, Xiamen 361005, China
| | - Yunqin Guo
- Fujian Provincial Key Laboratory of Surface and Interface Engineering for High Performance Materials, College of Materials, Xiamen University, Xiamen 361005, China
| | - Le Wang
- College of Optical and Electronic Technology, China Jiliang University, Hangzhou 310018, China.
| | - Rong-Jun Xie
- Fujian Provincial Key Laboratory of Surface and Interface Engineering for High Performance Materials, College of Materials, Xiamen University, Xiamen 361005, China
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Xiamen University, Xiamen 361005, China.
| |
Collapse
|
29
|
Zhong NN, Wang HQ, Huang XY, Li ZZ, Cao LM, Huo FY, Liu B, Bu LL. Enhancing head and neck tumor management with artificial intelligence: Integration and perspectives. Semin Cancer Biol 2023; 95:52-74. [PMID: 37473825 DOI: 10.1016/j.semcancer.2023.07.002] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2023] [Revised: 07/11/2023] [Accepted: 07/15/2023] [Indexed: 07/22/2023]
Abstract
Head and neck tumors (HNTs) constitute a multifaceted ensemble of pathologies that primarily involve regions such as the oral cavity, pharynx, and nasal cavity. The intricate anatomical structure of these regions poses considerable challenges to efficacious treatment strategies. Despite the availability of myriad treatment modalities, the overall therapeutic efficacy for HNTs remains modest. In recent years, the deployment of artificial intelligence (AI) in healthcare practices has garnered noteworthy attention. AI modalities, including machine learning (ML), neural networks (NNs), and deep learning (DL), when integrated into the holistic management of HNTs, promise to augment the precision, safety, and efficacy of treatment regimens. The integration of AI within HNT management is intricately intertwined with domains such as medical imaging, bioinformatics, and medical robotics. This article scrutinizes the cutting-edge advancements and prospective applications of AI in the realm of HNTs, elucidating AI's indispensable role in prevention, diagnosis, treatment, prognostication, research, and inter-sectoral integration. The overarching objective is to stimulate scholarly discourse and invigorate insights among medical practitioners and researchers to propel further exploration, thereby facilitating superior therapeutic alternatives for patients.
Collapse
Affiliation(s)
- Nian-Nian Zhong
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
| | - Han-Qi Wang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
| | - Xin-Yue Huang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
| | - Zi-Zhan Li
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
| | - Lei-Ming Cao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
| | - Fang-Yi Huo
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
| | - Bing Liu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China.
| | - Lin-Lin Bu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China.
| |
Collapse
|
30
|
Wang Z, Tao H, Wang J, Zhu Y, Lin J, Fang C, Yang J. Laparoscopic right hemi-hepatectomy plus total caudate lobectomy for perihilar cholangiocarcinoma via anterior approach with augmented reality navigation: a feasibility study. Surg Endosc 2023; 37:8156-8164. [PMID: 37653158 DOI: 10.1007/s00464-023-10397-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 08/13/2023] [Indexed: 09/02/2023]
Abstract
BACKGROUND Right hemi-hepatectomy plus total caudate lobectomy is the appropriate procedure for type IIIa or partial type II pCCA. However, the laparoscopic implementation of this procedure remains technically challenging, especially for hilar vascular dissection and en bloc resection of the total caudate lobe. Augmented reality navigation can provide intraoperative guidance, enhancing visualization of otherwise invisible hilar blood vessels and guiding the parenchymal transection plane. METHODS Eleven patients who underwent laparoscopic right hemi-hepatectomy plus total caudate lobectomy from January 2021 to January 2023 were enrolled in this study. Augmented reality navigation technology and the anterior approach were utilized in these operations. Routine operative and short-term postoperative outcomes were assessed to evaluate the feasibility of the novel navigation method. RESULTS Right hemi-hepatectomy plus total caudate lobectomy was successfully performed in all 11 enrolled patients. Among the 11 patients, the mean operation time was 454.5 ± 25.0 min and the mean estimated blood loss was 209.1 ± 56.1 ml. Negative surgical margins were achieved in all patients. The postoperative course of all the patients was uneventful, and the mean length of postoperative hospital stay was 10.5 ± 1.2 days. CONCLUSION Laparoscopic right hemi-hepatectomy plus total caudate lobectomy via the anterior approach may be feasible and safe for pCCA with the assistance of augmented reality navigation.
Collapse
Affiliation(s)
- Zhuangxiong Wang
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
| | - Haisu Tao
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
| | - Junfeng Wang
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
| | - Yilin Zhu
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
| | - Jinyu Lin
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
| | - Chihua Fang
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China.
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China.
| | - Jian Yang
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China.
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China.
| |
Collapse
|
31
|
Wang H, Wang K, Yan T, Zhou H, Cao E, Lu Y, Wang Y, Luo J, Pang Y. Endoscopic image classification algorithm based on Poolformer. Front Neurosci 2023; 17:1273686. [PMID: 37811325 PMCID: PMC10551176 DOI: 10.3389/fnins.2023.1273686] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2023] [Accepted: 09/04/2023] [Indexed: 10/10/2023] Open
Abstract
Image desmoking is a significant aspect of endoscopic image processing, effectively mitigating visual field obstructions without the need for additional surgical interventions. However, current smoke removal techniques tend to apply comprehensive video enhancement to all frames, encompassing both smoke-free and smoke-affected images, which not only escalates computational costs but also introduces potential noise during the enhancement of smoke-free images. In response to this challenge, this paper introduces an approach for classifying images that contain surgical smoke within endoscopic scenes. This classification method provides crucial target frame information for enhancing surgical smoke removal, improving the scientific robustness, and enhancing the real-time processing capabilities of image-based smoke removal methods. The proposed endoscopic smoke image classification algorithm, based on an improved Poolformer model, augments the model's capacity for endoscopic image feature extraction. This enhancement is achieved by transforming the Token Mixer within the encoder into a multi-branch structure akin to ConvNeXt, a pure convolutional neural network. Moreover, the conversion to a single-path topology during the prediction phase increases processing speed. Experiments use an endoscopic dataset sourced from the Hamlyn Centre Laparoscopic/Endoscopic Video Dataset, augmented by Blender software rendering. The dataset comprises 3,800 training images and 1,200 test images, distributed in a 4:1 ratio of smoke-free to smoke-containing images. The outcomes affirm the superior performance of this paper's approach across multiple metrics. Comparative assessments against existing models, such as mobilenet_v3, efficientnet_b7, and ViT-B/16, substantiate that the proposed method excels in accuracy, sensitivity, and inference speed. Notably, when contrasted with the Poolformer_s12 network, the proposed method achieves a 2.3% improvement in accuracy and an 8.2% gain in sensitivity, while incurring only a 6.4 frame-per-second reduction in processing speed, maintaining 87 frames per second. The results confirm the improved performance of the refined Poolformer model in endoscopic smoke image classification tasks. This advancement presents a lightweight yet effective solution for the automatic detection of smoke-containing images in endoscopy, striking a balance between the accuracy and real-time processing requirements of endoscopic image analysis and offering valuable insights for the targeted desmoking process.
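As background for the token-mixer modification described above, the following PyTorch sketch shows the baseline PoolFormer-style mixer, in which simple average pooling (minus the identity) replaces self-attention for spatial mixing. The paper's multi-branch ConvNeXt-like variant and its single-path inference form are not reproduced here, and the class name is an assumption.
```python
import torch
import torch.nn as nn

class PoolingTokenMixer(nn.Module):
    """PoolFormer-style token mixer: average pooling minus identity performs
    the spatial mixing normally done by self-attention (sketch only)."""
    def __init__(self, pool_size: int = 3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1,
                                 padding=pool_size // 2,
                                 count_include_pad=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width); the subtraction removes the
        # identity component so the surrounding block's residual adds it back.
        return self.pool(x) - x
```
Replacing this single pooling branch with several convolutional branches at training time, and folding them back into one path for inference, is the kind of structural re-parameterization the abstract alludes to for preserving speed.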
Collapse
Affiliation(s)
- Huiqian Wang
- Postdoctoral Research Station, Chongqing Key Laboratory of Photoelectronic Information Sensing and Transmitting Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Chongqing Xishan Science & Technology Co., Ltd., Chongqing, China
| | - Kun Wang
- Postdoctoral Research Station, Chongqing Key Laboratory of Photoelectronic Information Sensing and Transmitting Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
| | - Tian Yan
- Postdoctoral Research Station, Chongqing Key Laboratory of Photoelectronic Information Sensing and Transmitting Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
| | - Hekai Zhou
- Postdoctoral Research Station, Chongqing Key Laboratory of Photoelectronic Information Sensing and Transmitting Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
| | - Enling Cao
- Postdoctoral Research Station, Chongqing Key Laboratory of Photoelectronic Information Sensing and Transmitting Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
| | - Yi Lu
- Postdoctoral Research Station, Chongqing Key Laboratory of Photoelectronic Information Sensing and Transmitting Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
| | - Yuanfa Wang
- Postdoctoral Research Station, Chongqing Key Laboratory of Photoelectronic Information Sensing and Transmitting Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Chongqing Xishan Science & Technology Co., Ltd., Chongqing, China
| | - Jiasai Luo
- Postdoctoral Research Station, Chongqing Key Laboratory of Photoelectronic Information Sensing and Transmitting Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
| | - Yu Pang
- Postdoctoral Research Station, Chongqing Key Laboratory of Photoelectronic Information Sensing and Transmitting Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
| |
Collapse
|
32
|
Langerman A, Hammack-Aviran C, Cohen IG, Agarwala AV, Cortez N, Feigenson NR, Fried GM, Grantcharov T, Greenberg CC, Mello MM, Shuman AG. Navigating a Path Toward Routine Recording in the Operating Room. Ann Surg 2023; 278:e474-e475. [PMID: 37212390 DOI: 10.1097/sla.0000000000005906] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Affiliation(s)
- Alexander Langerman
- Department of Otolaryngology - Head and Neck Surgery and Center for Biomedical Ethics and Society, Vanderbilt University Medical Center, Nashville, TN
| | | | | | - Aalok V Agarwala
- Department of Anesthesia, Massachusetts Eye and Ear Infirmary. Boston, MA
| | - Nathan Cortez
- Southern Methodist University Dedman School of Law, Dallas, TX
| | | | - Gerald M Fried
- Division of General Surgery, McGill University Faculty of Medicine and Health Sciences, Montreal, QC, Canada
| | | | | | - Michelle M Mello
- Stanford Law School and Department of Health Policy, Stanford University School of Medicine, Stanford, CA
| | - Andrew G Shuman
- Department of Otolaryngology - Head and Neck Surgery and Center for Bioethics and Social Sciences in Medicine, University of Michigan, and the Veterans Affairs Ann Arbor Health System, Ann Arbor, MI
| |
Collapse
|
33
|
Ungureanu BS, Gheonea DI, Florescu DN, Iordache S, Cazacu SM, Iovanescu VF, Rogoveanu I, Turcu-Stiolica A. Predicting mortality in patients with nonvariceal upper gastrointestinal bleeding using machine-learning. Front Med (Lausanne) 2023; 10:1134835. [PMID: 36873879 PMCID: PMC9982090 DOI: 10.3389/fmed.2023.1134835] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Accepted: 02/06/2023] [Indexed: 02/19/2023] Open
Abstract
Background Non-endoscopic risk scores, the Glasgow Blatchford score (GBS) and the admission Rockall score (Rock), are limited by poor specificity. The aim of this study was to develop an Artificial Neural Network (ANN) for the non-endoscopic triage of nonvariceal upper gastrointestinal bleeding (NVUGIB), with mortality as the primary outcome. Methods Four machine learning algorithms, namely Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), logistic regression (LR), and K-Nearest Neighbor (K-NN), were trained with GBS, Rock, the Baylor bleeding score (BBS), AIMS65, and T-score as inputs. Results A total of 1,096 patients with NVUGIB hospitalized in the Gastroenterology Department of the County Clinical Emergency Hospital of Craiova, Romania, randomly divided into training and testing groups, were included retrospectively in our study. The machine learning models were more accurate at identifying patients who met the endpoint of mortality than any of the existing risk scores. AIMS65 was the most important score for predicting whether a patient with NVUGIB would die, whereas BBS had no influence. Higher AIMS65 and GBS values and lower Rock and T-score values were associated with higher mortality. Conclusion The best accuracy was obtained by the hyperparameter-tuned K-NN classifier (98%), which gave the highest precision and recall on the training and testing datasets among all developed models, showing that machine learning can accurately predict mortality in patients with NVUGIB.
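A hyperparameter-tuned K-NN classifier of the kind reported can be sketched with scikit-learn as below; the CSV file, column names, and search grid are illustrative assumptions rather than the study's actual configuration.
```python
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical layout: one row per admission, with the five risk scores as
# features and in-hospital mortality as the label (names are illustrative).
df = pd.read_csv("nvugib_scores.csv")
X = df[["GBS", "Rockall", "BBS", "AIMS65", "Tscore"]]
y = df["died"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

# Scale the scores, then search k and the weighting scheme by cross-validation.
knn = make_pipeline(StandardScaler(), KNeighborsClassifier())
param_grid = {"kneighborsclassifier__n_neighbors": list(range(1, 30)),
              "kneighborsclassifier__weights": ["uniform", "distance"]}
grid = GridSearchCV(knn, param_grid, scoring="recall", cv=5)
grid.fit(X_tr, y_tr)

print(grid.best_params_)       # tuned hyperparameters
print(grid.score(X_te, y_te))  # held-out performance
```
Because deaths are rare relative to survivors, a recall- or AUC-oriented scoring choice and stratified splitting, as sketched here, matter more than the raw accuracy figure alone.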
Collapse
Affiliation(s)
- Bogdan Silviu Ungureanu
- Department of Gastroenterology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
| | - Dan Ionut Gheonea
- Department of Gastroenterology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
| | - Dan Nicolae Florescu
- Department of Gastroenterology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
| | - Sevastita Iordache
- Department of Gastroenterology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
| | - Sergiu Marian Cazacu
- Department of Gastroenterology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
| | - Vlad Florin Iovanescu
- Department of Gastroenterology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
| | - Ion Rogoveanu
- Department of Gastroenterology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
| | - Adina Turcu-Stiolica
- Department of Pharmacoeconomics, University of Medicine and Pharmacy of Craiova, Craiova, Romania
| |
Collapse
|