1
De Simone B, Abu-Zidan FM, Saeidi S, Deeken G, Biffl WL, Moore EE, Sartelli M, Coccolini F, Ansaloni L, Di Saverio S, Catena F. Knowledge, attitudes and practices of using Indocyanine Green (ICG) fluorescence in emergency surgery: an international web-based survey in the ARtificial Intelligence in Emergency and trauma Surgery (ARIES)-WSES project. Updates Surg 2024; 76:1969-1981. [PMID: 38801604] [DOI: 10.1007/s13304-024-01853-z]
Abstract
Fluorescence imaging is a real-time intraoperative navigation modality that enhances surgical vision and can guide emergency surgeons performing difficult, high-risk surgical procedures. The aim of this study was to assess the current knowledge, attitudes, and practices of emergency surgeons regarding the use of indocyanine green (ICG) in emergency settings. Between March 8, 2023 and April 10, 2023, a questionnaire of 27 multiple-choice and open-ended questions was sent to 200 emergency surgeons who had previously joined the ARtificial Intelligence in Emergency and trauma Surgery (ARIES) project promoted by the WSES. The questionnaire was developed by an emergency surgeon with an interest in advanced technologies and artificial intelligence. The response rate was 96% (192/200). Respondents affirmed that ICG fluorescence can support the performance of difficult surgical procedures in the emergency setting, particularly in the presence of severe inflammation and when evaluating bowel viability. Nevertheless, there were concerns regarding the accessibility and availability of fluorescence imaging in emergency settings. Eighty-seven of 192 respondents (45.3%) have a fluorescence imaging system available for both elective and emergency surgical procedures; 32.3% have such a system solely for elective procedures; 21.4% do not have one; 15% have no experience with it; and 38% do not use this imaging in emergency surgery. Less than 1% (2/192) affirmed that ICG fluorescence always changed their intraoperative decision-making. Precision surgery tailors surgical interventions to individual patient characteristics using advanced technology, data analysis, and artificial intelligence. ICG fluorescence can serve as a valid and safe tool to guide emergency surgery in different scenarios, such as intestinal ischemia and severe acute cholecystitis. Given the lack of high-level evidence in this field, a consensus of expert emergency surgeons is needed to encourage stakeholders to increase the availability of fluorescence imaging systems and to support emergency surgeons in implementing ICG fluorescence in their daily practice.
Affiliation(s)
- Belinda De Simone
- Department of Emergency and Digestive Minimally Invasive Surgery, Academic Hospital of Villeneuve St Georges, Villeneuve St Georges, France.
- Department of Emergency and General Minimally Invasive Surgery, Infermi Hospital, AUSL Romagna, Rimini, Italy.
- eCampus University, Novedrate, CO, Italy.
- Fikri M Abu-Zidan
- The Research Office, College of Medicine and Health Sciences, United Arab Emirates University, Al-Ain, UAE
- Sara Saeidi
- Minimally Invasive Research Center, Division of Minimally Invasive and Bariatric Surgery, Mashhad University of Medical Sciences, Mashhad, Iran
- Genevieve Deeken
- Center for Research in Epidemiology and Statistics (CRESS), Université Paris Cité, 75004, Paris, France
- Department of Global Public Health and Global Studies, University of Virginia, Charlottesville, VA, 22904-4132, USA
- Walter L Biffl
- Department of Trauma and Emergency Surgery, Scripps Clinic, La Jolla, San Diego, USA
- Massimo Sartelli
- Department of General Surgery, Macerata Hospital, Macerata, Italy
- Federico Coccolini
- Department of General and Trauma Surgery, University Hospital of Pisa, Pisa, Italy
- Luca Ansaloni
- Department of General Surgery, Pavia University Hospital, Pavia, Italy
- Salomone Di Saverio
- Department of Surgery, Santa Maria del Soccorso Hospital, San Benedetto del Tronto, Italy
- Fausto Catena
- Department of Emergency and General Surgery, Level I Trauma Center, Bufalini Hospital, AUSL Romagna, Cesena, Italy
2
Leaf MC, Musselman K, Wang KC. Cutting-edge care: unleashing artificial intelligence's potential in gynecologic surgery. Curr Opin Obstet Gynecol 2024; 36:255-259. [PMID: 38869434] [DOI: 10.1097/gco.0000000000000971]
Abstract
PURPOSE OF REVIEW Artificial intelligence (AI) is now integrated into our daily lives. It has also been incorporated into medicine, with algorithms to diagnose, recommend treatment options, and estimate prognosis. RECENT FINDINGS AI in surgery differs from the virtual AI used for clinical applications: physical AI, in the form of computer vision and augmented reality, is used to improve surgeons' skills, performance, and patient outcomes. SUMMARY Several applications of AI and augmented reality are utilized in gynecologic surgery. AI has potential uses in all phases of surgery: preoperatively, intraoperatively, and postoperatively. Its current benefits are improved accuracy and surgical precision and fewer complications.
Affiliation(s)
- Marie-Claire Leaf
- Gynecology and Obstetrics, Division of Minimally Invasive Gynecologic Surgery, Johns Hopkins Hospital, Baltimore, Maryland, USA
3
Younis R, Yamlahi A, Bodenstedt S, Scheikl PM, Kisilenko A, Daum M, Schulze A, Wise PA, Nickel F, Mathis-Ullrich F, Maier-Hein L, Müller-Stich BP, Speidel S, Distler M, Weitz J, Wagner M. A surgical activity model of laparoscopic cholecystectomy for co-operation with collaborative robots. Surg Endosc 2024; 38:4316-4328. [PMID: 38872018] [PMCID: PMC11289174] [DOI: 10.1007/s00464-024-10958-w]
Abstract
BACKGROUND Laparoscopic cholecystectomy is a very frequent surgical procedure. In an ageing society, however, fewer surgical staff will be available to operate on patients. Collaborative surgical robots (cobots) could address surgical staff shortages and workload. To achieve the context-awareness required for surgeon-robot collaboration, recognition of the intraoperative action workflow is a key challenge. METHODS A surgical process model was developed for intraoperative surgical activities, comprising actor, instrument, action, and target, in laparoscopic cholecystectomy (excluding camera guidance). These activities, as well as instrument presence and surgical phases, were annotated in videos of laparoscopic cholecystectomy performed on human patients (n = 10) and on explanted porcine livers (n = 10). The machine learning algorithm Distilled-Swin was trained on our own annotated dataset and the CholecT45 dataset. The model was validated using fivefold cross-validation. RESULTS In total, 22,351 activities were annotated, with a cumulative duration of 24.9 h of video segments. The machine learning algorithm trained and validated on our own dataset scored a mean average precision (mAP) of 25.7% and a top-5 (K = 5) accuracy of 85.3%. With training and validation on our dataset and CholecT45, the algorithm scored a mAP of 37.9%. CONCLUSIONS An activity model was developed and applied for the fine-granular annotation of laparoscopic cholecystectomies in two surgical settings. A machine recognition algorithm trained on our own annotated dataset and CholecT45 achieved higher performance than training on CholecT45 alone, and can recognize frequently occurring activities well, but not infrequent ones. Analysis of the annotated dataset allowed the potential of collaborative surgical robots to relieve the workload of surgical staff to be quantified: if cobots could grasp and hold tissue, up to 83.5% of the assistant's tissue-interacting tasks (i.e. excluding camera guidance) could be performed by robots.
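The fivefold cross-validation used to validate the model partitions the annotated videos so that every case is held out exactly once. A minimal stdlib sketch of such a split (the video identifiers and the round-robin fold assignment are illustrative, not the authors' actual pipeline):

```python
# Hedged sketch of a fivefold cross-validation split: each item lands in
# exactly one validation fold and in the training set of the other four.

def fivefold_splits(items, k=5):
    """Yield (train, validation) pairs; every item is held out exactly once."""
    folds = [items[i::k] for i in range(k)]  # round-robin fold assignment
    for held_out in range(k):
        validation = folds[held_out]
        train = [x for j in range(k) if j != held_out for x in folds[j]]
        yield train, validation

# 20 annotated videos (the study used 10 human and 10 porcine cases);
# the names here are invented placeholders.
videos = [f"video_{n:02d}" for n in range(20)]
splits = list(fivefold_splits(videos))
```

Per-fold metrics such as mAP would then be averaged across the five validation folds.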
Affiliation(s)
- R Younis
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- A Yamlahi
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- S Bodenstedt
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- P M Scheikl
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- A Kisilenko
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- M Daum
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- A Schulze
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- P A Wise
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- F Nickel
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- F Mathis-Ullrich
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- L Maier-Hein
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- B P Müller-Stich
- Department for Abdominal Surgery, University Center for Gastrointestinal and Liver Diseases, Basel, Switzerland
- S Speidel
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- M Distler
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- J Weitz
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- M Wagner
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
4
Murthy D, Ouellette RR, Anand T, Radhakrishnan S, Mohan NC, Lee J, Kong G. Using Computer Vision to Detect E-cigarette Content in TikTok Videos. Nicotine Tob Res 2024; 26:S36-S42. [PMID: 38366342] [PMCID: PMC10873490] [DOI: 10.1093/ntr/ntad184]
Abstract
INTRODUCTION Previous research has identified abundant e-cigarette content on social media using primarily text-based approaches. However, social media platforms frequently used by youth, such as TikTok, contain primarily visual content, requiring the ability to detect e-cigarette-related content across large sets of videos and images. This study uses a computer vision technique to detect e-cigarette-related objects in TikTok videos. AIMS AND METHODS We searched 13 hashtags related to vaping on TikTok (eg, #vape) in November 2022 and obtained 826 still images extracted from a random selection of 254 posts. We annotated images for the presence of vaping devices, hands, and/or vapor clouds. We developed a YOLOv7-based computer vision model to detect these objects, using 85% of the extracted images (N = 705) for training and 15% (N = 121) for testing. RESULTS Our model's recall was 0.77 across all three classes: vape devices, hands, and vapor. Our model correctly classified vape devices 92.9% of the time, with an average F1 score of 0.81. CONCLUSIONS The findings highlight the importance of accurate and efficient methods to identify e-cigarette content on popular video-based social media platforms like TikTok. Our findings indicate that automated computer vision methods can successfully detect a range of e-cigarette-related content, including devices and vapor clouds, across images from TikTok posts. These approaches can be used to guide research and regulatory efforts. IMPLICATIONS Object detection, a computer vision machine learning approach, can accurately and efficiently identify e-cigarette content on a primarily visual social media platform by detecting the presence of vaping devices and evidence of e-cigarette use (eg, hands and vapor clouds).
The methods used in this study can inform computational surveillance systems for detecting e-cigarette content on video- and image-based social media platforms to inform and enforce regulations of e-cigarette content on social media.
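The reported detection metrics are related by standard formulas: recall = TP/(TP+FN), precision = TP/(TP+FP), and F1 is their harmonic mean. A short sketch with hypothetical confusion counts, chosen only so the formulas reproduce the reported recall of 0.77 and F1 of about 0.81 (they are not the study's actual counts):

```python
# Precision, recall, and F1 from confusion counts. The counts below are
# invented for illustration; only the formulas are standard.

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)  # fraction of detections that are correct
    recall = tp / (tp + fn)     # fraction of true objects that are detected
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

precision, recall, f1 = precision_recall_f1(tp=77, fp=13, fn=23)
# recall = 0.77 and f1 ≈ 0.81 with these hypothetical counts
```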
Affiliation(s)
- Dhiraj Murthy
- Moody College of Communication, University of Texas at Austin, Austin, TX, USA
- Tanvi Anand
- Cockrell School of Engineering, University of Texas at Austin, Austin, TX, USA
- Srijith Radhakrishnan
- Department of Information and Communication Technology, Manipal Institute of Technology, Manipal, Karnataka, India
- Nikhil C Mohan
- Department of Information and Communication Technology, Manipal Institute of Technology, Manipal, Karnataka, India
- Juhan Lee
- Department of Psychiatry, Yale School of Medicine, New Haven, CT, USA
- Grace Kong
- Department of Psychiatry, Yale School of Medicine, New Haven, CT, USA
5
Zaccardi S, Frantz T, Beckwée D, Swinnen E, Jansen B. On-Device Execution of Deep Learning Models on HoloLens2 for Real-Time Augmented Reality Medical Applications. Sensors (Basel) 2023; 23:8698. [PMID: 37960398] [PMCID: PMC10648161] [DOI: 10.3390/s23218698]
Abstract
The integration of Deep Learning (DL) models with the HoloLens2 Augmented Reality (AR) headset has enormous potential for real-time AR medical applications. Currently, most applications execute the models on an external server that communicates with the headset via Wi-Fi. This client-server architecture introduces undesirable delays and lacks the reliability required for real-time applications. However, due to HoloLens2's limited computation capabilities, running a DL model directly on the device with real-time performance is not trivial. This study therefore has two primary objectives: (i) to systematically evaluate two popular frameworks for executing DL models on HoloLens2, Unity Barracuda and Windows Machine Learning (WinML), using inference time as the primary evaluation metric; (ii) to provide benchmark values for state-of-the-art DL models that can be integrated into different medical applications (e.g., Yolo and Unet models). We executed DL models of various complexities and analyzed inference times ranging from a few milliseconds to seconds. Our results show that Unity Barracuda is significantly faster than WinML (p-value < 0.005). With these findings, we seek to provide practical guidance and reference values for future studies aiming to develop single, portable AR systems for real-time medical assistance.
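Inference time, the study's primary metric, can be measured as wall-clock latency averaged over repeated runs. A stdlib sketch of such a harness (the two backends below are stand-in functions, not Barracuda or WinML; real measurements would call the deployed DL model on-device):

```python
import time

def mean_latency_ms(run_model, n_runs=50):
    """Average wall-clock latency of one inference call, in milliseconds."""
    total = 0.0
    for _ in range(n_runs):
        start = time.perf_counter()
        run_model()
        total += time.perf_counter() - start
    return total * 1000.0 / n_runs

# Stand-ins for a light and a heavy inference backend (illustrative only).
light_model = lambda: sum(i * i for i in range(1_000))
heavy_model = lambda: sum(i * i for i in range(50_000))

ms_light = mean_latency_ms(light_model)
ms_heavy = mean_latency_ms(heavy_model)
```

Paired per-model latencies collected this way could then feed a paired significance test, as in the framework comparison reported here.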
Affiliation(s)
- Silvia Zaccardi
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, 1050 Brussel, Belgium
- Rehabilitation Research Group (RERE), Vrije Universiteit Brussel, 1090 Brussel, Belgium
- IMEC, 3001 Leuven, Belgium
- Taylor Frantz
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, 1050 Brussel, Belgium
- IMEC, 3001 Leuven, Belgium
- David Beckwée
- Rehabilitation Research Group (RERE), Vrije Universiteit Brussel, 1090 Brussel, Belgium
- Eva Swinnen
- Rehabilitation Research Group (RERE), Vrije Universiteit Brussel, 1090 Brussel, Belgium
- Bart Jansen
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, 1050 Brussel, Belgium
- IMEC, 3001 Leuven, Belgium
6
Gholinejad M, Edwin B, Elle OJ, Dankelman J, Loeve AJ. Process model analysis of parenchyma sparing laparoscopic liver surgery to recognize surgical steps and predict impact of new technologies. Surg Endosc 2023; 37:7083-7099. [PMID: 37386254] [PMCID: PMC10462556] [DOI: 10.1007/s00464-023-10166-y]
Abstract
BACKGROUND Surgical process model (SPM) analysis is an effective means of predicting the surgical steps in a procedure as well as the potential impact of new technologies. Especially in complicated and high-volume treatments, such as parenchyma sparing laparoscopic liver resection (LLR), profound process knowledge is essential for improving surgical quality and efficiency. METHODS Videos of thirteen parenchyma sparing LLRs were analyzed to extract the duration and sequence of surgical steps according to the process model. The videos were categorized into three groups based on tumor location. Next, a detailed discrete events simulation model (DESM) of LLR was built, based on the process model and the process data obtained from the endoscopic videos. Furthermore, the impact of using a navigation platform on the total duration of the LLR was studied with the simulation model by assessing three scenarios: (i) no navigation platform, (ii) a conservative positive effect, and (iii) an optimistic positive effect. RESULTS The possible variations in the sequence of surgical steps in parenchyma sparing resections, depending on tumor location, were established. The statistically most probable chain of surgical steps was predicted, which could be used to improve parenchyma sparing surgeries. In all three categories the treatment phase covered the major part (~ 40%) of the total procedure duration (the bottleneck). The simulation results predict that a navigation platform could decrease the total surgery duration by up to 30%. CONCLUSION This study showed that a DESM based on the analysis of steps during surgical procedures can be used to predict the impact of new technology. SPMs can be used to detect, e.g., the most probable workflow paths, which enables predicting upcoming surgical steps, improving surgical training systems, and analyzing surgical performance. Moreover, they provide insight into bottlenecks and points for improvement in the surgical process.
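The scenario comparison in such a simulation can be pictured as scaling the duration of one phase and recomputing the total. A toy sketch (the phase names, durations, and speedup factor are invented for illustration; the study itself used a full discrete-events simulation of the surgical process model):

```python
# Toy scenario analysis over a sequential surgical process model:
# scale the bottleneck ("treatment") phase and compare total durations.

def total_duration(phases, treatment_speedup=1.0):
    """Sum phase durations (minutes), speeding up the treatment phase."""
    total = 0.0
    for name, minutes in phases:
        if name == "treatment":
            minutes /= treatment_speedup
        total += minutes
    return total

phases = [("access", 20.0), ("mobilization", 40.0),
          ("treatment", 80.0), ("closure", 25.0)]   # invented durations

baseline = total_duration(phases)                    # no navigation platform
with_navigation = total_duration(phases, treatment_speedup=1.5)
relative_reduction = 1 - with_navigation / baseline  # ≈ 16% in this toy case
```

Because the treatment phase dominates the total, even a modest speedup of that single phase yields a sizeable reduction in overall procedure time, which is the intuition behind the up-to-30% prediction.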
Affiliation(s)
- Maryam Gholinejad
- Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
- Bjørn Edwin
- The Intervention Centre, Oslo University Hospital, Oslo, Norway
- Medical Faculty, Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Department of HPB Surgery, Oslo University Hospital, Oslo, Norway
- Ole Jakob Elle
- The Intervention Centre, Oslo University Hospital, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
- Jenny Dankelman
- Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
- Arjo J Loeve
- Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
7
Varas J, Coronel BV, Villagrán I, Escalona G, Hernandez R, Schuit G, Durán V, Lagos-Villaseca A, Jarry C, Neyem A, Achurra P. Innovations in surgical training: exploring the role of artificial intelligence and large language models (LLM). Rev Col Bras Cir 2023; 50:e20233605. [PMID: 37646729] [PMCID: PMC10508667] [DOI: 10.1590/0100-6991e-20233605-en]
Abstract
The landscape of surgical training is rapidly evolving with the advent of artificial intelligence (AI) and its integration into education and simulation. This manuscript explores the potential applications and benefits of AI-assisted surgical training, particularly the use of large language models (LLMs), in enhancing communication, personalizing feedback, and promoting skill development. We discuss advancements in simulation-based training, AI-driven assessment tools, video-based assessment systems, virtual reality (VR) and augmented reality (AR) platforms, and the potential role of LLMs in the transcription, translation, and summarization of feedback. Despite the promising opportunities presented by AI integration, several challenges must be addressed, including accuracy and reliability, ethical and privacy concerns, bias in AI models, integration with existing training systems, and the training and adoption of AI-assisted tools. By proactively addressing these challenges and harnessing the potential of AI, the future of surgical training may be reshaped to provide a more comprehensive, safe, and effective learning experience for trainees, ultimately leading to better patient outcomes.
Affiliation(s)
- Julian Varas
- Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Brandon Valencia Coronel
- Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Ignacio Villagrán
- Pontificia Universidad Católica de Chile, Carrera de Kinesiología, Departamento de Ciencias de la Salud, Facultad de Medicina, Santiago, Región Metropolitana, Chile
- Gabriel Escalona
- Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Rocio Hernandez
- Pontificia Universidad Católica de Chile, Computer Science Department, School of Engineering, Santiago, Región Metropolitana, Chile
- Gregory Schuit
- Pontificia Universidad Católica de Chile, Computer Science Department, School of Engineering, Santiago, Región Metropolitana, Chile
- Valentina Durán
- Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Antonia Lagos-Villaseca
- Pontificia Universidad Católica de Chile, Department of Otolaryngology, Santiago, Región Metropolitana, Chile
- Cristian Jarry
- Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Andres Neyem
- Pontificia Universidad Católica de Chile, Computer Science Department, School of Engineering, Santiago, Región Metropolitana, Chile
- Pablo Achurra
- Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
8
Bitkina OV, Park J, Kim HK. Application of artificial intelligence in medical technologies: A systematic review of main trends. Digit Health 2023; 9:20552076231189331. [PMID: 37485326] [PMCID: PMC10359663] [DOI: 10.1177/20552076231189331]
Abstract
Objective Artificial intelligence (AI) has been increasingly applied in various fields of science and technology, and medicine now involves a growing number of AI technologies. The rapid introduction of AI can have both positive and negative effects. This is a multilateral analytical literature review aimed at identifying the main branches and trends in the use of artificial intelligence in medical technologies. Methods The total number of literature sources reviewed is n = 89, and they are analyzed following the evidence-based reporting guideline PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) for systematic reviews. Results From the initially selected 198 references, 155 were obtained from databases and the remaining 43 were found on the open internet as direct links to publications. Finally, 89 literature sources were evaluated after excluding unsuitable references containing duplicated or generalized information without a focus on users. Conclusions This article identifies the current state of artificial intelligence in medicine and prospects for its future use. The findings of this review will be useful for healthcare and AI professionals in improving the circulation and use of medical AI from the design to the implementation stage.
Affiliation(s)
- Olga Vl Bitkina
- Department of Industrial and Management Engineering, Incheon National University, Incheon, Korea
- Jaehyun Park
- Department of Industrial and Management Engineering, Incheon National University, Incheon, Korea
- Hyun K. Kim
- School of Information Convergence, Kwangwoon University, Seoul, Korea
9
Survival Study: International Multicentric Minimally Invasive Liver Resection for Colorectal Liver Metastases (SIMMILR-2). Cancers (Basel) 2022; 14:4190. [PMID: 36077728] [PMCID: PMC9454893] [DOI: 10.3390/cancers14174190]
Abstract
Introduction: Study: International Multicentric Minimally Invasive Liver Resection for Colorectal Liver Metastases (SIMMILR-CRLM) was a propensity score matched (PSM) study that reported short-term outcomes of patients with CRLM who met the Milan criteria and underwent either open (OLR), laparoscopic (LLR) or robotic liver resection (RLR). This study, designated as SIMMILR-2, reports the long-term outcomes from that initial study, now referred to as SIMMILR-1. Methods: Data regarding neoadjuvant chemotherapeutic (NC) and neoadjuvant biological (NB) treatments received were collected, and Kaplan-Meier curves reporting the 5-year overall (OS) and recurrence-free survival (RFS) for OLR, LLR and RLR were created for patients who presented with synchronous lesions only, as there was insufficient follow-up for patients with metachronous lesions. Results: A total of 73% of patients received NC and 38% received NB in the OLR group compared to 70% and 28% in the LLR group, respectively (p = 0.5 and p = 0.08). A total of 82% of patients received NC and 40% received NB in the OLR group compared to 86% and 32% in the RLR group, respectively (p > 0.05). A total of 71% of patients received NC and 53% received NB in the LLR group compared to 71% and 47% in the RLR group, respectively (p > 0.05). OS at 5 years was 34.8% after OLR compared to 37.1% after LLR (p = 0.4), 34.3% after OLR compared to 46.9% after RLR (p = 0.4) and 30.3% after LLR compared to 46.9% after RLR (p = 0.9). RFS at 5 years was 12.1% after OLR compared to 20.7% after LLR (p = 0.6), 33.3% after OLR compared to 26.3% after RLR (p = 0.6) and 22.7% after LLR compared to 34.6% after RLR (p = 0.6). Conclusions: When comparing OLR, LLR and RLR, the OS and RFS were all similar after utilization of the Milan criteria and PSM. Biological agents tended to be utilized more in the OLR group when compared to the LLR group, suggesting that highly aggressive tumors are still managed through an open approach.
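The survival curves reported here come from the Kaplan-Meier product-limit estimator: at each event time, the survival probability is multiplied by (1 - deaths / number at risk). A stdlib sketch with invented follow-up data (real analyses would use a survival library such as lifelines or R's survival package):

```python
# Kaplan-Meier product-limit estimator. The times/events below are
# invented for illustration; event=1 means death/recurrence, 0 censored.

def kaplan_meier(times, events):
    """Return [(time, survival)] stepping down at each event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [e for tt, e in data if tt == t]  # all subjects at time t
        deaths = sum(tied)
        if deaths:
            survival *= 1 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= len(tied)  # events and censorings leave the risk set
        i += len(tied)
    return curve

curve = kaplan_meier(times=[1, 2, 2, 3], events=[1, 0, 1, 1])
# -> approximately [(1, 0.75), (2, 0.5), (3, 0.0)]
```

Comparisons between groups (e.g. OLR vs LLR vs RLR) would then use a log-rank test on the full censored data, not on the curve values alone.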