1
Gonzalez R, Saha A, Campbell CJ, Nejat P, Lokker C, Norgan AP. Seeing the random forest through the decision trees. Supporting learning health systems from histopathology with machine learning models: Challenges and opportunities. J Pathol Inform 2024; 15:100347. PMID: 38162950; PMCID: PMC10755052; DOI: 10.1016/j.jpi.2023.100347.
Abstract
This paper discusses overlooked challenges faced when working with machine learning (ML) models for histopathology and presents a novel opportunity to support "Learning Health Systems" with them. The authors first elaborate on these challenges, separating them according to their mitigation strategies: those that need innovative approaches, time, or future technological capabilities, and those that require a conceptual reappraisal from a critical perspective. They then present the opportunity to support "Learning Health Systems" by integrating hidden information, extracted by ML models from digitized histopathology slides, with other healthcare big data.
Affiliation(s)
- Ricardo Gonzalez
- DeGroote School of Business, McMaster University, Hamilton, Ontario, Canada
- Division of Computational Pathology and Artificial Intelligence, Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, United States
- Ashirbani Saha
- Department of Oncology, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Escarpment Cancer Research Institute, McMaster University and Hamilton Health Sciences, Hamilton, Ontario, Canada
- Clinton J.V. Campbell
- William Osler Health System, Brampton, Ontario, Canada
- Department of Pathology and Molecular Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Peyman Nejat
- Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, United States
- Cynthia Lokker
- Health Information Research Unit, Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada
- Andrew P. Norgan
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, United States
2
Quoc KN, Quach LD. Grain rot dataset caused by Burkholderia glumae bacteria. Data Brief 2024; 54:110334. PMID: 38586139; PMCID: PMC10998030; DOI: 10.1016/j.dib.2024.110334.
Abstract
The bacterium Burkholderia glumae causes bacterial grain rot in rice, posing a significant threat to yield; the pathogen thrives during the rice flowering and grain-filling stages. The disease is especially evident in rice grains before harvest, which raises several challenges for the detection and classification of rice panicles. First, diseased grains may mix with healthy ones, complicating their separation. Second, grain size on a panicle varies from small to large, which is problematic for object detection methods. Third, disease severity can be classified by evaluating the extent of infection on rice panicles to assess its impact on yield. Finally, these detection, classification, and preprocessing challenges call for diverse machine learning and deep learning approaches to develop optimal methods and support smart agriculture.
Affiliation(s)
- Luyl-Da Quach
- FPT University, Can Tho campus, Cantho city, Vietnam
3
Herrmann J, Feng YS, Gassenmaier S, Grunz JP, Koerzdoerfer G, Lingg A, Almansour H, Nickel D, Othman AE, Afat S. Fast 5-minute shoulder MRI protocol with accelerated TSE-sequences and deep learning image reconstruction for the assessment of shoulder pain at 1.5 and 3 Tesla. Eur J Radiol Open 2024; 12:100557. PMID: 38495213; PMCID: PMC10943294; DOI: 10.1016/j.ejro.2024.100557.
Abstract
Purpose The objective of this study was to implement a 5-minute shoulder MRI protocol in routine clinical practice, consisting of accelerated 2D turbo spin echo (TSE) sequences with deep learning (DL) reconstruction at 1.5 and 3 Tesla, and to compare its image quality and diagnostic performance with those of a standard 2D TSE protocol. Methods Patients undergoing shoulder MRI between October 2020 and June 2021 were prospectively enrolled. Each patient underwent two MRI examinations: first a standard, fully sampled TSE protocol with standard reconstruction (TSES), followed by a fast, prospectively undersampled TSE protocol with a conventional parallel-imaging undersampling pattern and DL reconstruction (TSEDL). Image quality, visualization of anatomic structures, and diagnostic performance for shoulder lesions were assessed on a 5-point Likert scale (5 = best). Interchangeability analysis, the Wilcoxon signed-rank test, and kappa statistics were used to compare the two protocols. Results A total of 30 participants were included (mean age 50 ± 15 years; 15 men). Overall image quality was rated superior for TSEDL versus TSES (p < 0.001). Noise and edge sharpness were also rated significantly better for TSEDL (noise: p < 0.001; edge sharpness: p < 0.05). No differences were found in qualitative diagnostic confidence, assessability of anatomical structures (p > 0.05), or quantitative diagnostic performance for shoulder lesions between the two protocols. Conclusions A fast 5-minute TSEDL shoulder MRI protocol is feasible in routine clinical practice at 1.5 and 3 T, with results interchangeable with those of the standard TSES protocol regarding diagnostic performance, while reducing scan time by more than 50%.
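The protocol comparison above rests on paired reader ratings. A minimal sketch of such an analysis, with invented Likert scores rather than the study's data, uses SciPy's Wilcoxon signed-rank test:

```python
from scipy.stats import wilcoxon

# Hypothetical paired 5-point Likert ratings (5 = best) for the same ten
# patients under the standard (TSES) and DL-reconstructed (TSEDL) protocols.
tses = [3, 4, 3, 3, 4, 3, 2, 4, 3, 3]
tsedl = [4, 5, 4, 4, 5, 4, 3, 5, 5, 4]

# Two-sided Wilcoxon signed-rank test on the paired differences; a small
# p-value suggests a systematic rating difference between the two protocols.
stat, p = wilcoxon(tses, tsedl)
print(f"W = {stat}, p = {p:.4f}")
```

With ordinal data like this, a rank-based paired test is preferred over a paired t-test, which is presumably why the study reports Wilcoxon statistics.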
Affiliation(s)
- Judith Herrmann
- Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Eberhard Karls University, Tuebingen, Germany
- You-Shan Feng
- Institute for Clinical Epidemiology and Applied Biometrics, University Hospital Tuebingen, Eberhard Karls University, Tuebingen, Germany
- Sebastian Gassenmaier
- Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Eberhard Karls University, Tuebingen, Germany
- Jan-Peter Grunz
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Andreas Lingg
- Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Eberhard Karls University, Tuebingen, Germany
- Haidara Almansour
- Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Eberhard Karls University, Tuebingen, Germany
- Dominik Nickel
- MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany
- Ahmed E. Othman
- Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Eberhard Karls University, Tuebingen, Germany
- Department of Neuroradiology, University Medical Center Mainz, Mainz, Germany
- Saif Afat
- Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Eberhard Karls University, Tuebingen, Germany
4
Selvam A, Shah S, Singh SR, Sant V, Harihar S, Arora S, Patel M, Ong J, Yadav S, Ibrahim MN, Sahel JA, Vupparaboina KK, Chhablani J. Longitudinal changes in pigment epithelial detachment composition indices (PEDCI): new biomarkers in neovascular age-related macular degeneration. Graefes Arch Clin Exp Ophthalmol 2024; 262:1489-1498. PMID: 38141059; DOI: 10.1007/s00417-023-06335-3.
Abstract
PURPOSE To evaluate novel, automated biomarkers, pigment epithelial detachment composition indices (PEDCI), in eyes with neovascular age-related macular degeneration (nAMD) undergoing anti-vascular endothelial growth factor (anti-VEGF) therapy through 24 months. METHODS A retrospective analysis of 37 eyes (34 patients) with PED associated with nAMD receiving as-needed anti-VEGF treatment was performed. Best-corrected visual acuity (BCVA) and optical coherence tomography images were acquired at a treatment-naïve baseline and at 3-, 6-, 12-, 18-, and 24-month visits. Previously validated automated imaging biomarkers within PEDs, PEDCI-S (serous), PEDCI-N (neovascular), and PEDCI-F (fibrous), were measured. ANOVA and Spearman correlation analyses were performed. RESULTS Mean BCVA (in logMAR) was 0.60 ± 0.47, 0.45 ± 0.41, 0.49 ± 0.49, 0.61 ± 0.54, 0.59 ± 0.56, and 0.67 ± 0.57 at baseline and 3, 6, 12, 18, and 24 months, respectively. Overall, BCVA showed minimal worsening of 0.07 ± 0.54 logMAR (p = 0.07). A mean of 13.38 ± 3.77 anti-VEGF injections was given through 24 months. PEDCI-F increased by 0.116, 0.122, 0.036, and 0.006 at months 3, 6, 12, and 18, respectively, and decreased by 0.004 at month 24 (p = 0.03); PEDCI-S decreased by 0.064, 0.130, 0.091, 0.092, and 0.095 at months 3, 6, 12, 18, and 24, respectively (p = 0.16); PEDCI-N decreased by 0.052 at month 3 and increased by 0.008, 0.055, 0.086, and 0.099 at months 6, 12, 18, and 24, respectively (p = 0.06). BCVA was negatively correlated with PEDCI-F (r = -0.28, p < 0.01) and positively correlated with PEDCI-N (r = 0.28, p < 0.01) and PEDCI-S (r = 0.15, p = 0.03). CONCLUSION Longitudinal analysis of PEDCI supports their utility as biomarkers that characterize treatment-related effects by quantifying the relative composition of PEDs.
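The correlation analysis reported above can be sketched with SciPy's Spearman rank correlation; the BCVA and index values below are invented for illustration, not the study's data:

```python
from scipy.stats import spearmanr

# Hypothetical paired observations: BCVA in logMAR (higher = worse acuity)
# and a fibrous-composition index (PEDCI-F) for eight visits.
bcva = [0.60, 0.45, 0.49, 0.61, 0.59, 0.67, 0.52, 0.40]
pedci_f = [0.30, 0.42, 0.40, 0.28, 0.31, 0.25, 0.37, 0.45]

# Spearman's rho measures monotonic association on ranks; a negative rho
# mirrors the reported inverse BCVA/PEDCI-F relationship.
rho, p = spearmanr(bcva, pedci_f)
print(f"rho = {rho:.2f}, p = {p:.4f}")
```

Rank correlation is a natural choice here because logMAR acuity and composition fractions need not be linearly related.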
Affiliation(s)
- Amrish Selvam
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA
- Stavan Shah
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA
- Sumit Randhir Singh
- Sri Sai Eye Hospital, Kankarbagh, Patna, Bihar, India
- Nilima Sinha Medical College and Hospital, Rampur, India
- Vinisha Sant
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA
- Sanjana Harihar
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA
- Supriya Arora
- Bahamas Vision Center and Princess Margaret Hospital, Nassau, NP, Bahamas
- Manan Patel
- BJ Medical College, Ahmedabad, Gujarat, India
- Joshua Ong
- University of Michigan Kellogg Eye Center, Ann Arbor, MI, USA
- Sanya Yadav
- Department of Ophthalmology, West Virginia University, Morgantown, WV, USA
- José-Alain Sahel
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA
- Jay Chhablani
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA
5
Tiantian W, Hu Z, Guan Y. An efficient lightweight network for image denoising using progressive residual and convolutional attention feature fusion. Sci Rep 2024; 14:9554. PMID: 38664440; DOI: 10.1038/s41598-024-60139-x.
Abstract
While deep learning has become the go-to method for image denoising thanks to its noise-removal capabilities, excessive network depth plagues many existing approaches and imposes a significant computational burden. To address this bottleneck, we propose a novel lightweight network that fuses progressive residual connections with an attention mechanism, handling both Gaussian and real-world image noise effectively. The network begins with dense blocks (DB) that learn the noise distribution, substantially reducing the parameter count while thoroughly extracting local image features. It then adopts a progressive strategy in which shallow convolutional features are incrementally integrated with deeper ones, establishing a residual fusion framework that extracts global features relevant to the noise characteristics. Finally, the output feature maps of each DB and the robust edge features from the convolutional attention feature fusion module (CAFFM) are combined and passed to the reconstruction layer, which produces the final denoised image. Experiments with Gaussian white noise and natural noise at levels 15-50 show a marked performance gain: average Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Feature Similarity Index for Color images (FSIMc) exceed those of more than 20 existing methods across six datasets. The network also preserves essential image features such as edges and textures, a notable advance in image processing, and is applicable across image-centric domains including computer vision, video analysis, and pattern recognition.
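Of the three metrics reported above, PSNR is simple enough to sketch in a few lines of NumPy (SSIM and FSIMc are considerably more involved); the image size and noise level below are illustrative only:

```python
import numpy as np

def psnr(reference: np.ndarray, result: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between a clean reference and a result."""
    mse = np.mean((reference.astype(np.float64) - result.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy demonstration: a synthetic image corrupted by additive Gaussian noise
# with sigma = 15, the low end of the 15-50 range tested in the paper.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = clean + rng.normal(0.0, 15.0, size=clean.shape)
print(f"PSNR of the noisy image: {psnr(clean, noisy):.2f} dB")
```

A denoiser's PSNR gain is simply `psnr(clean, denoised) - psnr(clean, noisy)`, which is how "marked enhancement" claims like the one above are quantified.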
Affiliation(s)
- Wang Tiantian
- School of Computer and Software Engineering, Xias University, Zhengzhou, 451150, Henan, China
- Zhihua Hu
- School of Computer, Huanggang Normal University, Huanggang, 438000, Hubei, China
- Yurong Guan
- School of Computer, Huanggang Normal University, Huanggang, 438000, Hubei, China
6
Lim WX, Chen Z. Enhancing deep learning pre-trained networks on diabetic retinopathy fundus photographs with SLIC-G. Med Biol Eng Comput 2024 (epub ahead of print). PMID: 38649629; DOI: 10.1007/s11517-024-03093-0.
Abstract
Diabetic retinopathy produces lesions (e.g., exudates, hemorrhages, and microaneurysms) that are minute to the naked eye. Determining lesions at the pixel level is challenging because a single pixel does not reflect any semantic entity, and inspecting every pixel is computationally expensive since the pixel count is high even at low resolution. In this work, we propose a hybrid image processing method, Simple Linear Iterative Clustering with Gaussian Filter (SLIC-G), to overcome these pixel-level constraints. SLIC-G has two stages: (1) simple linear iterative clustering superpixel segmentation and (2) a Gaussian smoothing operation. In this way, a large number of transformed datasets are generated and used for model training. Finally, two performance evaluation metrics suited to imbalanced diabetic retinopathy datasets were used to validate the effectiveness of the proposed SLIC-G. The results indicate that, compared with prior published results, SLIC-G performs better on image classification of class-imbalanced diabetic retinopathy datasets. This research highlights how image processing influences the performance of deep learning networks: SLIC-G enhances pre-trained network performance by eliminating local redundancy in an image, preserving local structures while avoiding over-segmented, noisy regions. It closes a research gap by introducing superpixel segmentation and Gaussian smoothing as image processing methods in diabetic retinopathy-related tasks.
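The two-stage idea can be sketched as follows. True SLIC clusters pixels in a joint color-spatial space (e.g., `skimage.segmentation.slic`); the block-averaging below is only a crude, hypothetical stand-in for that stage, followed by the Gaussian smoothing stage:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def block_superpixels(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Crude stand-in for SLIC: replace each block x block tile by its mean."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            tile[...] = tile.mean()
    return out

def slic_g(img: np.ndarray, block: int = 8, sigma: float = 1.5) -> np.ndarray:
    """Stage 1: superpixel-style local averaging; stage 2: Gaussian smoothing."""
    return gaussian_filter(block_superpixels(img, block), sigma=sigma)

# Demonstration on a synthetic grayscale ramp image.
demo = np.arange(32 * 32, dtype=np.float64).reshape(32, 32)
smoothed = slic_g(demo)
```

The averaging removes within-region detail (the "local redundancy" the abstract mentions) and the Gaussian pass softens the blocky boundaries a real SLIC segmentation would not produce.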
Affiliation(s)
- Wei Xiang Lim
- Faculty of Science and Engineering, School of Computer Science, University of Nottingham Malaysia, Semenyih, Malaysia
- Zhiyuan Chen
- Faculty of Science and Engineering, School of Computer Science, University of Nottingham Malaysia, Semenyih, Malaysia
7
Wang J. Optimizing support vector machine (SVM) by social spider optimization (SSO) for edge detection in colored images. Sci Rep 2024; 14:9136. PMID: 38644440; PMCID: PMC11033277; DOI: 10.1038/s41598-024-59811-z.
Abstract
Edge detection is a vital application of image processing in fields such as object detection and the identification of lesion regions in medical images. The problem is more complex for color images because the color-layer information must be combined into a unified edge boundary across layers. In this paper, a simple and effective method for edge detection in color images is proposed using a combination of a support vector machine (SVM) and the social spider optimization (SSO) algorithm. The input color image is first converted to a grayscale image, from which an initial estimate of the image edges is computed. To this end, the proposed method uses an SVM with a Radial Basis Function (RBF) kernel whose hyperparameters are tuned by the SSO algorithm. After the initial edges are formed, they are compared with pairwise combinations of the color layers, and the SSO algorithm refines the edge localization so as to maximize compatibility with those pairwise combinations. This process yields prominent image edges and reduces the adverse effect of noise on the final result. The method's edge detection performance on various color images was evaluated and compared with similar previous strategies. According to the results, the proposed method identifies image edges more accurately: its detected edges reach an average accuracy of 93.11% on the BSDS500 dataset, an improvement of at least 0.74% over the other methods.
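The first steps described above, grayscale conversion followed by an initial edge estimate, can be sketched in NumPy (the SVM training and SSO tuning themselves are omitted; the test image is invented):

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """ITU-R BT.601 luminance conversion, a standard RGB-to-grayscale step."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def gradient_magnitude(gray: np.ndarray) -> np.ndarray:
    """Finite-difference gradient magnitude as a rough initial edge map."""
    gy, gx = np.gradient(gray)  # derivatives along rows and columns
    return np.hypot(gx, gy)

# Synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((16, 16, 3))
img[:, 8:] = 1.0
edges = gradient_magnitude(to_gray(img))
```

In the full method such a map would only seed the SVM/SSO refinement stage that reconciles the edges with the pairwise color-layer combinations.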
Affiliation(s)
- Jianfei Wang
- Suzhou Chien-Shiung Institute of Technology, Taicang, 215411, China
8
Wang S, Zhou J. Gait Analysis of Knee Joint Walking Based on Image Processing. Curr Med Imaging 2024; 20 (epub ahead of print). PMID: 38616747; DOI: 10.2174/0115734056277482240329050639.
Abstract
BACKGROUND With the development of assistive treatment devices, the application of artificial knee joints in the rehabilitation of amputees is becoming increasingly mature. Residual-limb length and muscle strength differ between patients, and current artificial knee joints lack adaptability for personalized rehabilitation. PURPOSE To analyze the impact of different artificial knee joints on the walking function of unilateral thigh amputees, improve artificial knee joint performance, and enhance rehabilitation outcomes, this article combines image processing technology with an in-depth analysis of the walking gait of unilateral thigh amputees fitted with different artificial knee joints. METHODS Patients were divided into two groups: an experimental group of patients with unilateral leg amputation and a control group of patients fitted with different prostheses. An image processing system was built from general-purpose video and computer hardware, and relevant techniques were used to recognize and track landmarks. Image processing was then used to analyze the gait of each patient group. Finally, by analyzing the amputees' differing psychological reactions, corresponding treatment plans were developed. RESULTS The different prostheses brought varying degrees of convenience to the amputees' daily lives. Walking stability with a hydraulic single-axis prosthetic joint was only 79%, with relatively low gait elegance, whereas walking stability with an intelligent artificial joint reached 96%, with gait elegance largely maintained. CONCLUSION Image processing technology helps doctors and rehabilitation practitioners better understand the gait characteristics and rehabilitation progress of patients wearing different artificial knee joints, providing an objective basis for personalized rehabilitation.
Affiliation(s)
- Shuai Wang
- School of Sports Medicine and Health, Chengdu Sport University, Chengdu 610041, Sichuan, China
- School of Physical Education, Shanxi University, Taiyuan 030006, Shanxi, China
- Jihe Zhou
- School of Sports Medicine and Health, Chengdu Sport University, Chengdu 610041, Sichuan, China
- Sichuan China 81 Rehabilitation Center (Sichuan Provincial Rehabilitation Hospital), Chengdu 610041, Sichuan, China
9
Murat H, Awang Kechik MM, Chew MT, Kamal I, Abdul Karim MK. Bibliometric Review of Optimization and Image Processing of Positron Emission Tomography (PET) Imaging System between 1981-2022. Curr Med Imaging 2024; 20 (epub ahead of print). PMID: 38616750; DOI: 10.2174/0115734056282004240403042345.
Abstract
BACKGROUND The PET scan is a valuable diagnostic tool in nuclear medicine, enabling observation of metabolic and physiological changes at the molecular level. However, PET scans have several drawbacks, including poor spatial resolution, noisy images, scattered radiation, artifacts, and radiation exposure; these challenges demonstrate the need for optimized image processing techniques. OBJECTIVES To identify the evolving trends and impact of publications in this field, as well as the most productive and influential countries, institutions, authors, themes, and articles. METHODS A bibliometric study was conducted using the comprehensive query string "positron emission tomography" AND "image processing" AND optimization, which retrieved 1,783 publications related to this field from 1981 to 2022 in the Scopus database. RESULTS The most influential country, institution, and authors are from the USA, and the most prevalent theme is TOF PET image reconstruction. CONCLUSION The increasing publication trend in optimizing image processing for PET would address the modality's challenges by reducing radiation exposure, increasing scanning speed, and enhancing lesion identification.
Affiliation(s)
- Husain Murat
- Department of Nuclear Medicine, Hospital Sultanah Aminah Johor Bahru, Johor, Malaysia
- Department of Physics, Faculty of Science, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia
- Mohd Mustafa Awang Kechik
- Department of Physics, Faculty of Science, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia
- Ming Tsuey Chew
- Research Centre for Applied Physics and Radiation Technologies, School of Engineering and Technology, Sunway University, Petaling Jaya, Malaysia
- Izdihar Kamal
- Department of Physics, Faculty of Science, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia
10
Sung C, Oh JS, Park BS, Kim SS, Song SY, Lee JJ. Diagnostic performance of a deep-learning model using 18F-FDG PET/CT for evaluating recurrence after radiation therapy in patients with lung cancer. Ann Nucl Med 2024 (epub ahead of print). PMID: 38589677; DOI: 10.1007/s12149-024-01925-5.
Abstract
OBJECTIVE We developed and evaluated a deep learning model for distinguishing radiation therapy (RT)-related changes from tumour recurrence in patients with lung cancer who underwent RT. METHODS We retrospectively recruited 308 patients with lung cancer showing RT-related changes on 18F-fluorodeoxyglucose positron emission tomography-computed tomography (18F-FDG PET/CT) performed after RT. Patients were labelled positive or negative for tumour recurrence based on histologic diagnosis or clinical follow-up after 18F-FDG PET/CT. A two-dimensional (2D) slice-based convolutional neural network (CNN) model was created with a total of 3329 slices as input, and performance was evaluated on five independent test sets. RESULTS Across the five independent test sets, the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were in the range of 0.98-0.99, 95-98%, and 87-95%, respectively. The region highlighted by the model was confirmed as actual recurrent tumour through explainable artificial intelligence (AI) using gradient-weighted class activation mapping (Grad-CAM). CONCLUSION The 2D slice-based CNN model using 18F-FDG PET imaging reliably distinguished RT-related changes from tumour recurrence in patients with lung cancer.
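Sensitivity and specificity, as reported above, come directly from a binary confusion matrix; a sketch with hypothetical counts (not the study's data) makes the definitions concrete:

```python
def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Sensitivity and specificity from binary confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # recurrence cases correctly flagged
        "specificity": tn / (tn + fp),  # RT-related changes correctly cleared
    }

# Hypothetical counts for one test set of 100 slices.
m = diagnostic_metrics(tp=48, fn=2, tn=45, fp=5)
print(m)  # sensitivity 0.96, specificity 0.90
```

In this setting sensitivity is the fraction of true recurrences the model catches, while specificity is the fraction of benign post-RT changes it does not mistake for recurrence.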
Affiliation(s)
- Changhwan Sung
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Korea
- Jungsu S Oh
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Korea
- Byung Soo Park
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Korea
- Su Ssan Kim
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Si Yeol Song
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Jong Jin Lee
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Korea
11
Chalfoun J, Lund SP, Ling C, Peskin A, Pierce L, Halter M, Elliott J, Sarkar S. Establishing a reference focal plane using convolutional neural networks and beads for brightfield imaging. Sci Rep 2024; 14:7768. PMID: 38565548; PMCID: PMC10987482; DOI: 10.1038/s41598-024-57123-w.
Abstract
Achieving repeatable measurements from image analytics is difficult due to the heterogeneity and complexity of cell samples, exact microscope stage positioning, and slide thickness. We present a method to define and use a reference focal plane that provides highly repeatable measurements, relying on control beads as reference material and a convolutional neural network focused on the control bead images. Previously we defined a reference effective focal plane (REFP) based on the image gradient of bead edges and three specific bead image features. This paper both generalizes and improves on that work. First, we refine the definition of the REFP by fitting a cubic spline to the relationship between the distance from a bead's center and pixel intensity, and by sharing information across experiments, exposures, and fields of view. Second, we remove our reliance on image features that behave differently from one instrument to another; instead, we apply a convolutional regression neural network (ResNet 18), trained on cropped bead images, that generalizes to multiple microscopes. Our ResNet 18 network predicts the location of the REFP from only a single inferenced image acquisition, which can be taken across a wide range of focal planes and exposure times. We describe the strategies and hyperparameter optimization of the ResNet 18 used to achieve high prediction accuracy, with the uncertainty for every image tested falling within the microscope repeatability measure of 7.5 µm from the desired focal plane. We demonstrate the generalizability of this methodology by applying it to two different optical systems and show that this level of accuracy can be achieved using only six beads per image.
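The cubic-spline refinement mentioned above can be sketched with SciPy; the radial intensity profile below is invented for illustration, not measured bead data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical radial profile of a bead image: mean pixel intensity as a
# function of distance (in pixels) from the bead's center.
distance = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
intensity = np.array([40.0, 45.0, 80.0, 150.0, 190.0, 198.0, 200.0])

# Fit a cubic spline so the distance-intensity relationship can be evaluated
# (and compared across focal planes) at arbitrary radii, not just the samples.
profile = CubicSpline(distance, intensity)
mid = float(profile(2.5))  # interpolated intensity between two samples
```

A smooth parametric profile like this gives a common representation of bead appearance that can be shared across experiments, exposures, and fields of view, as the abstract describes.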
Affiliation(s)
- Joe Chalfoun
- National Institute of Standards and Technology, Gaithersburg, MD, USA
- Steven P Lund
- National Institute of Standards and Technology, Gaithersburg, MD, USA
- Chenyi Ling
- National Institute of Standards and Technology, Gaithersburg, MD, USA
- Adele Peskin
- National Institute of Standards and Technology, Boulder, CO, USA
- Laura Pierce
- National Institute of Standards and Technology, Gaithersburg, MD, USA
- Michael Halter
- National Institute of Standards and Technology, Gaithersburg, MD, USA
- John Elliott
- National Institute of Standards and Technology, Gaithersburg, MD, USA
- Sumona Sarkar
- National Institute of Standards and Technology, Gaithersburg, MD, USA
12
Sun Y, Wang C. Brain tumor detection based on a novel and high-quality prediction of the tumor pixel distributions. Comput Biol Med 2024; 172:108196. PMID: 38493601; DOI: 10.1016/j.compbiomed.2024.108196.
Abstract
The work presented in this paper is in the area of brain tumor detection. We propose a fast detection system for 3D MRI scans of the FLAIR modality. It performs two functions: predicting the gray-level and location distributions of the pixels in tumor regions, and generating tumor masks with pixel-wise precision. To facilitate 3D data analysis and processing, we introduce a 2D histogram representation encompassing the gray-level distribution and pixel-location distribution of a 3D object. In the proposed system, specific 2D histograms highlighting tumor-related features are established by exploiting the left-right asymmetry of the brain structure. A modulation function, generated from the input data of each patient case, is applied to the 2D histograms to transform them into coarsely or finely predicted distributions of tumor pixels. The prediction result helps to identify and remove tumor-free slices. The prediction and removal operations are performed on the axial, coronal, and sagittal slice series of a brain image, reducing it to a 3D minimum bounding box of its tumor region. The bounding box is used to finalize the prediction and generate a 3D tumor mask. The proposed system has been tested extensively on more than 1200 patient cases from the BraTS 2018-2021 datasets. The test results demonstrate that the predicted 2D histograms closely resemble the true ones. The system also delivers very good tumor detection results, comparable to those of state-of-the-art CNN systems with mono-modality inputs; they are reproducible and obtained at an extremely low computational cost, without any need for training.
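The paper's gray-level/location histograms and the use of left-right asymmetry can be illustrated with a minimal numpy sketch. The synthetic volume, binning choices, and thresholds below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def graylevel_location_hist(vol, n_bins=32):
    """2D histogram of a 3D volume: gray level x axial slice index."""
    edges = np.linspace(vol.min(), vol.max(), n_bins + 1)
    hist = np.zeros((n_bins, vol.shape[0]))
    for z in range(vol.shape[0]):
        hist[:, z], _ = np.histogram(vol[z], bins=edges)
    return hist

def asymmetry_hist(vol, n_bins=32):
    """Histogram of |left - mirrored right|: symmetric tissue cancels,
    while a one-sided tumor survives as high-difference counts."""
    return graylevel_location_hist(np.abs(vol - vol[:, :, ::-1]), n_bins)

# synthetic "brain": left-right symmetric background + one-sided tumor
rng = np.random.default_rng(0)
vol = rng.normal(0.2, 0.02, size=(8, 32, 32))
vol = (vol + vol[:, :, ::-1]) / 2        # enforce background symmetry
vol[3:5, 10:16, 4:10] += 0.6             # "tumor" in one hemisphere only
h = asymmetry_hist(vol)
tumor_mass = h[16:, 3:5].sum()           # high-difference bins, tumor slices
clean_mass = h[16:, 6:].sum()            # same bins, tumor-free slices
```

Slices whose high-difference bins are empty are candidates for removal as tumor-free, which is how the system shrinks the volume toward the tumor's bounding box.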
Affiliation(s)
- Yanming Sun, Department of Electrical and Computer Engineering, Concordia University, 1455 De Maisonneuve Blvd. W, Montreal, Quebec, Canada, H3G 1M8
- Chunyan Wang, Department of Electrical and Computer Engineering, Concordia University, 1455 De Maisonneuve Blvd. W, Montreal, Quebec, Canada, H3G 1M8
13
Goceri E. Polyp Segmentation Using a Hybrid Vision Transformer and a Hybrid Loss Function. J Imaging Inform Med 2024; 37:851-863. [PMID: 38343250 PMCID: PMC11031515 DOI: 10.1007/s10278-023-00954-2]
Abstract
Accurate early detection of precursor adenomatous polyps and their removal can significantly decrease mortality and disease occurrence, since most colorectal cancers evolve from adenomatous polyps. However, accurate detection and segmentation of polyps by doctors are difficult, mainly due to the following factors: (i) the quality of polyp screening with colonoscopy depends on imaging quality and the experience of the doctors; (ii) visual inspection by doctors is time-consuming, burdensome, and tiring; (iii) prolonged visual inspections can lead to polyps being missed even when the physician is experienced. To overcome these problems, computer-aided methods have been proposed; however, they have various disadvantages and limitations. Therefore, in this work, a new architecture based on residual transformer layers has been designed and used for polyp segmentation. The proposed segmentation utilizes both high-level semantic features and low-level spatial features. Also, a novel hybrid loss function has been proposed. The loss function, designed with focal Tversky loss, binary cross-entropy, and the Jaccard index, reduces image-wise and pixel-wise differences and improves regional consistency. Experimental work has indicated the effectiveness of the proposed approach in terms of Dice similarity (0.9048), recall (0.9041), precision (0.9057), and F2 score (0.8993). Comparisons with state-of-the-art methods have shown its better performance.
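A hybrid loss of this kind (focal Tversky + binary cross-entropy + Jaccard) can be sketched as below. The equal weighting and the alpha/beta/gamma values are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

def hybrid_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky + binary cross-entropy + soft Jaccard, summed with
    equal weights; alpha/beta/gamma and the weighting are illustrative."""
    p = np.clip(pred, eps, 1 - eps)
    t = target.astype(float)
    # focal Tversky: penalizes false negatives more when alpha > beta
    tp = (p * t).sum()
    fn = ((1 - p) * t).sum()
    fp = (p * (1 - t)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    focal_tversky = (1 - tversky) ** gamma
    # pixel-wise binary cross-entropy
    bce = -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))
    # soft Jaccard (IoU) term for regional consistency
    inter = (p * t).sum()
    union = p.sum() + t.sum() - inter
    jaccard = 1 - (inter + eps) / (union + eps)
    return focal_tversky + bce + jaccard

target = np.array([[1.0, 0.0], [0.0, 1.0]])
good = 0.05 + 0.9 * target     # confident, mostly correct prediction
bad = 0.95 - 0.9 * target      # confident, mostly wrong prediction
```

The region terms (Tversky, Jaccard) act on overlap statistics of the whole mask, while the cross-entropy term acts per pixel, which is why combining them reduces both image-wise and pixel-wise differences.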
14
Sheikh MR, Islam MM, Himel GMS. LuffaFolio: A Multidimensional Image Dataset of Smooth Luffa. Data Brief 2024; 53:110149. [PMID: 38379887 PMCID: PMC10877164 DOI: 10.1016/j.dib.2024.110149]
Abstract
This article introduces a comprehensive dataset designed for researchers to classify diseases in Luffa leaves, determine the grade of Luffa from images, and identify different growth stages throughout the year. The dataset is organized into three sections, each concentrating on a specific facet of Luffa aegyptiaca, commonly known as Smooth Luffa (Dhundol). The images were captured in various village fields in Faridpur, Bangladesh. The sections cover the assessment of Smooth Luffa quality, the identification of plant diseases, and the documentation of Luffa flowers, totaling 1933 original JPG images. The "Luffa Diseases" section features 1228 raw JPG images of smooth Luffa leaves depicting various diseases as well as unaffected leaves; its categories are Alternaria Disease, Angular Spot Disease, Holed Leaves, Mosaic Virus, and Fresh Leaves. The "Flowers" section comprises 362 raw JPG images showcasing different maturity stages of smooth Luffa flowers. Finally, the "Luffa Grade" section focuses on categorizing smooth Luffa into fresh and defective classes, with 343 raw JPG images.
Affiliation(s)
- Md Ripon Sheikh, Department of Computer Science and Engineering, Bangladesh University of Business and Technology (BUBT), Dhaka, Bangladesh
- Md. Masudul Islam, Department of Computer Science and Engineering, Bangladesh University of Business and Technology (BUBT), Dhaka, Bangladesh; Department of Computer Science and Engineering, Jahangirnagar University, Dhaka, Bangladesh
- Galib Muhammad Shahriar Himel, Department of Computer Science and Engineering, Bangladesh University of Business and Technology (BUBT), Dhaka, Bangladesh; Department of Computer Science, American International University-Bangladesh, Dhaka, Bangladesh; Department of Physics, Jahangirnagar University, Dhaka, Bangladesh; School of Computer Sciences, Universiti Sains Malaysia, 11800 USM Penang, Malaysia
15
Tohl D, Tran Tam Pham A, Li J, Tang Y. Point-of-care image-based quantitative urinalysis with commercial reagent strips: Design and clinical evaluation. Methods 2024; 224:63-70. [PMID: 38367653 DOI: 10.1016/j.ymeth.2024.02.002]
Abstract
Urinalysis is a useful indicator of health or disease and, as such, is part of routine health screening. It can be undertaken in many ways, one of which is reagent strips, used in the general evaluation of health and to aid in the diagnosis and monitoring of kidney disease. To be effective, the test must be performed properly and the results interpreted correctly. However, lighting conditions and colour perception vary between users, leading to ambiguous readings. This has led to camera devices being used to capture strip images and estimate biomarker concentrations, but image colour can be affected by variations in illumination and by in-built image processing. Therefore, this study presents a new portable device with embedded image processing techniques that provides quantitative measurements invariant to changes in illumination. The device includes a novel calibration process and uses the ratio of RGB values to compensate for variations in illumination across an image and improve the accuracy of quantitative measurements. Results show that the proposed calibration method gives consistent, homogeneous illumination across the whole image. Comparisons against existing methods and clinical results show good performance, with strong correlation to clinical values. The proposed device can be used for point-of-care testing to provide reliable results consistent with clinical values.
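The core idea of ratio-based illumination compensation can be sketched in a few lines. This is a toy illustration of why dividing by a white reference cancels spatial illumination falloff, not the device's actual calibration procedure:

```python
import numpy as np

def ratio_correct(img, white_ref):
    """Divide each pixel's RGB by a white-reference capture taken under
    the same illumination, cancelling spatial illumination falloff."""
    return img.astype(float) / np.maximum(white_ref.astype(float), 1e-6)

# reagent pad of known colour under a left-to-right illumination gradient
grad = np.linspace(0.5, 1.0, 16)[None, :, None]
true_colour = np.ones((16, 16, 3)) * np.array([0.8, 0.4, 0.2])
img = true_colour * grad                  # what the camera sees
white = np.ones((16, 16, 3)) * grad       # white reference, same gradient
corrected = ratio_correct(img, white)
```

Because the illumination term appears in both numerator and denominator, the recovered pad colour is independent of where on the strip the pad sits.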
Affiliation(s)
- Damian Tohl, Australia-China Joint Research Centre for Personal Health Technologies, Medical Device Research Institute, College of Science and Engineering, Flinders University, South Australia 5042, Australia
- Anh Tran Tam Pham, Australia-China Joint Research Centre for Personal Health Technologies, Medical Device Research Institute, College of Science and Engineering, Flinders University, South Australia 5042, Australia
- Jordan Li, Department of Renal Medicine, Flinders Medical Centre, College of Medicine and Public Health, Flinders University, South Australia 5042, Australia
- Youhong Tang, Australia-China Joint Research Centre for Personal Health Technologies, Medical Device Research Institute, College of Science and Engineering, Flinders University, South Australia 5042, Australia
16
Li ZH, Wang RL, Lu M, Wang X, Huang YP, Yang JW, Zhang TY. A novel method for identifying aerobic granular sludge state using sorting, densification and clarification dynamics during the settling process. Water Res 2024; 253:121336. [PMID: 38382291 DOI: 10.1016/j.watres.2024.121336]
Abstract
Aerobic granular sludge is one of the most promising biological wastewater treatment technologies, yet maintaining its stability remains a challenge for its application, and predicting the state of the granules is essential to addressing this issue. This study explored the potential of dynamic texture entropy, derived from settling images, as a predictive tool for the state of granular sludge. Three processes, traditional thickening, often-overlooked clarification, and innovative particle sorting, were used to capture the complexity and diversity of the granules. It was found that rapid sorting during settling indicates stable granules, which helps to identify the state of the granules. Furthermore, a relationship between sorting time and granule heterogeneity was identified, helping to adjust selection pressure. Features of the dynamic texture entropy correlated well with the respirogram (R2 = 0.86 and 0.91 for the specific endogenous respiration rate (SOURe) and the specific quasi-endogenous respiration rate (SOURq), respectively), providing a biologically based approach for monitoring the state of the granules. The classification accuracy of models using features of dynamic texture entropy as input was greater than 0.90, significantly higher than with conventional features, demonstrating the significant advantage of this approach. These findings contribute to developing robust monitoring tools that facilitate the maintenance of stable granular sludge operations.
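As a rough illustration of why an entropy measure on settling images can track granule behaviour, the sketch below computes a simple per-frame gray-level histogram entropy on a synthetic settling sequence. This is a stand-in, not the paper's dynamic texture entropy feature:

```python
import numpy as np

def frame_entropy(frame, bins=32):
    """Shannon entropy (bits) of a frame's gray-level histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# synthetic settling sequence: the textured sludge blanket moves down,
# leaving a growing region of clear (uniform) supernatant above it
rng = np.random.default_rng(1)
frames = []
for t in range(5):
    f = np.full((40, 40), 0.1)                       # clear liquid
    blanket = 8 + 6 * t                              # blanket depth at time t
    f[blanket:, :] = 0.1 + 0.8 * rng.random((40 - blanket, 40))
    frames.append(f)
curve = np.array([frame_entropy(f) for f in frames])
```

As the granules concentrate, less of the frame is textured and the entropy curve falls; the shape and speed of such a curve over the settling process is the kind of signal the paper's features summarize.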
Affiliation(s)
- Zhi-Hua Li, Key Laboratory of Northwest Water Resource, Environment, and Ecology, MOE, School of Environmental and Municipal Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China; Xi'an Key Laboratory of Intelligent Equipment Technology for Environmental Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
- Ruo-Lan Wang, Key Laboratory of Northwest Water Resource, Environment, and Ecology, MOE, School of Environmental and Municipal Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China; Xi'an Key Laboratory of Intelligent Equipment Technology for Environmental Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
- Meng Lu, Key Laboratory of Northwest Water Resource, Environment, and Ecology, MOE, School of Environmental and Municipal Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China; Xi'an Key Laboratory of Intelligent Equipment Technology for Environmental Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
- Xin Wang, Key Laboratory of Northwest Water Resource, Environment, and Ecology, MOE, School of Environmental and Municipal Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China; Xi'an Key Laboratory of Intelligent Equipment Technology for Environmental Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
- Yong-Peng Huang, Key Laboratory of Northwest Water Resource, Environment, and Ecology, MOE, School of Environmental and Municipal Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China; Xi'an Key Laboratory of Intelligent Equipment Technology for Environmental Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
- Jia-Wei Yang, Key Laboratory of Northwest Water Resource, Environment, and Ecology, MOE, School of Environmental and Municipal Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China; Xi'an Key Laboratory of Intelligent Equipment Technology for Environmental Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
- Tian-Yu Zhang, Department of Mathematical Sciences, Montana State University, Bozeman, MT 59717, USA
17
Kocaçınar B, İnan P, Zamur EN, Çalşimşek B, Akbulut FP, Catal C. NeuroBioSense: A multidimensional dataset for neuromarketing analysis. Data Brief 2024; 53:110235. [PMID: 38533115 PMCID: PMC10964042 DOI: 10.1016/j.dib.2024.110235]
Abstract
In the context of neuromarketing, sales, and branding, the investigation of consumer decision-making processes presents complex and intriguing challenges, and considering multicultural influences and societal conditions from a global perspective enriches this multifaceted field. The application of neuroscience tools and techniques to international marketing and consumer behavior is an emerging interdisciplinary field that seeks to understand the cognitive processes, reactions, and selection mechanisms of consumers in the context of branding and sales. The NeuroBioSense dataset was prepared to analyze and classify consumer responses. It includes physiological signals, facial images of the participants while they watched the advertisements, and demographic information. The primary objective of the data collection process was to record and analyze the responses of human subjects during a carefully designed experiment consisting of three distinct phases, each featuring a different form of branding advertisement. Physiological signals were collected with the Empatica E4 wearable sensor device, using its non-invasive photoplethysmography (PPG), electrodermal activity (EDA), and body temperature sensors. A total of 58 participants, aged between 18 and 70, were divided into three groups: 18 participants watched cosmetics advertisements, 20 watched food advertisements, and 20 watched car advertisements. The emotion evaluation scale covers seven emotion classes: joy, surprise, anger, disgust, sadness, fear, and neutral. This dataset will help researchers analyse consumer responses, develop emotion classification studies, and investigate the relationship between consumers, advertising, and neuromarketing methods.
Affiliation(s)
- Büşra Kocaçınar, Department of Computer Engineering, Istanbul Kültür University, Istanbul, Turkey
- Pelin İnan, Department of Computer Engineering, Istanbul Kültür University, Istanbul, Turkey
- Ela Nur Zamur, Department of Computer Engineering, Istanbul Kültür University, Istanbul, Turkey
- Buket Çalşimşek, Department of Computer Engineering, Istanbul Kültür University, Istanbul, Turkey
- Fatma Patlar Akbulut, Department of Software Engineering, Istanbul Kültür University, Istanbul, Turkey
- Cagatay Catal, Department of Computer Science and Engineering, Qatar University, Doha, Qatar
18
Sahrmann AS, Vosse L, Siebert T, Handsfield GG, Röhrle O. 3D ultrasound-based determination of skeletal muscle fascicle orientations. Biomech Model Mechanobiol 2024. [PMID: 38530501 DOI: 10.1007/s10237-024-01837-3]
Abstract
Architectural parameters of skeletal muscle such as pennation angle provide valuable information on muscle function, since they can be related to the muscle's force-generating capacity, fiber packing, and contraction velocity. In this paper, we introduce a 3D ultrasound-based workflow for determining 3D fascicle orientations of skeletal muscles. We used a custom-designed, automated, motor-driven 3D ultrasound scanning system to obtain 3D ultrasound images. From these, we applied a custom-developed, multiscale vessel-enhancement-filter-based fascicle detection algorithm and determined muscle volume and pennation angle. We conducted trials on a phantom and on the human tibialis anterior (TA) muscle of 10 healthy subjects in plantarflexion (157 ± 7°), neutral position (109 ± 7°, corresponding to neutral standing), and one resting position in between (145 ± 6°). The phantom trials showed high accuracy, with a mean absolute error of 0.92 ± 0.59°. TA pennation angles differed significantly between all positions for the deep muscle compartment; for the superficial compartment, angles were significantly larger in the neutral position than in plantarflexion and the resting position. Pennation angles also differed significantly between the superficial and deep compartments. The constant muscle volumes across the three ankle joint angles indicate the suitability of the method for capturing 3D muscle geometry. Absolute pennation angles in our study were slightly lower than in the recent literature. Decreased pennation angles during plantarflexion are consistent with previous studies. The presented method demonstrates the possibility of determining 3D fascicle orientations of the TA muscle in vivo.
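Orientation estimation of line-like structures such as fascicles can be sketched with a structure tensor on a 2D slice. This is a simplified stand-in for the paper's multiscale vessel-enhancement and fascicle detection pipeline, shown on a synthetic stripe pattern:

```python
import numpy as np

def gradient_orientation_deg(img):
    """Structure-tensor estimate of the dominant intensity-gradient
    orientation (degrees); the fascicle direction is perpendicular."""
    gy, gx = np.gradient(img.astype(float))
    jxx, jxy, jyy = (gx * gx).sum(), (gx * gy).sum(), (gy * gy).sum()
    return np.degrees(0.5 * np.arctan2(2 * jxy, jxx - jyy))

# synthetic fascicles: sinusoidal stripes whose normal is at 20 degrees
yy, xx = np.indices((64, 64))
theta = np.deg2rad(20)
stripes = np.sin(0.3 * (xx * np.cos(theta) + yy * np.sin(theta)))
est = gradient_orientation_deg(stripes)   # close to 20 degrees
```

With the stripe normal recovered, the fascicle direction is the perpendicular; the angle between that direction and the aponeurosis is what a pennation-angle measurement reports.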
Affiliation(s)
- Annika S Sahrmann, Institute for Modelling and Simulation of Biomechanical Systems, University of Stuttgart, Pfaffenwaldring 5A, 70569, Stuttgart, Germany; Stuttgart Center for Simulation Science, EXC2075 - 390740016, University of Stuttgart, 70569, Stuttgart, Germany
- Lukas Vosse, Institute of Sport and Movement Science, University of Stuttgart, Allmandring 28, 70569, Stuttgart, Germany; Stuttgart Center for Simulation Science, EXC2075 - 390740016, University of Stuttgart, 70569, Stuttgart, Germany
- Tobias Siebert, Institute of Sport and Movement Science, University of Stuttgart, Allmandring 28, 70569, Stuttgart, Germany; Stuttgart Center for Simulation Science, EXC2075 - 390740016, University of Stuttgart, 70569, Stuttgart, Germany
- Geoffrey G Handsfield, Auckland Bioengineering Institute, University of Auckland, 70 Symonds Street, Auckland, 1010, New Zealand
- Oliver Röhrle, Institute for Modelling and Simulation of Biomechanical Systems, University of Stuttgart, Pfaffenwaldring 5A, 70569, Stuttgart, Germany; Stuttgart Center for Simulation Science, EXC2075 - 390740016, University of Stuttgart, 70569, Stuttgart, Germany
19
Eyupoglu S, Eyupoglu C, Merdan N. Investigation of the effect of enzymatic and alkali treatments on the physico-chemical properties of Sambucus ebulus L. plant fiber. Int J Biol Macromol 2024; 266:130968. [PMID: 38521324 DOI: 10.1016/j.ijbiomac.2024.130968]
Abstract
This investigation aims to determine the effect of enzymatic and alkali treatments on Sambucus ebulus L. stem fiber. For this purpose, Sambucus ebulus L. stem fibers were treated with alkali and with cellulase and pectinase enzymes. An image processing technique was developed and implemented to calculate the average thickness of the fibers. The thickness of the alkali-, cellulase-, and pectinase-treated fibers was determined as 478.62 μm, 808.28 μm, and 478.20 μm, respectively. Scanning electron microscopy showed that the enzymatic and alkali treatments break up the fiber structure; they also induce variations in elemental composition. All treatments increased the crystallinity index of the fiber from 72 % (raw fiber) to 83 % (alkali treated), 75.2 % (cellulase treated), and 86.3 % (pectinase treated) due to the hydrolysis of hemicellulose. Fourier transform infrared analysis indicated no significant differences in functional groups. Thermogravimetric analysis showed that the enzymatic and alkali treatments raise the final degradation temperature of the fiber. The mechanical properties of the cellulase-treated fiber decreased compared with raw fiber, while the pectinase and alkali treatments improved mechanical properties. Tensile strength was determined as 76.4 MPa (cellulase treated), 210 MPa (pectinase treated), and 240 MPa (alkali treated). Young's moduli of the cellulase-, pectinase-, and alkali-treated fibers were estimated as 5.5 GPa, 13.1 GPa, and 16.6 GPa, and elongation at break as 5.5 %, 6.5 %, and 6 %, respectively. The results suggest that enzymatic and alkali treatments can modify the functional and structural attributes of Sambucus ebulus L. fiber.
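A thickness-from-image measurement of the kind described can be sketched on a binary fiber mask. This is a minimal illustration (mean horizontal run per row, with an assumed pixel scale), not the authors' actual image processing technique:

```python
import numpy as np

def mean_fiber_thickness(mask, um_per_px=1.0):
    """Average thickness of a roughly vertical fiber: mean horizontal
    run of fiber pixels per row of a binary mask."""
    widths = mask.sum(axis=1)
    widths = widths[widths > 0]     # ignore rows without fiber
    return float(widths.mean() * um_per_px)

# synthetic fiber: a 12 px wide band, imaged at 40 um per pixel
mask = np.zeros((50, 40), dtype=bool)
mask[:, 14:26] = True
thickness_um = mean_fiber_thickness(mask, um_per_px=40.0)
```

Averaging the per-row widths makes the estimate robust to local irregularities along the fiber, which matters for treated fibers whose surfaces are roughened.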
Affiliation(s)
- Seyda Eyupoglu, Department of Textile, Clothing, Footwear and Leather, Vocational School of Technical Sciences, Istanbul University - Cerrahpaşa, Istanbul, Türkiye
- Can Eyupoglu, Department of Computer Engineering, Turkish Air Force Academy, National Defence University, Istanbul, Türkiye
- Nigar Merdan, Department of Fashion and Textile Design, Architecture and Design Faculty, Istanbul Commerce University, Istanbul, Türkiye
20
田恒屹, 王瑜, 计亚荣, Rahman MM. [Fully Automatic Glioma Segmentation Algorithm of Magnetic Resonance Imaging Based on 3D-UNet With More Global Contextual Feature Extraction: An Improvement on Insufficient Extraction of Global Features]. Sichuan Da Xue Xue Bao Yi Xue Ban 2024; 55:447-454. [PMID: 38645864 PMCID: PMC11026905 DOI: 10.12182/20240360208]
Abstract
Objective The fully automatic segmentation of glioma and its subregions is fundamental for computer-aided clinical diagnosis of tumors. In the segmentation of brain magnetic resonance imaging (MRI), convolutional neural networks with small kernels can only capture local features and are ineffective at integrating global features, which narrows the receptive field and leads to insufficient segmentation accuracy. This study aims to use dilated convolution to address the problem of inadequate global feature extraction in 3D-UNet. Methods 1) Algorithm construction: A 3D-UNet model with three pathways for more global contextual feature extraction, termed 3DGE-UNet, was proposed. Using the publicly available Brain Tumor Segmentation Challenge (BraTS) 2019 dataset (335 patient cases), a global contextual feature extraction (GE) module was designed and integrated at the first, second, and third skip connections of the 3D-UNet network. The module fully extracts global features at different scales from the images; these global features are then overlaid with the upsampled feature maps to expand the model's receptive field and achieve deep fusion of features at different scales, thereby enabling end-to-end automatic segmentation of brain tumors. 2) Algorithm validation: The image data were sourced from the BraTS 2019 dataset, which includes preoperative MRI images of 335 patients in four modalities (T1, T1ce, T2, and FLAIR) together with tumor annotations made by physicians. The dataset was divided into training, validation, and test sets at an 8:1:1 ratio, with the physician-labelled tumor images used as the gold standard.
The algorithm's segmentation performance on the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) was then evaluated on the test set using the Dice coefficient (overall effectiveness), sensitivity (detection rate of lesion areas), and the 95% Hausdorff distance (segmentation accuracy of tumor boundaries). Performance was tested with both the 3D-UNet model without the GE module and the 3DGE-UNet model with the GE module to internally validate the effectiveness of the GE module. Additionally, the performance indicators were evaluated for the 3DGE-UNet model against ResUNet, UNet++, nnUNet, and UNETR, and the convergence of these five models was compared to externally validate the effectiveness of the 3DGE-UNet model. Results 1) In internal validation, the enhanced 3DGE-UNet model achieved mean Dice values of 91.47%, 87.14%, and 83.35% for segmenting the WT, TC, and ET regions in the test set, respectively, the best overall results. These scores were superior to the corresponding scores of the traditional 3D-UNet model (89.79%, 85.13%, and 80.90%), indicating a significant improvement in segmentation accuracy across all three regions (P<0.05). Compared with the 3D-UNet model, the 3DGE-UNet model demonstrated higher sensitivity for ET (86.46% vs. 80.77%, P<0.05), performing better at detecting lesion areas: it tended to identify and capture positive areas more comprehensively, thereby effectively reducing the likelihood of missed diagnoses. The 3DGE-UNet model also excelled at segmenting the edges of WT, producing a mean 95% Hausdorff distance superior to that of the 3D-UNet model (8.17 mm vs. 13.61 mm, P<0.05), while its performance for TC (8.73 mm vs. 7.47 mm) and ET (6.21 mm vs. 5.45 mm) was similar to that of the 3D-UNet model.
2) In external validation, the other four algorithms outperformed the 3DGE-UNet model only in the mean Dice for TC (87.25%), the mean sensitivity for WT (94.59%), the mean sensitivity for TC (86.98%), and the mean 95% Hausdorff distance for ET (5.37 mm), and these differences were not statistically significant (P>0.05). The 3DGE-UNet model converged rapidly during the training phase, outpacing the other external models. Conclusion The 3DGE-UNet model can effectively extract and fuse feature information at different scales, improving the accuracy of brain tumor segmentation.
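The mechanism the GE module relies on, dilated convolution widening the receptive field without adding parameters, can be shown in one dimension. This is a didactic sketch, not the 3DGE-UNet module itself:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """Valid-mode 1D convolution with dilation: the same k weights span
    (k - 1) * dilation + 1 input samples, widening the receptive field
    without adding parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(16, dtype=float)
k3 = np.array([1.0, 1.0, 1.0])
y_d1 = dilated_conv1d(x, k3, dilation=1)   # 3-sample receptive field
y_d3 = dilated_conv1d(x, k3, dilation=3)   # 7-sample receptive field
```

Stacking such layers at several dilation rates, in 3D, is what lets a module aggregate global context at multiple scales before fusing it with the upsampled decoder features.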
Affiliation(s)
- 恒屹 田, School of Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China
- 瑜 王, School of Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China
- 亚荣 计, School of Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China
- Mostafizur Rahman Md, School of Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China
21
Ounissi M, Latouche M, Racoceanu D. PhagoStat a scalable and interpretable end to end framework for efficient quantification of cell phagocytosis in neurodegenerative disease studies. Sci Rep 2024; 14:6482. [PMID: 38499658 PMCID: PMC10948879 DOI: 10.1038/s41598-024-56081-7]
Abstract
Quantifying the phagocytosis of dynamic, unstained cells is essential for evaluating neurodegenerative diseases. However, measuring rapid cell interactions and distinguishing cells from background make this task very challenging when processing time-lapse phase-contrast video microscopy. In this study, we introduce an end-to-end, scalable, and versatile real-time framework for quantifying and analyzing phagocytic activity. Our proposed pipeline can process large datasets and includes a data quality verification module to counteract potential perturbations such as microscope movements and frame blurring. We also propose an explainable cell segmentation module to improve the interpretability of deep learning methods compared with black-box algorithms, with two interpretable deep learning capabilities: visual explanation and model simplification. We demonstrate that interpretability in deep learning is not the opposite of high performance, additionally providing essential deep learning algorithm optimization insights and solutions. Moreover, incorporating interpretable modules results in an efficient architecture design and optimized execution time. We apply this pipeline to quantify and analyze microglial cell phagocytosis in frontotemporal dementia (FTD) and obtain statistically reliable results showing that FTD mutant cells are larger and more aggressive than control cells. The method has been tested and validated on several public benchmarks, achieving state-of-the-art performance. To stimulate translational approaches and future studies, we release an open-source end-to-end pipeline and a unique microglial cell phagocytosis dataset for immune system characterization in neurodegenerative disease research. This pipeline and the associated dataset will support future advances in this field, promoting the development of efficient and effective interpretable algorithms dedicated to the critical domain of neurodegenerative disease characterization. https://github.com/ounissimehdi/PhagoStat
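A data-quality check against frame blurring, of the kind the pipeline's verification module performs, can be sketched with a variance-of-Laplacian sharpness score. This is a generic illustration of the technique, not PhagoStat's actual implementation:

```python
import numpy as np

def laplacian_variance(frame):
    """Sharpness score: variance of a 4-neighbour Laplacian. Frames whose
    score collapses relative to the rest of the sequence can be flagged
    as blurred and excluded from quantification."""
    f = frame.astype(float)
    lap = (-4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(2)
sharp = rng.random((64, 64))
# crude blur: 2x2 box average of the same frame
blurred = (sharp[:-1, :-1] + sharp[1:, :-1]
           + sharp[:-1, 1:] + sharp[1:, 1:]) / 4
score_sharp = laplacian_variance(sharp)
score_blurred = laplacian_variance(blurred)
```

Thresholding such a score per frame (for example, against the sequence median) gives a cheap, training-free gate that keeps defocused or motion-blurred frames out of the downstream measurements.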
Affiliation(s)
- Mehdi Ounissi, CNRS, Inserm, AP-HP, Inria, Paris Brain Institute-ICM, Sorbonne University, 75013, Paris, France
- Morwena Latouche, Inserm, CNRS, AP-HP, Institut du Cerveau, ICM, Sorbonne Université, 75013, Paris, France; PSL Research University, EPHE, Paris, France
- Daniel Racoceanu, CNRS, Inserm, AP-HP, Inria, Paris Brain Institute-ICM, Sorbonne University, 75013, Paris, France
|
22
|
Chen CH, Lin YC, Wang SH, Kuo TH, Tsai HY. An automatic system for recognizing fly courtship patterns via an image processing method. Behav Brain Funct 2024; 20:5. [PMID: 38493127 PMCID: PMC10943763 DOI: 10.1186/s12993-024-00231-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Accepted: 03/02/2024] [Indexed: 03/18/2024]
Abstract
Fruit fly courtship behaviors, composed of a series of actions, have long been an important model for behavioral research. While most related studies have focused only on total courtship behavior, specific courtship elements have often been underexamined. Identifying these courtship element details is extremely labor intensive and would benefit greatly from an automatic recognition system. To address this issue, in this study, we established a vision-based fly courtship behavior recognition system. The system, based on the proposed image processing methods, can precisely distinguish body parts such as the head, thorax, and abdomen and automatically recognize specific courtship elements, including orientation, singing, attempted copulation, copulation, and tapping, which were not detectable in previous studies. This system, which has high identity-tracking accuracy (99.99%) and high behavioral element recognition rates (> 97.35%), can ensure correct identification even when flies completely overlap. Using this newly developed system, we investigated the total courtship time and the proportion and transitions of courtship elements in flies across different ages and found that male flies adjusted their courtship strategy in response to their physical condition. We also identified differences in courtship patterns between males with and without successful copulation. Our study therefore demonstrates how image processing methods can be applied to automatically recognize complex animal behaviors. The newly developed system will greatly help us investigate the details of fly courtship in future research.
Collapse
Affiliation(s)
- Ching-Hsin Chen
- Department of Power Mechanical Engineering, National Tsing Hua University, Hsinchu, 30013, Taiwan
| | - Yu-Chiao Lin
- Department of Life Science, National Tsing Hua University, Hsinchu, 30013, Taiwan
| | - Sheng-Hao Wang
- Institute of Systems Neuroscience, National Tsing Hua University, Hsinchu, 30013, Taiwan
| | - Tsung-Han Kuo
- Department of Life Science, National Tsing Hua University, Hsinchu, 30013, Taiwan.
- Institute of Systems Neuroscience, National Tsing Hua University, Hsinchu, 30013, Taiwan.
- Brain Research Center, National Tsing Hua University, Hsinchu, 30013, Taiwan.
| | - Hung-Yin Tsai
- Department of Power Mechanical Engineering, National Tsing Hua University, Hsinchu, 30013, Taiwan.
- Brain Research Center, National Tsing Hua University, Hsinchu, 30013, Taiwan.
| |
Collapse
|
23
|
Decoene I, Nasello G, Madeiro de Costa RF, Nilsson Hall G, Pastore A, Van Hoven I, Ribeiro Viseu S, Verfaillie C, Geris L, Luyten FP, Papantoniou I. Robotics-Driven Manufacturing of Cartilaginous Microtissues for Skeletal Tissue Engineering Applications. Stem Cells Transl Med 2024; 13:278-292. [PMID: 38217535 PMCID: PMC10940839 DOI: 10.1093/stcltm/szad091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Accepted: 11/02/2023] [Indexed: 01/15/2024] Open
Abstract
Automated technologies are attractive for enhancing the robust manufacturing of tissue-engineered products for clinical translation. In this work, we present an automation strategy using a robotics platform for media changes and imaging of cartilaginous microtissues cultured in static microwell platforms. We use an automated image analysis pipeline to extract microtissue displacements and morphological features as noninvasive quality attributes. As a result, empty microwells were identified with 96% accuracy, and a Dice coefficient of 0.84 was achieved for segmentation. Design of experiments was used to optimize liquid handling parameters and minimize empty microwells during long-term differentiation protocols. We found no significant effect of aspiration or dispensing speeds at and beyond manual speed. Instead, repeated media changes and time in culture were the driving forces of microtissue displacement. As the ovine model is the preclinical model of choice for large skeletal defects, we used ovine periosteum-derived cells to form cartilage-intermediate microtissues. Increased expression of COL2A1 confirmed chondrogenic differentiation, while RUNX2 showed no osteogenic specification. Histological analysis showed increased secretion of cartilaginous extracellular matrix and glycosaminoglycans in larger microtissues. Furthermore, microtissue-based implants were capable of forming mineralized tissues and bone after 4 weeks of ectopic implantation in nude mice. We demonstrate the development of an integrated bioprocess for culturing and manipulating cartilaginous microtissues and anticipate the progressive substitution of manual operations with automated solutions for the manufacturing of microtissue-based living implants.
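The segmentation quality above is reported as a Dice coefficient of 0.84. The Dice coefficient is a standard overlap metric for binary masks, 2|A∩B| / (|A| + |B|); a minimal reference sketch (not the authors' pipeline):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Convention: two empty masks overlap perfectly
    return 2.0 * intersection / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, target))  # 2*2 / (3+3) ≈ 0.667
```

A score of 1.0 means perfect overlap; 0.84 indicates substantial but imperfect agreement between predicted and reference microtissue masks.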
Collapse
Affiliation(s)
- Isaak Decoene
- Prometheus Division of Skeletal Tissue Engineering, KU Leuven, Leuven, Belgium
- Skeletal Biology and Engineering Research Center, Department of Development and Regeneration, KU Leuven, Leuven
| | - Gabriele Nasello
- Prometheus Division of Skeletal Tissue Engineering, KU Leuven, Leuven, Belgium
- Biomechanics Research Unit, GIGA In Silico Medicine, GIGA Institute, University of Liège, Liège, Belgium
| | | | - Gabriella Nilsson Hall
- Prometheus Division of Skeletal Tissue Engineering, KU Leuven, Leuven, Belgium
- Skeletal Biology and Engineering Research Center, Department of Development and Regeneration, KU Leuven, Leuven
| | - Angela Pastore
- Prometheus Division of Skeletal Tissue Engineering, KU Leuven, Leuven, Belgium
- Skeletal Biology and Engineering Research Center, Department of Development and Regeneration, KU Leuven, Leuven
| | - Inge Van Hoven
- Prometheus Division of Skeletal Tissue Engineering, KU Leuven, Leuven, Belgium
- Skeletal Biology and Engineering Research Center, Department of Development and Regeneration, KU Leuven, Leuven
| | - Samuel Ribeiro Viseu
- Prometheus Division of Skeletal Tissue Engineering, KU Leuven, Leuven, Belgium
- Skeletal Biology and Engineering Research Center, Department of Development and Regeneration, KU Leuven, Leuven
| | - Catherine Verfaillie
- Department of Development and Regeneration, Stem Cell Biology and Embryology, KU Leuven, Leuven, Belgium
| | - Liesbet Geris
- Prometheus Division of Skeletal Tissue Engineering, KU Leuven, Leuven, Belgium
- Skeletal Biology and Engineering Research Center, Department of Development and Regeneration, KU Leuven, Leuven
- Biomechanics Research Unit, GIGA In Silico Medicine, GIGA Institute, University of Liège, Liège, Belgium
| | - Frank P Luyten
- Prometheus Division of Skeletal Tissue Engineering, KU Leuven, Leuven, Belgium
- Skeletal Biology and Engineering Research Center, Department of Development and Regeneration, KU Leuven, Leuven
| | - Ioannis Papantoniou
- Prometheus Division of Skeletal Tissue Engineering, KU Leuven, Leuven, Belgium
- Skeletal Biology and Engineering Research Center, Department of Development and Regeneration, KU Leuven, Leuven
- Institute for Chemical Engineering Sciences, Foundation for Research and Technology–Hellas, Patras, Greece
| |
Collapse
|
24
|
Tang X, Rashid Sheykhahmad F. Boosted dipper throated optimization algorithm-based Xception neural network for skin cancer diagnosis: An optimal approach. Heliyon 2024; 10:e26415. [PMID: 38449650 PMCID: PMC10915520 DOI: 10.1016/j.heliyon.2024.e26415] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2024] [Revised: 02/10/2024] [Accepted: 02/13/2024] [Indexed: 03/08/2024] Open
Abstract
Skin cancer is a prevalent form of cancer that necessitates prompt and precise detection. However, current diagnostic methods for skin cancer are either invasive, time-consuming, or unreliable. Consequently, there is demand for an innovative and efficient approach to diagnosing skin cancer using non-invasive, automated techniques. In this study, a unique method is proposed for diagnosing skin cancer by employing an Xception neural network optimized with the Boosted Dipper Throated Optimization (BDTO) algorithm. The Xception neural network is a deep learning model capable of extracting high-level features from skin dermoscopy images, while the BDTO algorithm is a bio-inspired optimization technique that can determine the optimal parameters and weights for the Xception network. To enhance the quality and diversity of the images, the ISIC dataset, a widely accepted benchmark for skin cancer diagnosis, is utilized, and various image preprocessing and data augmentation techniques are implemented. A comparison with several contemporary approaches demonstrates that the method outperforms them in detecting skin cancer, achieving an average precision of 94.936%, an average accuracy of 94.206%, and an average recall of 97.092%. Additionally, a 5-fold ROC curve and error curve are presented for data validation to showcase the superiority and robustness of the method.
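The reported averages (precision, accuracy, recall) are standard confusion-matrix metrics. For reference, a minimal sketch with illustrative counts (the numbers below are made up for demonstration, not the paper's data):

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Standard binary-classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)                 # of predicted positives, how many are right
    recall = tp / (tp + fn)                    # of true positives, how many are found
    accuracy = (tp + tn) / (tp + fp + tn + fn) # overall fraction correct
    return precision, recall, accuracy

# Hypothetical counts for illustration only
p, r, a = classification_metrics(tp=90, fp=10, tn=85, fn=15)
print(round(p, 3), round(r, 3), round(a, 3))  # -> 0.9 0.857 0.875
```

In multi-class or k-fold settings such as the study's 5-fold evaluation, these values are typically averaged across folds, which is presumably what the reported "average" figures denote.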
Collapse
Affiliation(s)
- Xiaofei Tang
- School of Computer Science and Software Engineering, University of Science and Technology Liaoning, Anshan, 114051, Liaoning, China
| | - Fatima Rashid Sheykhahmad
- Ardabil Branch, Islamic Azad University, Ardabil, Iran
- College of Technical Engineering, The Islamic University, Najaf, Iraq
| |
Collapse
|
25
|
Tello JP, Velez JC, Cadena A, Jutinico A, Pardo M, Percybrooks W. Blood flow effects in a patient with a thoracic aortic endovascular prosthesis. Heliyon 2024; 10:e26355. [PMID: 38434340 PMCID: PMC10907539 DOI: 10.1016/j.heliyon.2024.e26355] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2023] [Revised: 02/08/2024] [Accepted: 02/12/2024] [Indexed: 03/05/2024] Open
Abstract
This work analyzes hemodynamic phenomena within the aorta of two elderly patients and their impact on blood flow behavior, particularly as affected by an endovascular prosthesis in one of them (Patient II). Computational Fluid Dynamics (CFD) was utilized for this study, involving measurements of velocity, pressure, and wall shear stress (WSS) at various time points during the third cardiac cycle, at specific positions within two cross-sections of the thoracic aorta. The first cross-section (Cross-Section 1, CS1) is located before the initial fluid bifurcation, just before the right subclavian artery. The second cross-section (Cross-Section 2, CS2) is situated immediately after the left subclavian artery. The results reveal that, under regular aortic geometries, velocity and pressure magnitudes follow the principles of fluid dynamics, displaying variations. However, in Patient II, the endoprosthesis near CS2, particularly its proximal border, significantly disrupts fluid behavior owing to the pulsatile flow. The cross-sectional areas of Patient I are smaller than those of Patient II, leading to higher flow magnitudes. Although CS1 of Patient I shows considerable variability in velocity magnitudes, they exhibit a more uniform and predictable transition. In contrast, CS2 of Patient II, where magnitude variation is also high, displays irregular fluid behavior due to the presence of the endoprosthesis. This cross-section coincides with the border of the fluid bifurcation. Additionally, the irregular geometry caused by endovascular aneurysm repair contributes to flow disruption as the endoprosthesis adjusts to the endothelium, reshaping itself to conform with the vessel wall. In this context, significant alterations in velocity values, pressure differentials fluctuating by up to 10%, and low wall shear stress indicate the pronounced influence of the endovascular prosthesis on blood flow behavior. 
These flow disturbances, when compounded by the heart rate, can potentially lead to changes in vascular anatomy and displacement, resulting in a disruption of the prosthesis-endothelium continuity and thereby causing clinical complications in the patient.
Collapse
Affiliation(s)
- Juan P. Tello
- Universidad del Norte, Km. 5 Via Puerto Colombia, Barranquilla, Colombia
| | - Juan C. Velez
- Universidad del Norte, Km. 5 Via Puerto Colombia, Barranquilla, Colombia
| | | | - Andres Jutinico
- Universidad Distrital Francisco Jose de Caldas, Bogota, Colombia
| | - Mauricio Pardo
- Universidad del Norte, Km. 5 Via Puerto Colombia, Barranquilla, Colombia
| | | |
Collapse
|
26
|
Lin HC, Xiao SX. Achievement of Dynamic Tablet Defect Detection Mechanism Using Biaxial Slope Symmetry Algorithm. J Pharm Sci 2024:S0022-3549(24)00089-3. [PMID: 38484876 DOI: 10.1016/j.xphs.2024.03.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2023] [Revised: 03/06/2024] [Accepted: 03/07/2024] [Indexed: 03/24/2024]
Abstract
Inspection of tablet appearance integrity before bottling is a routine task in a pharmaceutical factory. Although methods such as automated optical instruments, video, or artificial intelligence (AI) are currently available in industry, they usually entail complex computational processes and high cost. Based on the symmetry of tablet appearance, this study develops a biaxial scanning slope symmetry algorithm to realize dynamic real-time tablet defect detection with simple arithmetic operations. First, the tablet is discretely scanned using an image sensor along two axes, i.e., the X and Y directions, simultaneously. Second, the analog output signals generated by the sensor during scanning are digitized and stored in an array. Third, the coordinate of the center point of each line scan's data series is identified. Fourth, the section slope between each pair of neighboring center points, from the first line to the last, is calculated sequentially. Finally, the square mean error (SME) over the accumulated slope variations is used to evaluate shape defects. The experimental results verify that the proposed algorithm achieves both fast and accurate detection performance.
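The enumerated steps can be sketched for a single scan axis on a binary tablet mask. This is an illustrative reading of the algorithm only: the row-wise midpoint and the slope/SME definitions below are assumptions, not the authors' exact formulation.

```python
import numpy as np

def center_points(mask: np.ndarray):
    """Midpoint of the object's extent on each scan line (row) of a binary mask."""
    centers = []
    for y, row in enumerate(mask):
        xs = np.flatnonzero(row)
        if xs.size:
            centers.append((y, (xs[0] + xs[-1]) / 2.0))
    return centers

def slope_sme(mask: np.ndarray) -> float:
    """Mean squared deviation of successive center-line slopes; ~0 for a symmetric shape."""
    pts = center_points(mask)
    slopes = np.array([(x2 - x1) / (y2 - y1)
                       for (y1, x1), (y2, x2) in zip(pts, pts[1:])])
    return float(np.mean((slopes - slopes.mean()) ** 2))

# A symmetric "tablet": every row spans columns 2..7, so all centers align
tablet = np.zeros((6, 10), dtype=int)
tablet[:, 2:8] = 1
print(slope_sme(tablet))  # -> 0.0 (all slopes identical)

# A chipped edge shifts one center and raises the score
chipped = tablet.copy()
chipped[0, 6:8] = 0
print(slope_sme(chipped) > 0.0)  # -> True
```

A defect threshold on the SME would then separate intact tablets (near zero) from chipped or deformed ones.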
Collapse
Affiliation(s)
- Hsiung-Cheng Lin
- Department of Electronic Engineering, National Chin-Yi University of Technology, Taichung, Taiwan.
| | - Sheng-Xi Xiao
- Department of Electronic Engineering, National Chin-Yi University of Technology, Taichung, Taiwan
| |
Collapse
|
27
|
Myslicka M, Kawala-Sterniuk A, Bryniarska A, Sudol A, Podpora M, Gasz R, Martinek R, Kahankova Vilimkova R, Vilimek D, Pelc M, Mikolajewski D. Review of the application of the most current sophisticated image processing methods for the skin cancer diagnostics purposes. Arch Dermatol Res 2024; 316:99. [PMID: 38446274 DOI: 10.1007/s00403-024-02828-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2023] [Revised: 12/28/2023] [Accepted: 01/25/2024] [Indexed: 03/07/2024]
Abstract
This paper presents the most current and innovative solutions applying modern digital image processing methods for skin cancer diagnostics. Skin cancer is one of the most common types of cancer. It is estimated that in the USA alone, one in five people will develop skin cancer, and this trend is constantly increasing. Implementation of new, non-invasive methods plays a crucial role in both the identification and prevention of skin cancer. Early diagnosis and treatment are needed in order to decrease the number of deaths due to this disease. This paper also contains information regarding the most common skin cancer types as well as mortality and epidemiological data for Poland, Europe, Canada, and the USA. It also covers the most efficient and modern image recognition methods based on artificial intelligence currently applied for diagnostic purposes. Both sophisticated professional solutions and inexpensive ones are presented. This review covers the period from 2017 to 2022 in terms of solutions and statistics. The authors decided to focus on the latest data, mostly due to rapid technological development and the increased number of new methods, which positively affect diagnosis and prognosis.
Collapse
Affiliation(s)
- Maria Myslicka
- Faculty of Medicine, Wroclaw Medical University, J. Mikulicza-Radeckiego 5, 50-345, Wroclaw, Poland.
| | - Aleksandra Kawala-Sterniuk
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland.
| | - Anna Bryniarska
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
| | - Adam Sudol
- Faculty of Natural Sciences and Technology, University of Opole, Dmowskiego 7-9, 45-368, Opole, Poland
| | - Michal Podpora
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
| | - Rafal Gasz
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
| | - Radek Martinek
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
| | - Radana Kahankova Vilimkova
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
| | - Dominik Vilimek
- Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
| | - Mariusz Pelc
- Institute of Computer Science, University of Opole, Oleska 48, 45-052, Opole, Poland
- School of Computing and Mathematical Sciences, University of Greenwich, Old Royal Naval College, Park Row, SE10 9LS, London, UK
| | - Dariusz Mikolajewski
- Institute of Computer Science, Kazimierz Wielki University in Bydgoszcz, ul. Kopernika 1, 85-074, Bydgoszcz, Poland
- Neuropsychological Research Unit, 2nd Clinic of the Psychiatry and Psychiatric Rehabilitation, Medical University in Lublin, Gluska 1, 20-439, Lublin, Poland
| |
Collapse
|
28
|
Tsuda H, Kawabata H. materialmodifier: An R package of photo editing effects for material perception research. Behav Res Methods 2024; 56:2657-2674. [PMID: 37162649 PMCID: PMC10991072 DOI: 10.3758/s13428-023-02116-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/27/2023] [Indexed: 05/11/2023]
Abstract
In this paper, we introduce an R package that performs automated photo editing effects. Specifically, it is an R implementation of an image-processing algorithm proposed by Boyadzhiev et al. (2015). The software allows the user to manipulate the appearance of objects in photographs, such as emphasizing facial blemishes and wrinkles, smoothing the skin, or enhancing the gloss of fruit. It provides a reproducible method to quantitatively control specific surface properties of objects (e.g., gloss and roughness), which is useful for researchers interested in topics related to material perception, from basic mechanisms of perception to the aesthetic evaluation of faces and objects. We describe the functionality, usage, and algorithm of the method, report on the findings of a behavioral evaluation experiment, and discuss its usefulness and limitations for psychological research. The package can be installed via CRAN, and documentation and source code are available at https://github.com/tsuda16k/materialmodifier .
Collapse
Affiliation(s)
- Hiroyuki Tsuda
- Faculty of Psychology, Doshisha University, Kyoto, Japan.
| | - Hideaki Kawabata
- Department of Psychology, Faculty of Letters, Keio University, Tokyo, Japan.
| |
Collapse
|
29
|
Chung SC. Cryo-forum: A framework for orientation recovery with uncertainty measure with the application in cryo-EM image analysis. J Struct Biol 2024; 216:108058. [PMID: 38163450 DOI: 10.1016/j.jsb.2023.108058] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Revised: 12/14/2023] [Accepted: 12/28/2023] [Indexed: 01/03/2024]
Abstract
In single-particle cryo-electron microscopy (cryo-EM), efficient determination of orientation parameters for particle images poses a significant challenge yet is crucial for reconstructing 3D structures. This task is complicated by the high noise levels in the datasets, which often include outliers, necessitating several time-consuming 2D clean-up processes. Recently, solutions based on deep learning have emerged, offering a more streamlined approach to the traditionally laborious task of orientation estimation. These solutions employ amortized inference, eliminating the need to estimate parameters individually for each image. However, these methods frequently overlook the presence of outliers and may not adequately concentrate on the components used within the network. This paper introduces a novel method using a 10-dimensional feature vector for orientation representation, extracting orientations as unit quaternions with an accompanying uncertainty metric. Furthermore, we propose a unique loss function that considers the pairwise distances between orientations, thereby enhancing the accuracy of our method. Finally, we also comprehensively evaluate the design choices in constructing the encoder network, a topic that has not received sufficient attention in the literature. Our numerical analysis demonstrates that our methodology effectively recovers orientations from 2D cryo-EM images in an end-to-end manner. Notably, the inclusion of uncertainty quantification allows for direct clean-up of the dataset at the 3D level. Lastly, we package our proposed methods into a user-friendly software suite named cryo-forum, designed for easy access by developers.
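The method outputs orientations as unit quaternions and defines a loss over pairwise orientation distances. The paper's 10-dimensional feature representation and exact loss are its own; below is a hedged sketch of a standard pairwise quaternion (geodesic) distance that such a loss could build on, not the paper's implementation.

```python
import numpy as np

def quat_distance(q1: np.ndarray, q2: np.ndarray) -> float:
    """Geodesic distance (rotation angle, in radians) between two unit quaternions.
    The absolute value handles the q ~ -q double cover of rotation space."""
    dot = abs(float(np.dot(q1, q2)))
    return 2.0 * float(np.arccos(np.clip(dot, -1.0, 1.0)))

identity = np.array([1.0, 0.0, 0.0, 0.0])   # no rotation
half_turn = np.array([0.0, 0.0, 0.0, 1.0])  # 180-degree rotation about z

print(quat_distance(identity, identity))   # -> 0.0
print(quat_distance(identity, half_turn))  # -> pi (maximally distant rotations)
```

A pairwise loss over a batch of predicted orientations would sum (or average) this distance across all quaternion pairs, penalizing inconsistent relative orientations.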
Collapse
Affiliation(s)
- Szu-Chi Chung
- Department of Applied Mathematics, National Sun Yat-sen University, No. 70, Lienhai Rd, Kaohsiung, Taiwan.
| |
Collapse
|
30
|
Zanini LGK, Rubira-Bullen IRF, Nunes FDLDS. A Systematic Review on Caries Detection, Classification, and Segmentation from X-Ray Images: Methods, Datasets, Evaluation, and Open Opportunities. J Imaging Inform Med 2024:10.1007/s10278-024-01054-5. [PMID: 38429559 DOI: 10.1007/s10278-024-01054-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/07/2023] [Revised: 12/19/2023] [Accepted: 01/02/2024] [Indexed: 03/03/2024]
Abstract
Dental caries occurs from the interaction between oral bacteria and sugars, generating acids that damage teeth over time. The importance of X-ray images for detecting oral problems is undeniable in dentistry. With technological advances, it is feasible to identify these lesions using techniques such as deep learning, machine learning, and image processing. Therefore, the survey and systematization of these methods are essential for determining the main computational approaches to identifying caries in X-ray images. In this systematic review, we investigated the primary computational methods used for classifying, detecting, and segmenting caries in X-ray images. Following the PRISMA methodology, we selected relevant studies and analyzed their methods, strengths, limitations, imaging modalities, evaluation metrics, datasets, and classification techniques. The review encompassed 42 studies retrieved from the ScienceDirect, IEEE Xplore, ACM Digital Library, and PubMed databases in the Computer Science and Health areas. The results indicate that 12% of the included articles utilized public datasets, with deep learning being the predominant approach, accounting for 69% of the studies. The majority of these studies (76%) focused on classifying dental caries, either in binary or multiclass classification. Panoramic imaging was the most commonly used radiographic modality, representing 29% of the cases studied. Overall, our systematic review provides a comprehensive overview of the computational methods employed in identifying caries in radiographic images and highlights trends, patterns, and challenges in this research field.
Collapse
Affiliation(s)
- Luiz Guilherme Kasputis Zanini
- Department of Computer Engineering and Digital Systems, University of São Paulo, Av. Prof. Luciano Gualberto 158, São Paulo, 05508-010, São Paulo, Brazil.
| | | | - Fátima de Lourdes Dos Santos Nunes
- Department of Computer Engineering and Digital Systems, University of São Paulo, Av. Prof. Luciano Gualberto 158, São Paulo, 05508-010, São Paulo, Brazil
- School of Arts, Sciences and Humanities, University of São Paulo, Rua Arlindo Béttio, 1000, São Paulo, 03828-000, São Paulo, Brazil
| |
Collapse
|
31
|
Tsukijima M, Teramoto A, Kojima A, Yamamuro O, Tamaki T, Fujita H. A position-adaptive noise-reduction method using a deep denoising filter bank for dedicated breast positron emission tomography images. Phys Eng Sci Med 2024; 47:73-85. [PMID: 37870728 DOI: 10.1007/s13246-023-01343-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2022] [Accepted: 10/02/2023] [Indexed: 10/24/2023]
Abstract
Dedicated breast positron emission tomography (db-PET) is more sensitive than whole-body positron emission tomography and is thus expected to detect early-stage breast cancer and determine treatment efficacy. However, sensitivity decreases on the chest wall side at the edge of the detector, resulting in a relative increase in noise and a decrease in detectability. Longer acquisition times and injection of larger amounts of tracer improve image quality but increase the burden on the patient. Therefore, this study aimed to improve the quality of images reconstructed from shorter-acquisition-time data using deep learning, which has recently been widely used for noise reduction. In our proposed method, a multi-adaptive denoising filter bank structure was introduced by training a separate model for each detector area, because the noise characteristics of db-PET images vary with location. Input and ideal images were reconstructed from 1- and 7-min acquisition data, respectively, using list-mode data. The deep learning model used residual learning with an encoder-decoder structure. The image quality of the proposed method was superior to that of existing noise reduction filters such as Gaussian filters and non-local means filters. Furthermore, there was no significant difference between the maximum standardized uptake values before and after filtering with the proposed method. Taken together, the proposed method is useful as a noise reduction filter for db-PET images, as it can reduce the patient burden, scan time, and radiotracer amount in db-PET examinations.
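The proposed filter bank is benchmarked against classical baselines such as Gaussian filtering. As a hedged illustration of that baseline only (synthetic data, not db-PET; the image model and noise level below are assumptions for demonstration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = np.outer(np.hanning(64), np.hanning(64))   # smooth synthetic "uptake" image
noisy = clean + rng.normal(0.0, 0.3, clean.shape)  # heavy additive noise
denoised = gaussian_filter(noisy, sigma=1.5)       # classical smoothing baseline

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

print(mse(noisy, clean) > mse(denoised, clean))  # -> True: smoothing reduces error
```

Gaussian smoothing trades noise suppression for blur of fine structure, which is why learned, location-adaptive filters such as the proposed bank can outperform it.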
Collapse
Affiliation(s)
- Masahiro Tsukijima
- Imaging Diagnostic Technology Department, East Nagoya Imaging Diagnosis Center, 3-4-26 Jiyugaoka, Chikusa-ku, Nagoya, Aichi, Japan
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake, Aichi, Japan
| | | | - Akihiro Kojima
- Nagoya PET Imaging Center, 1-162 Hokke, Nakagawa-ku, Nagoya, Aichi, Japan
| | - Osamu Yamamuro
- Imaging Diagnostic Technology Department, East Nagoya Imaging Diagnosis Center, 3-4-26 Jiyugaoka, Chikusa-ku, Nagoya, Aichi, Japan
| | - Tsuneo Tamaki
- Imaging Diagnostic Technology Department, East Nagoya Imaging Diagnosis Center, 3-4-26 Jiyugaoka, Chikusa-ku, Nagoya, Aichi, Japan
| | - Hiroshi Fujita
- Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu, Japan
| |
Collapse
|
32
|
El Hady A, Takahashi D, Sun R, Akinwale O, Boyd-Meredith T, Zhang Y, Charles AS, Brody CD. Chronic brain functional ultrasound imaging in freely moving rodents performing cognitive tasks. J Neurosci Methods 2024; 403:110033. [PMID: 38056633 PMCID: PMC10872377 DOI: 10.1016/j.jneumeth.2023.110033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 11/06/2023] [Accepted: 12/01/2023] [Indexed: 12/08/2023]
Abstract
BACKGROUND Functional ultrasound imaging (fUS) is an emerging imaging technique that indirectly measures neural activity via changes in blood volume. Chronic fUS imaging during cognitive tasks in freely moving animals faces multiple exceptional challenges: performing large durable craniotomies with chronic implants, designing behavioral experiments matching the hemodynamic timescale, stabilizing the ultrasound probe during freely moving behavior, accurately assessing motion artifacts, and validating that the animal can perform cognitive tasks while tethered. NEW METHOD We provide validated solutions for those technical challenges. In addition, we present standardized step-by-step reproducible protocols, procedures, and data processing pipelines. Finally, we present a proof-of-concept analysis of brain dynamics during a decision-making task. RESULTS We obtain stable recordings from which we can robustly decode task variables from fUS data over multiple months. Moreover, we find that brain-wide hemodynamic responses are nonlinearly related to cognitive variables, such as task difficulty, in contrast to the sensory responses explored previously. COMPARISON WITH EXISTING METHODS Computational pipelines in fUS are nascent, and we present an initial development of a full processing pathway to correct and segment fUS data. CONCLUSIONS Our methods provide stable imaging and analysis of behavior with fUS that will enable new experimental paradigms for understanding brain-wide dynamics in naturalistic behaviors.
Collapse
Affiliation(s)
- Ahmed El Hady
- Princeton Neuroscience Institute, Princeton University, Princeton, United States; Center for advanced study of collective behavior, University of Konstanz, Germany; Max Planck Institute of Animal Behavior, Konstanz, Germany
| | - Daniel Takahashi
- Brain Institute, Federal University of Rio Grande do Norte, Natal, Brazil
| | - Ruolan Sun
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, United States
| | - Oluwateniola Akinwale
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, United States
| | - Tyler Boyd-Meredith
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | - Yisi Zhang
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
| | - Adam S Charles
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, United States; Mathematical Institute for Data Science, Kavli Neuroscience Discovery Institute & Center for Imaging Science, Johns Hopkins University, Baltimore, United States.
| | - Carlos D Brody
- Princeton Neuroscience Institute, Princeton University, Princeton, United States; Howard Hughes Medical Institute, Princeton University, Princeton, United States; Department of Molecular Biology, Princeton University, Princeton, United States.
| |
Collapse
|
33
|
Khan RU, Almakdi S, Alshehri M, Haq AU, Ullah A, Kumar R. An intelligent neural network model to detect red blood cells for various blood structure classification in microscopic medical images. Heliyon 2024; 10:e26149. [PMID: 38384569 PMCID: PMC10879026 DOI: 10.1016/j.heliyon.2024.e26149] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Revised: 01/28/2024] [Accepted: 02/08/2024] [Indexed: 02/23/2024] Open
Abstract
Biomedical image analysis plays a crucial role in enabling high-performing imaging and various clinical applications. For the proper diagnosis of blood diseases related to red blood cells, red blood cells must be accurately identified and categorized. Manual analysis is time-consuming and prone to mistakes. Analyzing multi-label samples, which contain clusters of cells, is challenging due to difficulties in separating individual cells, such as touching or overlapping cells. High-performance biomedical imaging and several medical applications are made possible by advanced biosensors. We develop an intelligent neural network model that can automatically identify and categorize red blood cells from microscopic medical images using region-based convolutional neural networks (RCNN) and cutting-edge biosensors. Our model successfully navigates obstacles such as touching or overlapping cells and accurately recognizes various blood structures. Additionally, we utilized data augmentation as a pre-processing method on microscopic images to enhance the model's computational efficiency and expand the sample size. To refine the data and eliminate noise from the dataset, we utilized the Radial Gradient Index filtering algorithm for imaging data equalization. Applied to medical imaging datasets, our proposed model exhibits improved detection accuracy and a reduced loss rate compared with the existing ResNet and GoogleNet models. Our model precisely detected red blood cells in a collection of medical images with 99% training accuracy and 91.21% testing accuracy, outperforming earlier models such as ResNet-50 and GoogleNet by 10-15%. Our results demonstrate that artificial intelligence (AI)-assisted automated red blood cell detection has the potential to revolutionize and speed up blood cell analysis, minimizing human error and enabling early illness diagnosis.
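The data augmentation step mentioned in the abstract can be sketched minimally. The paper does not specify which transforms were used; flips and 90-degree rotations below are common illustrative choices, and the function names are hypothetical.

```python
# Illustrative data augmentation for small image grids: horizontal flip,
# vertical flip, and a 90-degree clockwise rotation. Transform choices are
# assumptions, not the paper's documented augmentation set.
def hflip(img):
    return [row[::-1] for row in img]

def vflip(img):
    return img[::-1]

def rot90(img):
    # Rotate 90 degrees clockwise: transpose, then reverse each row.
    return [list(row)[::-1] for row in zip(*img)]

def augment(img):
    # One original image yields several training variants.
    return [img, hflip(img), vflip(img), rot90(img)]

img = [[1, 2],
       [3, 4]]
variants = augment(img)
print(len(variants))   # 4 variants per input image
print(variants[3])     # rotated copy: [[3, 1], [4, 2]]
```

Each label-preserving transform multiplies the effective sample size, which is the "expand the sample size" effect the abstract describes.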
Collapse
Affiliation(s)
- Riaz Ullah Khan
- Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China, Huzhou 313001, China
| | - Sultan Almakdi
- Department of Computer Science, College of Computer Science and Information systems, Najran University, Najran 55461, Saudi Arabia
| | - Mohammed Alshehri
- Department of Computer Science, College of Computer Science and Information systems, Najran University, Najran 55461, Saudi Arabia
| | - Amin Ul Haq
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
| | - Aman Ullah
- Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China, Huzhou 313001, China
| | - Rajesh Kumar
- Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China, Huzhou 313001, China
| |
Collapse
|
34
|
Öztaş B, Korkmaz Y, Çelik Hİ. Image analyses of artificially damaged carbon/glass/epoxy composites before and after impact load. Heliyon 2024; 10:e25876. [PMID: 38404785 PMCID: PMC10884454 DOI: 10.1016/j.heliyon.2024.e25876] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Revised: 02/04/2024] [Accepted: 02/05/2024] [Indexed: 02/27/2024] Open
Abstract
In recent years, there has been a widespread utilization of composite materials, particularly in critical sectors such as aircraft manufacturing, where errors can have significant consequences. This has generated a need for effective protection of composite materials both during and after production. Detecting internal damage in composite materials, which is often visually imperceptible, becomes crucial and can be assessed using non-destructive testing methods. In this study, glass and carbon woven fabric-reinforced epoxy composites intentionally embedded with artificial damages during manufacturing were subjected to impact tests. The composite materials were scanned using the ultrasonic method to detect damages before and after the impacts. Particularly in glass fiber-reinforced composites (GFRP), the damaged area in the artificially damaged glass lamella sample (G/AL) was calculated to be 4-5 times higher than in the undamaged sample (G/UD). Damaged area values in GFRP were calculated as 72.88 mm² in the G/UD sample, 143.74 mm² in the G/AC sample, and 315.93 mm² in the G/AL sample. While the samples with artificial damage in carbon fiber-reinforced composites (C/AL, C/AC) were perforated during the impact tests, the undamaged samples (C/UD) were not. The images obtained were evaluated using image processing algorithms and were employed in damage analysis. In conclusion, the applied method and the developed image processing algorithm yielded successful results in analyzing barely visible damages and detecting damaged areas.
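One core step behind the reported mm² figures, converting damaged pixels in a scan image to a physical area, can be sketched as follows. The threshold, pixel pitch, and toy image are illustrative assumptions; the study's actual algorithm and calibration are not reproduced here.

```python
# Hedged sketch: threshold an ultrasonic scan image (dark = damaged) and
# convert the damaged pixel count to an area. Threshold value and pixel
# pitch are hypothetical, not the study's parameters.
def damaged_area_mm2(image, threshold, mm_per_pixel):
    # Count pixels below the intensity threshold, then scale by pixel area.
    damaged_pixels = sum(1 for row in image for v in row if v < threshold)
    return damaged_pixels * mm_per_pixel ** 2

scan = [
    [200, 198, 60, 55],
    [201, 80, 58, 190],
    [195, 197, 62, 188],
]
area = damaged_area_mm2(scan, threshold=100, mm_per_pixel=0.5)
print(area)  # 5 damaged pixels * 0.25 mm^2 each = 1.25
```

Comparing such areas before and after impact is what lets a ratio like "4-5 times higher" be computed.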
Collapse
Affiliation(s)
- Burak Öztaş
- Department of Textile Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş, Turkey
| | - Yasemin Korkmaz
- Department of Textile Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş, Turkey
| | | |
Collapse
|
35
|
Fernandez R, Le Cunff L, Mérigeaud S, Verdeil JL, Perry J, Larignon P, Spilmont AS, Chatelet P, Cardoso M, Goze-Bac C, Moisy C. End-to-end multimodal 3D imaging and machine learning workflow for non-destructive phenotyping of grapevine trunk internal structure. Sci Rep 2024; 14:5033. [PMID: 38424155 PMCID: PMC10904756 DOI: 10.1038/s41598-024-55186-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Accepted: 02/21/2024] [Indexed: 03/02/2024] Open
Abstract
Quantifying healthy and degraded inner tissues in plants is of great interest in agronomy, for example, to assess plant health and quality and monitor physiological traits or diseases. However, detecting functional and degraded plant tissues in vivo without harming the plant is extremely challenging. New solutions are needed in ligneous and perennial species, for which the sustainability of plantations is crucial. To tackle this challenge, we developed a novel approach based on multimodal 3D imaging and artificial intelligence-based image processing that allows a non-destructive diagnosis of inner tissues in living plants. The method was successfully applied to the grapevine (Vitis vinifera L.). Vineyard sustainability is threatened by trunk diseases, while the sanitary status of vines cannot be ascertained without injuring the plants. By combining MRI and X-ray CT 3D imaging with automatic voxel classification, we could discriminate intact, degraded, and white rot tissues with a mean global accuracy of over 91%. The contribution of each imaging modality to tissue detection was evaluated, and we identified quantitative structural and physiological markers characterizing the steps of wood degradation. The combined study of inner tissue distribution versus external foliar symptom history demonstrated that white rot and intact tissue contents are key measurements in evaluating vines' sanitary status. We finally propose a model for accurate trunk disease diagnosis in grapevine. This work opens new routes for precision agriculture and in situ monitoring of tissue quality and plant health across plant species.
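The "mean global accuracy of over 91%" is a standard quantity computable from a voxel-level confusion matrix. The matrix below is entirely made up for illustration; only the metric's definition is shown.

```python
# Sketch of the evaluation metric reported (global accuracy over voxel
# classes). The confusion matrix counts are invented for illustration.
def global_accuracy(confusion):
    # confusion[i][j]: number of voxels of true class i predicted as class j.
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Rows/columns: intact, degraded, white rot (illustrative counts).
conf = [
    [930, 50, 20],
    [40, 910, 50],
    [10, 60, 930],
]
print(round(global_accuracy(conf), 3))  # 0.923
```

Per-class accuracies from the same matrix would show which tissue types (e.g. white rot vs. degraded) each modality separates best.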
Collapse
Affiliation(s)
- Romain Fernandez
- IFV, French Institute of Vine and Wine, IFV, INRAE, UMT Géno-Vigne, Institut Agro, 34398, Montpellier, France
- CIRAD, UMR AGAP Institut, 34398, Montpellier, France
- UMR AGAP Institut, Univ Montpellier, CIRAD, INRAE, Institut Agro, Montpellier, France
| | - Loïc Le Cunff
- IFV, French Institute of Vine and Wine, IFV, INRAE, UMT Géno-Vigne, Institut Agro, 34398, Montpellier, France
- UMR AGAP Institut, Univ Montpellier, CIRAD, INRAE, Institut Agro, Montpellier, France
| | | | - Jean-Luc Verdeil
- CIRAD, Phiv, Campus Lavalette, 389 Avenue Agropolis, Montferrier-sur-Lez, France
- UMR AGAP Institut, Univ Montpellier, CIRAD, INRAE, Institut Agro, Montpellier, France
| | - Julie Perry
- CIVC Comité Champagne, 5 Rue Henri Martin, 51200, Epernay, France
| | - Philippe Larignon
- IFV Nîmes. Pôle Rhône-Méditerranée, 7 Avenue Cazeaux, 30230, Rodilhan, France
| | - Anne-Sophie Spilmont
- IFV Pôle Matériel Végétal, Domaine de l'Espiguette, 30240, Le Grau du Roi, France
| | - Philippe Chatelet
- UMR AGAP Institut, Univ Montpellier, CIRAD, INRAE, Institut Agro, Montpellier, France
| | - Maïda Cardoso
- BNIF University of Montpellier, Place Eugène Bataillon, Montpellier, France
| | - Christophe Goze-Bac
- Laboratoire Charles Coulomb, University of Montpellier and CNRS, 34095, Montpellier, France
| | - Cédric Moisy
- IFV, French Institute of Vine and Wine, IFV, INRAE, UMT Géno-Vigne, Institut Agro, 34398, Montpellier, France.
- UMR AGAP Institut, Univ Montpellier, CIRAD, INRAE, Institut Agro, Montpellier, France.
| |
Collapse
|
36
|
Gao Z, Jia S, Li Q, Lu D, Zhang S, Xiao W. [Deep learning approach for automatic segmentation of auricular acupoint divisions]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi 2024; 41:114-120. [PMID: 38403611 PMCID: PMC10894748 DOI: 10.7507/1001-5515.202309010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 09/05/2023] [Revised: 12/28/2023] [Indexed: 02/27/2024]
Abstract
The automatic segmentation of auricular acupoint divisions is the basis for realizing intelligent auricular acupoint therapy. However, due to the large number of ear acupuncture areas and the lack of clear boundaries, existing solutions face challenges in automatically segmenting auricular acupoints. Therefore, a fast and accurate automatic segmentation approach for auricular acupoint divisions is needed. A deep learning-based approach for automatic segmentation of auricular acupoint divisions is proposed, which mainly includes three stages: ear contour detection, anatomical part segmentation and keypoint localization, and image post-processing. In the anatomical part segmentation and keypoint localization stage, K-YOLACT was proposed to improve operating efficiency. Experimental results showed that the proposed approach achieved automatic segmentation of 66 acupoint divisions in the frontal image of the ear, with better segmentation performance than existing solutions. At the same time, the mean average precision (mAP) of anatomical part segmentation by K-YOLACT was 83.2%, the mAP of keypoint localization was 98.1%, and the running speed was significantly improved. The implementation of this approach provides a reliable solution for the accurate segmentation of auricular point images, and provides strong technical support for the modern development of traditional Chinese medicine.
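The mAP figures quoted rest on average precision (AP) per class. One common AP formulation, averaging precision at each rank where a correct detection appears and assuming all ground-truth objects occur in the ranked list, can be sketched with toy data:

```python
# Illustrative average precision (AP) from a ranked list of detections.
# ranked_hits[k] is 1 if the k-th highest-confidence detection matches a
# ground-truth object. This is one common AP formulation, shown on toy data;
# it assumes every ground-truth object appears somewhere in the list.
def average_precision(ranked_hits):
    precisions, tp = [], 0
    for k, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precisions.append(tp / k)  # precision at this recall point
    return sum(precisions) / len(precisions) if precisions else 0.0

hits = [1, 1, 0, 1, 0]   # detections sorted by confidence
print(round(average_precision(hits), 3))  # (1 + 1 + 0.75) / 3 -> 0.917
```

mAP is then the mean of AP over classes (anatomical parts or keypoints here).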
Collapse
Affiliation(s)
- Zhenyue Gao
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, P. R. China
- Beijing Engineering Research Center of Industrial Spectrum Imaging, University of Science and Technology Beijing, Beijing 100083, P. R. China
- Shunde Innovation School, University of Science and Technology Beijing, Shunde, Guangdong 528399, P. R. China
| | - Shijin Jia
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, P. R. China
- Shunde Innovation School, University of Science and Technology Beijing, Shunde, Guangdong 528399, P. R. China
| | - Qingfeng Li
- Mobile Health Management System Engineering Research Center of the Ministry of Education, Hangzhou Normal University, Hangzhou 311121, P. R. China
| | - Dongxin Lu
- Mobile Health Management System Engineering Research Center of the Ministry of Education, Hangzhou Normal University, Hangzhou 311121, P. R. China
| | - Sen Zhang
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, P. R. China
- Beijing Engineering Research Center of Industrial Spectrum Imaging, University of Science and Technology Beijing, Beijing 100083, P. R. China
| | - Wendong Xiao
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, P. R. China
- Beijing Engineering Research Center of Industrial Spectrum Imaging, University of Science and Technology Beijing, Beijing 100083, P. R. China
- Shunde Innovation School, University of Science and Technology Beijing, Shunde, Guangdong 528399, P. R. China
| |
Collapse
|
37
|
Mavridis C, Economopoulos TL, Benetos G, Matsopoulos GK. Aorta Segmentation in 3D CT Images by Combining Image Processing and Machine Learning Techniques. Cardiovasc Eng Technol 2024:10.1007/s13239-024-00720-7. [PMID: 38388764 DOI: 10.1007/s13239-024-00720-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/03/2023] [Accepted: 01/30/2024] [Indexed: 02/24/2024]
Abstract
PURPOSE Aorta segmentation is extremely useful in clinical practice, allowing the diagnosis of numerous pathologies, such as dissections, aneurysms and occlusive disease. In such cases, image segmentation is a prerequisite for applying diagnostic algorithms, which in turn allow the prediction of possible complications and enable risk assessment, which is crucial in saving lives. The aim of this paper is to present a novel fully automatic 3D segmentation method, which combines basic image processing techniques and more advanced machine learning algorithms, for detecting and modelling the aorta in 3D CT imaging data. METHODS An initial intensity threshold-based segmentation procedure is followed by a classification-based segmentation approach, based on a Markov Random Field network. The result of the proposed two-stage segmentation process is modelled and visualized. RESULTS The proposed methodology was applied to 16 3D CT data sets and the extracted aortic segments were reconstructed as 3D models. Segmentation performance was evaluated both qualitatively and quantitatively against other commonly used segmentation techniques, in terms of accuracy relative to the actual aorta, which was defined manually by experts. CONCLUSION The proposed methodology achieved superior segmentation performance, in terms of the accuracy of the extracted 3D aortic model, compared to all of the other techniques evaluated. Therefore, the proposed segmentation scheme could be used in clinical practice, such as in treatment planning and assessment, as it can speed up the evaluation of medical imaging data, which is commonly a lengthy and tedious process.
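The two-stage idea (intensity threshold, then a Markov Random Field refinement) can be sketched on a 2D toy slice. The paper's actual MRF formulation is not given in the abstract; below, the prior is enforced with iterated conditional modes (ICM), and the class means, smoothness weight, and image are all illustrative assumptions.

```python
# Minimal two-stage sketch: an intensity threshold gives an initial binary
# labeling, then an MRF smoothness prior is enforced with iterated
# conditional modes (ICM). All parameters are illustrative assumptions.
def threshold(image, t):
    return [[1 if v >= t else 0 for v in row] for row in image]

def icm(image, labels, mu=(50, 200), beta=6000.0, iters=3):
    h, w = len(image), len(image[0])
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                best, best_cost = labels[i][j], None
                for lab in (0, 1):
                    # Data term: squared distance from the class mean.
                    cost = (image[i][j] - mu[lab]) ** 2
                    # Smoothness term: penalize disagreeing 4-neighbors.
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and labels[ni][nj] != lab:
                            cost += beta
                    if best_cost is None or cost < best_cost:
                        best, best_cost = lab, cost
                labels[i][j] = best
    return labels

img = [
    [210, 205, 198],
    [200,  60, 202],   # 60 is a noisy dark pixel inside the bright region
    [207, 199, 203],
]
labels = icm(img, threshold(img, 128))
print(labels[1][1])  # the MRF prior relabels the noisy pixel as foreground
```

The threshold alone mislabels the noisy pixel; the neighborhood term overrides it, which is the cleanup role the MRF stage plays in the pipeline.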
Collapse
Affiliation(s)
- Christos Mavridis
- Department of Electrical and Computer Engineering, National Technical University of Athens, 15780, Athens, Greece.
| | - Theodore L Economopoulos
- Department of Electrical and Computer Engineering, National Technical University of Athens, 15780, Athens, Greece
| | - Georgios Benetos
- Department of CT and MRI, Lefkos Stavros Clinic, 11528, Athens, Greece
| | - George K Matsopoulos
- Department of Electrical and Computer Engineering, National Technical University of Athens, 15780, Athens, Greece
| |
Collapse
|
38
|
Khosravifard N, Vadiati Saberi B, Khosravifard A, Hendi A, Shadi K, Mihandoust S, Yousefi Z, Mortezaei T, Ghaffari ME. Introducing a new auto edge detection technique capable of revealing cervical root resorption in CBCT scans with pronounced metallic artifacts. Sci Rep 2024; 14:4245. [PMID: 38379025 PMCID: PMC10879123 DOI: 10.1038/s41598-024-54974-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2023] [Accepted: 02/19/2024] [Indexed: 02/22/2024] Open
Abstract
Cervical resorption is a serious threat to the longevity of the teeth. In this study, the Canny edge-detection algorithm was applied to CBCT images to compare the accuracy of original and Canny views for diagnosing cervical resorption in endodontically treated teeth. Intracanal metallic posts were inserted in 60 extracted teeth, which were randomly divided into three groups: control, 0.5 mm, and 1 mm cervical resorption. CBCT scans of the teeth were presented to three observers in both original and Canny formats, and accuracy was determined by receiver operating characteristic (ROC) analysis. The DeLong test was used for paired comparisons, with the significance level set at 0.05. The highest accuracy belonged to Canny images in 1 mm resorption, followed by Canny images in 0.5 mm resorption, original images in 1 mm resorption, and original images in 0.5 mm resorption, respectively. The Canny images were significantly more accurate in the diagnosis of 0.5 mm (p < 0.001) and 1 mm (p = 0.009) resorption. Application of the Canny edge-detection algorithm could be suggested as a new technique for facilitating the diagnosis of cervical resorption in teeth that are negatively affected by metallic artifacts.
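The ROC accuracy compared here reduces to the area under the ROC curve, which equals the probability that a randomly chosen positive case outranks a randomly chosen negative one (the rank-sum formulation also underlying the DeLong test). The observer scores below are fabricated examples.

```python
# Sketch of ROC AUC via the rank-sum (Mann-Whitney) formulation.
# Scores and labels are invented; ties count as half a win.
def roc_auc(scores, labels):
    pos = [s for s, lab in zip(scores, labels) if lab == 1]
    neg = [s for s, lab in zip(scores, labels) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical observer confidence for resorption (1) vs control (0) teeth.
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.3]
labels = [1,   1,   0,    1,   0,   0]
print(round(roc_auc(scores, labels), 3))  # 8 of 9 pairs ranked correctly -> 0.889
```

Comparing two such AUCs on the same teeth is what the DeLong test formalizes.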
Collapse
Affiliation(s)
- Negar Khosravifard
- Department of Oral and Maxillofacial Radiology, Dental Sciences Research Center, School of Dentistry, Guilan University of Medical Sciences, Rasht, Iran.
| | - Bardia Vadiati Saberi
- Department of Periodontics, Dental Sciences Research Center, School of Dentistry, Guilan University of Medical Sciences, Rasht, Iran
| | - Amir Khosravifard
- Department of Mechanical Engineering, Shiraz University, Shiraz, Iran
| | - Amirreza Hendi
- Department of Dental Prosthesis, Dental Sciences Research Center, School of Dentistry, Guilan University of Medical Sciences, Rasht, Iran
| | - Kimia Shadi
- Department of Oral and Maxillofacial Radiology, Dental Sciences Research Center, School of Dentistry, Guilan University of Medical Sciences, Rasht, Iran
| | - Sanaz Mihandoust
- Department of Oral and Maxillofacial Radiology, Dental Sciences Research Center, School of Dentistry, Guilan University of Medical Sciences, Rasht, Iran
| | - Zahra Yousefi
- Department of Oral and Maxillofacial Radiology, Dental Caries Prevention Research Center, School of Dentistry, Qazvin University of Medical Sciences, Qazvin, Iran
| | - Tahereh Mortezaei
- Department of Oral and Maxillofacial Radiology, Dental Sciences Research Center, School of Dentistry, Guilan University of Medical Sciences, Rasht, Iran
| | | |
Collapse
|
39
|
Han MM, Li XY, Yi XY, Zheng YS, Xia WL, Liu YF, Wang QX. Automatic recognition of depression based on audio and video: A review. World J Psychiatry 2024; 14:225-233. [PMID: 38464777 PMCID: PMC10921287 DOI: 10.5498/wjp.v14.i2.225] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/25/2023] [Revised: 12/18/2023] [Accepted: 01/24/2024] [Indexed: 02/06/2024] Open
Abstract
Depression is a common mental health disorder. With current depression detection methods, specialized physicians often engage in conversations and physiological examinations based on standardized scales as auxiliary measures for depression assessment. Non-biological markers, typically classified as verbal or non-verbal and deemed crucial evaluation criteria for depression, have not been effectively utilized. Specialized physicians usually require extensive training and experience to capture changes in these features. Advancements in deep learning technology have provided technical support for capturing non-biological markers. Several researchers have proposed automatic depression estimation (ADE) systems based on audio and video to assist physicians in capturing these features and conducting depression screening. This article summarizes commonly used public datasets and recent research on audio- and video-based ADE from three perspectives: datasets, deficiencies in existing research, and future development directions.
Collapse
Affiliation(s)
- Meng-Meng Han
- Shandong Mental Health Center, Shandong University, Jinan 250014, Shandong Province, China
- Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, Shandong Province, China
| | - Xing-Yun Li
- Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, Shandong Province, China
- Shandong Engineering Research Center of Big Data Applied Technology, Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, Shandong Province, China
- Shandong Provincial Key Laboratory of Computer Networks, Shandong Fundamental Research Center for Computer Science, Jinan 250353, Shandong Province, China
| | - Xin-Yu Yi
- Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, Shandong Province, China
- Shandong Engineering Research Center of Big Data Applied Technology, Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, Shandong Province, China
- Shandong Provincial Key Laboratory of Computer Networks, Shandong Fundamental Research Center for Computer Science, Jinan 250353, Shandong Province, China
| | - Yun-Shao Zheng
- Department of Ward Two, Shandong Mental Health Center, Shandong University, Jinan 250014, Shandong Province, China
| | - Wei-Li Xia
- Shandong Mental Health Center, Shandong University, Jinan 250014, Shandong Province, China
| | - Ya-Fei Liu
- Shandong Mental Health Center, Shandong University, Jinan 250014, Shandong Province, China
| | - Qing-Xiang Wang
- Shandong Mental Health Center, Shandong University, Jinan 250014, Shandong Province, China
| |
Collapse
|
40
|
Chen H, Tian Y, Zhang S, Wang X, Qu H. Image processing-based online analysis and feedback control system for droplet dripping process. Int J Pharm 2024; 651:123736. [PMID: 38142872 DOI: 10.1016/j.ijpharm.2023.123736] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Revised: 12/01/2023] [Accepted: 12/21/2023] [Indexed: 12/26/2023]
Abstract
Droplets find wide application across diverse industries, where maintaining their quality is paramount. Precise control over the substance content within droplets demands non-destructive, online analysis techniques, such as Process Analytical Technology (PAT), often integrated with control strategies. In this context, the present study focuses on controlling droplet quality during the dripping process of pills. Leveraging the dripping and image acquisition systems established in previous research, a novel feedback control system centered on image processing was devised for the quality control of dripping pills, and its efficacy was assessed, yielding satisfactory outcomes. The proposed system facilitates real-time monitoring of pill weight through the analysis of droplet images during the dripping process, thereby offering real-time feedback control of pill weight. Importantly, this system holds potential for broader applications beyond the scope of this study.
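The feedback loop described (image, weight estimate, control action) can be sketched abstractly. The paper's actual weight model and controller are not specified in the abstract; the spherical-droplet estimate, proportional controller, and every constant below are hypothetical.

```python
# Hedged sketch of an image-based feedback loop: estimate pill weight from
# the droplet diameter measured in an image, then apply a proportional
# correction to a process setpoint. All constants are hypothetical.
import math

def weight_from_diameter(d_mm, density_g_per_mm3=0.0012):
    # Treat the droplet as a sphere; weight = density * volume.
    volume = math.pi / 6 * d_mm ** 3
    return density_g_per_mm3 * volume

def p_controller(target_g, measured_g, setpoint, gain=5.0):
    # Nudge the dripping setpoint (e.g. valve opening) toward the target weight.
    return setpoint + gain * (target_g - measured_g)

w = weight_from_diameter(4.0)            # diameter measured from an image, in mm
new_setpoint = p_controller(0.045, w, setpoint=1.0)
print(round(w, 4))  # 0.0402
```

A real PAT loop would add calibration against an analytical balance and filtering of the per-droplet measurements.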
Collapse
Affiliation(s)
- Hang Chen
- Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310058, China
| | - Ying Tian
- Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310058, China
| | - Sheng Zhang
- Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310058, China
| | - Xiaoping Wang
- Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310058, China
| | - Haibin Qu
- Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310058, China.
| |
Collapse
|
41
|
Laurie MA, Zhou SR, Islam MT, Shkolyar E, Xing L, Liao JC. Bladder Cancer and Artificial Intelligence: Emerging Applications. Urol Clin North Am 2024; 51:63-75. [PMID: 37945103 PMCID: PMC10697017 DOI: 10.1016/j.ucl.2023.07.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2023]
Abstract
Bladder cancer is a common and heterogeneous disease that poses a significant burden to the patient and health care system. Major unmet needs include effective early detection strategy, imprecision of risk stratification, and treatment-associated morbidities. The existing clinical paradigm is imprecise, which results in missed tumors, suboptimal therapy, and disease progression. Artificial intelligence holds immense potential to address many unmet needs in bladder cancer, including early detection, risk stratification, treatment planning, quality assessment, and outcome prediction. Despite recent advances, extensive work remains to affirm the efficacy of artificial intelligence as a decision-making tool for bladder cancer management.
Collapse
Affiliation(s)
- Mark A Laurie
- Department of Urology, Stanford University School of Medicine, 453 Quarry Road, Mail Code 5656, Palo Alto, CA 94304, USA; Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive Room G204, Stanford, CA 94305-5847, USA; Veterans Affairs Palo Alto Health Care System, Palo Alto, CA 94304, USA; Institute for Computational and Mathematical Engineering, Stanford University School of Engineering, Stanford, CA 94305, USA
| | - Steve R Zhou
- Department of Urology, Stanford University School of Medicine, 453 Quarry Road, Mail Code 5656, Palo Alto, CA 94304, USA
| | - Md Tauhidul Islam
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive Room G204, Stanford, CA 94305-5847, USA
| | - Eugene Shkolyar
- Department of Urology, Stanford University School of Medicine, 453 Quarry Road, Mail Code 5656, Palo Alto, CA 94304, USA; Veterans Affairs Palo Alto Health Care System, Palo Alto, CA 94304, USA
| | - Lei Xing
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive Room G204, Stanford, CA 94305-5847, USA
| | - Joseph C Liao
- Department of Urology, Stanford University School of Medicine, 453 Quarry Road, Mail Code 5656, Palo Alto, CA 94304, USA; Veterans Affairs Palo Alto Health Care System, Palo Alto, CA 94304, USA.
| |
Collapse
|
42
|
Khodadadi R, Eghbal M, Ofoghi H, Balaei A, Tamayol A, Abrinia K, Sanati-Nezhad A, Samandari M. An integrated centrifugal microfluidic strategy for point-of-care complete blood counting. Biosens Bioelectron 2024; 245:115789. [PMID: 37979545 DOI: 10.1016/j.bios.2023.115789] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2023] [Revised: 09/26/2023] [Accepted: 10/24/2023] [Indexed: 11/20/2023]
Abstract
Centrifugal microfluidics holds the potential to revolutionize point-of-care (POC) testing by simplifying laboratory tests through automating fluid and cell manipulation within microfluidic channels. This technology can facilitate blood testing, the most frequent clinical test, at the POC. However, an integrated centrifugal microfluidic device for complete blood counting (CBC) has not yet been fully realized. To address this, we propose an integrated portable system comprising a centrifuge and a hybrid microfluidic disc specifically designed for CBC analysis at the POC. The disc enables the implementation of various spin profiles in different stages of CBC to facilitate in-situ cell separation, solution metering and mixing, and differential cell counting. Furthermore, our system is coupled with a custom script that automates the process and ensures precise quantification of cells using light and fluorescent images captured from the detection chamber of the disc. We demonstrate a close correlation between the proposed method and the hematology analyzer, considered the gold standard, for quantifying hematocrit (R² = 0.99), white blood cell count (R² = 0.98), white blood cell differential count (granulocyte/agranulocyte; R² = 0.89), red blood cell count (R² = 0.97), and mean corpuscular volume (R² = 0.94). The integration of our portable system offers significant advantages, enabling more accessible and affordable CBC testing at the POC. Considering the simplicity, affordability (∼$250 capital cost and <$2 operational cost per test), as well as low power consumption (>100 tests using a typical 24 V/10 Ah battery), this system has the potential to enhance healthcare delivery, particularly in resource-limited settings and remote areas where access to traditional laboratory facilities is limited.
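The R² values quoted express agreement between the device and the hematology analyzer. One common reading of R² is the squared Pearson correlation of the paired measurements, sketched below; the paired counts are invented, and the paper may use a regression-based R² instead.

```python
# Sketch of an agreement metric: squared Pearson correlation between
# device counts and reference analyzer counts. Data are invented.
def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)

device =   [4.2, 5.1, 6.0, 7.3, 8.8]   # e.g. WBC counts, 10^3 cells/uL
analyzer = [4.0, 5.3, 6.1, 7.0, 9.0]
print(round(r_squared(device, analyzer), 3))
```

High R² alone does not rule out systematic bias, which is why method-comparison studies often pair it with a Bland-Altman analysis.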
Collapse
Affiliation(s)
- Reza Khodadadi
- School of Mechanical Engineering, College of Engineering, University of Tehran, Tehran, Iran
| | - Manouchehr Eghbal
- Biotechnology Department, Iranian Research Organization for Science and Technology, Tehran, Iran
| | - Hamideh Ofoghi
- Biotechnology Department, Iranian Research Organization for Science and Technology, Tehran, Iran
| | - Alireza Balaei
- Biotechnology Department, Iranian Research Organization for Science and Technology, Tehran, Iran
| | - Ali Tamayol
- Department of Biomedical Engineering, University of Connecticut Health Center, Farmington, CT, 06030, USA
| | - Karen Abrinia
- School of Mechanical Engineering, College of Engineering, University of Tehran, Tehran, Iran.
| | - Amir Sanati-Nezhad
- Department of Biomedical Engineering, University of Calgary, Calgary, Alberta, T2N 1N4, Canada.
| | - Mohamadmahdi Samandari
- Department of Biomedical Engineering, University of Connecticut Health Center, Farmington, CT, 06030, USA.
| |
Collapse
|
43
|
Abstract
This article presents a comprehensive dataset featuring ten distinct hen breeds, sourced from various regions, capturing the unique characteristics and traits of each breed. The dataset encompasses Bielefeld, Blackorpington, Brahma, Buckeye, Fayoumi, Leghorn, Newhampshire, Plymouthrock, Sussex, and Turken breeds, offering a diverse representation of poultry commonly bred worldwide. A total of 1010 original JPG images were meticulously collected, showcasing the physical attributes, feather patterns, and distinctive features of each hen breed. These images were subsequently standardized, resized, and converted to PNG format for consistency within the dataset. The compilation, although unevenly distributed across the breeds, provides a rich resource, serving as a foundation for research and applications in poultry science, genetics, and agricultural studies. This dataset holds significant potential to contribute to various fields by enabling the exploration and analysis of unique characteristics and genetic traits across different hen breeds, thereby supporting advancements in poultry breeding, farming, and genetic research.
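The standardization step described above (resizing the original JPG images and converting them to PNG) could be sketched as follows; this is an illustrative Pillow-based snippet, not the authors' pipeline, and the directory names and target size are assumptions:

```python
from pathlib import Path

from PIL import Image  # Pillow

def standardize_images(src_dir, dst_dir, size=(224, 224)):
    """Resize every JPG in src_dir and save it as a PNG of the given
    (width, height) in dst_dir, mirroring the dataset's JPG-to-PNG step."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for jpg in sorted(Path(src_dir).glob("*.jpg")):
        with Image.open(jpg) as im:
            im.convert("RGB").resize(size).save(dst / (jpg.stem + ".png"))
```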
Collapse
Affiliation(s)
- Galib Muhammad Shahriar Himel
- Department of Computer Science, American International University-Bangladesh, Dhaka, Bangladesh
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology (BUBT), Dhaka, Bangladesh
- Department of Physics, Jahangirnagar University, Dhaka, Bangladesh
| | - Md Masudul Islam
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology (BUBT), Dhaka, Bangladesh
- Department of Computer Science and Engineering, Jahangirnagar University, Dhaka, Bangladesh
| |
Collapse
|
44
|
Saputra DE, Suandi D, Sunarto JW, Michael P. Aerial images and water quality dataset for fishpond's condition monitoring. Data Brief 2024; 52:110009. [PMID: 38226040 PMCID: PMC10788197 DOI: 10.1016/j.dib.2023.110009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Revised: 12/11/2023] [Accepted: 12/21/2023] [Indexed: 01/17/2024] Open
Abstract
This dataset is part of fundamental research to produce IoT monitoring for fishponds. The data consist of measurements of pH, total dissolved solids (TDS), and water temperature obtained with handheld sensor devices at several locations at different times. The data also include images taken by drones at a consistent altitude; these images are linked to the collected sensor data. In this research, the data will be used to monitor the health of fishponds through visual data, and they can be used for correlation analysis between visual and sensor data. The hypothesis is that the visual appearance of the pond (its colour) is affected by the amount of mixed solids (mud and other organic material) in the water, which is reflected in the TDS level. In addition, the data can be used for initial investigations into machine learning models that recognize pond condition through image analysis.
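The hypothesized link between pond colour and TDS can be tested with a simple correlation analysis. The sketch below, an assumption about how one might use this dataset rather than the authors' method, reduces each drone image to a mean colour-channel value and correlates it with the paired TDS readings:

```python
import numpy as np

def mean_channel(image, channel=1):
    """Mean intensity of one colour channel of an H x W x 3 image array
    (channel 1 = green), used as a crude visual turbidity feature."""
    return float(np.asarray(image)[:, :, channel].mean())

def visual_tds_correlation(images, tds_readings, channel=1):
    """Pearson correlation between per-image colour means and paired
    TDS sensor values."""
    features = [mean_channel(im, channel) for im in images]
    return float(np.corrcoef(features, tds_readings)[0, 1])
```

A correlation near +1 or −1 would support the hypothesis that pond colour tracks the suspended-solids load.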
Collapse
Affiliation(s)
- Dany Eka Saputra
- Computer Science Department, School of Computer Science, Bina Nusantara University - Bandung Campus, Jakarta, Indonesia
| | - Dani Suandi
- Computer Science Department, School of Computer Science, Bina Nusantara University - Bandung Campus, Jakarta, Indonesia
| | - Joshua Wenata Sunarto
- Computer Science Department, School of Computer Science, Bina Nusantara University - Bandung Campus, Jakarta, Indonesia
| | - Petra Michael
- Computer Science Department, School of Computer Science, Bina Nusantara University - Bandung Campus, Jakarta, Indonesia
| |
Collapse
|
45
|
Sledevič T, Matuzevičius D. Labeled dataset for bee detection and direction estimation on entrance to beehive. Data Brief 2024; 52:110060. [PMID: 38304387 PMCID: PMC10831503 DOI: 10.1016/j.dib.2024.110060] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Accepted: 01/08/2024] [Indexed: 02/03/2024] Open
Abstract
The datasets for bee detection, pose estimation, and segmentation consist of organized folders containing both images and corresponding labels. The detection dataset comprises 7200 individual frames collected at 8 different beehives. The pose dataset contains 400 images of bees annotated with two key points per bee: the first marks the head and the second marks the stinger. All frames have a resolution of 1920×1080 pixels. The segmentation dataset contains 2300 cropped images of bees, annotated with triangular markers that aid in estimating directional vectors. The labels in all proposed datasets are saved in YOLO format. The labeling process was automated by training a YOLOv8 model on a set of manually annotated images for bee detection; after detection, all labels were visually revised and corrected. Frames were captured using a stationary camera mounted 30 cm above the beehive landing boards. The data collection period spanned June to July 2023 in the Vilnius district.
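Given the two key points per bee, a direction estimate falls out directly from the head–stinger vector. The following sketch (an illustration of the geometry, not code from the dataset) returns the heading in degrees, with the caveat that image y-coordinates grow downwards:

```python
import math

def bee_direction_deg(head, stinger):
    """Heading of a bee in degrees, as the angle of the vector from the
    stinger key point to the head key point. 0 deg points along +x;
    angles are normalized to [0, 360)."""
    dx = head[0] - stinger[0]
    dy = head[1] - stinger[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0
```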
Collapse
Affiliation(s)
- Tomyslav Sledevič
- Department of Electronic Systems, Faculty of Electronics, Vilnius Gediminas Technical University, Saulėtekio al. 11, LT-10223 Vilnius, Lithuania
| | - Dalius Matuzevičius
- Department of Electronic Systems, Faculty of Electronics, Vilnius Gediminas Technical University, Saulėtekio al. 11, LT-10223 Vilnius, Lithuania
| |
Collapse
|
46
|
Jalal S, Nicolaou S. Advanced Imaging Technology: Photon Counting CT. Can Assoc Radiol J 2024; 75:20-21. [PMID: 37119123 DOI: 10.1177/08465371231172738] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/30/2023] Open
Affiliation(s)
- Sabeena Jalal
- Department of Radiology, Vancouver General Hospital, Vancouver, Canada
| | - Savvas Nicolaou
- Department of Radiology, Vancouver General Hospital, Vancouver, Canada
| |
Collapse
|
47
|
Hagerty JR, Nambisan A, Stanley RJ, Stoecker WV. Fusion of Deep Learning with Conventional Imaging Processing: Does It Bring Artificial Intelligence Closer to the Clinic? J Invest Dermatol 2024:S0022-202X(23)03212-8. [PMID: 38310497 DOI: 10.1016/j.jid.2023.10.043] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2023] [Accepted: 10/30/2023] [Indexed: 02/05/2024]
Affiliation(s)
- Jason R Hagerty
- S&A Technologies, Rolla, Missouri, USA; Missouri University of Science and Technology, Rolla, Missouri, USA
| | - Anand Nambisan
- Missouri University of Science and Technology, Rolla, Missouri, USA
| | - R Joe Stanley
- Missouri University of Science and Technology, Rolla, Missouri, USA
| | - William V Stoecker
- S&A Technologies, Rolla, Missouri, USA; Missouri University of Science and Technology, Rolla, Missouri, USA.
| |
Collapse
|
48
|
Hendriks P, van Dijk KM, Boekestijn B, Broersen A, van Duijn-de Vreugd JJ, Coenraad MJ, Tushuizen ME, van Erkel AR, van der Meer RW, van Rijswijk CS, Dijkstra J, de Geus-Oei LF, Burgmans MC. Intraprocedural assessment of ablation margins using computed tomography co-registration in hepatocellular carcinoma treatment with percutaneous ablation: IAMCOMPLETE study. Diagn Interv Imaging 2024; 105:57-64. [PMID: 37517969 DOI: 10.1016/j.diii.2023.07.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Revised: 06/20/2023] [Accepted: 07/18/2023] [Indexed: 08/01/2023]
Abstract
PURPOSE The primary objective of this study was to determine the feasibility of ablation margin quantification during thermal ablation (TA) of hepatocellular carcinoma (HCC) using a standardized scanning protocol and a rigid registration algorithm. Secondary objectives were to determine the inter- and intra-observer variability of tumor segmentation and of the quantification of the minimal ablation margin (MAM). MATERIALS AND METHODS Twenty patients who underwent thermal ablation for HCC were included: thirteen men and seven women with a mean age of 67.1 ± 10.8 (standard deviation [SD]) years (age range: 49.1-81.1 years). All patients underwent contrast-enhanced computed tomography examination under general anesthesia directly before and after TA, with preoxygenated breath hold. The examinations were analyzed by radiologists using rigid registration software; registration was deemed feasible when accurate rigid co-registration could be obtained. Inter- and intra-observer rates of tumor segmentation and MAM quantification were calculated, and MAM values were correlated with local tumor progression (LTP) after one year of follow-up. RESULTS Co-registration of pre- and post-ablation images was feasible in 16 of 20 patients (80%) and 26 of 31 tumors (84%). Mean Dice similarity coefficients for inter- and intra-observer variability of tumor segmentation were 0.815 and 0.830, respectively. Mean MAM was 0.63 ± 3.589 (SD) mm (range: −6.26 to 6.65 mm). LTP occurred in four of 20 patients (20%). The mean MAM for patients who developed LTP was −4.00 mm, compared to 0.727 mm for patients who did not. CONCLUSION Ablation margin quantification is feasible using a standardized contrast-enhanced computed tomography protocol. Interpretation of the MAM was hampered by tissue shrinkage during TA. Further validation in a larger cohort should lead to meaningful cut-off values for the technical success of TA.
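The Dice similarity coefficients quoted above (0.815 and 0.830) measure overlap between two segmentations of the same tumor. A standard implementation for binary masks, shown as a generic sketch rather than the study's own software, is:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two
    binary segmentation masks of equal shape."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * np.logical_and(a, b).sum() / total
```

Identical masks score 1.0, disjoint masks score 0.0, so values above 0.8 indicate strong observer agreement.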
Collapse
Affiliation(s)
- Pim Hendriks
- Department of Radiology, Leiden University Medical Center, 2333 ZA, Leiden, the Netherlands.
| | - Kiki M van Dijk
- Department of Radiology, Leiden University Medical Center, 2333 ZA, Leiden, the Netherlands
| | - Bas Boekestijn
- Department of Radiology, Leiden University Medical Center, 2333 ZA, Leiden, the Netherlands
| | - Alexander Broersen
- LKEB Laboratory of Clinical and Experimental Imaging, Department of Radiology, Leiden University Medical Center, 2333 ZA, Leiden, the Netherlands
| | | | - Minneke J Coenraad
- Department of Gastroenterology and Hepatology, Leiden University Medical Center, 2333 ZA Leiden, the Netherlands
| | - Maarten E Tushuizen
- Department of Gastroenterology and Hepatology, Leiden University Medical Center, 2333 ZA Leiden, the Netherlands
| | - Arian R van Erkel
- Department of Radiology, Leiden University Medical Center, 2333 ZA, Leiden, the Netherlands
| | - Rutger W van der Meer
- Department of Radiology, Leiden University Medical Center, 2333 ZA, Leiden, the Netherlands
| | | | - Jouke Dijkstra
- LKEB Laboratory of Clinical and Experimental Imaging, Department of Radiology, Leiden University Medical Center, 2333 ZA, Leiden, the Netherlands
| | - Lioe-Fee de Geus-Oei
- Department of Radiology, Leiden University Medical Center, 2333 ZA, Leiden, the Netherlands; Biomedical Photonic Imaging Group, TechMed Centre, University of Twente, 7522 NB, Enschede, the Netherlands; Department of Radiation Science & Technology, Delft University of Technology, 2628 CD, Delft, the Netherlands
| | - Mark C Burgmans
- Department of Radiology, Leiden University Medical Center, 2333 ZA, Leiden, the Netherlands
| |
Collapse
|
49
|
Park HS, Shim MJ, Kim Y, Ko TY, Choi JH, Ahn YC. Multimodal real-time imaging with laser speckle contrast and fluorescent contrast. Photodiagnosis Photodyn Ther 2024; 45:103912. [PMID: 38043762 DOI: 10.1016/j.pdpdt.2023.103912] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2023] [Revised: 11/16/2023] [Accepted: 11/28/2023] [Indexed: 12/05/2023]
Abstract
INTRODUCTION Laser speckle contrast imaging (LSCI) can produce real-time 2D perfusion maps non-invasively. However, LSCI remains difficult to use in general clinical applications because of its sensitivity to movement and its limitations in blood flow analysis. To overcome this, fluorescence imaging (FI) was combined with LSCI using a light source with a wavelength of 785 nm in the near-infrared (NIR) region and validated for visualizing blood perfusion in real time. MATERIALS AND METHODS The system was evaluated using Intralipid and indocyanine green (ICG) in a flow phantom with three tubes, with the flow rate controlled in the 0-150 μl/min range. First, real-time LSCI was monitored and the change in speckle contrast on reperfusion was measured. Then, blood perfusion in a rabbit ear was visualized non-invasively after intravenous injection of five ICG solutions with concentrations from 128 μM to 3.22 mM. RESULTS The combined system processed laser speckle images at about 37-38 fps, and the fluorescence of ICG and the changes in speckle contrast due to Intralipid, a light scatterer, were confirmed simultaneously. In addition, real-time contrast variation and fluorescent images of the rabbit's blood perfusion were obtained. CONCLUSIONS The aim of this study is to provide a real-time diagnostic imaging system that can be used in general clinical applications. LSCI and FI are combined complementarily to observe tissue perfusion using a single NIR light source. The combined system achieved real-time, non-invasive visualization of blood perfusion.
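The speckle contrast that LSCI tracks is conventionally defined as K = σ/μ, the ratio of the standard deviation to the mean intensity in a small sliding window; faster flow blurs the speckle pattern and lowers K. A straightforward (unoptimized) sketch of that computation, not the authors' 37-38 fps implementation, is:

```python
import numpy as np

def speckle_contrast_map(image, window=7):
    """Spatial laser speckle contrast K = sigma/mu computed in a sliding
    window over a grayscale intensity image. Lower K indicates more
    speckle blurring, i.e. faster flow."""
    img = np.asarray(image, dtype=float)
    pad = window // 2
    h, w = img.shape
    k = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            patch = img[max(0, i - pad):i + pad + 1,
                        max(0, j - pad):j + pad + 1]
            mu = patch.mean()
            k[i, j] = patch.std() / mu if mu > 0 else 0.0
    return k
```

A real-time system would replace the Python loops with box filters or a GPU kernel, but the per-pixel statistic is the same.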
Collapse
Affiliation(s)
- Hyun-Seo Park
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Busan 48513, South Korea
| | - Min-Jae Shim
- Department of Biomedical Engineering, Pukyong National University, Busan 48513, South Korea
| | - Yikeun Kim
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea
| | - Taek-Yong Ko
- Kosin University Gospel Hospital, Busan 49267, South Korea
| | - Jin-Hyuk Choi
- Kosin University Gospel Hospital, Busan 49267, South Korea
| | - Yeh-Chan Ahn
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Busan 48513, South Korea; Department of Biomedical Engineering, Pukyong National University, Busan 48513, South Korea.
| |
Collapse
|
50
|
Sampaio-Oliveira M, Marinho-Vieira LE, Barros-Costa M, Oliveira ML. Can Digital Enhancement Restore the Image Quality of Phosphor Plate-Based Radiographs Partially Damaged by Ambient Light? J Imaging Inform Med 2024; 37:145-150. [PMID: 38343236 DOI: 10.1007/s10278-023-00922-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Revised: 09/05/2023] [Accepted: 09/06/2023] [Indexed: 03/02/2024]
Abstract
This study assessed the effect of digital enhancement on the image quality of radiographs obtained with photostimulable phosphor (PSP) plates partially damaged by ambient light. Radiographs of an aluminum step wedge were obtained using the VistaScan and Express systems. Half of each PSP plate was exposed to ambient light for 0, 10, 30, 60, or 90 s before scanning. The resulting radiographs were exported with and without digital enhancement. Metrics for brightness, contrast, and contrast-to-noise ratio (CNR) were derived, and the ratio of each metric between the exposed-to-light and non-exposed-to-light halves of the radiographs was calculated. The ratios of the digitally enhanced radiographs were subtracted from those of the unenhanced radiographs and compared. For the VistaScan system, digital enhancement partially restored brightness, contrast, and CNR. For the Express system, digital enhancement restored only CNR, not the impact of ambient light on brightness and contrast. Specifically, digital enhancement restored 23.48% of brightness for the VistaScan, while percentages below 1% were observed for the Express. Digital enhancement restored 53.25% of image contrast for the VistaScan and 5.79% for the Express; 40.71% of CNR was restored for the VistaScan and 35% for the Express. Digital enhancement can partially restore the damage caused by ambient light to the brightness and contrast of PSP-based radiographs obtained with the VistaScan, as well as to the CNR for both the VistaScan and Express systems. The exposure of PSP plates to light can lead to unnecessary retakes and increased patient exposure to X-rays.
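CNR definitions vary between studies; the paper does not give its formula, so the sketch below uses one common convention, the absolute difference of region means divided by the background standard deviation, applied to two regions of interest from a radiograph:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between two regions of interest, using the
    convention |mean_s - mean_b| / std_b. Other conventions exist (e.g.
    pooled noise in the denominator)."""
    s = np.asarray(signal_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return float(abs(s.mean() - b.mean()) / b.std())
```

Computing this metric separately on the exposed-to-light and protected halves of a plate, then taking their ratio, mirrors the per-metric ratios described in the abstract.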
Collapse
Affiliation(s)
- Matheus Sampaio-Oliveira
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira, 901, Piracicaba-SP, 13414-903, Brazil.
| | - Luiz Eduardo Marinho-Vieira
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira, 901, Piracicaba-SP, 13414-903, Brazil
| | - Matheus Barros-Costa
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira, 901, Piracicaba-SP, 13414-903, Brazil
| | - Matheus L Oliveira
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira, 901, Piracicaba-SP, 13414-903, Brazil
| |
Collapse
|