1
Jaganathan Y, Sanober S, Aldossary SMA, Aldosari H. Validating Wound Severity Assessment via Region-Anchored Convolutional Neural Network Model for Mobile Image-Based Size and Tissue Classification. Diagnostics (Basel) 2023; 13:2866. [PMID: 37761233; PMCID: PMC10529166; DOI: 10.3390/diagnostics13182866]
Abstract
Evaluating and tracking the size of a wound is a crucial step in wound assessment. The measurement of various indicators on wounds over time plays a vital role in treating and managing chronic wounds. This article introduces the concept of utilizing mobile device-captured photographs to address this challenge. The research explores the application of digital technologies in the treatment of chronic wounds, offering tools to assist healthcare professionals in enhancing patient care and decision-making. Additionally, it investigates the use of deep learning (DL) algorithms along with computer vision techniques to enhance the validation results of wounds. The proposed method involves tissue classification as well as a visual recognition system. The wound's region of interest (RoI) is determined using superpixel techniques, enabling the calculation of the wounded zone's area. A classification model based on the Region-Anchored CNN framework is employed to detect and differentiate wounds and classify their tissues. The outcome demonstrates that the suggested DL method, combined with visual methodologies to detect the shape of a wound and measure its size, achieves exceptional results. Using ResNet50, an accuracy of 0.85 is obtained, while the tissue classification CNN exhibits a median deviation error of 2.91 and a precision of 0.96. These outcomes highlight the effectiveness of the methodology in real-world scenarios and its potential to enhance therapeutic treatments for patients with chronic wounds.
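The size-measurement step described above reduces, at its core, to counting RoI pixels and applying a physical scale. A minimal sketch, not the paper's implementation: the binary mask and the mm-per-pixel calibration value below are illustrative assumptions.

```python
import numpy as np

def wound_area_mm2(roi_mask, mm_per_px):
    """Estimate wound area from a binary region-of-interest mask.

    roi_mask  : 2-D boolean array, True inside the wound region
    mm_per_px : physical size of one pixel edge in millimetres
    """
    return int(np.count_nonzero(roi_mask)) * mm_per_px ** 2

# Toy example: a 10x10 px wound patch imaged at 0.5 mm/px.
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True
area = wound_area_mm2(mask, 0.5)  # 100 px * 0.25 mm^2/px = 25.0 mm^2
```

In practice the calibration factor would come from a reference marker in the mobile photograph rather than being known a priori.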
Affiliation(s)
- Yogapriya Jaganathan
- Department of Computer Science and Engineering, Kongunadu College of Engineering and Technology, Trichy 621215, India
- Sumaya Sanober
- Department of Computer Science, Prince Sattam Bin Abdulaziz University, Wadi al dwassir 1190, Saudi Arabia
- Sultan Mesfer A Aldossary
- Department of Computer Sciences, College of Arts and Sciences, Prince Sattam Bin Abdulaziz University, Wadi al dwassir 1190, Saudi Arabia
- Huda Aldosari
- Department of Computer Science, Prince Sattam Bin Abdulaziz University, Wadi al dwassir 1190, Saudi Arabia
2
Kavran D, Mongus D, Žalik B, Lukač N. Graph Neural Network-Based Method of Spatiotemporal Land Cover Mapping Using Satellite Imagery. Sensors (Basel) 2023; 23:6648. [PMID: 37514942; PMCID: PMC10384354; DOI: 10.3390/s23146648]
Abstract
Multispectral satellite imagery offers a new perspective for spatial modelling, change detection and land cover classification. The increased demand for accurate classification of geographically diverse regions has led to advances in object-based methods. A novel spatiotemporal method is presented for object-based land cover classification of satellite imagery using a Graph Neural Network. This paper introduces an innovative representation of sequential satellite images as a directed graph, built by connecting segmented land regions through time. The method's modular node classification pipeline utilises a Convolutional Neural Network as a multispectral image feature extraction network and a Graph Neural Network as a node classification model. To evaluate the performance of the proposed method, we utilised EfficientNetV2-S for feature extraction and the GraphSAGE algorithm with Long Short-Term Memory aggregation for node classification. Applied to Sentinel-2 L2A imagery, the method produced complete 4-year intermonthly land cover classification maps for two regions: Graz in Austria, and the region of Portorož, Izola and Koper in Slovenia. The regions were classified with Corine Land Cover classes. In the level 2 classification of the Graz region, the method outperformed the state-of-the-art UNet model, achieving an average F1-score of 0.841 and an accuracy of 0.831, as opposed to UNet's 0.824 and 0.818, respectively. Similarly, the method demonstrated superior performance over UNet in both regions under the level 1 classification, which contains fewer classes. Individual classes were classified with accuracies up to 99.17%.
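Connecting segmented land regions through time into a directed graph can be sketched with a simple mask-overlap rule. This is a hedged illustration, not the paper's exact linking criterion: the IoU threshold and the toy masks are assumptions.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def temporal_edges(masks_t, masks_t1, thresh=0.3):
    """Directed edges (i -> j) between region masks of consecutive
    timesteps whose spatial overlap exceeds `thresh` (IoU)."""
    edges = []
    for i, a in enumerate(masks_t):
        for j, b in enumerate(masks_t1):
            if iou(a, b) > thresh:
                edges.append((i, j))
    return edges

# Two timesteps, two regions each; region 0 barely moves, region 1 jumps.
m = np.zeros((2, 8, 8), dtype=bool); m[0, :4, :4] = True; m[1, 4:, 4:] = True
n = np.zeros((2, 8, 8), dtype=bool); n[0, :4, 1:5] = True; n[1, :2, 4:] = True
edges = temporal_edges(m, n)  # only region 0 links forward
```

In the paper, nodes built this way carry CNN-extracted multispectral features and are classified by the GNN; here only the graph-construction idea is shown.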
Affiliation(s)
- Domen Kavran
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška Cesta 46, 2000 Maribor, Slovenia
- Domen Mongus
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška Cesta 46, 2000 Maribor, Slovenia
- Borut Žalik
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška Cesta 46, 2000 Maribor, Slovenia
- Niko Lukač
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška Cesta 46, 2000 Maribor, Slovenia
3
Lin T, Lin J, Huang G, Yuan X, Zhong G, Xie F, Li J. Improving breast tumor segmentation via shape-wise prior-guided information on cone-beam breast CT images. Phys Med Biol 2023. [PMID: 37364585; DOI: 10.1088/1361-6560/ace1cf]
Abstract
Due to the blurry edges and uneven shape of breast tumors, breast tumor segmentation can be a challenging task. Recently, approaches based on deep convolutional networks (DCNs) have achieved satisfying segmentation results. However, the learned shape information of breast tumors might be lost owing to the successive convolution and down-sampling operations, resulting in limited performance. To this end, we propose a novel Shape-Guided Segmentation (SGS) framework that guides the segmentation networks to be shape-sensitive to breast tumors by prior shape information. Different from usual segmentation networks, we guide the networks to model a shape-shared representation under the assumption that shape information of breast tumors can be shared among samples. Specifically, on the one hand, we propose a Shape Guiding Block (SGB) to provide shape guidance through a superpixel pooling-unpooling operation and an attention mechanism. On the other hand, we introduce the Shared Classification Layer (SCL) to address the problems brought by the SGB, including feature inconsistency and additional computational cost. Additionally, the proposed SGB and SCL can be effortlessly incorporated into mainstream segmentation networks (e.g., UNet) to compose the SGS, facilitating compact shape-friendly representation learning. Experiments conducted on a private dataset and a public dataset demonstrate the effectiveness of the SGS compared to other advanced methods. The source code is made available at https://github.com/TxLin7/Shape-Seg.
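The superpixel pooling-unpooling idea mentioned in this abstract can be sketched in NumPy: average a feature map within each superpixel, then broadcast each mean back to its member pixels. The 2-D single-channel feature map and the function name are simplifying assumptions; the paper applies the operation inside a CNN.

```python
import numpy as np

def superpixel_pool_unpool(features, labels):
    """Average a per-pixel feature map within each superpixel (pool),
    then broadcast each mean back to its member pixels (unpool).

    features : (H, W) feature map
    labels   : (H, W) integer superpixel ids, 0..K-1
    """
    flat_f, flat_l = features.ravel(), labels.ravel()
    sums = np.bincount(flat_l, weights=flat_f)
    counts = np.bincount(flat_l)
    means = sums / counts          # pooled: one value per superpixel
    return means[labels]           # unpooled: back to (H, W)

labels = np.array([[0, 0, 1], [0, 1, 1]])
feats = np.array([[1.0, 2.0, 6.0], [3.0, 4.0, 5.0]])
out = superpixel_pool_unpool(feats, labels)
# superpixel 0 -> mean(1, 2, 3) = 2.0; superpixel 1 -> mean(6, 4, 5) = 5.0
```

The `np.bincount` trick keeps the whole operation vectorized, which is why this pooling style is cheap even for large label maps.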
Affiliation(s)
- Tongxu Lin
- School of Automation, Guangdong University of Technology, Xiaoguwei, Panyu District, Guangzhou, Guangdong 510006, China
- Junyu Lin
- School of Computer Science and Technology, Guangdong University of Technology, No. 100 Waihuan Xi Road, Panyu District, Guangzhou, Guangdong 510006, China
- Guoheng Huang
- School of Computer Science and Technology, Guangdong University of Technology, No. 100 Waihuan Xi Road, Panyu District, Guangzhou, Guangdong 510006, China
- Xiaochen Yuan
- Faculty of Applied Sciences, Macao Polytechnic University, Rua de Luís Gonzaga Gomes, Macao 999078, Macao
- Guo Zhong
- School of Information Science and Technology, Guangdong University of Foreign Studies, Xiaoguwei, Panyu District, Guangzhou 510006, China
- Fenfang Xie
- Guangdong University of Foreign Studies, 2 Baiyun Avenue, Baiyun District, Guangzhou, Guangdong 510420, China
- Jiao Li
- Department of Radiology, Sun Yat-sen University Cancer Center, No. 651 Dongfeng Road East, Guangzhou, Guangdong 510060, China
4
Frackiewicz M, Palus H, Prandzioch D. Superpixel-Based PSO Algorithms for Color Image Quantization. Sensors (Basel) 2023; 23:1108. [PMID: 36772145; PMCID: PMC9921601; DOI: 10.3390/s23031108]
Abstract
Nature-inspired artificial intelligence algorithms have been applied to color image quantization (CIQ) for some time. Among these algorithms, the particle swarm optimization algorithm (PSO-CIQ) and its numerous modifications are important in CIQ. In this article, the usefulness of such a modification, labeled IDE-PSO-CIQ and additionally using the idea of individual difference evolution based on the emotional states of particles, is tested. The superiority of this algorithm over the PSO-CIQ algorithm was demonstrated using a set of quality indices based on pixels, patches, and superpixels. Furthermore, both algorithms studied were applied to superpixel versions of quantized images, creating color palettes in much less time. A heuristic method was proposed to select the number of superpixels, depending on the size of the palette. The effectiveness of the proposed algorithms was experimentally verified on a set of benchmark color images. The results obtained from the computational experiments indicate a multiple reduction in computation time for the superpixel methods while maintaining the high quality of the output quantized images, slightly inferior to that obtained with the pixel methods.
5
Liang J, Liu A, Zhou J, Xin L, Zuo Z, Liu Z, Luo H, Chen J, Hu X. Optimized method for segmentation of ancient mural images based on superpixel algorithm. Front Neurosci 2022; 16:1031524. [PMID: 36408409; PMCID: PMC9666489; DOI: 10.3389/fnins.2022.1031524]
Abstract
High-precision segmentation of ancient mural images is the foundation of their digital virtual restoration. However, the complexity of the color appearance of ancient murals makes it difficult to achieve high-precision segmentation when using traditional algorithms directly. To address the current challenges in ancient mural image segmentation, an optimized method based on a superpixel algorithm is proposed in this study. First, the simple linear iterative clustering (SLIC) algorithm is applied to the input mural images to obtain superpixels. Then, the density-based spatial clustering of applications with noise (DBSCAN) algorithm is used to cluster the superpixels to obtain the initial clustered images. Subsequently, a series of optimized strategies, including (1) merging the small noise superpixels, (2) segmenting and merging the large noise superpixels, (3) merging initial clusters based on color similarity and positional adjacency to obtain the merged regions, and (4) segmenting and merging the color-mixing noisy superpixels in each of the merged regions, are applied to the initial cluster images sequentially. Finally, the optimized segmentation results are obtained. The proposed method is tested and compared with existing methods based on simulated and real mural images. The results show that the proposed method is effective and outperforms the existing methods.
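Optimization step (3) above, merging initial clusters by color similarity and positional adjacency, can be sketched with a union-find over an adjacency list. The Euclidean RGB distance, the threshold value, and the toy colors are assumptions; the paper's actual similarity criterion is not reproduced here.

```python
import numpy as np

def merge_by_color(mean_colors, adjacency, thresh=20.0):
    """Union-find merge of spatially adjacent regions whose mean-color
    (Euclidean RGB) distance is below `thresh`."""
    parent = list(range(len(mean_colors)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i, j in adjacency:
        d = np.linalg.norm(np.asarray(mean_colors[i], float)
                           - np.asarray(mean_colors[j], float))
        if d < thresh:
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(mean_colors))]

colors = [(200, 30, 30), (205, 35, 28), (20, 20, 200)]  # two reds, one blue
labels = merge_by_color(colors, adjacency=[(0, 1), (1, 2)])
# regions 0 and 1 merge; region 2 stays separate
```

Positional adjacency enters through the `adjacency` list, so only neighbouring regions are ever considered for merging.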
Affiliation(s)
- Jinxing Liang
- School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
- Engineering Research Center of Hubei Province for Clothing Information, Wuhan, Hubei, China
- Hubei Province Engineering Technical Center for Digitization and Virtual Reproduction of Color Information of Cultural Relics, Wuhan, Hubei, China
- Anping Liu
- School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
- Jing Zhou
- School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
- Lei Xin
- School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
- Zhuan Zuo
- School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
- Zhen Liu
- School of Communication, Qufu Normal University, Rizhao, Shandong, China
- Hang Luo
- School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
- Jia Chen
- School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
- Xinrong Hu
- School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
6
Frackiewicz M, Palus H. Efficient Color Quantization Using Superpixels. Sensors (Basel) 2022; 22:6043. [PMID: 36015804; PMCID: PMC9416436; DOI: 10.3390/s22166043]
Abstract
We propose three methods for the color quantization of superpixel images. Prior to the application of each method, the target image is first segmented into a finite number of superpixels by grouping the pixels that are similar in color. The color of a superpixel is given by the arithmetic mean of the colors of all constituent pixels. Following this, the superpixels are quantized using common splitting or clustering methods, such as median cut, k-means, and fuzzy c-means. In this manner, a color palette is generated while the original pixel image undergoes color mapping. The effectiveness of each proposed superpixel method is validated via experimentation using different color images. We compare the proposed methods with state-of-the-art color quantization methods. The results show significantly decreased computation time along with high quality of the quantized images. However, a multi-index evaluation process shows that the image quality is slightly worse than that obtained via pixel methods.
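The pipeline described above (superpixel mean colors, then clustering into a palette, then color mapping) can be sketched as follows. This is a hedged illustration: a tiny first-k-initialized Lloyd's k-means stands in for the paper's median cut / k-means / fuzzy c-means variants, and the superpixel label map is assumed to be given (e.g., by a SLIC-style segmenter).

```python
import numpy as np

def kmeans(points, k, iters=10):
    """Tiny Lloyd's k-means (naive first-k init); returns (centers, assignment)."""
    centers = points[:k].copy()
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = points[assign == c].mean(axis=0)
    return centers, assign

def quantize(image, sp_labels, k):
    """Build a k-color palette from superpixel mean colors, then
    map every pixel through its superpixel's palette entry."""
    n_sp = sp_labels.max() + 1
    mean_colors = np.array([image[sp_labels == s].mean(axis=0)
                            for s in range(n_sp)])
    palette, sp_assign = kmeans(mean_colors, k)
    return palette[sp_assign][sp_labels]      # (H, W, 3) quantized image

# 4x4 toy image, four 2x2 superpixels: two reddish, two bluish.
img = np.zeros((4, 4, 3))
img[:2, :2] = (250, 0, 0); img[:2, 2:] = (0, 0, 250)
img[2:, :2] = (240, 0, 0); img[2:, 2:] = (0, 0, 240)
sp = np.repeat(np.repeat(np.array([[0, 1], [2, 3]]), 2, 0), 2, 1)
q = quantize(img, sp, k=2)  # palette: average red (245,0,0), average blue (0,0,245)
```

Clustering a few hundred superpixel colors instead of millions of pixel colors is exactly where the reported speed-up comes from.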
7
Jiang M, Rößler C, Wellmann E, Klaver J, Kleiner F, Schmatz J. Workflow for high-resolution phase segmentation of cement clinker from combined BSE image and EDX spectral data. J Microsc 2021; 286:85-91. [PMID: 34725826; DOI: 10.1111/jmi.13072]
Abstract
Burning of clinker is the step with the greatest influence on cement quality during the production process. Appropriate characterisation for quality control and decision-making is therefore critical for maintaining stable production, but also for the development of alternative cements. Scanning electron microscopy (SEM) in combination with energy dispersive X-ray spectroscopy (EDX) delivers spatially resolved phase and chemical information for cement clinker. These data can be used to quantify phase fractions and the chemical composition of identified phases. This contribution aims to provide an overview of phase fraction quantification by semi-automatic phase segmentation using high-resolution backscattered electron (BSE) images and lower-resolution EDX element maps. To this end, a tool for image analysis was developed that uses state-of-the-art algorithms for pixel-wise image segmentation and labelling in combination with a decision tree that allows searching for specific clinker phases. Results show that this tool can be applied to segment sub-micron-scale clinker phases and to obtain a quantification of all phase fractions. In addition, statistical evaluation of the data is implemented within the tool to reveal whether the imaged area is representative of all clinker phases.
Affiliation(s)
- Mingze Jiang
- MaP - Microstructure and Pores GmbH, Aachen, Nordrhein-Westfalen, Germany
- Christiane Rößler
- F. A. Finger-Institute for Building Material Science, Bauhaus-Universität Weimar, Weimar, Thüringen, Germany
- Eva Wellmann
- MaP - Microstructure and Pores GmbH, Aachen, Nordrhein-Westfalen, Germany
- Jop Klaver
- RWTH Aachen University, Aachen, Nordrhein-Westfalen, Germany
- Florian Kleiner
- F. A. Finger-Institute for Building Material Science, Bauhaus-Universität Weimar, Weimar, Thüringen, Germany
- Joyce Schmatz
- MaP - Microstructure and Pores GmbH, Aachen, Nordrhein-Westfalen, Germany
8
Chang HH, Yeh SJ, Chiang MC, Hsieh ST. Segmentation of Rat Brains and Cerebral Hemispheres in Triphenyltetrazolium Chloride-Stained Images after Stroke. Sensors (Basel) 2021; 21:7171. [PMID: 34770479; PMCID: PMC8588199; DOI: 10.3390/s21217171]
Abstract
Ischemic stroke is one of the leading causes of death among the aged population in the world. Experimental stroke models with rodents play a fundamental role in the investigation of the mechanism and impairment of cerebral ischemia. Owing to its speed and reliability, 2,3,5-triphenyltetrazolium chloride (TTC) staining of rat brains has been extensively adopted to visualize the infarction, which is subsequently photographed for further processing. Two important tasks are to segment the brain regions and to compute the midline that separates the hemispheres. This paper investigates automatic brain extraction and hemisphere segmentation algorithms in camera-based TTC-stained rat images. For rat brain extraction, a saliency region detection scheme on a superpixel image is exploited to extract the brain regions from the raw complicated image. Subsequently, the initial brain slices are refined using a parametric deformable model associated with color image transformation. For rat hemisphere segmentation, open curve evolution guided by the gradient vector flow in a medial subimage is developed to compute the midline. A wide variety of TTC-stained rat brain images captured by a smartphone were produced and utilized to evaluate the proposed segmentation frameworks. Experimental results on the segmentation of rat brains and cerebral hemispheres indicated that the developed schemes achieved high accuracy, with average Dice scores of 92.33% and 97.15%, respectively. The established segmentation algorithms are believed to have the potential to facilitate experimental stroke studies with TTC-stained rat brain images.
Affiliation(s)
- Herng-Hua Chang
- Department of Engineering Science and Ocean Engineering, National Taiwan University, Taipei 10617, Taiwan
- Shin-Joe Yeh
- Graduate Institute of Anatomy and Cell Biology, College of Medicine, National Taiwan University, Taipei 10051, Taiwan
- Department of Neurology and Stroke Center, National Taiwan University Hospital, Taipei 10002, Taiwan
- Ming-Chang Chiang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei 11221, Taiwan
- Sung-Tsang Hsieh
- Graduate Institute of Anatomy and Cell Biology, College of Medicine, National Taiwan University, Taipei 10051, Taiwan
- Department of Neurology and Stroke Center, National Taiwan University Hospital, Taipei 10002, Taiwan
- Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei 10051, Taiwan
- Graduate Institute of Brain and Mind Sciences, College of Medicine, National Taiwan University, Taipei 10051, Taiwan
- Center of Precision Medicine, College of Medicine, National Taiwan University, Taipei 10051, Taiwan
9
Kassim YM, Palaniappan K, Yang F, Poostchi M, Palaniappan N, Maude RJ, Antani S, Jaeger S. Clustering-Based Dual Deep Learning Architecture for Detecting Red Blood Cells in Malaria Diagnostic Smears. IEEE J Biomed Health Inform 2021; 25:1735-1746. [PMID: 33119516; PMCID: PMC8127616; DOI: 10.1109/jbhi.2020.3034863]
Abstract
Computer-assisted algorithms have become a mainstay of biomedical applications to improve accuracy and reproducibility of repetitive tasks like manual segmentation and annotation. We propose a novel pipeline for red blood cell detection and counting in thin blood smear microscopy images, named RBCNet, using a dual deep learning architecture. RBCNet consists of a U-Net first stage for cell-cluster or superpixel segmentation, followed by a second refinement stage that uses Faster R-CNN to detect small cell objects within the connected component clusters. RBCNet uses cell clustering instead of region proposals, which is robust to cell fragmentation, is highly scalable for detecting small objects or fine-scale morphological structures in very large images, can be trained using non-overlapping tiles, and during inference is adaptive to the scale of cell clusters with a low memory footprint. We tested our method on an archived collection of human malaria smears with nearly 200,000 labeled cells across 965 images from 193 patients, acquired in Bangladesh, with each patient contributing five images. Cell detection accuracy using RBCNet was higher than 97%. The novel dual cascade RBCNet architecture provides more accurate cell detections because the foreground cell-cluster masks from U-Net adaptively guide the detection stage, resulting in notably higher true positive rates and lower false alarm rates compared to traditional and other deep learning methods. The RBCNet pipeline implements a crucial step towards automated malaria diagnosis.
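One practical detail the abstract notes is training on non-overlapping tiles of very large smear images. A minimal sketch of such tiling; the tile size and the crop-the-remainder policy are assumptions, not details taken from the paper.

```python
import numpy as np

def tile_image(img, tile):
    """Split an (H, W, C) image into non-overlapping (tile, tile) patches,
    cropping any remainder at the right/bottom edges."""
    h, w = img.shape[0] // tile * tile, img.shape[1] // tile * tile
    img = img[:h, :w]
    return (img.reshape(h // tile, tile, w // tile, tile, -1)
               .swapaxes(1, 2)
               .reshape(-1, tile, tile, img.shape[-1]))

tiles = tile_image(np.zeros((100, 130, 3)), tile=32)
# 3 x 4 grid of 32x32 tiles -> 12 tiles
```

The reshape/swapaxes pair avoids any Python-level loop, so tiling stays cheap even for whole-slide-sized inputs.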
Affiliation(s)
- Yasmin M. Kassim
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD 20894, USA
- Feng Yang
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD 20894, USA
- Mahdieh Poostchi
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD 20894, USA
- Nila Palaniappan
- School of Medicine, University of Missouri-Kansas City, Kansas City, MO 64110, USA
- Richard J Maude
- Mahidol-Oxford Tropical Medicine Research Unit, Mahidol University, Bangkok 10400, Thailand
- Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford OX3 7LG, UK
- Harvard TH Chan School of Public Health, Harvard University, Boston, MA 02115, USA
- Sameer Antani
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD 20894, USA
- Stefan Jaeger
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD 20894, USA
10
Wu R, Xu Z, Zhang J, Zhang L. Robust Global Motion Estimation for Video Stabilization Based on Improved K-Means Clustering and Superpixel. Sensors (Basel) 2021; 21:2505. [PMID: 33916773; PMCID: PMC8038417; DOI: 10.3390/s21072505]
Abstract
Obtaining accurate global motion is a crucial step in video stabilization. This paper proposes a robust and simple method for global motion estimation. We do not extend the framework of 2D video stabilization but instead add a "plug and play" module to feature-point-based motion estimation. Firstly, simple linear iterative clustering (SLIC) pre-segmentation is used to obtain superpixels of the video frame; clustering is performed according to the superpixel centroid motion vectors, and cluster centers with large values are eliminated. Secondly, in order to obtain an accurate global motion estimate, an improved K-means clustering is proposed. We match the feature points of the remaining superpixels between two adjacent frames, establish a feature-point motion vector space, and apply the improved K-means clustering to it. Finally, the richest cluster is retained, and the global motion is obtained by homography transformation. Our proposed method has been verified on different types of videos and performs more efficiently than traditional approaches. The stabilized video shows an average improvement of 0.24 in the structural similarity index over the original video, 0.1 higher than the traditional method.
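The core idea, cluster the feature-point motion vectors and keep the richest cluster as the global motion, can be sketched as below. This is a loose stand-in under stated assumptions: coarse grid binning replaces the paper's improved K-means, and a pure translation replaces the homography.

```python
import numpy as np

def global_translation(vectors, bin_size=2.0):
    """Estimate global motion as the mean of the densest cluster of
    feature-point motion vectors. Clusters are coarse grid bins here,
    a simple stand-in for the paper's improved K-means."""
    vectors = np.asarray(vectors, float)
    bins = np.floor(vectors / bin_size).astype(int)
    cells, inverse, counts = np.unique(bins, axis=0,
                                       return_inverse=True,
                                       return_counts=True)
    richest = counts.argmax()                 # the "richest cluster"
    return vectors[inverse == richest].mean(axis=0)

# Most vectors follow camera shake (~(4.3, 1.2)); two belong to a moving object.
vecs = [(4.2, 1.2), (4.4, 1.1), (4.3, 1.3), (15.0, -7.0), (14.8, -6.5)]
motion = global_translation(vecs)
```

Keeping only the densest cluster is what makes the estimate robust to independently moving foreground objects.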
Affiliation(s)
- Rouwan Wu
- Key Laboratory of Optical Engineering, Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610200, China
- School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Zhiyong Xu
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610200, China
- Jianlin Zhang
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610200, China
- Lihong Zhang
- Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610200, China
11
Li Y, Qin X, Zhang Z, Dong H. A robust identification method for nonferrous metal scraps based on deep learning and superpixel optimization. Waste Manag Res 2021; 39:573-583. [PMID: 33499775; DOI: 10.1177/0734242x20987884]
Abstract
End-of-life vehicles (ELVs) provide a particularly potent source of supply for metals. Hence, the recycling and sorting techniques for ferrous and nonferrous metal scraps from ELVs significantly increase metal resource utilization. However, different kinds of nonferrous metal scraps, such as aluminium (Al) and copper (Cu), are not further automatically classified due to the lack of proper techniques. The purpose of this study is to propose an identification method for different nonferrous metal scraps, facilitate the further separation of nonferrous metal scraps, achieve better management of recycled metal resources and increase sustainability. A convolutional neural network (CNN) and SEEDS (superpixels extracted via energy-driven sampling) were adopted in this study. To build the classifier, 80 training images of randomly chosen Al and Cu scraps were taken, and some practical methods were proposed, including training patch generation with SEEDS, image data augmentation and automatic labelling methods for enormous training data. To obtain more accurate results, SEEDS was also used to optimize the coarse results obtained from the pretrained CNN model. Five indicators were adopted to evaluate the final identification results. Furthermore, 15 test samples concerning different classification environments were tested through the proposed model, and it performed well under all of the employed evaluation indexes, with an average precision of 0.98. The results demonstrate that the proposed model is robust for metal scrap identification, which can be expanded to a complex industrial environment, and it presents new possibilities for highly accurate automatic nonferrous metal scrap classification.
Affiliation(s)
- Yifeng Li
- School of Automotive Engineering, Wuhan University of Technology, People's Republic of China
- Hubei Key Laboratory of Advanced Technology for Automotive Components, People's Republic of China
- Hubei Collaborative Innovation Center for Automotive Components Technology, People's Republic of China
- Xunpeng Qin
- School of Automotive Engineering, Wuhan University of Technology, People's Republic of China
- Hubei Key Laboratory of Advanced Technology for Automotive Components, People's Republic of China
- Hubei Collaborative Innovation Center for Automotive Components Technology, People's Republic of China
- Zhenyuan Zhang
- School of Automotive Engineering, Wuhan University of Technology, People's Republic of China
- Hubei Key Laboratory of Advanced Technology for Automotive Components, People's Republic of China
- Huanyu Dong
- School of Automotive Engineering, Wuhan University of Technology, People's Republic of China
- Hubei Key Laboratory of Advanced Technology for Automotive Components, People's Republic of China
12
Li Y, Al-Sarayreh M, Irie K, Hackell D, Bourdot G, Reis MM, Ghamkhar K. Identification of Weeds Based on Hyperspectral Imaging and Machine Learning. Front Plant Sci 2021; 11:611622. [PMID: 33569069; PMCID: PMC7868399; DOI: 10.3389/fpls.2020.611622]
Abstract
Weeds can be major environmental and economic burdens in New Zealand. Traditional methods of weed control including manual and chemical approaches can be time consuming and costly. Some chemical herbicides may have negative environmental and human health impacts. One of the proposed important steps for providing alternatives to these traditional approaches is the automated identification and mapping of weeds. We used hyperspectral imaging data and machine learning to explore the possibility of fast, accurate and automated discrimination of weeds in pastures where ryegrass and clovers are the sown species. Hyperspectral images from two grasses (Setaria pumila [yellow bristle grass] and Stipa arundinacea [wind grass]) and two broad leaf weed species (Ranunculus acris [giant buttercup] and Cirsium arvense [Californian thistle]) were acquired and pre-processed using the standard normal variate method. We trained three classification models, namely partial least squares-discriminant analysis, support vector machine, and Multilayer Perceptron (MLP) using whole plant averaged (Av) spectra and superpixels (Sp) averaged spectra from each weed sample. All three classification models showed repeatable identification of four weeds using both Av and Sp spectra with a range of overall accuracy of 70-100%. However, MLP based on the Sp method produced the most reliable and robust prediction result (89.1% accuracy). Four significant spectral regions were found as highly informative for characterizing the four weed species and could form the basis for a rapid and efficient methodology for identifying weeds in ryegrass/clover pastures.
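The standard normal variate (SNV) pre-processing mentioned above normalizes each spectrum by its own mean and standard deviation. A minimal sketch; the toy spectra are illustrative.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum
    (row) by its own mean and standard deviation."""
    spectra = np.asarray(spectra, float)
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

out = snv([[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]])
# every row now has mean 0 and unit standard deviation
```

Because SNV removes per-sample offset and scale, the two toy spectra above, which differ only by a multiplicative factor, become identical after transformation, which is why the method suppresses illumination and scatter effects in hyperspectral data.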
Affiliation(s)
- Yanjie Li
- AgResearch Ltd., Grasslands Research Centre, Palmerston North, New Zealand
- Kenji Irie
- Red Fern Solutions Ltd, Christchurch, New Zealand
- Deborah Hackell
- AgResearch Ltd., Ruakura Research Centre, Hamilton, New Zealand
- Marlon M. Reis
- AgResearch Ltd., Grasslands Research Centre, Palmerston North, New Zealand
- Kioumars Ghamkhar
- AgResearch Ltd., Grasslands Research Centre, Palmerston North, New Zealand
13
Liu F, Zhang X, Wang H, Feng J. Context-Aware Superpixel and Bilateral Entropy-Image Coherence Induces Less Entropy. Entropy (Basel) 2019; 22:20. [PMID: 33285796] [PMCID: PMC7516443] [DOI: 10.3390/e22010020]
Abstract
Superpixel clustering is one of the most popular computer vision techniques; it aggregates coherent pixels into perceptually meaningful groups, taking inspiration from Gestalt grouping rules. However, due to the brain's complexity, the underlying mechanisms of such perceptual rules are unclear. Thus, conventional superpixel methods do not completely follow them and merely generate a flat image partition rather than a hierarchical one, as a human does. In addition, those methods need to initialize the total number of superpixels, which may not suit diverse images. In this paper, we first propose context-aware superpixel (CASP) segmentation, which follows both Gestalt grouping rules and the top-down hierarchical principle, enabling CASP to adapt the total number of superpixels to specific images automatically. Next, we propose bilateral entropy, with two aspects, conditional intensity entropy and spatial occupation entropy, to evaluate the encoding efficiency of image coherence. Extensive experiments demonstrate that CASP achieves better superpixel segmentation performance and lower entropy than baseline methods. Moreover, using Pearson's correlation coefficient on a collection of 120 samples, we demonstrate a strong correlation between local image coherence and superpixel segmentation performance. Our results in turn support the reliability of the above-mentioned perceptual rules, and we eventually suggest designing novel entropy criteria to test the encoding efficiency of more complex patterns.
Affiliation(s)
- Feihong Liu
- School of Information Science and Technology, Northwest University, Xi’an 710027, China
- Correspondence: (F.L.); (J.F.)
- Xiao Zhang
- School of Information Science and Technology, Northwest University, Xi’an 710027, China
- Hongyu Wang
- School of Computer Science and Technology, Xi’an University of Posts and Telecommunications, Xi’an 710121, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi’an University of Posts and Telecommunications, Xi’an 710121, China
- Jun Feng
- School of Information Science and Technology, Northwest University, Xi’an 710027, China
- State-Province Joint Engineering and Research Center of Advanced Networking and Intelligent Information Services, School of Information Science and Technology, Northwest University, Xi’an 710127, China
- Correspondence: (F.L.); (J.F.)
14
Gaur U, Manjunath BS. Superpixel Embedding Network. IEEE Trans Image Process 2019; 29. [PMID: 31831424] [PMCID: PMC7286767] [DOI: 10.1109/TIP.2019.2957937]
Abstract
Superpixel segmentation is a fundamental computer vision technique that finds application in a multitude of high-level computer vision tasks. Most state-of-the-art superpixel segmentation methods are unsupervised in nature and thus cannot fully utilize frequently occurring texture patterns or incorporate multiscale context. In this paper, we show that superpixel segmentation can be improved by leveraging the superior modeling power of deep convolutional autoencoders in a fully unsupervised manner. We pose the superpixel segmentation problem as one of manifold learning, where pixels that belong to similar texture patterns are assigned near-identical embedding vectors. The proposed deep network is able to learn image-wide and dataset-wide feature patterns and the relationships between them. This knowledge is used to segment and group pixels in a way that is consistent with a more global definition of pattern coherence. Experiments demonstrate that the superpixels obtained from the embeddings learned by the proposed method outperform state-of-the-art superpixel segmentation methods in boundary precision and recall. Additionally, we find the semantic edges obtained from the superpixel embeddings to be significantly better than those from contemporary unsupervised approaches.
15
Gao C, Wang J, Liu L, Yu JG, Sang N. Superpixel-Based Temporally Aligned Representation for Video-Based Person Re-Identification. Sensors (Basel) 2019; 19:3861. [PMID: 31500196] [PMCID: PMC6766808] [DOI: 10.3390/s19183861]
Abstract
Most existing person re-identification methods focus on matching still person images across non-overlapping camera views. Despite their excellent performance in some circumstances, these methods still suffer from occlusion and the changes of pose, viewpoint or lighting. Video-based re-id is a natural way to overcome these problems, by exploiting space–time information from videos. One of the most challenging problems in video-based person re-identification is temporal alignment, in addition to spatial alignment. To address the problem, we propose an effective superpixel-based temporally aligned representation for video-based person re-identification, which represents a video sequence only using one walking cycle. Particularly, we first build a candidate set of walking cycles by extracting motion information at superpixel level, which is more robust than that at the pixel level. Then, from the candidate set, we propose an effective criterion to select the walking cycle most matching the intrinsic periodicity property of walking persons. Finally, we propose a temporally aligned pooling scheme to describe the video data in the selected walking cycle. In addition, to characterize the individual still images in the cycle, we propose a superpixel-based representation to improve spatial alignment. Extensive experimental results on three public datasets demonstrate the effectiveness of the proposed method compared with the state-of-the-art approaches.
Affiliation(s)
- Changxin Gao
- Key Laboratory of Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Jin Wang
- Key Laboratory of Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Leyuan Liu
- National Engineering Research Center for E-Learning, Central China Normal University, Wuhan 430079, China
- Jin-Gang Yu
- School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China
- Nong Sang
- Key Laboratory of Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
16
Xie T, Huang J, Shi Q, Wang Q, Yuan N. PSDSD: A Superpixel Generating Method Based on Pixel Saliency Difference and Spatial Distance for SAR Images. Sensors (Basel) 2019; 19:304. [PMID: 30646529] [PMCID: PMC6358750] [DOI: 10.3390/s19020304]
Abstract
Superpixel methods are widely used in the processing of synthetic aperture radar (SAR) images. In recent years, a number of superpixel algorithms for SAR images have been proposed, and have achieved acceptable results despite the inherent speckle noise of SAR images. However, it is still difficult for existing algorithms to obtain satisfactory results in the inhomogeneous edge and texture areas. To overcome those problems, we propose a superpixel generating method based on pixel saliency difference and spatial distance for SAR images in this article. Firstly, a saliency map is calculated based on the Gaussian kernel function weighted local contrast measure, which can not only effectively suppress the speckle noise, but also enhance the fuzzy edges and regions with intensity inhomogeneity. Secondly, superpixels are generated by the local k-means clustering method based on the proposed distance measure, which can efficiently sort pixels to different clusters. In this step, the distance measure is calculated by combining the saliency difference and spatial distance with a proposed adaptive local compactness parameter. Thirdly, post-processing is utilized to clean up small segments. The evaluation experiments on the simulated SAR image demonstrate that our proposed method dramatically outperforms four state-of-the-art methods in terms of boundary recall, under-segmentation error, and achievable segmentation accuracy under almost all of the experimental parameters at a moderate segment speed. The experiments on real-world SAR images of different sceneries validate the superiority of our method. The superpixel results of the proposed method adhere well to the contour of targets, and correctly reflect the boundaries of texture details for the inhomogeneous regions.
Affiliation(s)
- Tao Xie
- State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology, Sanyi Avenue, Kaifu District, Changsha 410073, Hunan, China
- Jingjian Huang
- State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology, Sanyi Avenue, Kaifu District, Changsha 410073, Hunan, China
- Qingzhan Shi
- State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology, Sanyi Avenue, Kaifu District, Changsha 410073, Hunan, China
- Qingping Wang
- State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology, Sanyi Avenue, Kaifu District, Changsha 410073, Hunan, China
- Naichang Yuan
- State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology, Sanyi Avenue, Kaifu District, Changsha 410073, Hunan, China
17
Na T, Xie J, Zhao Y, Zhao Y, Liu Y, Wang Y, Liu J. Retinal vascular segmentation using superpixel-based line operator and its application to vascular topology estimation. Med Phys 2018; 45:3132-3146. [PMID: 29744887] [DOI: 10.1002/mp.12953]
Abstract
PURPOSE Automatic methods of analyzing retinal vascular networks, such as retinal blood vessel detection, vascular network topology estimation, and arteries/veins classification, are of great assistance to the ophthalmologist in terms of diagnosis and treatment of a wide spectrum of diseases. METHODS We propose a new framework for precisely segmenting retinal vasculatures, constructing retinal vascular network topology, and separating the arteries and veins. A nonlocal total variation inspired Retinex model is employed to remove image intensity inhomogeneities and relatively poor contrast. For better generalizability and segmentation performance, a superpixel-based line operator is proposed to distinguish between lines and edges, thus allowing more tolerance in the position of the respective contours. The concept of dominant sets clustering is adopted to estimate retinal vessel topology and classify the vessel network into arteries and veins. RESULTS The proposed segmentation method yields competitive results on three public data sets (STARE, DRIVE, and IOSTAR), and it has superior performance compared with unsupervised segmentation methods, with accuracies of 0.954, 0.957, and 0.964, respectively. The topology estimation approach has been applied to five public databases (DRIVE, STARE, INSPIRE, IOSTAR, and VICAVR) and achieved high accuracies of 0.830, 0.910, 0.915, 0.928, and 0.889, respectively. The accuracies of arteries/veins classification based on the estimated vascular topology on three public databases (INSPIRE, DRIVE, and VICAVR) are 0.909, 0.910, and 0.907, respectively. CONCLUSIONS The experimental results show that the proposed framework effectively addresses the crossover problem, a bottleneck issue in segmentation and vascular topology reconstruction. The vascular topology information significantly improves the accuracy of arteries/veins classification.
Affiliation(s)
- Tong Na
- Georgetown Preparatory School, North Bethesda, 20852, USA; Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 10081, China
- Jianyang Xie
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 10081, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 10081, China
- Yifan Zhao
- School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield, MK43 0AL, UK
- Yue Liu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 10081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 10081, China
- Jiang Liu
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China
18
Qin W, Wu J, Han F, Yuan Y, Zhao W, Ibragimov B, Gu J, Xing L. Superpixel-based and boundary-sensitive convolutional neural network for automated liver segmentation. Phys Med Biol 2018; 63:095017. [PMID: 29633960] [PMCID: PMC5983385] [DOI: 10.1088/1361-6560/aabd19]
Abstract
Segmentation of the liver in abdominal computed tomography (CT) is an important step for radiation therapy planning of hepatocellular carcinoma. In practice, fully automatic segmentation of the liver remains challenging because of the low soft-tissue contrast between the liver and its surrounding organs, and its highly deformable shape. The purpose of this work is to develop a novel superpixel-based and boundary-sensitive convolutional neural network (SBBS-CNN) pipeline for automated liver segmentation. The entire CT images were first partitioned into superpixel regions, where nearby pixels with similar CT number were aggregated. Secondly, we converted the conventional binary segmentation into a multinomial classification by labeling the superpixels into three classes: interior liver, liver boundary, and non-liver background. By doing this, the boundary region of the liver was explicitly identified and highlighted for the subsequent classification. Thirdly, we computed an entropy-based saliency map for each CT volume and leveraged this map to guide the sampling of image patches over the superpixels. In this way, more patches were extracted from informative regions (e.g. the liver boundary with irregular changes) and fewer patches were extracted from homogeneous regions. Finally, a deep CNN pipeline was built and trained to predict the probability map of the liver boundary. We tested the proposed algorithm in a cohort of 100 patients. With 10-fold cross validation, the SBBS-CNN achieved a mean Dice similarity coefficient of 97.31 ± 0.36% and an average symmetric surface distance of 1.77 ± 0.49 mm. Moreover, it showed superior performance in comparison with state-of-the-art methods, including U-Net, pixel-based CNN, active contour, level-set and graph-cut algorithms. SBBS-CNN provides an accurate and effective tool for automated liver segmentation. It is also envisioned that the proposed framework is directly applicable to other medical image segmentation scenarios.
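The three-class superpixel labeling step (interior liver, liver boundary, non-liver background) can be sketched as below. This is a hedged illustration assuming a binary liver mask and a precomputed superpixel label map; `three_class_labels` is a hypothetical helper, not the authors' implementation:

```python
import numpy as np

def three_class_labels(liver_mask, sp_labels):
    """Label each superpixel: 0 = non-liver background,
    1 = liver boundary, 2 = interior liver. A superpixel that
    contains both liver and non-liver pixels straddles the boundary.

    liver_mask: (H, W) binary array, 1 inside the liver
    sp_labels:  (H, W) integer superpixel label map
    """
    out = {}
    for k in np.unique(sp_labels):
        frac = liver_mask[sp_labels == k].mean()  # liver fraction
        out[k] = 0 if frac == 0 else (2 if frac == 1 else 1)
    return out
```

Making the boundary an explicit class, as in the abstract, lets a subsequent classifier concentrate on the hardest region of the segmentation.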
Affiliation(s)
- Wenjian Qin
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China. Medical Physics Division in the Department of Radiation Oncology, Stanford University, Palo Alto, CA 94305, United States of America. University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
19
Kong X, Li J. Image Registration-Based Bolt Loosening Detection of Steel Joints. Sensors (Basel) 2018; 18:1000. [PMID: 29597264] [PMCID: PMC5948713] [DOI: 10.3390/s18041000]
Abstract
Self-loosening of bolts caused by repetitive loads and vibrations is one of the common defects that can weaken the structural integrity of bolted steel joints in civil structures. Many existing approaches for detecting loosening bolts are based on physical sensors and, hence, require extensive sensor deployment, which limits their ability to cost-effectively detect loosened bolts in a large number of steel joints. Recently, computer vision-based structural health monitoring (SHM) technologies have demonstrated great potential for damage detection due to the benefits of being low cost, easy to deploy, and contactless. In this study, we propose a vision-based non-contact bolt loosening detection method that uses a consumer-grade digital camera. Two images of the monitored steel joint are first collected during different inspection periods and then aligned through two image registration processes. If the bolt experiences rotation between inspections, it will introduce differential features in the registration errors, serving as a good indicator for bolt loosening detection. The performance and robustness of this approach have been validated through a series of experimental investigations using three laboratory setups including a gusset plate on a cross frame, a column flange, and a girder web. The bolt loosening detection results are presented for easy interpretation such that informed decisions can be made about the detected loosened bolts.
Affiliation(s)
- Xiangxiong Kong
- Department of Civil, Environmental, and Architectural Engineering, University of Kansas, Lawrence, KS 66045, USA
- Jian Li
- Department of Civil, Environmental, and Architectural Engineering, University of Kansas, Lawrence, KS 66045, USA
20
Ban Z, Chen Z, Liu J. Supervoxel Segmentation with Voxel-Related Gaussian Mixture Model. Sensors (Basel) 2018; 18:128. [PMID: 29303972] [DOI: 10.3390/s18010128]
Abstract
Extending superpixel segmentation with an additional constraint on temporal consistency, supervoxel segmentation partitions video frames into atomic segments. In this work, we propose a novel scheme for supervoxel segmentation to address the problem of new and moving objects, in which segmentation is performed on every two consecutive frames so that each internal frame has two valid superpixel segmentations. This scheme provides coarse-grained parallelism, and subsequent algorithms can validate their results using the two segmentations, further improving robustness. To implement this scheme, a voxel-related Gaussian mixture model (GMM) is proposed, in which each supervoxel is assumed to be distributed in a local region and represented by two Gaussian distributions that share the same color parameters to capture temporal consistency. Our algorithm has lower complexity with respect to frame size than the traditional GMM. According to our experiments, it also outperforms the state-of-the-art in accuracy.
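The idea of two per-frame Gaussians sharing color parameters can be sketched as follows. This is a rough illustration of the shared-parameter idea under assumed (x, y, r, g, b) sample arrays, not the paper's actual model or fitting procedure:

```python
import numpy as np

def shared_color_gaussians(frame0, frame1):
    """For one supervoxel observed in two consecutive frames, estimate
    color mean/variance jointly over both frames (shared parameters,
    capturing temporal consistency) while keeping per-frame spatial
    centers. frame0, frame1: (N, 5) arrays of (x, y, r, g, b) samples."""
    colors = np.vstack([frame0[:, 2:], frame1[:, 2:]])
    color_mean = colors.mean(axis=0)   # shared across both frames
    color_var = colors.var(axis=0)     # shared across both frames
    centers = (frame0[:, :2].mean(axis=0),
               frame1[:, :2].mean(axis=0))  # per-frame spatial centers
    return color_mean, color_var, centers
```

Sharing the color statistics ties the two per-frame Gaussians to the same appearance while letting the spatial component track motion between frames.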
21
You D, Kim MM, Aryal MP, Parmar H, Piert M, Lawrence TS, Cao Y. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images. J Med Imaging (Bellingham) 2017; 5:011009. [PMID: 29181433] [DOI: 10.1117/1.jmi.5.1.011009]
Abstract
To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a processing pipeline. The pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures"; (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures"; and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was first applied to a dataset of multimodality images in glioblastoma (GBM), which consisted of 10 image parameters. Three major image "signatures" were identified, and the three major "habitats" plus their overlaps were created. To test the generalizability of the processing pipeline, a second GBM image dataset, acquired on scanners different from the first, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats", the patterns of recurrence of the patients were analyzed together with image parameters acquired prior to chemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to predict treatment outcomes, e.g., patterns of failure.
Affiliation(s)
- Daekeun You
- University of Michigan, Department of Radiation Oncology, Ann Arbor, Michigan, United States
- Michelle M Kim
- University of Michigan, Department of Radiation Oncology, Ann Arbor, Michigan, United States
- Madhava P Aryal
- University of Michigan, Department of Radiation Oncology, Ann Arbor, Michigan, United States
- Hemant Parmar
- University of Michigan, Department of Radiology, Ann Arbor, Michigan, United States
- Morand Piert
- University of Michigan, Department of Radiology, Ann Arbor, Michigan, United States
- Theodore S Lawrence
- University of Michigan, Department of Radiation Oncology, Ann Arbor, Michigan, United States
- Yue Cao
- University of Michigan, Department of Radiation Oncology, Ann Arbor, Michigan, United States; University of Michigan, Department of Radiology, Ann Arbor, Michigan, United States; University of Michigan, Department of Biomedical Engineering, Ann Arbor, Michigan, United States
22
Liu J, Tang Z, Cui Y, Wu G. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing. Sensors (Basel) 2017; 17:1364. [PMID: 28604641] [DOI: 10.3390/s17061364]
Abstract
Remote sensing technologies have been widely applied in the monitoring, synthesis, and modeling of urban environments. By incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the “salt and pepper” phenomenon that is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is increasingly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism makes the energy terms local and relative, and thus the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC achieves consistent performance in different image regions. In addition, the Probability Density Function (PDF), estimated by Kernel Density Estimation (KDE) with a Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to handle only boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm against other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance while remaining competitive in computation time.
23
Abstract
In this paper, we propose a segmentation method based on normalized cut and superpixels. The method relies on color and texture cues for fast computation and efficient use of memory. The method is used for food image segmentation as part of a mobile food record system we have developed for dietary assessment and management. The accurate estimate of nutrients relies on correctly labelled food items and sufficiently well-segmented regions. Our method achieves competitive results using the Berkeley Segmentation Dataset and outperforms some of the most popular techniques in a food image dataset.
Affiliation(s)
- Yu Wang
- School of Electrical and Computer Engineering, Purdue University
- Chang Liu
- School of Electrical and Computer Engineering, Purdue University
- Fengqing Zhu
- School of Electrical and Computer Engineering, Purdue University
- Carol J Boushey
- Cancer Epidemiology Program, University of Hawaii Cancer Center
- Edward J Delp
- School of Electrical and Computer Engineering, Purdue University
24
Zou H, Qin X, Zhou S, Ji K. A Likelihood-Based SLIC Superpixel Algorithm for SAR Images Using Generalized Gamma Distribution. Sensors (Basel) 2016; 16:1107. [PMID: 27438840] [DOI: 10.3390/s16071107]
Abstract
The simple linear iterative clustering (SLIC) method is a recently proposed, popular superpixel algorithm. However, it may generate bad superpixels for synthetic aperture radar (SAR) images due to the effects of speckle and the large dynamic range of pixel intensity. In this paper, an improved SLIC algorithm for SAR images is proposed. This algorithm exploits the likelihood information of SAR image pixel clusters. Specifically, a local clustering scheme combining intensity similarity with spatial proximity is proposed. Additionally, for post-processing, a local edge-evolving scheme that combines spatial context and likelihood information is introduced as an alternative to the connected-components algorithm. To estimate the likelihood information of SAR image clusters, we incorporated a generalized gamma distribution (GΓD). Finally, the superiority of the proposed algorithm was validated using both simulated and real-world SAR images.
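For reference, the baseline SLIC measure that such local clustering schemes build on combines an intensity (or color) difference with a spatial distance normalized by the seed grid interval S and weighted by a compactness parameter m. A minimal sketch of the plain SLIC distance, not the paper's likelihood-based variant; names are illustrative:

```python
import numpy as np

def slic_distance(d_intensity, dx, dy, grid_interval, compactness=10.0):
    """Plain SLIC combined distance between a pixel and a cluster
    center: D = sqrt(d_c^2 + (d_s / S)^2 * m^2), where d_c is the
    intensity difference and d_s the Euclidean spatial distance."""
    d_s = np.sqrt(dx ** 2 + dy ** 2)
    return np.sqrt(d_intensity ** 2 +
                   (d_s / grid_interval) ** 2 * compactness ** 2)
```

A larger compactness value favors regular, compact superpixels, while a smaller one lets superpixels adhere more closely to intensity edges; the SAR variant described above additionally exploits cluster likelihood information.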
25
Lu G, Qin X, Wang D, Muller S, Zhang H, Chen A, Chen ZG, Fei B. Hyperspectral Imaging of Neoplastic Progression in a Mouse Model of Oral Carcinogenesis. Proc SPIE Int Soc Opt Eng 2016; 9788. [PMID: 27656034] [DOI: 10.1117/12.2216553]
Abstract
Hyperspectral imaging (HSI) is an emerging modality for medical applications and holds great potential for noninvasive early detection of cancer. It has been reported that early cancer detection can improve the survival and quality of life of head and neck cancer patients. In this paper, we explored the possibility of differentiating between premalignant lesions and healthy tongue tissue using hyperspectral imaging in a chemical induced oral cancer animal model. We proposed a novel classification algorithm for cancer detection using hyperspectral images. The method detected the dysplastic tissue with an average area under the curve (AUC) of 0.89. The hyperspectral imaging and classification technique may provide a new tool for oral cancer detection.
Affiliation(s)
- Guolan Lu
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA
- Xulei Qin
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Dongsheng Wang
- Department of Hematology and Medical Oncology, Emory University, Atlanta, GA
- Susan Muller
- Department of Otolaryngology, Emory University School of Medicine, Atlanta, GA
- Hongzheng Zhang
- Department of Otolaryngology, Emory University School of Medicine, Atlanta, GA
- Amy Chen
- Department of Otolaryngology, Emory University School of Medicine, Atlanta, GA
- Zhuo Georgia Chen
- Department of Hematology and Medical Oncology, Emory University, Atlanta, GA
- Baowei Fei
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA; Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; Department of Mathematics & Computer Science, Emory University, Atlanta, GA; Winship Cancer Institute of Emory University, Atlanta, GA
|
26
|
Abstract
This paper proposes a method for segmenting the prostate on magnetic resonance (MR) images. A superpixel-based 3D graph cut algorithm is proposed to obtain the prostate surface. Instead of pixels, superpixels are considered as the basic processing units to construct a 3D superpixel-based graph. The superpixels are labeled as prostate or background by minimizing an energy function with graph cuts on this 3D superpixel-based graph. To construct the energy function, we propose a superpixel-based shape data term, an appearance data term, and two superpixel-based smoothness terms. The proposed superpixel-based terms improve the effectiveness and robustness of the prostate segmentation. The graph cut segmentation result is used to initialize a 3D active contour model, overcoming the drawbacks of graph cuts, and the result of the 3D active contour model is in turn used to update the shape and appearance models of the graph cut. Iterating the 3D graph cut and the 3D active contour model makes it possible to jump out of local minima and obtain a smooth prostate surface. On our 43 MR volumes, the proposed method yields a mean Dice ratio of 89.3 ± 1.9%. On the PROMISE12 test data set, our method ranked second, with a mean Dice ratio and standard deviation of 87.0 ± 3.2%. The experimental results show that the proposed method outperforms several state-of-the-art prostate MRI segmentation methods.
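The energy the method above minimizes combines per-superpixel data terms with pairwise smoothness terms on adjacent superpixels. A minimal sketch of such a two-label energy on a toy superpixel graph (the intensities, edges, and weights are made up for illustration; the paper minimizes with graph cuts, while brute force suffices at this scale):

```python
# Two-label superpixel labeling energy: data terms per superpixel plus
# Potts-style smoothness on adjacent superpixels, minimized exhaustively.
from itertools import product

intensity = [0.9, 0.8, 0.7, 0.2, 0.1, 0.15]  # mean intensity per superpixel
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2), (3, 5)]  # adjacency

def data_term(i, label):
    # appearance cost of assigning superpixel i to prostate (1) or background (0)
    return abs(intensity[i] - (1.0 if label else 0.0))

def smooth_term(li, lj):
    # penalize adjacent superpixels that receive different labels
    return 0.3 if li != lj else 0.0

def energy(labels):
    return (sum(data_term(i, l) for i, l in enumerate(labels))
            + sum(smooth_term(labels[i], labels[j]) for i, j in edges))

best = min(product([0, 1], repeat=len(intensity)), key=energy)
print(best)  # bright superpixels labeled 1, dark ones 0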
Affiliation(s)
- Zhiqiang Tian
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329 USA
- Lizhi Liu
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329 USA; Center for Medical Imaging & Image-guided Therapy, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhenfeng Zhang
- Center for Medical Imaging & Image-guided Therapy, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, and Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA 30329 USA. Website: www.feilab.org
|