1
Mei H, Peng J, Wang T, Zhou T, Zhao H, Zhang T, Yang Z. Overcoming the Limits of Cross-Sensitivity: Pattern Recognition Methods for Chemiresistive Gas Sensor Array. Nano-Micro Lett 2024; 16:269. [PMID: 39141168; PMCID: PMC11324646; DOI: 10.1007/s40820-024-01489-z]
Abstract
As information acquisition terminals for artificial olfaction, chemiresistive gas sensors are often troubled by cross-sensitivity, and reducing their cross-response to ambient gases has long been a difficult and important problem in the gas sensing area. Pattern recognition based on sensor arrays is the most prominent way to overcome the cross-sensitivity of gas sensors. Choosing an appropriate pattern recognition method is crucial for enhancing data analysis, reducing errors, improving system reliability, and obtaining better classification or gas concentration prediction results. In this review, we analyze the sensing mechanisms behind the cross-sensitivity of chemiresistive gas sensors. We further examine the types, working principles, characteristics, and applicable gas detection ranges of the pattern recognition algorithms utilized in gas-sensing arrays. We then report, summarize, and evaluate outstanding and novel advances in pattern recognition methods for gas identification, particularly within three crucial domains: food safety, environmental monitoring, and medical diagnosis. Finally, this study anticipates future research prospects in light of the existing landscape and its challenges. We hope this work will make a positive contribution towards mitigating cross-sensitivity in gas-sensitive devices and offer valuable insights for algorithm selection in gas recognition applications.
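The core idea the abstract describes — classifying a gas from the *pattern* of responses across an array rather than from any single cross-sensitive sensor — can be illustrated with a minimal sketch. The response values, gas labels, and 4-sensor layout below are invented for illustration and are not taken from the reviewed paper; nearest-neighbour matching on normalized response vectors stands in for the many pattern recognition algorithms the review surveys.

```python
import numpy as np

# Hypothetical response matrix for a 4-sensor chemiresistive array:
# each row is one exposure, columns are per-sensor responses (e.g. dR/R0).
train = np.array([
    [0.80, 0.10, 0.30, 0.05],   # ethanol exposure
    [0.75, 0.15, 0.35, 0.08],   # ethanol exposure
    [0.20, 0.90, 0.10, 0.40],   # acetone exposure
    [0.25, 0.85, 0.12, 0.45],   # acetone exposure
])
labels = ["ethanol", "ethanol", "acetone", "acetone"]

def normalize(x):
    # Row-normalize so the pattern across sensors, not the absolute
    # magnitude (which tracks concentration), drives classification.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def classify(sample, train, labels):
    # Nearest-neighbour on normalized response patterns: one of the
    # simplest pattern recognition schemes used with sensor arrays.
    d = np.linalg.norm(normalize(train) - normalize(sample), axis=1)
    return labels[int(np.argmin(d))]

print(classify(np.array([0.78, 0.12, 0.33, 0.06]), train, labels))  # -> ethanol
```

A single sensor with the first response value alone could not separate the two gases reliably; the joint pattern across all four sensors can.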
Affiliation(s)
- Haixia Mei
- Key Lab Intelligent Rehabil & Barrier Free Disable (Ministry of Education), Changchun University, Changchun, 130022, People's Republic of China.
- Jingyi Peng
- Key Lab Intelligent Rehabil & Barrier Free Disable (Ministry of Education), Changchun University, Changchun, 130022, People's Republic of China.
- Tao Wang
- Shanghai Key Laboratory of Intelligent Sensing and Detection Technology, School of Mechanical and Power Engineering, East China University of Science and Technology, Shanghai, 200237, People's Republic of China.
- Tingting Zhou
- State Key Laboratory of Integrated Optoelectronics, College of Electronic Science and Engineering, Jilin University, Changchun, 130012, People's Republic of China.
- Hongran Zhao
- State Key Laboratory of Integrated Optoelectronics, College of Electronic Science and Engineering, Jilin University, Changchun, 130012, People's Republic of China.
- Tong Zhang
- State Key Laboratory of Integrated Optoelectronics, College of Electronic Science and Engineering, Jilin University, Changchun, 130012, People's Republic of China.
- Zhi Yang
- National Key Laboratory of Advanced Micro and Nano Manufacture Technology, Department of Micro/Nano Electronics, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, People's Republic of China.
2
Kumar A, Dyer S, Kim J, Li C, Leong PHW, Fulham M, Feng D. Adapting content-based image retrieval techniques for the semantic annotation of medical images. Comput Med Imaging Graph 2016; 49:37-45. [PMID: 26890880; DOI: 10.1016/j.compmedimag.2016.01.001]
Abstract
The automatic annotation of medical images is a prerequisite for building comprehensive semantic archives that can be used to enhance evidence-based diagnosis, physician education, and biomedical research. Annotation also has important applications in the automatic generation of structured radiology reports. Much of the prior research work has focused on annotating images with properties such as the modality of the image, or the biological system or body region being imaged. However, many challenges remain for the annotation of high-level semantic content in medical images (e.g., presence of calcification, vessel obstruction, etc.) due to the difficulty of discovering relationships and associations between low-level image features and high-level semantic concepts. This difficulty is further compounded by the lack of labelled training data. In this paper, we present a method for the automatic semantic annotation of medical images that leverages techniques from content-based image retrieval (CBIR). CBIR is a well-established image search technology that uses quantifiable low-level image features to represent the high-level semantic content depicted in those images. Our method extends CBIR techniques to identify or retrieve a collection of labelled images that have similar low-level features and then uses this collection to determine the best high-level semantic annotations. We demonstrate our annotation method using weighted nearest-neighbour retrieval and multi-class classification to show that our approach is viable regardless of the underlying retrieval strategy. We experimentally compared our method with several well-established baseline techniques (classification and regression) and showed that our method achieved the highest accuracy in the annotation of liver computed tomography (CT) images.
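The annotation-by-retrieval idea above can be sketched in a few lines: retrieve the k labelled images closest to the query in a low-level feature space, then vote for an annotation with inverse-distance weights. The 2-D features and labels below are toy values for illustration only, not the paper's actual feature set.

```python
import numpy as np

# Toy database of labelled images, each represented by a low-level
# feature vector (real CBIR systems use texture/shape/intensity features).
db_features = np.array([
    [0.9, 0.1],  # image annotated "calcification"
    [0.8, 0.2],  # image annotated "calcification"
    [0.1, 0.9],  # image annotated "normal"
    [0.2, 0.8],  # image annotated "normal"
])
db_labels = ["calcification", "calcification", "normal", "normal"]

def annotate(query, k=3):
    # Retrieve the k nearest labelled images in feature space ...
    d = np.linalg.norm(db_features - query, axis=1)
    nearest = np.argsort(d)[:k]
    # ... then weight each retrieved label by inverse distance, so closer
    # matches contribute more to the chosen annotation.
    votes = {}
    for i in nearest:
        votes[db_labels[i]] = votes.get(db_labels[i], 0.0) + 1.0 / (d[i] + 1e-9)
    return max(votes, key=votes.get)

print(annotate(np.array([0.85, 0.15])))  # -> calcification
```

The same retrieval front end could be swapped for a multi-class classifier, which is the comparison the paper's evaluation makes.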
Affiliation(s)
- Ashnil Kumar
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia.
- Shane Dyer
- School of Electrical and Information Engineering, University of Sydney, Australia.
- Jinman Kim
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia.
- Changyang Li
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia.
- Philip H W Leong
- School of Electrical and Information Engineering, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia.
- Michael Fulham
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia; Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, Australia; Sydney Medical School, University of Sydney, Australia.
- Dagan Feng
- School of Information Technologies, University of Sydney, Australia; Institute of Biomedical Engineering and Technology, University of Sydney, Australia; Med-X Research Institute, Shanghai Jiao Tong University, China.
3
Kumar A, Fulham M. Efficient PET-CT image retrieval using graphs embedded into a vector space. Annu Int Conf IEEE Eng Med Biol Soc 2014; 2014:1901-1904. [PMID: 25570350; DOI: 10.1109/embc.2014.6943982]
Abstract
Combined positron emission tomography and computed tomography (PET-CT) produces functional data (from PET) in relation to anatomical context (from CT) and it has made a major contribution to improved cancer diagnosis, tumour localisation, and staging. The ability to retrieve PET-CT images from large archives has potential applications in diagnosis, education, and research. PET-CT image retrieval requires the consideration of modality-specific 3D image features and spatial contextual relationships between features in both modalities. Graph-based retrieval methods have recently been applied to represent contextual relationships during PET-CT image retrieval. However, accurate methods are computationally complex, often requiring offline processing, and are unable to retrieve images at interactive rates. In this paper, we propose a method for PET-CT image retrieval using a vector space embedding of graph descriptors. Our method defines the vector space in terms of the distance between a graph representing a PET-CT image and a set of fixed-sized prototype graphs; each vector component measures the dissimilarity of the graph and a prototype. Our evaluation shows that our method is significantly faster (≈800× speedup, p < 0.05) than retrieval using the graph-edit distance while maintaining comparable precision (5% difference, p > 0.05).
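The prototype-based embedding described above can be sketched abstractly: each graph becomes a vector whose i-th component is its dissimilarity to prototype graph i, so retrieval reduces to cheap vector comparisons. A real system would use an (approximate) graph edit distance; the toy dissimilarity over node/edge counts below, and the prototype values, are stand-ins chosen only to make the idea concrete.

```python
import numpy as np

def graph_dissimilarity(g1, g2):
    # Crude stand-in for graph edit distance: each "graph" is just a
    # (num_nodes, num_edges) pair, compared by absolute differences.
    return abs(g1[0] - g2[0]) + abs(g1[1] - g2[1])

# A fixed set of prototype graphs defines the axes of the vector space.
prototypes = [(5, 4), (10, 15), (20, 30)]

def embed(g):
    # Vector-space embedding: one component per prototype, each the
    # dissimilarity of g to that prototype.
    return np.array([graph_dissimilarity(g, p) for p in prototypes])

# Retrieval compares embedded vectors instead of matching graphs pairwise,
# which is what makes interactive-rate search feasible.
query = embed((6, 5))
database = [embed((5, 4)), embed((19, 28))]
best = int(np.argmin([np.linalg.norm(query - v) for v in database]))
print(best)  # -> 0 (the structurally similar small graph)
```

The expensive graph comparisons happen once per database graph at indexing time; at query time only one embedding plus vector distances are needed.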
4
Kumar A, Kim J, Wen L, Fulham M, Feng D. A graph-based approach for the retrieval of multi-modality medical images. Med Image Anal 2013; 18:330-42. [PMID: 24378541; DOI: 10.1016/j.media.2013.11.003]
Abstract
In this paper, we address the retrieval of multi-modality medical volumes, which consist of two different imaging modalities, acquired sequentially, from the same scanner. One such example, positron emission tomography and computed tomography (PET-CT), provides physicians with complementary functional and anatomical features as well as spatial relationships and has led to improved cancer diagnosis, localisation, and staging. The challenge of multi-modality volume retrieval for cancer patients lies in representing the complementary geometric and topologic attributes between tumours and organs. These attributes and relationships, which are used for tumour staging and classification, can be formulated as a graph. It has been demonstrated that graph-based methods have high accuracy for retrieval by spatial similarity. However, naïvely representing all relationships on a complete graph obscures the structure of the tumour-anatomy relationships. We propose a new graph structure derived from complete graphs that structurally constrains the edges connected to tumour vertices based upon the spatial proximity of tumours and organs. This enables retrieval on the basis of tumour localisation. We also present a similarity matching algorithm that accounts for different feature sets for graph elements from different imaging modalities. Our method emphasises the relationships between a tumour and related organs, while still modelling patient-specific anatomical variations. Constraining tumours to related anatomical structures improves the discrimination potential of graphs, making it easier to retrieve similar images based on tumour location. We evaluated our retrieval methodology on a dataset of clinical PET-CT volumes. Our results showed that our method enabled the retrieval of multi-modality images using spatial features. Our graph-based retrieval algorithm achieved a higher precision than several other retrieval techniques: gray-level histograms as well as state-of-the-art methods such as visual words using the scale-invariant feature transform (SIFT) and relational matrices representing the spatial arrangements of objects.
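The proximity constraint at the heart of the proposed graph structure can be illustrated with a toy: a tumour vertex is connected only to organs whose centroid lies within a distance threshold, rather than to every organ as in a complete graph. Coordinates, organ names, and the threshold below are illustrative, not from the paper's data.

```python
import math

# Hypothetical organ centroids and a tumour centroid in scanner
# coordinates (arbitrary units).
organs = {"liver": (10.0, 5.0, 3.0), "lung": (2.0, 20.0, 8.0)}
tumour = (11.0, 6.0, 3.5)

def build_edges(tumour, organs, threshold=5.0):
    # Keep only edges to spatially proximate organs, pruning the
    # complete graph down to tumour-anatomy relationships that matter
    # for localisation.
    edges = []
    for name, centroid in organs.items():
        d = math.dist(tumour, centroid)
        if d <= threshold:
            edges.append((name, round(d, 2)))
    return edges

print(build_edges(tumour, organs))  # -> [('liver', 1.5)]
```

The pruning is what improves discrimination: two patients with a liver tumour produce similar constrained graphs even if their distant anatomy differs.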
Affiliation(s)
- Ashnil Kumar
- Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Sydney, Australia.
- Jinman Kim
- Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Sydney, Australia.
- Lingfeng Wen
- Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Sydney, Australia; Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, Australia.
- Michael Fulham
- Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, Australia; Sydney Medical School, University of Sydney, Sydney, Australia.
- Dagan Feng
- Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Sydney, Australia; Med-X Research Institute, Shanghai Jiao Tong University, China.
5
Kumar A, Kim J, Cai W, Fulham M, Feng D. Content-based medical image retrieval: a survey of applications to multidimensional and multimodality data. J Digit Imaging 2013; 26:1025-39. [PMID: 23846532; PMCID: PMC3824925; DOI: 10.1007/s10278-013-9619-2]
Abstract
Medical imaging is fundamental to modern healthcare, and its widespread use has resulted in the creation of image databases, as well as picture archiving and communication systems. These repositories now contain images from a diverse range of modalities, multidimensional (three-dimensional or time-varying) images, as well as co-aligned multimodality images. These image collections offer the opportunity for evidence-based diagnosis, teaching, and research; for these applications, there is a requirement for appropriate methods to search the collections for images that have characteristics similar to the case(s) of interest. Content-based image retrieval (CBIR) is an image search technique that complements the conventional text-based retrieval of images by using visual features, such as color, texture, and shape, as search criteria. Medical CBIR is an established field of study that is beginning to realize promise when applied to multidimensional and multimodality medical data. In this paper, we present a review of state-of-the-art medical CBIR approaches in five main categories: two-dimensional image retrieval, retrieval of images with three or more dimensions, the use of nonimage data to enhance the retrieval, multimodality image retrieval, and retrieval from diverse datasets. We use these categories as a framework for discussing the state of the art, focusing on the characteristics and modalities of the information used during medical image retrieval.
Affiliation(s)
- Ashnil Kumar
- Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Building J12, Sydney, NSW, 2006, Australia.