1
Cao Y, Ikenoya Y, Kawaguchi T, Hashimoto S, Morino T. A Real-Time Application for the Analysis of Multi-Purpose Vending Machines with Machine Learning. Sensors (Basel) 2023; 23:1935. PMID: 36850535; PMCID: PMC9967936; DOI: 10.3390/s23041935.
Abstract
With the development of mobile payment, the Internet of Things (IoT) and artificial intelligence (AI), smart vending machines, a form of unmanned retail, are moving towards a new future. However, the scarcity of data in vending-machine scenarios hinders the development of such unmanned services. This paper focuses on using machine learning on small data to detect the placement of the spiral rack, indicated by the position of its end, which is the most crucial factor in causing a product to get stuck in a vending machine during dispensing. To this end, we propose a k-means clustering-based method for splitting small data that is unevenly distributed both in number and in features due to real-world constraints, and design a remarkably lightweight convolutional neural network (CNN) as a classifier model for the benefit of real-time application. Visual interpretation shows our data splitting together with the CNN to be effective: the trained model is robust to changes in products and reaches an accuracy of 100%. We also design a single-board-computer-based handheld device and deploy the trained model on it to demonstrate the feasibility of a real-time application.
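The abstract names the method but gives no implementation details; a cluster-stratified split of the kind described can be sketched in pure Python (hypothetical function names; a real pipeline would more likely use scikit-learn's KMeans):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means on feature vectors given as lists of floats."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest centroid (squared Euclidean distance)
        labels = [min(range(k),
                      key=lambda c: sum((p - q) ** 2
                                        for p, q in zip(pt, centroids[c])))
                  for pt in points]
        # recompute each centroid as the mean of its cluster members
        for c in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

def cluster_stratified_split(points, k=3, test_frac=0.25, seed=0):
    """Split indices so every k-means cluster contributes to train and test,
    which keeps an unevenly distributed small dataset represented in both."""
    labels = kmeans(points, k, seed=seed)
    rng = random.Random(seed)
    train, test = [], []
    for c in range(k):
        idx = [i for i, lab in enumerate(labels) if lab == c]
        rng.shuffle(idx)
        n_test = max(1, int(len(idx) * test_frac))  # at least one test item
        test.extend(idx[:n_test])
        train.extend(idx[n_test:])
    return sorted(train), sorted(test)
```

The point of splitting per cluster rather than globally is that a plain random split on a small, skewed dataset can leave an entire feature mode out of the training set.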
Affiliation(s)
- Yu Cao
- Program of Intelligence and Control, Cluster of Electronics and Mechanical Engineering, School of Science and Technology, Gunma University, 1-5-1 Tenjin-cho, Kiryu 376-8515, Japan
- Yudai Ikenoya
- Program of Intelligence and Control, Cluster of Electronics and Mechanical Engineering, School of Science and Technology, Gunma University, 1-5-1 Tenjin-cho, Kiryu 376-8515, Japan
- Takahiro Kawaguchi
- Program of Intelligence and Control, Cluster of Electronics and Mechanical Engineering, School of Science and Technology, Gunma University, 1-5-1 Tenjin-cho, Kiryu 376-8515, Japan
- Seiji Hashimoto
- Program of Intelligence and Control, Cluster of Electronics and Mechanical Engineering, School of Science and Technology, Gunma University, 1-5-1 Tenjin-cho, Kiryu 376-8515, Japan
2
Wan Y, Zhou H, Zhang X. An Interpretation Architecture for Deep Learning Models with the Application of COVID-19 Diagnosis. Entropy (Basel) 2021; 23:204. PMID: 33562309; PMCID: PMC7916048; DOI: 10.3390/e23020204.
Abstract
Coronavirus disease 2019 (COVID-19) has become one of the threats to the world. Computed tomography (CT) is an informative tool for the diagnosis of COVID-19 patients. Many deep learning approaches on CT images have been proposed and have shown promising performance. However, due to the high complexity and non-transparency of deep models, explaining the diagnosis process is challenging, making it hard to evaluate whether such approaches are reliable. In this paper, we propose a visual interpretation architecture for explaining deep learning models and apply it to COVID-19 diagnosis. The architecture provides a comprehensive interpretation of the deep model from different perspectives, including the training trends, diagnostic performance, learned features, feature extractors, hidden layers, and the support regions for diagnostic decisions. With this interpretation architecture, researchers can compare and explain classification performance, gain insight into what the deep model learned from images, and obtain support for diagnostic decisions. Our deep model achieves 94.75%, 93.22%, 96.69%, 97.27%, and 91.88% in accuracy, sensitivity, specificity, positive predictive value, and negative predictive value, respectively, which are 8.30%, 4.32%, 13.33%, 10.25%, and 6.19% higher than those of the compared traditional methods. The features visualized in 2-D and 3-D spaces explain the superiority of our deep model. Our interpretation architecture allows researchers to understand more about how and why deep models work, and can serve as an interpretation solution for any convolutional-neural-network-based deep learning model. It can also help deep learning methods take a step forward in clinical COVID-19 diagnosis.
Affiliation(s)
- Yuchai Wan
- Beijing Key Laboratory of Big Data Technology for Food Safety, School of Computer Science, Beijing Technology and Business University, Beijing 100048, China
3
Wei FF, Lyu R, He WW, Wang L, Cheng X, Zhang XB, Shi TT, Jin L. [Extraction of distribution information of Angelicae sinensis plants in Weiyuan county based on remote sensing technology]. Zhongguo Zhong Yao Za Zhi 2019; 44:4125-4128. PMID: 31872688; DOI: 10.19540/j.cnki.cjcmm.20190731.110.
Abstract
Because Angelica sinensis cultivation requires large amounts of nutrients, and to prevent pests and diseases while balancing the annually shrinking planting area against the supply of and demand for A. sinensis, plantations adopt a rotation mode. Taking Weiyuan county of Gansu province as the research scope and GF-1 satellite data as the data source, this paper uses remote sensing technology combined with field survey results to explore an effective visual interpretation method for extracting the A. sinensis planting area. Samples were selected to generate spectra for the different feature types, and the characteristics distinguishing A. sinensis from other features in the remote sensing images were analyzed, so that the A. sinensis planting plots could be extracted and verified in the images. The results showed that the accuracy verification value of the visual interpretation method was 95.85%. The visual interpretation method can therefore effectively extract the A. sinensis planting plots within the research scope and provide a comprehensive grasp of the spatial distribution of A. sinensis.
Affiliation(s)
- Fei-Fei Wei
- College of Pharmacy, Gansu University of Chinese Medicine, Lanzhou 730000, China
- Rong Lyu
- College of Pharmacy, Gansu University of Chinese Medicine, Lanzhou 730000, China
- Wei-Wei He
- College of Pharmacy, Gansu University of Chinese Medicine, Lanzhou 730000, China
- Li Wang
- College of Pharmacy, Gansu University of Chinese Medicine, Lanzhou 730000, China
- Xi Cheng
- College of Pharmacy, Gansu University of Chinese Medicine, Lanzhou 730000, China
- Xiao-Bo Zhang
- State Key Laboratory Breeding Base of Dao-di Herbs, Chinese Medicine Resource Center, China Academy of Chinese Medical Sciences, Beijing 100700, China
- Ting-Ting Shi
- State Key Laboratory Breeding Base of Dao-di Herbs, Chinese Medicine Resource Center, China Academy of Chinese Medical Sciences, Beijing 100700, China
- Ling Jin
- College of Pharmacy, Gansu University of Chinese Medicine, Lanzhou 730000, China; Research Institute of Chinese (Tibetan) Medicinal Resources, Lanzhou 730000, China
4
Bai JQ, Gao S, Wang PF, Wang L, Liu WW, Wang XP, Zhang XB, Shi TT. [Bletilla striata planting area in Ningshan county extraction based on multi-temporal remote sensing images]. Zhongguo Zhong Yao Za Zhi 2019; 44:4129-4133. PMID: 31872689; DOI: 10.19540/j.cnki.cjcmm.20190731.112.
Abstract
Traditional Chinese medicinal plants are grown in mountainous areas with suitable natural conditions. The planting areas have complex terrain and the plots are mostly irregularly shaped, so it is difficult to calculate the planting area accurately with traditional survey methods. Extracting Chinese herbal medicine planting areas by combining remote sensing and GIS technology is of great significance for the rational development and utilization of traditional Chinese medicine resources. Taking Bletilla striata planting in Ningshan county of Shaanxi province as an example, a county-level method for extracting the planting area of traditional Chinese medicine was studied. High-resolution ZY-3 and GF-1 multi-spectral, multi-temporal remote sensing images were used as data sources. Through field sampling, samples such as B. striata, cultivated land, forest land, water bodies, artificial surfaces and alpine meadow were collected. The spectral, texture and shape features of remotely identifiable objects in the different planting areas, cultivated land and vegetable sheds were analyzed, confusable ground objects were eliminated, and interpretation marks were established. Visual interpretation was used to extract the B. striata planting areas, and the planting area was calculated with GIS technology. Using this method with the high-resolution ZY-3 and GF-1 multi-spectral, multi-temporal remote sensing image data, a planting area of 403.05 mu (about 26.9 ha) was extracted, showing that the method can effectively extract the B. striata planting area in the research region.
Affiliation(s)
- Ji-Qing Bai
- Shaanxi University of Chinese Medicine, Xianyang 712046, China; Shaanxi Quality Monitoring and Technology Service Center for Chinese Materia Medica Raw Materials, Xianyang 712046, China
- Su Gao
- Shaanxi University of Chinese Medicine, Xianyang 712046, China
- Peng-Fei Wang
- Shaanxi University of Chinese Medicine, Xianyang 712046, China
- Lin Wang
- Ningshan County Hospital of Traditional Chinese Medicine, Ningshan 711600, China
- Wei-Wei Liu
- Ningshan County Hospital of Traditional Chinese Medicine, Ningshan 711600, China
- Xiao-Ping Wang
- Shaanxi University of Chinese Medicine, Xianyang 712046, China
- Xiao-Bo Zhang
- State Key Laboratory Breeding Base of Dao-di Herbs, National Resource Center for Chinese Materia Medica, China Academy of Chinese Medical Sciences, Beijing 100700, China
- Ting-Ting Shi
- State Key Laboratory Breeding Base of Dao-di Herbs, National Resource Center for Chinese Materia Medica, China Academy of Chinese Medical Sciences, Beijing 100700, China
5
Chen Y, Jiao J, Wei Y, Zhao H, Yu W, Cao B, Xu H, Yan F, Wu D, Li H. Accuracy Assessment of the Planar Morphology of Valley Bank Gullies Extracted with High Resolution Remote Sensing Imagery on the Loess Plateau, China. Int J Environ Res Public Health 2019; 16:E369. PMID: 30696108; PMCID: PMC6388579; DOI: 10.3390/ijerph16030369.
Abstract
Gully erosion is a serious environmental problem worldwide, causing soil loss, land degradation, silting of reservoirs and even catastrophic flooding. Mapping gully features from remote sensing imagery is crucial for understanding gully erosion mechanisms, predicting development processes and assessing environmental and socio-economic effects over large areas, especially under increasing global climate extremes and intensive human activities. However, the potential of using increasingly available high-resolution remote sensing imagery to detect and delineate gullies has been little evaluated. Hence, 130 gullies along a transect were selected from a typical watershed in the hilly and gully region of the Chinese Loess Plateau and visually interpreted from a Pleiades-1B satellite image (panchromatic-sharpened image at 0.5 m resolution fused with 2.0 m multi-spectral bands). The interpreted gullies were compared with measurements obtained in the field using a differential global positioning system (GPS). Results showed that gullies could generally be interpreted accurately from the image: the average relative errors of gully area and gully perimeter were 11.1% and 8.9%, respectively, and 74.2% and 82.3% of the relative errors for gully area and perimeter were within 15%. Including field measurements of gullies in imagery-based gully studies is nonetheless still recommended. To further judge whether gullies were mapped accurately, a standard adopting a one-pixel tolerance along the mapped gully edges was proposed and proved practical. Correlation analysis indicated that larger gullies could be interpreted more accurately, whereas increasing gully shape complexity decreased interpretation accuracy. The overall lower vegetation coverage in winter, due to the withering and falling of vegetation, rarely affected gully interpretation. Furthermore, gully detectability on remote sensing imagery in this region was lower than in other parts of the world because of the overall broken topography of the Loess Plateau, so images of higher resolution than normally perceived are needed when mapping erosion features here. Taking these influencing factors (gully dimension and shape complexity, vegetation coverage, topography) into account will help in selecting appropriate imagery and gullies (as study objects) in future imagery-based gully studies. Finally, two linear regression models were built to correct the visually extracted gully area (Aip, m²) and gully perimeter (Pip, m) by relating them to the measured area (Ams, m²) and perimeter (Pms, m): Ams = 1.021Aip + 0.139 and Pms = 0.949Pip + 0.722. These models can help improve the accuracy of interpretation results, and further support accurate estimation of gully development and the development of more effective automated gully extraction methods on the Loess Plateau of China.
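The two correction models quoted in the abstract are plain linear maps and can be applied directly; the sketch below (hypothetical function name) simply encodes the published coefficients:

```python
def correct_gully_measurements(area_ip, perim_ip):
    """Correct visually interpreted gully measurements using the paper's
    fitted linear models: Ams = 1.021*Aip + 0.139, Pms = 0.949*Pip + 0.722.

    area_ip: interpreted gully area (m^2); perim_ip: interpreted perimeter (m).
    Returns estimates of the field-measured area (m^2) and perimeter (m).
    """
    area_ms = 1.021 * area_ip + 0.139
    perim_ms = 0.949 * perim_ip + 0.722
    return area_ms, perim_ms
```

Note how the slopes encode the reported biases: interpreted areas slightly underestimate the measured ones (slope above 1), while interpreted perimeters slightly overestimate them (slope below 1).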
Affiliation(s)
- Yixian Chen
- State Key Laboratory of Soil Erosion and Dryland Farming on the Loess Plateau, Institute of Soil and Water Conservation, Chinese Academy of Sciences and Ministry of Water Resources, Yangling 712100, Shaanxi, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Juying Jiao
- State Key Laboratory of Soil Erosion and Dryland Farming on the Loess Plateau, Institute of Soil and Water Conservation, Chinese Academy of Sciences and Ministry of Water Resources, Yangling 712100, Shaanxi, China
- Institute of Soil and Water Conservation, Northwest A&F University, Yangling 712100, Shaanxi, China
- Yanhong Wei
- State Key Laboratory of Soil Erosion and Dryland Farming on the Loess Plateau, Institute of Soil and Water Conservation, Chinese Academy of Sciences and Ministry of Water Resources, Yangling 712100, Shaanxi, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Hengkang Zhao
- Institute of Soil and Water Conservation, Northwest A&F University, Yangling 712100, Shaanxi, China
- Weijie Yu
- Institute of Soil and Water Conservation, Northwest A&F University, Yangling 712100, Shaanxi, China
- Binting Cao
- Institute of Soil and Water Conservation, Northwest A&F University, Yangling 712100, Shaanxi, China
- Haiyan Xu
- Institute of Soil and Water Conservation, Northwest A&F University, Yangling 712100, Shaanxi, China
- Fangchen Yan
- Institute of Soil and Water Conservation, Northwest A&F University, Yangling 712100, Shaanxi, China
- Duoyang Wu
- Institute of Soil and Water Conservation, Northwest A&F University, Yangling 712100, Shaanxi, China
- Hang Li
- Institute of Soil and Water Conservation, Northwest A&F University, Yangling 712100, Shaanxi, China
6
Lesiv M, Laso Bayas JC, See L, Duerauer M, Dahlia D, Durando N, Hazarika R, Kumar Sahariah P, Vakolyuk M, Blyshchyk V, Bilous A, Perez‐Hoyos A, Gengler S, Prestele R, Bilous S, Akhtar IUH, Singha K, Choudhury SB, Chetri T, Malek Ž, Bungnamei K, Saikia A, Sahariah D, Narzary W, Danylo O, Sturn T, Karner M, McCallum I, Schepaschenko D, Moltchanova E, Fraisl D, Moorthy I, Fritz S. Estimating the global distribution of field size using crowdsourcing. Glob Chang Biol 2019; 25:174-186. PMID: 30549201; PMCID: PMC7379266; DOI: 10.1111/gcb.14492.
Abstract
There is increasing evidence that smallholder farms contribute substantially to food production globally, yet spatially explicit data on agricultural field sizes are currently lacking. Automated field size delineation using remote sensing and the estimation of average farm size at the subnational level using census data are two approaches that have been used; however, both have limitations: automatic field size delineation using remote sensing has not yet been implemented at a global scale, while the spatial resolution is very coarse when using census data. This paper demonstrates a unique approach to quantifying and mapping agricultural field size globally using crowdsourcing. A campaign was run in June 2017 in which participants were asked to visually interpret very high resolution satellite imagery from Google Maps and Bing using the Geo-Wiki application. During the campaign, participants collected field size data for 130 K unique locations around the globe. Using this sample, we have produced the most accurate global field size map to date and estimated the percentage of different field sizes, ranging from very small to very large, in agricultural areas at global, continental, and national levels. The results show that smallholder farms occupy up to 40% of agricultural areas globally, which means that, potentially, there are many more smallholder farms than the two current global estimates of 12% and 24% suggest. The global field size map and the crowdsourced data set are openly available and can be used for integrated assessment modeling, comparative studies of agricultural dynamics across different contexts, training and validation of remote sensing field size delineation, and potential contributions to the Sustainable Development Goal of ending hunger, achieving food security and improved nutrition, and promoting sustainable agriculture.
Affiliation(s)
- Myroslava Lesiv
- International Institute for Applied Systems Analysis, ESM, Laxenburg, Austria
- Linda See
- International Institute for Applied Systems Analysis, ESM, Laxenburg, Austria
- Martina Duerauer
- International Institute for Applied Systems Analysis, ESM, Laxenburg, Austria
- Domian Dahlia
- International Institute for Applied Systems Analysis, ESM, Laxenburg, Austria
- Mar'yana Vakolyuk
- Department of Energy and Mass Exchange in Geosystems, State Institution Scientific Centre for Aerospace Research of the Earth, Institute of Geological Science, National Academy of Sciences of Ukraine, Kyiv, Ukraine
- Volodymyr Blyshchyk
- Forest Management, Nacional'nyj Universytet Bioresursiv i Pryrodokorystuvannya Ukrayiny, Kyiv, Ukraine
- Andrii Bilous
- Department of Energy and Mass Exchange in Geosystems, State Institution Scientific Centre for Aerospace Research of the Earth, Institute of Geological Science, National Academy of Sciences of Ukraine, Kyiv, Ukraine
- Ana Perez‐Hoyos
- European Commission Joint Research Centre, Ispra Sector, Ispra, Italy
- Sarah Gengler
- Environmental Sciences, Université catholique de Louvain, Earth and Life Institute, Louvain‐la‐Neuve, Belgium
- Reinhard Prestele
- Department of Earth Sciences, Environmental Geography Group, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Svitlana Bilous
- Forest Management, Nacional'nyj Universytet Bioresursiv i Pryrodokorystuvannya Ukrayiny, Kyiv, Ukraine
- Ibrar ul Hassan Akhtar
- Department of Meteorology, COMSATS University, Islamabad, Pakistan
- Pakistan Space and Upper Atmosphere Research Commission, Islamabad, Pakistan
- Žiga Malek
- Vrije Universiteit Amsterdam, Faculteit Economische wetenschappen en Bedrijfskunde, Amsterdam, The Netherlands
- Olha Danylo
- International Institute for Applied Systems Analysis, ESM, Laxenburg, Austria
- Tobias Sturn
- International Institute for Applied Systems Analysis, ESM, Laxenburg, Austria
- Mathias Karner
- International Institute for Applied Systems Analysis, ESM, Laxenburg, Austria
- Ian McCallum
- International Institute for Applied Systems Analysis, ESM, Laxenburg, Austria
- Dmitry Schepaschenko
- International Institute for Applied Systems Analysis, ESM, Laxenburg, Austria
- Soil Science, Moscow State Forest University, Moscow, Russia
- Dilek Fraisl
- International Institute for Applied Systems Analysis, ESM, Laxenburg, Austria
- Inian Moorthy
- International Institute for Applied Systems Analysis, ESM, Laxenburg, Austria
- Steffen Fritz
- International Institute for Applied Systems Analysis, ESM, Laxenburg, Austria
7
Abstract
Computational models of vision have advanced rapidly in recent years, rivalling human-level performance in some areas. Much of the progress to date has focused on analysing the visual scene at the object level: the recognition and localization of objects in the scene. Human understanding of images reaches a richer and deeper level both 'below' the object level, such as identifying and localizing object parts and sub-parts, and 'above' the object level, such as identifying object relations and agents with their actions and interactions. In both cases, understanding depends on recovering meaningful structures in the image and their components, properties and inter-relations, a process referred to here as 'image interpretation'. In this paper, we describe recent directions, based on human and computer vision studies, towards human-like image interpretation beyond the reach of current schemes, both below the object level and at the level of meaningful configurations beyond the recognition of individual objects, in particular interactions between two people in close contact. In both cases the recognition process depends on the detailed interpretation of so-called 'minimal images', and at both levels recognition depends on combining 'bottom-up' processing, proceeding from low to higher levels of a processing hierarchy, with 'top-down' processing, proceeding from higher to lower stages of visual analysis.
Affiliation(s)
- Guy Ben-Yosef
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
- Shimon Ullman
- Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 7610001, Israel
8
Li Z, Han R, Yan Z, Li L, Feng Z. Antinuclear antibodies detection: A comparative study between automated recognition and conventional visual interpretation. J Clin Lab Anal 2018; 33:e22619. PMID: 30030865; DOI: 10.1002/jcla.22619.
Abstract
BACKGROUND: The indirect immunofluorescence assay (IIFA) for the detection of antinuclear antibodies (ANA) was first described in 1958 and is still considered the reference method for ANA screening. An automated processing and recognition system for standardized and efficient ANA interpretation on human epithelial (HEp-2) cell-based immunofluorescence (IIF; EUROPattern Suite, Euroimmun) is now available in China. METHODS: In this study, the performance of this novel system for positive/negative classification, pattern recognition (including homogeneous, speckled, nucleolar, nuclear dots, cytoplasmic, and centromere patterns) and titer evaluation was assessed by comparison with visual interpretation. RESULTS: Across the 3681 collected samples, agreement between visual and automated examination on positive/negative discrimination was 98.7% (κ = 0.973). In sera with a single pattern, correct pattern recognition was observed in 94.6% of samples, with the efficiency of automated recognition varying across the different patterns. The automatically determined patterns were correct and complete in 1071 of 1620 cases, and correct and meaningful but not complete ("main pattern") in another 405 cases, giving main-pattern recognition in 91.1% of all cases. For titer evaluation, results within one titer step were considered consistent. Among the 1603 sera positive by both visual and automated evaluation, titers were consistent for 1514 samples (94.4%). CONCLUSION: Given these performance characteristics, its high degree of automation, and the reliability of its results, the EUROPattern system is suitable for clinical use and may help clinical laboratories standardize IIF evaluation.
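Agreement figures of the kind reported here (raw percent agreement plus Cohen's κ) come from a standard 2x2 cross-tabulation of the two reading methods. A minimal sketch, using hypothetical counts rather than the study's actual table:

```python
def percent_agreement_and_kappa(tp, fn, fp, tn):
    """Observed agreement and Cohen's kappa for two binary raters,
    given a 2x2 table (rows: rater A pos/neg, cols: rater B pos/neg)."""
    n = tp + fn + fp + tn
    p_obs = (tp + tn) / n                       # observed agreement
    p_pos = ((tp + fn) / n) * ((tp + fp) / n)   # chance agreement on positive
    p_neg = ((fp + tn) / n) * ((fn + tn) / n)   # chance agreement on negative
    p_exp = p_pos + p_neg
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return p_obs, kappa
```

κ discounts the agreement expected by chance alone, which is why studies report it alongside the raw percentage.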
Affiliation(s)
- ZhiYan Li
- Department of Clinical Laboratory, Peking University First Hospital, Beijing, China
- RuiLin Han
- Department of Clinical Laboratory, Peking University First Hospital, Beijing, China
- ZhenLin Yan
- Department of Clinical Laboratory, Peking University First Hospital, Beijing, China
- LiJuan Li
- Department of Clinical Laboratory, Peking University First Hospital, Beijing, China
- ZhenRu Feng
- Department of Clinical Laboratory, Peking University First Hospital, Beijing, China
9
Na L, Zhang J, Bao Y, Bao Y, Na R, Tong S, Si A. Himawari-8 Satellite Based Dynamic Monitoring of Grassland Fire in China-Mongolia Border Regions. Sensors (Basel) 2018; 18:E276. PMID: 29346289; PMCID: PMC5795838; DOI: 10.3390/s18010276.
Abstract
In this study, we used bands 7, 4, and 3 of Advanced Himawari Imager (AHI) data, combined with a threshold algorithm and a visual interpretation method, to monitor the entire course of grassland fires that occurred in the China-Mongolia border regions between 05:40 (UTC) on 19 April and 13:50 (UTC) on 21 April 2016. The AHI monitoring results were evaluated against fire point product data, wind field data, and environmental information for the area in which the fire took place. The results show that the grassland fire burned for two days and eight hours with a total burned area of about 2708.29 km². It spread mainly from the northwest to the southeast, with a maximum burning speed of 20.9 m/s, a minimum speed of 2.52 m/s, and an average speed of about 12.07 m/s. AHI data can thus not only quickly and accurately track the dynamic development of a grassland fire but also estimate its spread speed and direction. The evaluation of the fire monitoring results reveals that AHI data, with high precision and timeliness, can be highly consistent with the actual situation.
Affiliation(s)
- Li Na
- School of Environment, Northeast Normal University, Changchun 130024, China
- Key Laboratory for Vegetation Ecology, Ministry of Education, Changchun 130024, China
- Jiquan Zhang
- School of Environment, Northeast Normal University, Changchun 130024, China
- Key Laboratory for Vegetation Ecology, Ministry of Education, Changchun 130024, China
- Yulong Bao
- College of Geography, Inner Mongolia Normal University, Hohhot 010022, China
- Yongbin Bao
- School of Environment, Northeast Normal University, Changchun 130024, China
- Risu Na
- School of Geographical Sciences, Northeast Normal University, Changchun 130024, China
- Siqin Tong
- School of Environment, Northeast Normal University, Changchun 130024, China
- Key Laboratory for Vegetation Ecology, Ministry of Education, Changchun 130024, China
- Alu Si
- School of Environment, Northeast Normal University, Changchun 130024, China
10
Liao CC, Qin YY, Tan XH, Hu JJ, Tang Q, Rong Y, Cen H, Li LQ. Predictive value of interim PET/CT visual interpretation in the prognosis of patients with aggressive non-Hodgkin's lymphoma. Onco Targets Ther 2017; 10:5727-5738. PMID: 29238205; PMCID: PMC5716325; DOI: 10.2147/ott.s154995.
Abstract
OBJECTIVE: To evaluate the prognostic value of positron emission tomography (PET)/computed tomography (CT) visual interpretation in patients with aggressive non-Hodgkin's lymphoma (NHL) through a meta-analysis and systematic review. METHODS: Using the PubMed, Embase, and Web of Science databases, we systematically reviewed studies published up to May 2017 that used mid-chemotherapy visual evaluation to assess the prognosis of aggressive NHL. Prospective and retrospective studies assessing progression-free survival (PFS) and overall survival (OS) were included. We used the hazard ratio (HR) to determine the value of the Deauville criteria and the International Harmonization Project (IHP) criteria for predicting survival. Subgroup analyses were performed by the number of chemotherapy cycles before the interim evaluation and by the visual evaluation method. RESULTS: A total of 11 studies were included. PFS (HR = 2.93, 95% confidence interval [CI]: 2.93–3.90, p < 0.0001) and OS (HR = 2.55, 95% CI: 1.76–3.68, p < 0.0001) of PET/CT-positive patients were significantly lower when determined by the visual method. In subgroup analysis, the IHP criteria, the Deauville criteria, and the group with no standard interpretation predicted PFS; the IHP criteria and the group with no standard interpretation predicted OS. With PET/CT, IHP, and Deauville 5-point criteria, the PFS of patients receiving 2–4 cycles of chemotherapy before PET/CT was significantly lower than that of PET/CT-negative patients. No significant difference in OS was observed when patients received 3 or fewer cycles of chemotherapy before PET/CT, though OS was significantly lower in patients receiving more than 3 cycles. CONCLUSION: The IHP and Deauville criteria are currently the common standards for PET/CT visual evaluation. Interim PET/CT analysis after 3–4 chemotherapy cycles can predict disease prognosis. Large-scale prospective clinical trials are needed to confirm whether PET/CT analysis can be used as an indication for changing a treatment strategy.
Collapse
Affiliation(s)
- Yun-Ying Qin
- Department of Radiology, Affiliated Tumor Hospital of Guangxi Medical University
- Jia-Jie Hu
- Department of the Communist Youth League, Basic Medical College of Guangxi Medical University
- Qi Tang
- Department of Radiology, Affiliated Tumor Hospital of Guangxi Medical University
- Le-Qun Li
- Department of Hepatobiliary Surgery, Affiliated Tumor Hospital of Guangxi Medical University; Department of Liver Cancer Treatment, Guangxi Liver Cancer Diagnosis and Treatment Engineering and Technology Research Center, Nanning, People's Republic of China
Collapse
|
11
|
Booij J, Dubroff J, Pryma D, Yu J, Agarwal R, Lakhani P, Kuo PH. Diagnostic Performance of the Visual Reading of 123I-Ioflupane SPECT Images With or Without Quantification in Patients With Movement Disorders or Dementia. J Nucl Med 2017; 58:1821-1826. [PMID: 28473597 DOI: 10.2967/jnumed.116.189266] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2017] [Accepted: 04/19/2017] [Indexed: 11/16/2022] Open
Abstract
Visual interpretation of 123I-ioflupane SPECT images has high diagnostic accuracy for differentiating parkinsonian syndromes (PS) from essential tremor, and probable dementia with Lewy bodies (DLB) from Alzheimer disease. In this study, we investigated the impact on accuracy and reader confidence of adding image quantification, in comparison with visual interpretation alone. Methods: We collected 304 123I-ioflupane images from 3 trials that included subjects with a clinical diagnosis of PS, non-PS (mainly essential tremor), probable DLB, and non-DLB (mainly Alzheimer disease). Images were reconstructed with standardized parameters before striatal binding ratios were quantified against a normal database. Images were assessed by 5 nuclear medicine physicians who had limited prior experience with 123I-ioflupane interpretation. In 2 readings at least 1 mo apart, readers performed either a visual interpretation alone or a combined reading (i.e., visual plus quantitative data were available). Readers were asked to rate their confidence in each image interpretation and to judge scans as easy or difficult to read. Diagnostic accuracy was assessed by comparing image results with the standard of truth (i.e., diagnosis at follow-up), measuring the positive percentage of agreement (equivalent to sensitivity) and the negative percentage of agreement (equivalent to specificity). We tested the hypothesis that the results of the combined reading were not inferior to those of the visual reading. Results: A comparison of the combined reading with the visual reading revealed a small, nonsignificant increase in the mean negative percentage of agreement (89.9% vs. 87.9%) and equivalent positive percentages of agreement (80.2% vs. 80.1%). Readers who initially performed a combined analysis had significantly greater accuracy (85.8% vs. 79.2%; P = 0.018), close to that of the expert readers in the original studies (range, 83.3%–87.2%). Mean reader confidence in image interpretation improved significantly with combined analysis (P < 0.0001). Conclusion: The addition of quantification allowed readers with limited experience in interpreting 123I-ioflupane SPECT scans to achieve diagnostic accuracy equivalent to that of the experienced readers in the initial studies. The results of the combined reading were also not inferior to those of the visual reading and offered an increase in reader confidence.
Collapse
Affiliation(s)
- Jan Booij
- Department of Nuclear Medicine, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Jacob Dubroff
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania
- Daniel Pryma
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania
- Jian Yu
- Diagnostic Imaging, Fox Chase Cancer Center, Philadelphia, Pennsylvania
- Paras Lakhani
- Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania
- Phillip H Kuo
- Departments of Medical Imaging, Medicine, and Biomedical Engineering, University of Arizona, Tucson, Arizona
Collapse
|