1. ChatGPT vs pharmacy students in the pharmacotherapy time-limit test: A comparative study in Thailand. Curr Pharm Teach Learn 2024;16:404-410. PMID: 38641483. DOI: 10.1016/j.cptl.2024.04.002.
Abstract
OBJECTIVES ChatGPT is an innovative artificial intelligence designed to enhance human activities and serve as a potent tool for information retrieval. This study aimed to evaluate the performance and limitations of ChatGPT on a fourth-year pharmacy student examination. METHODS This cross-sectional study was conducted in February 2023 at the Faculty of Pharmacy, Chiang Mai University, Thailand. The exam contained 16 multiple-choice questions and 2 short-answer questions, focusing on the classification and medical management of shock and electrolyte disorders. RESULTS Of the 18 questions, ChatGPT answered 44% (8 of 18) correctly. In contrast, the students achieved a higher accuracy rate, correctly answering 66% (12 of 18) of the questions. These findings underscore that while AI exhibits proficiency, it encounters limitations when confronted with specific queries derived from practical scenarios, in contrast to pharmacy students, who are free to explore and collaborate in ways that mirror real-world practice. CONCLUSIONS Users must exercise caution regarding ChatGPT's reliability, and AI-generated answers should be interpreted judiciously given potential restrictions in multi-step analysis and reliance on outdated data. Future AI models, with refinements and tailored enhancements, offer the potential for improved performance.
2.
3. AI will transform science - now researchers must tame it. Nature 2023;621:658. PMID: 37758895. DOI: 10.1038/d41586-023-02988-6.
4. An artificial intelligence model to predict hepatocellular carcinoma risk in Korean and Caucasian patients with chronic hepatitis B. J Hepatol 2022;76:311-318. PMID: 34606915. DOI: 10.1016/j.jhep.2021.09.025.
Abstract
BACKGROUND & AIMS Several models have recently been developed to predict risk of hepatocellular carcinoma (HCC) in patients with chronic hepatitis B (CHB). Our aims were to develop and validate an artificial intelligence-assisted prediction model of HCC risk. METHODS Using a gradient-boosting machine (GBM) algorithm, a model was developed using 6,051 patients with CHB who received entecavir or tenofovir therapy from 4 hospitals in Korea. Two external validation cohorts were independently established: Korean (5,817 patients from 14 Korean centers) and Caucasian (1,640 from 11 Western centers) PAGE-B cohorts. The primary outcome was HCC development. RESULTS In the derivation cohort and the 2 validation cohorts, cirrhosis was present in 26.9%-50.2% of patients at baseline. A model using 10 parameters at baseline was derived and showed good predictive performance (c-index 0.79). This model showed significantly better discrimination than previous models (PAGE-B, modified PAGE-B, REACH-B, and CU-HCC) in both the Korean (c-index 0.79 vs. 0.64-0.74; all p <0.001) and Caucasian validation cohorts (c-index 0.81 vs. 0.57-0.79; all p <0.05 except modified PAGE-B, p = 0.42). A calibration plot showed a satisfactory calibration function. When the patients were grouped into 4 risk groups, the minimal-risk group (11.2% of the Korean cohort and 8.8% of the Caucasian cohort) had a less than 0.5% risk of HCC during 8 years of follow-up. CONCLUSIONS This GBM-based model provides the best predictive power for HCC risk in Korean and Caucasian patients with CHB treated with entecavir or tenofovir. LAY SUMMARY Risk scores have been developed to predict the risk of hepatocellular carcinoma (HCC) in patients with chronic hepatitis B. We developed and validated a new risk prediction model using machine learning algorithms in 13,508 antiviral-treated patients with chronic hepatitis B. 
Our new model, based on 10 common baseline characteristics, demonstrated superior performance in risk stratification compared with previous risk scores. This model also identified a group of patients at minimal risk of developing HCC, who could be indicated for less intensive HCC surveillance.
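As an illustration of the modelling approach this abstract describes, the sketch below fits a gradient-boosting classifier on synthetic data and scores its discrimination. This is not the paper's model: the 10 baseline parameters are random stand-ins, the outcome is a simple binary label rather than time-to-event HCC follow-up, and the c-index is approximated by ROC AUC on that binary outcome.

```python
# Hedged sketch: gradient-boosting risk prediction scored by discrimination.
# All data below are synthetic; nothing here reproduces the study's cohort.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 10))                     # 10 baseline parameters (synthetic)
logit = X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2]  # hypothetical risk signal
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # outcome: HCC yes/no

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# For a binary outcome, ROC AUC coincides with the c-index
c_index = roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1])
print(round(c_index, 3))
```

A survival-aware implementation would instead use a time-to-event concordance index over the follow-up period, which this toy binary setup does not capture.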
5. Intelligent Diagnosis Method for New Diseases Based on Fuzzy SVM Incremental Learning. Comput Math Methods Med 2022;2022:7631271. PMID: 35069792. PMCID: PMC8776429. DOI: 10.1155/2022/7631271.
Abstract
The diagnosis of new diseases is a challenging problem: in the early stages of a disease's emergence there are few case samples, which can make intelligent diagnosis inaccurate. Because the support vector machine (SVM) handles small-sample problems well, it was selected as the basis for the intelligent diagnosis method. However, updating a standard SVM diagnosis model requires retraining on all samples, which incurs large storage and computation costs and adapts poorly to changing conditions. To solve this problem, this paper proposes a new disease diagnosis method based on fuzzy SVM (FSVM) incremental learning. Following SVM theory, the support vector set and boundary sample set of the SVM diagnosis model are extracted, and only these sample sets are used in incremental learning, preserving accuracy while reducing computation and storage costs. To reduce the impact of noise points caused by the smaller training set, FSVM is used to update the diagnosis model, improving generalization. Simulation results on the banana dataset show that the proposed method improves classification accuracy from 86.4% to 90.4%. Finally, the method was applied to COVID-19 diagnosis, reaching 98.2% accuracy, compared with 84% for a traditional SVM. The model is updated as case samples accumulate: when the training samples grow to 400, only 77 samples participate in training, so the computational cost of each update is small.
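A minimal sketch of the incremental idea described above, assuming a plain (non-fuzzy) SVM and the toy two-moons dataset: when a new batch arrives, only the previous model's support vectors plus the new samples are retrained, rather than the full history. The paper's fuzzy membership weighting and boundary-sample selection are omitted.

```python
# Hedged sketch: incremental SVM update that keeps only support vectors.
# Plain SVC on a toy dataset; not the paper's fuzzy-weighted formulation.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_old, y_old, X_new, y_new = X[:300], y[:300], X[300:], y[300:]

svm = SVC(kernel="rbf").fit(X_old, y_old)           # initial diagnosis model
sv_X, sv_y = svm.support_vectors_, y_old[svm.support_]

# Incremental update: retrain on support vectors + the new batch only
svm_inc = SVC(kernel="rbf").fit(np.vstack([sv_X, X_new]), np.r_[sv_y, y_new])
print(len(sv_X), "of", len(X_old), "old samples kept for the update")
```

Because only the support vectors summarize the old decision boundary, the update touches far fewer samples than full retraining, which is the cost saving the abstract reports.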
6. Cross-Camera External Validation for Artificial Intelligence Software in Diagnosis of Diabetic Retinopathy. J Diabetes Res 2022;2022:5779276. PMID: 35308093. PMCID: PMC8926465. DOI: 10.1155/2022/5779276.
Abstract
AIMS To investigate the applicability of the deep learning image assessment software VeriSee DR to different color fundus cameras for the screening of diabetic retinopathy (DR). METHODS Color fundus images of diabetes patients taken with three different nonmydriatic fundus cameras (477 from the Topcon TRC-NW400, 459 from the Topcon TRC-NW8 series, and 471 from the Kowa nonmyd 8 series), all judged "gradable" by one ophthalmologist, were enrolled for validation. VeriSee DR was then used for the diagnosis of referable DR according to the International Clinical Diabetic Retinopathy Disease Severity Scale. Gradability, sensitivity, and specificity were calculated for each camera model. RESULTS All images (100%) from the three camera models were gradable for VeriSee DR. The sensitivity for diagnosing referable DR with the TRC-NW400, TRC-NW8, and nonmyd 8 series was 89.3%, 94.6%, and 95.7%, respectively, while the specificity was 94.2%, 90.4%, and 89.3%, respectively. Neither sensitivity nor specificity differed significantly between these camera models and the original camera model used for VeriSee DR development (p = 0.40 and p = 0.065, respectively). CONCLUSIONS VeriSee DR was applicable to a variety of color fundus cameras, with 100% agreement with ophthalmologists in terms of gradability and with good sensitivity and specificity for the diagnosis of referable DR.
7. Pivotal Evaluation of an Artificial Intelligence System for Autonomous Detection of Referrable and Vision-Threatening Diabetic Retinopathy. JAMA Netw Open 2021;4:e2134254. PMID: 34779843. PMCID: PMC8593763. DOI: 10.1001/jamanetworkopen.2021.34254.
Abstract
Importance Diabetic retinopathy (DR) is a leading cause of blindness in adults worldwide. Early detection and intervention can prevent blindness; however, many patients do not receive their recommended annual diabetic eye examinations, primarily owing to limited access. Objective To evaluate the safety and accuracy of an artificial intelligence (AI) system (the EyeArt Automated DR Detection System, version 2.1.0) in detecting both more-than-mild diabetic retinopathy (mtmDR) and vision-threatening diabetic retinopathy (vtDR). Design, Setting, and Participants A prospective multicenter cross-sectional diagnostic study was preregistered (NCT03112005) and conducted from April 17, 2017, to May 30, 2018. A total of 942 individuals aged 18 years or older who had diabetes gave consent to participate at 15 primary care and eye care facilities. Data analysis was performed from February 14 to July 10, 2019. Interventions Retinal imaging for the autonomous AI system and Early Treatment Diabetic Retinopathy Study (ETDRS) reference standard determination. Main Outcomes and Measures Primary outcome measures included the sensitivity and specificity of the AI system in identifying participants' eyes with mtmDR and/or vtDR by 2-field undilated fundus photography vs a rigorous clinical reference standard comprising reading center grading of 4 wide-field dilated images using the ETDRS severity scale. Secondary outcome measures included the evaluation of imageability, dilated-if-needed analysis, enrichment correction analysis, worst-case imputation, and safety outcomes. Results Of 942 consenting individuals, 893 patients (1786 eyes) met the inclusion criteria and completed the study protocol. The population included 449 men (50.3%). Mean (SD) participant age was 53.9 (15.2) years (median, 56; range, 18-88 years), 655 were White (73.3%), and 206 had type 1 diabetes (23.1%). 
Sensitivity and specificity of the AI system were high in detecting mtmDR (sensitivity: 95.5%; 95% CI, 92.4%-98.5% and specificity: 85.0%; 95% CI, 82.6%-87.4%) and vtDR (sensitivity: 95.1%; 95% CI, 90.1%-100% and specificity: 89.0%; 95% CI, 87.0%-91.1%) without dilation. Imageability was high without dilation, with the AI system able to grade 87.4% (95% CI, 85.2%-89.6%) of the eyes with reading center grades. When eyes with ungradable results were dilated per the protocol, the imageability improved to 97.4% (95% CI, 96.4%-98.5%), with the sensitivity and specificity being similar. After correcting for enrichment, the mtmDR specificity increased to 87.8% (95% CI, 86.3%-89.5%) and the sensitivity remained similar; for vtDR, both sensitivity (97.0%; 95% CI, 91.2%-100%) and specificity (90.1%; 95% CI, 89.4%-91.5%) improved. Conclusions and Relevance This prospective multicenter cross-sectional diagnostic study noted safety and accuracy with use of the EyeArt Automated DR Detection System in detecting both mtmDR and, for the first time, vtDR, without physician assistance. These findings suggest that improved access to accurate, reliable diabetic eye examinations may increase adherence to recommended annual screenings and allow for accelerated referral of patients identified as having vtDR.
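For readers unfamiliar with how figures like "sensitivity 95.5% (95% CI, 92.4%-98.5%)" arise, the sketch below computes sensitivity, specificity, and Wald (normal-approximation) 95% confidence intervals from confusion counts. The counts are hypothetical, chosen only to land near the study's point estimates; the study itself may have used a different interval method.

```python
# Hedged sketch: point estimate and Wald 95% CI for a proportion.
# The confusion counts below are made up for illustration.
import math

def proportion_ci(successes, total, z=1.96):
    """Return (estimate, lower, upper) for a binomial proportion."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

tp, fn, tn, fp = 191, 9, 850, 150            # hypothetical counts
sens, lo, hi = proportion_ci(tp, tp + fn)    # sensitivity = TP / (TP + FN)
spec, slo, shi = proportion_ci(tn, tn + fp)  # specificity = TN / (TN + FP)
print(f"sensitivity {sens:.1%} (95% CI {lo:.1%}-{hi:.1%})")
print(f"specificity {spec:.1%} (95% CI {slo:.1%}-{shi:.1%})")
```

Note the interval narrows as the denominator grows, which is why the specificity CI (computed over many more disease-free eyes) is tighter than the sensitivity CI.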
8. Partitioning variability in animal behavioral videos using semi-supervised variational autoencoders. PLoS Comput Biol 2021;17:e1009439. PMID: 34550974. PMCID: PMC8489729. DOI: 10.1371/journal.pcbi.1009439.
Abstract
Recent neuroscience studies demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected using video cameras, resulting in an increased need for computer vision algorithms that extract useful information from video data. Here we introduce a new video analysis tool that combines the output of supervised pose estimation algorithms (e.g. DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more information than pose estimates alone. We demonstrate this tool by extracting interpretable behavioral features from videos of three different head-fixed mouse preparations, as well as a freely moving mouse in an open field arena, and show how these interpretable features can facilitate downstream behavioral and neural analyses. We also show how the behavioral features produced by our model improve the precision and interpretation of these downstream analyses compared to using the outputs of either fully supervised or fully unsupervised methods alone.
9. Multistage Optimization Using a Modified Gaussian Mixture Model in Sperm Motility Tracking. Comput Math Methods Med 2021;2021:6953593. PMID: 34497665. PMCID: PMC8421170. DOI: 10.1155/2021/6953593.
Abstract
Infertility is a condition in which pregnancy does not occur despite at least one year of unprotected sexual intercourse. The cause may originate from the male, the female, or both. In males, sperm disorders are the most common cause of infertility. In this paper, we propose a male infertility analysis based on automated sperm motility tracking. The method works in multiple stages: the first stage performs sperm detection using an improved Gaussian mixture model, and a new optimization protocol accurately detects motile sperm before the tracking stage. Because this optimization protocol is built into the system, the subsequent sperm tracking and velocity estimation are improved. The method attained the highest average accuracy, sensitivity, and specificity of 92.3%, 96.3%, and 72.4%, respectively, when tested on 10 different samples, and showed qualitatively better sperm detection than other state-of-the-art techniques.
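The detection stage rests on Gaussian mixture modelling; the sketch below shows the basic mechanism on synthetic pixel intensities, with one mixture component capturing the dim background and the other the bright moving objects. The paper's modified GMM, optimization protocol, and tracker are not reproduced here.

```python
# Hedged sketch: GMM-based separation of object pixels from background.
# Synthetic 1-D intensities stand in for video frames; not the paper's pipeline.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
background = rng.normal(0.2, 0.05, size=900)   # dim background pixels
objects = rng.normal(0.8, 0.05, size=100)      # bright moving-object pixels
pixels = np.concatenate([background, objects]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
bright = int(np.argmax(gmm.means_.ravel()))    # component with the higher mean
labels = gmm.predict(pixels)
detected = int((labels == bright).sum())
print(detected, "candidate object pixels of", len(pixels))
```

In a real tracker, the per-pixel mixture is fit over time so that pixels poorly explained by the background component are flagged as moving objects, which are then linked across frames for velocity estimation.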
10. Utilizing Artificial Intelligence to Manage COVID-19 Scientific Evidence Torrent with Risklick AI: A Critical Tool for Pharmacology and Therapy Development. Pharmacology 2021;106:244-253. PMID: 33910199. PMCID: PMC8247831. DOI: 10.1159/000515908.
Abstract
INTRODUCTION The SARS-CoV-2 pandemic has led to one of the most critical and boundless waves of publications in the history of modern science. The necessity to find and pursue relevant information, and to quantify its quality, is broadly acknowledged. Modern information retrieval techniques combined with artificial intelligence (AI) appear among the key strategies for COVID-19 living evidence management. Nevertheless, most AI projects that retrieve COVID-19 literature still require manual tasks. METHODS In this context, we present a novel, automated search platform, called Risklick AI, which aims to automatically gather COVID-19 scientific evidence and enables scientists, policy makers, and healthcare professionals to find the most relevant information tailored to their question of interest in real time. RESULTS Here, we compare the capacity of Risklick AI to find COVID-19-related clinical trials and scientific publications against clinicaltrials.gov and PubMed in the field of pharmacology and clinical intervention. DISCUSSION The results demonstrate that Risklick AI finds COVID-19 references more effectively, in terms of both precision and recall, than the baseline platforms. Hence, Risklick AI could become a useful assistant to scientists fighting the COVID-19 pandemic.
11. [Artificial intelligence in psychiatry: predictive value of characteristics on MR imaging of the brain]. Ned Tijdschr Geneeskd 2021;165:D5434. PMID: 33793127.
Abstract
The clinical application of neuroimaging for psychological complaints has so far been limited to the exclusion of somatic pathology. Radiological assessment of brain scans usually does not explain the psychological symptoms. However, that does not mean that psychological symptoms have no neurobiological basis. Hope has therefore been placed on functional MRI, which measures the activity of the brain. However, this has not yet resulted in clinical applications. A multivariate approach using machine learning analysis now appears to be changing this. Recent studies show that machine learning analysis of functional as well as structural MRI images can also provide diagnostic, prognostic and predictive biomarkers for psychiatry. Larger studies are needed to develop clinical applications, such as clinical decision support systems to support personalized treatment choices.
12.
Abstract
Leveraging artificial intelligence (AI) approaches in animal health (AH) makes it possible to address highly complex issues such as those encountered in quantitative and predictive epidemiology, animal/human precision-based medicine, and the study of host × pathogen interactions. AI may contribute (i) to diagnosis and disease case detection, (ii) to more reliable predictions and reduced errors, (iii) to representing complex biological systems more realistically and rendering computing code more readable to non-computer scientists, (iv) to speeding up decisions and improving accuracy in risk analyses, and (v) to better-targeted interventions and anticipation of negative effects. In turn, challenges in AH may stimulate AI research owing to the specificity of AH systems, data, constraints, and analytical objectives. Based on a literature review of scientific papers at the interface between AI and AH covering the period 2009-2019, and on interviews with French researchers positioned at this interface, the present study explains the main AH areas where various AI approaches are currently mobilised, and how AI may help renew AH research questions and remove methodological or conceptual barriers. After presenting possible obstacles and levers, we propose several recommendations to better grasp the challenge represented by the AH/AI interface. With the development of several recent concepts promoting a global and multisectoral perspective in the field of health, AI should help the different disciplines in AH converge towards more transversal and integrative research.
13
|
Sexual reproductive health chatbots: should we be so quick to throw artificial intelligence out with the bathwater? BMJ SEXUAL & REPRODUCTIVE HEALTH 2021; 47:73. [PMID: 32883682 DOI: 10.1136/bmjsrh-2020-200823] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
|
14. Application of artificial intelligence in screening for adverse perinatal outcomes: A protocol for systematic review. Medicine (Baltimore) 2020;99:e23681. PMID: 33327357. PMCID: PMC7738040. DOI: 10.1097/md.0000000000023681.
Abstract
This article presents a systematic review protocol. The aim of the study is to assess current research on the application of artificial intelligence and neural networks in screening for adverse perinatal outcomes. We intend to compare the reported efficacy of these methods to improve pregnancy care and outcomes. A growing number of studies describe the role of machine learning in facilitating the diagnosis of adverse perinatal outcomes such as gestational diabetes or pregnancy hypertension. A systematic review of the available literature is therefore crucial for comparing their known efficacy and applications, and its publication would improve the value of future studies. Studies reporting on artificial intelligence applications will have a major impact on future prenatal practice.
15. Mapping the co-evolution of artificial intelligence, robotics, and the internet of things over 20 years (1998-2017). PLoS One 2020;15:e0242984. PMID: 33264328. PMCID: PMC7710114. DOI: 10.1371/journal.pone.0242984.
Abstract
Understanding the emergence, co-evolution, and convergence of science and technology (S&T) areas offers competitive intelligence for researchers, managers, policy makers, and others. This paper presents new funding, publication, and scholarly network metrics and visualizations that were validated via expert surveys. The metrics and visualizations exemplify the emergence and convergence of three areas of strategic interest: artificial intelligence (AI), robotics, and internet of things (IoT) over the last 20 years (1998-2017). For 32,716 publications and 4,497 NSF awards, we identify their topical coverage (using the UCSD map of science), evolving co-author networks, and increasing convergence. The results support data-driven decision making when setting proper research and development (R&D) priorities; developing future S&T investment strategies; or performing effective research program assessment.
16. Clinician and computer: a study on patient perceptions of artificial intelligence in skeletal radiography. BMJ Health Care Inform 2020;27:e100233. PMID: 33187956. PMCID: PMC7668302. DOI: 10.1136/bmjhci-2020-100233.
Abstract
BACKGROUND Up to half of all musculoskeletal injuries are investigated with plain radiographs. However, high rates of image interpretation error mean that novel solutions such as artificial intelligence (AI) are being explored. OBJECTIVES To determine patient confidence in clinician-led radiograph interpretation and the perception of AI-assisted interpretation and management, and to identify factors which might influence these views. METHODS A novel questionnaire was distributed to patients attending fracture clinic in a large inner-city teaching hospital. Categorical and Likert-scale questions were used to assess participant demographics, daily electronics use, pain score, and perceptions towards AI used to assist in interpretation of their radiographs and guide management. RESULTS 216 questionnaires were included (M=126, F=90). Confidence was significantly higher in clinician-led than AI-assisted interpretation (clinician=9.20, SD=1.27 vs AI=7.06, SD=2.13), and 95.4% reported favouring clinician-performed over AI-performed interpretation in the event of disagreement. Small positive correlations were observed between younger age or higher educational achievement and confidence in AI assistance. Students demonstrated similarly increased confidence (8.43, SD=1.80) and were over-represented in the minority who indicated a preference for AI assessment over their clinician's (50%). CONCLUSIONS Participants held the clinician's assessment in the highest regard and expressed a clear preference for it over the hypothetical AI assessment. However, robust confidence scores for the role of AI assistance in interpreting skeletal imaging suggest patients view the technology favourably. The findings indicate that younger, more educated patients are potentially more comfortable with a role for AI assistance; however, further research is needed to overcome the small number of responses on which these observations are based.
17.
Abstract
IMPORTANCE Chest radiography is the most common diagnostic imaging examination performed in emergency departments (EDs). Augmenting clinicians with automated preliminary read assistants could help expedite their workflows, improve accuracy, and reduce the cost of care. OBJECTIVE To assess the performance of artificial intelligence (AI) algorithms in realistic radiology workflows by performing an objective comparative evaluation of the preliminary reads of anteroposterior (AP) frontal chest radiographs performed by an AI algorithm and radiology residents. DESIGN, SETTING, AND PARTICIPANTS This diagnostic study included a set of 72 findings assembled by clinical experts to constitute a full-fledged preliminary read of AP frontal chest radiographs. A novel deep learning architecture was designed for an AI algorithm to estimate the findings per image. The AI algorithm was trained using a multihospital training data set of 342 126 frontal chest radiographs captured in ED and urgent care settings. The training data were labeled from their associated reports. Image-based F1 score was chosen to optimize the operating point on the receiver operating characteristics (ROC) curve so as to minimize the number of missed findings and overcalls per image read. The performance of the model was compared with that of 5 radiology residents recruited from multiple institutions in the US in an objective study in which a separate data set of 1998 AP frontal chest radiographs was drawn from a hospital source representative of realistic preliminary reads in inpatient and ED settings. A triple consensus with adjudication process was used to derive the ground truth labels for the study data set. The performance of AI algorithm and radiology residents was assessed by comparing their reads with ground truth findings. All studies were conducted through a web-based clinical study application system. The triple consensus data set was collected between February and October 2018. 
The comparison study was performed between January and October 2019. Data were analyzed from October 2019 to February 2020. After the first round of reviews, further analysis of the data was performed from March to July 2020. MAIN OUTCOMES AND MEASURES The learning performance of the AI algorithm was judged using the conventional ROC curve and the area under the curve (AUC) during training and field testing on the study data set. For the AI algorithm and radiology residents, the individual finding label performance was measured using the conventional measures of label-based sensitivity, specificity, and positive predictive value (PPV). In addition, the agreement with the ground truth on the assignment of findings to images was measured using the pooled κ statistic. The preliminary read performance was recorded for the AI algorithm and radiology residents using new measures of mean image-based sensitivity, specificity, and PPV designed for recording the fraction of misses and overcalls on a per-image basis. The 1-sided analysis of variance test was used to compare the means of each group (AI algorithm vs radiology residents) using the F distribution, and the null hypothesis was that the groups would have similar means. RESULTS The trained AI algorithm achieved a mean AUC across labels of 0.807 (weighted mean AUC, 0.841) after training. On the study data set, which had a different prevalence distribution, the mean AUC achieved was 0.772 (weighted mean AUC, 0.865). The interrater agreement with ground truth finding labels for AI algorithm predictions had a pooled κ value of 0.544, and the pooled κ for radiology residents was 0.585. For the preliminary read performance, the analysis of variance test was used to compare the distributions of the AI algorithm's and radiology residents' mean image-based sensitivity, PPV, and specificity.
The mean image-based sensitivity for AI algorithm was 0.716 (95% CI, 0.704-0.729) and for radiology residents was 0.720 (95% CI, 0.709-0.732) (P = .66), while the PPV was 0.730 (95% CI, 0.718-0.742) for the AI algorithm and 0.682 (95% CI, 0.670-0.694) for the radiology residents (P < .001), and specificity was 0.980 (95% CI, 0.980-0.981) for the AI algorithm and 0.973 (95% CI, 0.971-0.974) for the radiology residents (P < .001). CONCLUSIONS AND RELEVANCE These findings suggest that it is possible to build AI algorithms that reach and exceed the mean level of performance of third-year radiology residents for full-fledged preliminary read of AP frontal chest radiographs. This diagnostic study also found that while the more complex findings would still benefit from expert overreads, the performance of AI algorithms was associated with the amount of data available for training rather than the level of difficulty of interpretation of the finding. Integrating such AI systems in radiology workflows for preliminary interpretations has the potential to expedite existing radiology workflows and address resource scarcity while improving overall accuracy and reducing the cost of care.
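The pooled κ values above are chance-corrected agreement statistics. As a minimal illustration, the sketch below computes Cohen's κ for a single hypothetical reader against ground-truth labels; the study pools κ across many finding labels, which this toy example does not attempt.

```python
# Hedged sketch: Cohen's kappa for one reader vs. ground truth on toy labels.
from sklearn.metrics import cohen_kappa_score

ground_truth = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
reader       = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]

# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
kappa = cohen_kappa_score(ground_truth, reader)
print(round(kappa, 3))  # → 0.583
```

Here observed agreement is 0.8 and chance agreement is 0.52, so κ = 0.28 / 0.48 ≈ 0.583, illustrating why κ sits well below raw percent agreement.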
18. Helping the Blind to Get through COVID-19: Social Distancing Assistant Using Real-Time Semantic Segmentation on RGB-D Video. Sensors 2020;20:5202. PMID: 32932585. PMCID: PMC7571123. DOI: 10.3390/s20185202.
Abstract
The current COVID-19 pandemic is having a major impact on our daily lives. Social distancing is one of the measures that has been implemented with the aim of slowing the spread of the disease, but it is difficult for blind people to comply with this. In this paper, we present a system that helps blind people maintain physical distance from other persons using a combination of RGB and depth cameras. We use a real-time semantic segmentation algorithm on the RGB camera to detect where persons are and use the depth camera to assess the distance to them; then, we provide audio feedback through bone-conducting headphones if a person is closer than 1.5 m. Our system warns the user only if persons are nearby and does not react to non-person objects such as walls, trees or doors; thus, it is not intrusive, and it is possible to use it in combination with other assistive devices. We have tested our prototype system on one blind and four blindfolded persons, and found that the system is precise, easy to use, and imposes a low cognitive load.
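The core warning logic combines the two cameras straightforwardly. A minimal sketch under stated assumptions: the real system runs a semantic segmentation network on the RGB stream, and `person_pixels` and `depth_m` are hypothetical names for its outputs:

```python
def should_warn(person_pixels, depth_m, threshold_m=1.5):
    """Trigger audio feedback if any 'person' pixel is nearer than threshold.

    person_pixels: (row, col) coordinates classified as 'person' by the
    segmentation network; depth_m: per-pixel depth in metres from the
    depth camera. Walls, trees, doors, etc. never enter person_pixels,
    so the assistant stays silent for non-person obstacles.
    """
    return any(depth_m[r][c] < threshold_m for (r, c) in person_pixels)
```

For example, with `depth_m = [[2.0, 1.2], [3.0, 2.5]]`, a person detected at pixel (0, 1) triggers a warning, while one at (1, 0) does not.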
|
19
|
Artificial Intelligence and Clinical Decision Support for Radiologists and Referring Providers. J Am Coll Radiol 2020; 16:1351-1356. [PMID: 31492414 DOI: 10.1016/j.jacr.2019.06.010] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2019] [Revised: 06/03/2019] [Accepted: 06/04/2019] [Indexed: 01/05/2023]
Abstract
Recent advances in artificial intelligence (AI) provide an opportunity to enhance existing clinical decision support (CDS) tools to improve patient safety and drive value-based imaging. We discuss the advantages and potential applications that may be realized through the synergy between AI and CDS systems. From the perspective of both the radiologist and the ordering provider, CDS could be significantly empowered by AI. CDS enhanced by AI could reduce friction in radiology workflows and help AI developers identify the relevant imaging features their tools should extract from images. Furthermore, these systems can generate structured data to be used as input for developing machine learning algorithms, which can drive downstream care pathways. For referring providers, an AI-enabled CDS solution could enable an evolution from existing imaging-centric CDS toward decision support that takes a holistic patient perspective into account. More intelligent CDS could suggest imaging examinations in highly complex clinical scenarios, assist in the identification of appropriate imaging opportunities at the health system level, suggest appropriate individualized screening, or help health care providers ensure continuity of care. AI has the potential to enable the next generation of CDS, improving patient care and enhancing providers' and radiologists' experience.
|
20
|
Using imaging to combat a pandemic: rationale for developing the UK National COVID-19 Chest Imaging Database. Eur Respir J 2020; 56:2001809. [PMID: 32616598 PMCID: PMC7331656 DOI: 10.1183/13993003.01809-2020] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 06/08/2020] [Indexed: 12/12/2022]
Abstract
The National COVID-19 Chest Imaging Database (NCCID) is a repository of chest radiographs, CT and MRI images and clinical data from COVID-19 patients across the UK, to support research and development of AI technology and give insight into COVID-19 disease https://bit.ly/3eQeuha
|
21
|
A Comparative Analysis of Visual Encoding Models Based on Classification and Segmentation Task-Driven CNNs. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:5408942. [PMID: 32802150 PMCID: PMC7416280 DOI: 10.1155/2020/5408942] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/14/2020] [Revised: 05/31/2020] [Accepted: 06/06/2020] [Indexed: 11/17/2022]
Abstract
Nowadays, visual encoding models use convolutional neural networks (CNNs), which show outstanding performance in computer vision, to simulate the process of human information processing. However, the prediction performance of an encoding model differs depending on the task that drives the underlying network. Here, the impact of network tasks on encoding models is studied. Using functional magnetic resonance imaging (fMRI) data, the features of natural visual stimulation are extracted using a segmentation network (FCN32s) and a classification network (VGG16) with different visual tasks but similar network structures. Then, using three sets of features, i.e., segmentation, classification, and fused features, the regularized orthogonal matching pursuit (ROMP) method is used to establish the linear mapping from features to voxel responses. The analysis results indicate that encoding models based on networks performing different tasks can effectively, but differently, predict stimulus-induced responses measured by fMRI. The prediction accuracy of the encoding model based on VGG is found to be significantly better than that of the model based on FCN in most voxels, and similar to that of the model based on fused features. The comparative analysis demonstrates that a CNN performing the classification task is more similar to human visual processing than one performing the segmentation task.
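The feature-to-voxel mapping step can be illustrated with a greedy pursuit. The following is a simplified, non-orthogonal matching pursuit, offered only as a stand-in for the regularized OMP used in the paper; all names and the stopping rule are illustrative assumptions:

```python
def matching_pursuit(y, atoms, n_iter=5, tol=1e-12):
    """Greedily decompose a response vector y over feature 'atoms'.

    At each step, pick the atom whose normalised correlation with the
    current residual is largest, record its coefficient, and subtract
    its contribution from the residual.
    """
    residual = list(y)
    coeffs = {}
    for _ in range(n_iter):
        best, best_c = None, 0.0
        for k, a in enumerate(atoms):
            c = sum(ai * ri for ai, ri in zip(a, residual))
            c /= sum(ai * ai for ai in a)
            if abs(c) > abs(best_c):
                best, best_c = k, c
        if best is None or abs(best_c) < tol:
            break  # residual no longer correlates with any atom
        coeffs[best] = coeffs.get(best, 0.0) + best_c
        residual = [ri - best_c * ai for ri, ai in zip(residual, atoms[best])]
    return coeffs
```

A true orthogonal variant would re-fit all selected coefficients by least squares at each step; the greedy version above only conveys the selection idea.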
|
22
|
A fully automated artificial intelligence method for non-invasive, imaging-based identification of genetic alterations in glioblastomas. Sci Rep 2020; 10:11852. [PMID: 32678261 PMCID: PMC7366666 DOI: 10.1038/s41598-020-68857-8] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Accepted: 06/29/2020] [Indexed: 02/02/2023] Open
Abstract
Glioblastoma is the most common malignant brain parenchymal tumor, yet it remains challenging to treat. The current standard of care, resection and chemoradiation, is limited in part by the genetic heterogeneity of glioblastoma. Previous studies have identified several tumor genetic biomarkers that are frequently present in glioblastoma and can alter clinical management. Currently, genetic biomarker status is confirmed with tissue sampling, which is costly and only available after tumor resection or biopsy. The purpose of this study was to evaluate a fully automated artificial intelligence approach for predicting the status of several common glioblastoma genetic biomarkers on preoperative MRI. We retrospectively analyzed multisequence preoperative brain MRI from 199 adult patients with glioblastoma who subsequently underwent tumor resection and genetic testing. Radiomics features extracted from fully automated deep learning-based tumor segmentations were used to predict nine common glioblastoma genetic biomarkers with random forest regression. The proposed fully automated method was useful for predicting IDH mutations (sensitivity = 0.93, specificity = 0.88), ATRX mutations (sensitivity = 0.94, specificity = 0.92), chromosome 7/10 aneuploidies (sensitivity = 0.90, specificity = 0.88), and CDKN2 family mutations (sensitivity = 0.76, specificity = 0.86).
|
23
|
Artificial intelligence in health care: value for whom? Lancet Digit Health 2020; 2:e338-e339. [PMID: 33328093 DOI: 10.1016/s2589-7500(20)30141-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2020] [Revised: 04/27/2020] [Accepted: 05/11/2020] [Indexed: 06/12/2023]
|
24
|
Acceptability of artificial intelligence (AI)-enabled chatbots, video consultations and live webchats as online platforms for sexual health advice. BMJ SEXUAL & REPRODUCTIVE HEALTH 2020; 46:210-217. [PMID: 31964779 DOI: 10.1136/bmjsrh-2018-200271] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/06/2018] [Revised: 12/12/2019] [Accepted: 12/19/2019] [Indexed: 06/10/2023]
Abstract
OBJECTIVES Sexual and reproductive health (SRH) services are undergoing a digital transformation. This study explored the acceptability of three digital services, (i) video consultations via Skype, (ii) live webchats with a health advisor and (iii) artificial intelligence (AI)-enabled chatbots, as potential platforms for SRH advice. METHODS A pencil-and-paper 33-item survey was distributed to patients attending SRH services at three clinics in Hampshire, UK. Logistic regressions were performed to identify the correlates of acceptability. RESULTS In total, 257 patients (57% women, 50% aged <25 years) completed the survey. As the first point of contact, 70% preferred face-to-face consultations, 17% telephone consultation, 10% webchats and 3% video consultations. Most would be willing to use video consultations (58%) and webchat facilities (73%) for ongoing care, but only 40% found AI chatbots acceptable. Younger age (<25 years) (OR 2.43, 95% CI 1.35 to 4.38), White ethnicity (OR 2.87, 95% CI 1.30 to 6.34), past sexually transmitted infection (STI) diagnosis (OR 2.05, 95% CI 1.07 to 3.95), self-reported STI symptoms (OR 0.58, 95% CI 0.34 to 0.97), smartphone ownership (OR 16.0, 95% CI 3.64 to 70.5) and the preference for a SRH smartphone application (OR 1.95, 95% CI 1.13 to 3.35) were associated with the acceptability of video consultations, webchats or chatbots. CONCLUSIONS Although video consultations and webchat services appear acceptable, there is currently little support for SRH chatbots. The findings demonstrate a preference for human interaction in SRH services. Policymakers and intervention developers need to ensure that digital transformation is not only cost-effective but also acceptable to users, easily accessible and equitable to all populations using SRH services.
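As a reminder of where odds ratios like those above come from, the unadjusted odds ratio for a 2x2 table can be computed directly. This is an illustrative sketch only; the study itself used logistic regression, which adjusts for covariates, and the counts below are invented:

```python
def odds_ratio(exp_yes, exp_no, unexp_yes, unexp_no):
    """Unadjusted odds ratio from a 2x2 table.

    exp_yes/exp_no: counts with/without the outcome in the exposed group;
    unexp_yes/unexp_no: the same counts for the unexposed group.
    """
    return (exp_yes * unexp_no) / (exp_no * unexp_yes)
```

An OR above 1 (e.g. the 2.43 for younger age) indicates the exposure is associated with higher odds of finding the service acceptable; an OR below 1 (e.g. 0.58 for self-reported STI symptoms) indicates lower odds.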
|
25
|
Application of artificial intelligence using convolutional neural networks in determining the invasion depth of esophageal squamous cell carcinoma. Esophagus 2020; 17:250-256. [PMID: 31980977 DOI: 10.1007/s10388-020-00716-x] [Citation(s) in RCA: 63] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/18/2019] [Accepted: 01/12/2020] [Indexed: 02/06/2023]
Abstract
OBJECTIVES In Japan, endoscopic resection (ER) is often used to treat esophageal squamous cell carcinoma (ESCC) when invasion depths are diagnosed as EP-SM1, whereas ESCC cases deeper than SM2 are treated by surgical operation or chemoradiotherapy. Therefore, it is crucial to determine the invasion depth of ESCC via preoperative endoscopic examination. Recently, rapid progress in the utilization of artificial intelligence (AI) with deep learning in medical fields has been achieved. In this study, we demonstrate the diagnostic ability of AI to measure ESCC invasion depth. METHODS We retrospectively collected 1751 training images of ESCC at the Cancer Institute Hospital, Japan. We developed an AI-diagnostic system of convolutional neural networks using deep learning techniques with these images. Subsequently, 291 test images were prepared and reviewed by the AI-diagnostic system and 13 board-certified endoscopists to evaluate the diagnostic accuracy. RESULTS The AI-diagnostic system detected 95.5% (279/291) of the ESCC in test images in 10 s, analyzed the 279 images and correctly estimated the invasion depth of ESCC with a sensitivity of 84.1% and accuracy of 80.9% in 6 s. The accuracy score of this system exceeded those of 12 out of 13 board-certified endoscopists, and its area under the curve (AUC) was greater than the AUCs of all endoscopists. CONCLUSIONS The AI-diagnostic system demonstrated a higher diagnostic accuracy for ESCC invasion depth than those of endoscopists and, therefore, can be potentially used in ESCC diagnostics.
|
26
|
Computer-aided diagnosis systems for osteoporosis detection: a comprehensive survey. Med Biol Eng Comput 2020; 58:1873-1917. [PMID: 32583141 DOI: 10.1007/s11517-020-02171-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Accepted: 03/26/2020] [Indexed: 12/18/2022]
Abstract
Computer-aided diagnosis (CAD) systems have revolutionized the field of medical diagnosis. They help improve treatment potential and increase survival rates by diagnosing diseases early in an efficient, timely, and cost-effective way. Automatic segmentation allows radiologists to delineate the region of interest in medical images far more efficiently than manual segmentation, improving disease diagnosis. The aim of this paper is to survey vision-based CAD systems, focusing especially on segmentation techniques for the pathological bone disease known as osteoporosis. Osteoporosis is a state in which bone mineral density decreases and bones become porous, making them easily susceptible to fracture from a small injury or a fall. The article covers the image acquisition techniques used to acquire medical images for osteoporosis diagnosis. It also discusses the advanced machine learning paradigms employed in segmentation for osteoporosis, and briefly describes other image processing steps such as feature extraction and classification. Finally, the paper outlines future directions for improving osteoporosis diagnosis and presents the proposed architecture.
|
27
|
|
28
|
Abstract
For the first time, a field programmable transistor array (FPTA) was used to evolve robot control circuits directly in analog hardware. Controllers were successfully incrementally evolved for a physical robot engaged in a series of visually guided behaviours, including finding a target in a complex environment where the goal was hidden from most locations. Circuits for recognising spoken commands were also evolved, and these were used in conjunction with the controllers to enable voice control of the robot, triggering behavioural switching. Poor-quality visual sensors were deliberately used to test the ability of evolved analog circuits to deal with noisy, uncertain data in real time. Visual features were coevolved with the controllers to automatically achieve dimensionality reduction and feature extraction and selection in an integrated way. An efficient new method was developed for simulating the robot in its visual environment. This allowed controllers to be evaluated in a simulation connected to the FPTA. The controllers then transferred seamlessly to the real world. The circuit replication issue was also addressed in experiments where circuits were evolved to function correctly in multiple areas of the FPTA. A methodology was developed to analyse the evolved circuits, which provided insights into their operation. Comparative experiments demonstrated the superior evolvability of the transistor array medium.
|
29
|
Connecting Data to Insight: A Pan-Canadian Study on AI in Healthcare. Healthc Q 2020; 23:13-19. [PMID: 32249734 DOI: 10.12927/hcq.2020.26144] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Across Canada, healthcare leaders are exploring the potential of artificial intelligence and advanced analytics to transform the healthcare system. This report shares a summary of the current state of healthcare analytics across major hospitals and public healthcare agencies in Canada. We present information on the current level of investment, data governance maturity, analytics talent and tools and models being leveraged across the nation. The findings point to an opportunity for enhanced collaboration in advanced analytics and the adoption of nascent artificial intelligence technologies in healthcare. The recommendations will help drive adoption in Canada, ultimately improving the patient experience and promoting better health outcomes for Canadians.
|
30
|
Human vs. machine: the psychological and behavioral consequences of being compared to an outperforming artificial agent. PSYCHOLOGICAL RESEARCH 2020; 85:915-925. [PMID: 32206855 DOI: 10.1007/s00426-020-01317-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2019] [Accepted: 03/07/2020] [Indexed: 11/25/2022]
Abstract
While artificial agents (AA) such as Artificial Intelligence are being extensively developed, the popular belief that AA will someday surpass human intelligence is growing. The present research examined whether this common belief translates into negative psychological and behavioral consequences when individuals judge that an AA performs better than they do on cognitive and intellectual tasks. In two studies, participants were led to believe that an AA performed better or worse than they did on a cognitive inhibition task (Study 1) and on an intelligence task (Study 2). Results indicated that being outperformed by an AA increased participants' subsequent performance as long as they did not experience psychological discomfort towards the AA and self-threat. Psychological implications in terms of motivation and potential threat, as well as prerequisites for future interactions between humans and AAs, are discussed further.
|
31
|
Image segmentation based on gray level and local relative entropy two dimensional histogram. PLoS One 2020; 15:e0229651. [PMID: 32126113 PMCID: PMC7053740 DOI: 10.1371/journal.pone.0229651] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2019] [Accepted: 02/12/2020] [Indexed: 11/18/2022] Open
Abstract
Though traditional thresholding methods are simple and efficient, they may produce poor segmentation results because only the image's brightness information is taken into account in the procedure of threshold selection. Considering the contextual information between pixels can improve segmentation accuracy. To do this, a new thresholding method is proposed in this paper. The proposed method constructs a new two-dimensional histogram using the brightness of a pixel and the local relative entropy of its neighbor pixels. The local relative entropy (LRE) measures the brightness difference between a pixel and its neighbor pixels. The two-dimensional histogram, consisting of gray level and LRE, can reflect the contextual information between pixels to a certain extent. The optimal thresholding vector is obtained by minimizing the cross-entropy criterion. Experimental results show that the proposed method achieves more accurate segmentation results than other thresholding methods.
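Building the gray-level/LRE histogram can be sketched as follows. This is a minimal illustration under stated assumptions: the LRE here is one plausible reading (a divergence of a pixel against its 4-neighbour mean), and the bin counts and `max_lre` cap are invented; the paper's exact formula and binning may differ:

```python
import math

def local_relative_entropy(img, r, c):
    """Brightness divergence of a pixel against its in-bounds 4-neighbours."""
    h, w = len(img), len(img[0])
    nbrs = [img[r + dr][c + dc]
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
            if 0 <= r + dr < h and 0 <= c + dc < w]
    mean = sum(nbrs) / len(nbrs)
    g, m = max(img[r][c], 1e-9), max(mean, 1e-9)  # avoid log(0)
    return g * math.log(g / m)

def gray_lre_histogram(img, gray_bins=4, lre_bins=4, max_gray=255, max_lre=50.0):
    """2D histogram over (gray level, |LRE|) for threshold-vector selection."""
    hist = [[0] * lre_bins for _ in range(gray_bins)]
    for r in range(len(img)):
        for c in range(len(img[0])):
            gb = min(img[r][c] * gray_bins // (max_gray + 1), gray_bins - 1)
            lre = abs(local_relative_entropy(img, r, c))
            lb = min(int(lre * lre_bins / max_lre), lre_bins - 1)
            hist[gb][lb] += 1
    return hist
```

A pixel whose brightness matches its neighbourhood lands in a low-LRE bin, while an edge or noise pixel lands in a high-LRE bin, which is what lets the 2D histogram encode contextual information that a 1D gray-level histogram discards.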
|
32
|
Abstract
Prosthetic vision is being applied to partially recover the retinal stimulation of visually impaired people. However, the phosphenic images produced by the implants have very limited information bandwidth due to the poor resolution and lack of color or contrast. The ability of object recognition and scene understanding in real environments is severely restricted for prosthetic users. Computer vision can play a key role to overcome the limitations and to optimize the visual information in the prosthetic vision, improving the amount of information that is presented. We present a new approach to build a schematic representation of indoor environments for simulated phosphene images. The proposed method combines a variety of convolutional neural networks for extracting and conveying relevant information about the scene such as structural informative edges of the environment and silhouettes of segmented objects. Experiments were conducted with normal sighted subjects with a Simulated Prosthetic Vision system. The results show good accuracy for object recognition and room identification tasks for indoor scenes using the proposed approach, compared to other image processing methods.
|
33
|
|
34
|
Validation of automated artificial intelligence segmentation of optical coherence tomography images. PLoS One 2019; 14:e0220063. [PMID: 31419240 PMCID: PMC6697318 DOI: 10.1371/journal.pone.0220063] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2019] [Accepted: 07/08/2019] [Indexed: 12/15/2022] Open
Abstract
PURPOSE To benchmark the human and machine performance of spectral-domain (SD) and swept-source (SS) optical coherence tomography (OCT) image segmentation, i.e., pixel-wise classification, for the compartments vitreous, retina, choroid, and sclera. METHODS A convolutional neural network (CNN) was trained on OCT B-scan images annotated by a senior ground truth expert retina specialist to segment the posterior eye compartments. Independent benchmark data sets (30 SDOCT and 30 SSOCT) were manually segmented by three classes of graders with varying levels of ophthalmic proficiency. Nine graders contributed to benchmark an additional 60 images in three consecutive runs. Inter-human and intra-human class agreement was measured and compared to the CNN results. RESULTS The CNN training data consisted of a total of 6210 manually segmented images derived from 2070 B-scans (1046 SDOCT and 1024 SSOCT; 630 C-Scans). The CNN segmentation revealed a high agreement with all grader groups. For all compartments and groups, the mean Intersection over Union (IOU) score of CNN compartmentalization versus group graders' compartmentalization was higher than the mean score for intra-grader group comparison. CONCLUSION The proposed deep learning segmentation algorithm (CNN) for automated eye compartment segmentation in OCT B-scans (SDOCT and SSOCT) is on par with manual segmentations by human graders.
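The agreement metric above reads naturally in code. A minimal sketch of the Intersection-over-Union score between two segmentations, treating each compartment mask as a set of pixel indices (the representation is an assumption for illustration):

```python
def iou(mask_a, mask_b):
    """Intersection over Union of two pixel-label sets for one compartment."""
    union = len(mask_a | mask_b)
    return len(mask_a & mask_b) / union if union else 1.0
```

The per-compartment scores (vitreous, retina, choroid, sclera) are then averaged to give the mean IOU compared between the CNN and the grader groups.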
|
35
|
A fast two-stage active contour model for intensity inhomogeneous image segmentation. PLoS One 2019; 14:e0214851. [PMID: 31002667 PMCID: PMC6474649 DOI: 10.1371/journal.pone.0214851] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2018] [Accepted: 03/21/2019] [Indexed: 11/21/2022] Open
Abstract
This paper presents a fast two-stage image segmentation method for intensity-inhomogeneous images using an energy function based on a local region-based active contour model with exponential family. In the first stage, we preliminarily segment the down-sampled images with the local correntropy-based K-means clustering model with exponential family, which quickly obtains a coarse result with low computational complexity. Subsequently, taking the up-sampled contour of the first stage as initialization, we precisely segment the original images with the improved local correntropy-based K-means clustering model with exponential family in the second stage. This stage achieves an accurate result rapidly thanks to the proper initialization. Meanwhile, we minimize the energy function of both stages using the Riemannian steepest descent method. Compared with other numerical methods used to solve the partial differential equations (PDEs), this method obtains the global minimum in fewer iterations. Moreover, to promote the regularity of the energy function, we use a popular regularization method that applies an inner product and spatial smoothing to the gradient flow. Extensive experiments on synthetic and real images demonstrate that the proposed method is more efficient than other state-of-the-art methods on intensity-inhomogeneous images.
|
36
|
Architectures and accuracy of artificial neural network for disease classification from omics data. BMC Genomics 2019; 20:167. [PMID: 30832569 PMCID: PMC6399893 DOI: 10.1186/s12864-019-5546-z] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Accepted: 02/20/2019] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Deep learning has achieved tremendous success in numerous artificial intelligence applications and is unsurprisingly penetrating various biomedical domains. High-throughput omics data in the form of molecular profile matrices, such as transcriptomes and metabolomes, have long existed as a valuable resource for facilitating diagnosis of patient statuses/stages. It is both timely and imperative to compare deep learning neural networks against classical machine learning methods in the setting of matrix-formed omics data in terms of classification accuracy and robustness. RESULTS Using 37 high-throughput omics datasets, covering transcriptomes and metabolomes, we evaluated the classification power of deep learning compared to traditional machine learning methods. Representative deep learning methods, Multi-Layer Perceptrons (MLP) and Convolutional Neural Networks (CNN), were deployed and explored in seeking optimal architectures for the best classification performance. Together with five classical supervised classification methods (Linear Discriminant Analysis, Multinomial Logistic Regression, Naïve Bayes, Random Forest, Support Vector Machine), MLP and CNN were comparatively tested on the 37 datasets to predict disease stages or to discriminate diseased samples from normal samples. MLPs achieved the highest overall accuracy among all methods tested. More thorough analyses revealed that single-hidden-layer MLPs with ample hidden units outperformed deeper MLPs. Furthermore, MLP was one of the most robust methods against imbalanced class composition and inaccurate class labels. CONCLUSION Our results indicate that shallow MLPs (of one or two hidden layers) with ample hidden neurons are sufficient to achieve superior and robust classification performance in exploiting numerical matrix-formed omics data for diagnostic purposes.
Specific observations regarding optimal network width, class imbalance tolerance, and inaccurate labeling tolerance will inform future improvement of neural network applications on functional genomics data.
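The architecture the study favours, a single hidden layer with ample units, is simple to write down. A minimal forward pass in plain Python as an illustrative sketch (weights are assumed already trained; a real model would be fit with a framework such as scikit-learn or Keras):

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: ReLU hidden units, softmax class probabilities.

    x: feature vector (e.g. one row of an omics profile matrix);
    W1/b1: hidden-layer weights and biases; W2/b2: output layer.
    """
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    logits = [sum(w * h for w, h in zip(row, hidden)) + b
              for row, b in zip(W2, b2)]
    peak = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(z - peak) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]         # class probabilities, sum to 1
```

With omics matrices, "ample hidden units" simply means `len(W1)` is large relative to the number of classes; depth adds nothing here per the study's conclusion.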
|
37
|
Abstract
As with medicine, artistic practice has a historical relationship with technologies. As technology advances, artists and medical practitioners will struggle with the complexities of introducing artificial intelligence into pursuits that have long been defined as fundamentally human. How will intelligent mechanization continue to aid efforts in art and medicine, even as it complicates them? Which new dilemmas will arise as essentially human pursuits are ever more deeply aligned with the rise of thinking machines?
|
38
|
A computer-aided diagnosis system using artificial intelligence for the diagnosis and characterization of breast masses on ultrasound: Added value for the inexperienced breast radiologist. Medicine (Baltimore) 2019; 98:e14146. [PMID: 30653149 PMCID: PMC6370030 DOI: 10.1097/md.0000000000014146] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
To evaluate the value of the computer-aided diagnosis (CAD) program applied to diagnostic breast ultrasonography (US) based on operator experience. US images of 100 breast masses from 91 women over 2 months (from May to June 2015) were collected and retrospectively analyzed. Three less experienced and 2 experienced breast imaging radiologists analyzed the US features of the breast masses without and with CAD according to the Breast Imaging Reporting and Data System (BI-RADS) lexicon and categories. We then compared the diagnostic performance between the experienced and less experienced radiologists and analyzed the interobserver agreement among the radiologists. Of the 100 breast masses, 41 (41%) were malignant and 59 (59%) were benign. Compared with the experienced radiologists, the less experienced radiologists had significantly improved negative predictive value (86.7%-94.7% vs 53.3%-76.2%, respectively) and area under receiver operating characteristics curve (0.823-0.839 vs 0.623-0.759, respectively) with CAD assistance (all P < .05). In contrast, experienced radiologists had significantly improved specificity (52.5% and 54.2% vs 66.1% and 66.1%) and positive predictive value (55.6% and 58.5% vs 64.9% and 64.9%, respectively) with CAD assistance (all P < .05). Interobserver variability of US features and final assessment by categories were significantly improved, and moderate agreement was seen in the final assessment after CAD combination regardless of the radiologist's experience. CAD is a useful additional diagnostic tool for breast US for all radiologists, with benefits differing depending on the radiologist's level of experience. In this study, CAD improved the interobserver agreement and showed acceptable agreement in the characterization of breast masses.
|
39
|
A comparative quantitative study of utilizing artificial intelligence on electronic health records in the USA and China during 2008-2017. BMC Med Inform Decis Mak 2018; 18:117. [PMID: 30526643 PMCID: PMC6284279 DOI: 10.1186/s12911-018-0692-9] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
BACKGROUND The application of artificial intelligence techniques for processing electronic health records data plays an increasingly significant role in advancing clinical decision support. This study conducts a quantitative comparison of research on utilizing artificial intelligence on electronic health records between the USA and China to discover their research similarities and differences. METHODS Publications from both Web of Science and PubMed are retrieved to explore the research status and academic performance of the two countries quantitatively. Bibliometrics, geographic visualization, collaboration degree calculation, social network analysis, latent Dirichlet allocation, and affinity propagation clustering are applied to analyze research quantity, collaboration relations, and hot research topics. RESULTS There are 1031 publications from the USA and 173 publications from China during the 2008-2017 period. The annual numbers of publications from the USA and China increase polynomially. JAMIA with 135 publications and JBI with 13 publications are the most prolific journals for the USA and China, respectively. Harvard University with 101 publications and Zhejiang University with 12 publications are the most prolific affiliations for the USA and China, respectively. Massachusetts is the most prolific region for the USA with 211 publications, while Taiwan is the most prolific for China with 47 publications. China has relatively higher institutional and international collaboration. Nine main research areas are identified for the USA, compared with seven for China. CONCLUSIONS There is a steadily growing presence and increasing visibility of utilizing artificial intelligence on electronic health records in both the USA and China over the years. The results of the study demonstrate the research similarities and differences, as well as the strengths and weaknesses, of the two countries.
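The topic-modeling step this abstract mentions can be sketched with latent Dirichlet allocation over a bag-of-words representation of abstracts. A minimal illustration, assuming scikit-learn; the four-document corpus and two-topic setting below are invented stand-ins, not the study's data:

```python
# Hedged sketch: topic discovery over publication abstracts with latent
# Dirichlet allocation, one of the techniques the study applies to compare
# research themes. The corpus and topic count are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "deep learning for clinical decision support using electronic health records",
    "neural networks predict patient outcomes from electronic health records",
    "natural language processing of clinical notes for phenotyping",
    "text mining clinical notes with natural language processing",
]

# Term-count matrix, then fit LDA with two latent topics.
counts = CountVectorizer(stop_words="english").fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Each document gets a probability distribution over the discovered topics.
doc_topics = lda.transform(counts)
print(doc_topics.shape)  # one row per abstract, one column per topic
```

At scale, the per-document topic distributions are what a bibliometric study would aggregate per country to compare hot research topics.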
|
40
|
Safety of patient-facing digital symptom checkers. Lancet 2018; 392:2263-2264. [PMID: 30413281 DOI: 10.1016/s0140-6736(18)32819-8] [Citation(s) in RCA: 81] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/23/2018] [Revised: 10/27/2018] [Accepted: 10/30/2018] [Indexed: 10/27/2022]
|
41
|
Faceness-Net: Face Detection through Deep Facial Part Responses. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2018; 40:1845-1859. [PMID: 28809674 DOI: 10.1109/tpami.2017.2738644] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
We propose a deep convolutional neural network (CNN) for face detection that leverages facial attribute-based supervision. We observe that part detectors emerge within a CNN trained to classify attributes from uncropped face images, without any explicit part supervision. This observation motivates a new method for finding faces by scoring facial part responses according to their spatial structure and arrangement. The scoring mechanism is data-driven and carefully formulated to handle challenging cases where faces are only partially visible, which allows our network to detect faces under severe occlusion and unconstrained pose variations. Our method achieves promising performance on popular benchmarks including FDDB, PASCAL Faces, AFW, and WIDER FACE.
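The part-response scoring idea can be illustrated with a toy: given response maps from part detectors, a candidate window is scored by the evidence each part contributes near its expected relative position, so a partially occluded face still scores on its visible parts. This is a loose sketch of the concept only; the part set, the single-row lookup, and the random response maps are invented, not the paper's formulation:

```python
# Hedged toy of part-response scoring: random arrays stand in for the
# detector response maps, and each part is checked at its expected
# relative row inside the candidate window.
import numpy as np

rng = np.random.default_rng(0)
parts = {"eyes": 0.25, "nose": 0.55, "mouth": 0.8}   # expected relative row
maps = {p: rng.random((64, 64)) for p in parts}       # fake detector outputs

def window_score(top, left, h, w):
    """Sum, over parts, of the best response along the part's expected row."""
    score = 0.0
    for part, rel_row in parts.items():
        row = top + int(rel_row * h)          # where this part should sit
        band = maps[part][row, left:left + w] # responses along that row
        score += band.max()                   # best response counts as evidence
    return score

print(round(window_score(10, 10, 40, 40), 2))
```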
|
42
|
Discriminatively Trained Latent Ordinal Model for Video Classification. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2018; 40:1829-1844. [PMID: 28841549 DOI: 10.1109/tpami.2017.2741482] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
We address the problem of video classification for facial analysis and human action recognition. We propose a novel weakly supervised learning method that models a video as a sequence of automatically mined, discriminative sub-events (e.g., the onset and offset phases of a "smile", or the running and jumping phases of a "high jump"). The proposed model is inspired by recent work on Multiple Instance Learning and latent SVM/HCRF; it extends such frameworks to approximately model the ordinal structure of videos. We obtain consistent improvements over relevant competitive baselines on four challenging, publicly available video-based facial analysis datasets for predicting expression, clinical pain, and intent in dyadic conversations, and on three challenging human action datasets. We also validate the method with qualitative results, which largely support the intuitions behind it.
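The ordered sub-event idea can be sketched as follows: a clip is scored by the best assignment of K sub-event templates to frames, constrained to occur in temporal order (onset before offset). A minimal illustration under invented assumptions; the linear templates, random features, and brute-force search are stand-ins for the paper's learned model:

```python
# Hedged toy of latent ordered sub-events: the clip score is the best
# in-order placement of two sub-event templates over the frame sequence.
import numpy as np

rng = np.random.default_rng(0)
T, D, K = 12, 5, 2                      # frames, feature dim, sub-events
frames = rng.normal(size=(T, D))        # fake per-frame features
templates = rng.normal(size=(K, D))     # one linear template per sub-event

resp = frames @ templates.T             # frame-by-sub-event responses, (T, K)

# Best ordered assignment: frames t1 < t2 maximising the total response,
# enforcing that sub-event 0 occurs before sub-event 1.
best = max(resp[t1, 0] + resp[t2, 1]
           for t1 in range(T) for t2 in range(t1 + 1, T))
print(round(float(best), 2))
```

For longer videos and more sub-events this search is done efficiently (e.g., by dynamic programming) rather than by enumeration.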
|
43
|
Determining Anxiety in Obsessive Compulsive Disorder through Behavioural Clustering and Variations in Repetition Intensity. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 160:65-74. [PMID: 29728248 DOI: 10.1016/j.cmpb.2018.03.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/08/2017] [Revised: 02/18/2018] [Accepted: 03/20/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVES Over the last decade, the application of computer vision techniques to the analysis of behavioural patterns has seen a considerable increase in research interest. One such recent application is the visual behavioural analysis of mental disorders. Despite the surge of interest in this area, relatively little has been done thus far to assist individuals living with Obsessive Compulsive Disorder (OCD). The work proposed herein is a proof-of-concept system designed to demonstrate the efficacy of such an approach from the computational perspective. Its specific focus is a mechanism for clustering different kinds of OCD behaviours and comparing new behaviours against existing ones to estimate the level of anxiety represented by a compulsive behaviour. METHODS The proposed system uses temporal motion heat maps, SURF descriptors, a visual bag-of-words model, and SVM-based classification to categorise representations of various behaviours commonly seen in OCD. Moreover, we apply a set of statistical measures to the images in a given category to derive an approximate anxiety level for a given compulsive behaviour. This proof of concept is an essential step towards integrating computational approaches into the treatment of psychiatric conditions such as OCD, for more effective recovery. RESULTS Experimental simulations indicate that the proposed system correctly classifies different types of simulated OCD behaviour 75% of the time, with misclassifications occurring almost exclusively when two behavioural clusters appear highly similar. Based on this information, the system assigns an approximate anxiety level to compulsive behaviours that meets the approval of a mental health professional. CONCLUSIONS The proposed system demonstrates a good ability to categorise various types of simulated OCD behaviour, in addition to establishing an approximate anxiety level for a given compulsive behaviour. This research shows strong potential for such systems to assist mental health professionals seeking to better understand their patients' condition and treatment progress over time.
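The bag-of-visual-words plus SVM pipeline this abstract describes can be sketched as: cluster local descriptors into a visual vocabulary, represent each clip as a histogram of visual-word assignments, and classify the histograms. A minimal sketch assuming scikit-learn; random vectors stand in for the paper's SURF descriptors and behaviour recordings:

```python
# Hedged sketch of a bag-of-visual-words pipeline: KMeans builds the
# vocabulary, each "clip" becomes an L1-normalised word histogram, and a
# linear SVM classifies the histograms. All data are synthetic stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_words = 8

# Pretend descriptors: 200 per clip, 64-D (SURF descriptors are 64-D);
# the two synthetic classes are separated by their means.
labels = [0, 0, 1, 1]
clips = [rng.normal(loc=3 * label, size=(200, 64)) for label in labels]

# Build the visual vocabulary from all descriptors pooled together.
vocab = KMeans(n_clusters=n_words, n_init=10, random_state=0)
vocab.fit(np.vstack(clips))

def bow_histogram(descriptors):
    """Histogram of visual-word assignments, L1-normalised."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(c) for c in clips])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X))
```

In the paper's setting the histograms come from temporal motion heat maps of recorded behaviours rather than raw random features.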
|
44
|
Adaptive median binary patterns for fully automatic nerves tracking in ultrasound images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 160:129-140. [PMID: 29728240 DOI: 10.1016/j.cmpb.2018.03.013] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/19/2017] [Revised: 02/07/2018] [Accepted: 03/20/2018] [Indexed: 05/28/2023]
Abstract
BACKGROUND AND OBJECTIVE In the last decade, Ultrasound-Guided Regional Anesthesia (UGRA) has gained importance in surgical procedures and pain management, thanks to its ability to deliver local anesthetics to the target under direct sonographic visualization. However, practicing UGRA can be challenging, since it requires a highly skilled and experienced operator. Among the difficult tasks the operator faces is tracking the nerve structure in ultrasound images, which is challenging because of noise and other artifacts. METHODS In this paper, we introduce a new and robust tracking technique that uses the Adaptive Median Binary Pattern (AMBP) as a texture feature for tracking algorithms (particle filter, mean-shift, and Kanade-Lucas-Tomasi (KLT)). Moreover, we propose incorporating a Kalman filter as prediction and correction steps for the tracking algorithms, in order to enhance accuracy, reduce computational cost, and handle target disappearance. RESULTS The proposed method has been applied to real data and evaluated in different situations. The results show that tracking with AMBP features outperforms other descriptors, achieving the best performance with 95% accuracy. CONCLUSIONS This paper presents the first fully automatic nerve tracking method for ultrasound images. AMBP features outperform other descriptors in all situations, including noisy and filtered images.
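The Kalman predict/correct loop the paper wraps around its trackers can be sketched with a constant-velocity model smoothing noisy 2-D position measurements. A minimal sketch; the matrices, noise levels, and simulated trajectory below are illustrative assumptions, not the paper's values:

```python
# Hedged sketch of a constant-velocity Kalman filter: predict with the
# motion model, then correct with the (noisy) measured position. When the
# target disappears, the correction step can simply be skipped.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state transition; state = [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # we only measure position
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3           # process noise covariance
R = np.eye(2) * 0.25           # measurement noise covariance

x = np.zeros(4)                # initial state estimate
P = np.eye(4)                  # initial state covariance

def kalman_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
truth = np.array([0.0, 0.0])
for _ in range(50):
    truth += [1.0, 0.5]                       # target moves at constant velocity
    z = truth + rng.normal(scale=0.5, size=2) # noisy position measurement
    x, P = kalman_step(x, P, z)

print(np.round(x[:2], 1))  # estimated position, close to `truth`
```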
|
45
|
Abstract
One of the first surgical specialties to adopt robotic procedures and one that continues to innovate
|
46
|
Computational mechanisms underlying cortical responses to the affordance properties of visual scenes. PLoS Comput Biol 2018; 14:e1006111. [PMID: 29684011 PMCID: PMC5933806 DOI: 10.1371/journal.pcbi.1006111] [Citation(s) in RCA: 52] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2017] [Revised: 05/03/2018] [Accepted: 03/31/2018] [Indexed: 11/24/2022] Open
Abstract
Biologically inspired deep convolutional neural networks (CNNs), trained for computer vision tasks, have been found to predict cortical responses with remarkable accuracy. However, the internal operations of these models remain poorly understood, and the factors that account for their success are unknown. Here we develop a set of techniques for using CNNs to gain insights into the computational mechanisms underlying cortical responses. We focused on responses in the occipital place area (OPA), a scene-selective region of dorsal occipitoparietal cortex. In a previous study, we showed that fMRI activation patterns in the OPA contain information about the navigational affordances of scenes; that is, information about where one can and cannot move within the immediate environment. We hypothesized that this affordance information could be extracted using a set of purely feedforward computations. To test this idea, we examined a deep CNN with a feedforward architecture that had been previously trained for scene classification. We found that responses in the CNN to scene images were highly predictive of fMRI responses in the OPA. Moreover, the CNN accounted for the portion of OPA variance relating to the navigational affordances of scenes. The CNN could thus serve as an image-computable candidate model of affordance-related responses in the OPA. We then ran a series of in silico experiments on this model to gain insights into its internal operations. These analyses showed that the computation of affordance-related features relied heavily on visual information at high spatial frequencies and cardinal orientations, both of which have previously been identified as low-level stimulus preferences of scene-selective visual cortex. These computations also exhibited a strong preference for information in the lower visual field, which is consistent with known retinotopic biases in the OPA. Visualizations of feature selectivity within the CNN suggested that affordance-based responses encoded features that define the layout of the spatial environment, such as boundary-defining junctions and large extended surfaces. Together, these results map the sensory functions of the OPA onto a fully quantitative model that provides insights into its visual computations. More broadly, they advance integrative techniques for understanding visual cortex across multiple levels of analysis: from the identification of cortical sensory functions to the modeling of their underlying algorithms.
How does visual cortex compute behaviorally relevant properties of the local environment from sensory inputs? For decades, computational models have been able to explain only the earliest stages of biological vision, but recent advances in deep neural networks have yielded a breakthrough in the modeling of high-level visual cortex. However, these models are not explicitly designed for testing neurobiological theories, and, like the brain itself, their internal operations remain poorly understood. We examined a deep neural network for insights into the cortical representation of navigational affordances in visual scenes. In doing so, we developed a set of high-throughput techniques and statistical tools that are broadly useful for relating the internal operations of neural networks with the information processes of the brain. Our findings demonstrate that a deep neural network with purely feedforward computations can account for the processing of navigational layout in high-level visual cortex. We next performed a series of experiments and visualization analyses on this neural network. These analyses characterized a set of stimulus input features that may be critical for computing navigationally related cortical representations, and they identified a set of high-level, complex scene features that may serve as a basis set for the cortical coding of navigational layout. These findings suggest a computational mechanism through which high-level visual cortex might encode the spatial structure of the local navigational environment, and they demonstrate an experimental approach for leveraging the power of deep neural networks to understand the visual computations of the brain.
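The encoding-model step behind "CNN responses were highly predictive of fMRI responses" is typically a regularised linear regression from network-layer activations to voxel responses, scored on held-out images. A minimal sketch assuming scikit-learn; the features and "voxel" responses below are synthetic stand-ins for CNN activations and fMRI data:

```python
# Hedged sketch of a voxelwise encoding model: ridge regression maps
# feature activations to voxel responses, and held-out R^2 measures how
# much response variance the features explain. All data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_features, n_voxels = 200, 50, 10

features = rng.normal(size=(n_images, n_features))   # "CNN layer" activations
weights = rng.normal(size=(n_features, n_voxels))    # hidden generative weights
voxels = features @ weights + rng.normal(scale=0.1, size=(n_images, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(features, voxels, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)

r2 = model.score(X_te, y_te)   # variance explained on held-out images
print(round(r2, 2))
```

With real fMRI data the held-out R^2 would be far below the near-perfect value this noiseless toy produces, and it is usually compared against a noise ceiling.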
|
47
|
Artificial Intelligence Use in Healthcare Growing Fast. JOURNAL OF AHIMA 2017; 88:76. [PMID: 29424995] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
|
48
|
[Not Available]. REVUE MEDICALE SUISSE 2017; 13:928. [PMID: 28727366] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
|
49
|
Predicting IVF Outcome: A Proposed Web-based System Using Artificial Intelligence. In Vivo 2016; 30:507-512. [PMID: 27381616] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2016] [Accepted: 04/11/2016] [Indexed: 06/06/2023]
Abstract
AIM To propose a functional in vitro fertilization (IVF) prediction model to assist clinicians in tailoring personalized treatment for subfertile couples and improving assisted reproduction outcomes. MATERIALS AND METHODS Construction and evaluation of an enhanced web-based system with a novel Artificial Neural Network (ANN) architecture, with input and output parameters conforming to clinical and bibliographical standards, driven by a complete data set and trained by a network expert in an IVF setting. RESULTS The system can act as a routine information technology platform for the IVF unit and can recall and evaluate a vast amount of information rapidly and automatically to provide an objective indication of the outcome of an assisted reproductive cycle. CONCLUSION ANNs are an exceptional candidate for providing the fertility specialist with numerical estimates that promote the personalization of healthcare and the adaptation of the course of treatment according to the indications.
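The core idea of an ANN outcome predictor can be sketched as a small feed-forward network mapping cycle parameters to a success probability. A minimal sketch assuming scikit-learn; the feature names, synthetic rule, and data below are invented for illustration and are not the paper's clinical inputs:

```python
# Hedged sketch: a small MLP trained on synthetic "cycle parameters"
# outputs a per-cycle success probability, the kind of numerical estimate
# the abstract describes. All data and the labelling rule are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Pretend features: [age, number of oocytes, embryo quality score]
X = rng.normal(size=(300, 3))
# Synthetic rule: lower "age" plus more "oocytes" -> higher success odds.
y = ((-X[:, 0] + X[:, 1]) > 0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)

# predict_proba yields the numerical estimate a clinician could consult.
proba = net.predict_proba(X[:1])[0, 1]
print(f"train accuracy: {net.score(X, y):.2f}, sample probability: {proba:.2f}")
```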
|
50
|
Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques: Applications in Power Management in Electronic Circuits. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2016; 27:375-387. [PMID: 26513805 DOI: 10.1109/tnnls.2015.2480545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Associative memories are data structures that allow retrieval of previously stored messages given part of their content. They thus behave similarly to human memory, which is capable, for instance, of retrieving the end of a song given its beginning. Among the different families of associative memories, sparse ones are known to provide the best efficiency (the ratio of the number of bits stored to the number of bits used). Recently, a new family of sparse associative memories achieving almost optimal efficiency has been proposed. Their structure, relying on binary connections and neurons, induces a direct mapping between input messages and stored patterns. Nevertheless, it is well known that nonuniformity of the stored messages can lead to a dramatic decrease in performance. In this paper, we show the impact of nonuniformity on the performance of this recent model, and we exploit the structure of the model to improve its performance in practical applications, where data are not necessarily uniform. To approach the performance that theoretical studies report for networks with uniformly distributed messages, twin neurons are introduced. To assess the adapted model, twin neurons are applied to real-world data to optimize the power consumption of electronic circuits in practical test cases.
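The clique-based sparse associative memory this work builds on can be sketched as follows: neurons are grouped into clusters, a message activates one neuron per cluster, storage adds the binary clique connecting those neurons, and retrieval scores candidate neurons by their connections to the known part of the message. A minimal sketch with illustrative sizes (the twin-neuron extension is not shown):

```python
# Hedged sketch of a clique-based binary associative memory: store full
# cliques over one-neuron-per-cluster messages, then complete partial
# messages by winner-take-all scoring against the known neurons.
import numpy as np

c, l = 4, 16                       # clusters and neurons per cluster
n = c * l
W = np.zeros((n, n), dtype=bool)   # binary connection matrix

def idx(cluster, neuron):
    """Flat index of a neuron within its cluster."""
    return cluster * l + neuron

def store(message):
    """message: one neuron index per cluster; connect the full clique."""
    units = [idx(ci, mi) for ci, mi in enumerate(message)]
    for a in units:
        for b in units:
            if a != b:
                W[a, b] = True

def retrieve(partial):
    """partial: neuron index per cluster, or None where erased."""
    known = [idx(ci, mi) for ci, mi in enumerate(partial) if mi is not None]
    out = list(partial)
    for ci, mi in enumerate(partial):
        if mi is None:
            # Winner-take-all: the neuron with most connections to known units.
            scores = [W[idx(ci, ni), known].sum() for ni in range(l)]
            out[ci] = int(np.argmax(scores))
    return out

store([3, 7, 1, 12])
store([5, 2, 9, 0])
print(retrieve([3, None, 1, None]))  # completes the first stored message
```

Nonuniform messages hurt this scheme because popular neurons accumulate spurious connections across cliques, which is the failure mode the twin-neuron mechanism is designed to mitigate.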
|