1.
Al-Sabri R, Gao J, Chen J, Oloulade BM, Lyu T. Multi-View Graph Neural Architecture Search for Biomedical Entity and Relation Extraction. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2023;20:1221-1233. PMID: 36074877; DOI: 10.1109/tcbb.2022.3205113.
Abstract
Recently, graph neural architecture search (GNAS) frameworks have been successfully used to automatically design optimal neural architectures for problems such as node classification and graph classification. In existing GNAS frameworks, the designed graph neural network (GNN) architectures learn representations of homogeneous graphs, in which a single type of relationship connects two nodes. However, multi-view graphs, where each view represents one type of relationship among nodes, are ubiquitous in the real world. Traditional GNAS frameworks learn graph representations without considering the interactions between nodes under multiple relationships, so they fail on multi-view graph problems such as those that arise when biomedical entity and relation extraction is modelled as a multi-view graph. In this paper, we propose MVGNAS, a multi-view graph neural network automatic modelling framework for biomedical entity and relation extraction, to address this challenge. In MVGNAS, we propose an automatic multi-view representation learning method that learns low-dimensional node representations capturing the multiple relationships in a multi-view graph; to our knowledge, this is the first work in the literature on multi-view graph representation learning architecture search for biomedical entity and relation extraction. The experimental results demonstrate that MVGNAS outperforms state-of-the-art baseline methods on biomedical entity and relation extraction tasks.
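The core idea, learning one representation per relation type (view) and then fusing the per-view embeddings, can be illustrated with a minimal NumPy sketch. This is not the MVGNAS search space itself; the mean fusion and ReLU below are illustrative stand-ins for the operators such a framework would search over:

```python
import numpy as np

def normalize_adj(a):
    # Add self-loops and row-normalize so each node averages
    # over itself and its neighbours.
    a = a + np.eye(a.shape[0])
    return a / a.sum(axis=1, keepdims=True)

def multi_view_gnn_layer(views, x, weights):
    # views:   list of (n, n) adjacency matrices, one per relation type
    # x:       (n, d_in) node features shared across views
    # weights: list of (d_in, d_out) per-view projection matrices
    view_outputs = []
    for a, w in zip(views, weights):
        h = normalize_adj(a) @ x @ w          # per-view message passing
        view_outputs.append(np.maximum(h, 0.0))  # ReLU
    return np.mean(view_outputs, axis=0)      # fuse per-view embeddings
```

With two views over three nodes, the layer yields one fused (n, d_out) embedding matrix that reflects both relationship types.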
2.
de Freitas MP, Piai VA, Farias RH, Fernandes AMR, de Moraes Rossetto AG, Leithardt VRQ. Artificial Intelligence of Things Applied to Assistive Technology: A Systematic Literature Review. Sensors (Basel) 2022;22:8531. PMID: 36366227; PMCID: PMC9658699; DOI: 10.3390/s22218531.
Abstract
According to the World Health Organization, about 15% of the world's population has some form of disability. Assistive Technology directly helps people with disabilities overcome difficulties in their daily lives, allowing them to receive an education and to join the labor market and society with dignity. Assistive Technology has made great advances in its integration with Artificial Intelligence of Things (AIoT) devices. AIoT processes and analyzes the large amount of data generated by Internet of Things (IoT) devices and applies Artificial Intelligence models, specifically machine learning, to discover patterns that generate insights and assist decision making. Based on a systematic literature review, this article identifies the machine-learning models used in research on Artificial Intelligence of Things applied to Assistive Technology. The survey also covers the context of this research, its applications, the IoT devices used, and gaps and opportunities for further development. The results show that 50% of the analyzed studies address visual impairment, so most of the covered topics concern computer vision. Portable devices, wearables, and smartphones constitute the majority of the IoT devices. Deep neural networks account for 81% of the machine-learning models applied in the reviewed studies.
Affiliation(s)
- Vinícius Aquino Piai
- School of Sea, Science and Technology, University of the Itajaí Valley, Itajaí 88302-901, Brazil
- Ricardo Heffel Farias
- School of Sea, Science and Technology, University of the Itajaí Valley, Itajaí 88302-901, Brazil
- Anita M. R. Fernandes
- School of Sea, Science and Technology, University of the Itajaí Valley, Itajaí 88302-901, Brazil
- Valderi Reis Quietinho Leithardt
- COPELABS, Lusófona University of Humanities and Technologies, Campo Grande 376, 1749-024 Lisboa, Portugal
- VALORIZA, Research Center for Endogenous Resources Valorization, Instituto Politécnico de Portalegre, 7300-555 Portalegre, Portugal
3.
Piao C, Lv M, Wang S, Zhou R, Wang Y, Wei J, Liu J. Multi-objective data enhancement for deep learning-based ultrasound analysis. BMC Bioinformatics 2022;23:438. PMID: 36266626; PMCID: PMC9583467; DOI: 10.1186/s12859-022-04985-4.
Abstract
Recently, deep learning-based automatic generation of treatment recommendations has been attracting much attention. However, medical datasets are usually small, which may lead to over-fitting and inferior performance of deep learning models. In this paper, we propose a multi-objective data enhancement method that indirectly scales up the medical data to avoid over-fitting and to generate high-quality treatment recommendations. Specifically, we define a main task and several auxiliary tasks on the same dataset and train a specific model for each task, so that different aspects of knowledge are learned from the limited data. Meanwhile, a Soft Parameter Sharing method is exploited to share learned knowledge among the models. By transferring the knowledge learned by the auxiliary tasks to the main task, the proposed method can take different semantic distributions into account while training the main task. We collected an ultrasound dataset of thyroid nodules containing Findings, Impressions, and Treatment Recommendations labeled by professional doctors, and we conducted extensive experiments on it to validate the proposed method and demonstrate that it outperforms existing methods.
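Soft parameter sharing, as used here, gives each task its own weights and adds a regularizer that pulls corresponding matrices of the main and auxiliary models toward each other. A minimal sketch of such a penalty term (the `strength` coefficient is an illustrative assumption, not a value from the paper):

```python
import numpy as np

def soft_sharing_penalty(main_params, aux_params, strength=0.01):
    # Soft parameter sharing: each task keeps its own weights, but an
    # L2 penalty on the difference of corresponding parameter tensors
    # nudges the main model toward what the auxiliary task learned.
    return strength * sum(
        float(np.sum((wm - wa) ** 2))
        for wm, wa in zip(main_params, aux_params)
    )
```

The penalty would be added to the main task's loss during training; identical parameter sets contribute zero, so sharing only acts where the tasks disagree.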
Collapse
Affiliation(s)
- Chengkai Piao
- College of Computer Science, Nankai University, Tianjin, China
- Mengyue Lv
- Department of Ultrasound, Cangzhou Municipal Haixing Hospital, Cangzhou, China
- Shujie Wang
- Department of Ultrasound, Cangzhou Municipal Haixing Hospital, Cangzhou, China
- Rongyan Zhou
- Department of Ultrasound, Cangzhou Municipal Haixing Hospital, Cangzhou, China
- Yuchen Wang
- College of Computer Science, Nankai University, Tianjin, China
- Jinmao Wei
- College of Computer Science, Nankai University, Tianjin, China
- Jian Liu
- College of Computer Science, Nankai University, Tianjin, China
4.
MRE: A Military Relation Extraction Model Based on BiGRU and Multi-Head Attention. Symmetry (Basel) 2021. DOI: 10.3390/sym13091742.
Abstract
A great deal of operational information exists in the form of text, so extracting operational information from unstructured military text is of great significance for supporting command decision making and operations. Military relation extraction, one of the main tasks of military information extraction, aims to identify the relation between two named entities in unstructured military text. However, traditional military relation extraction methods struggle with problems such as inadequate manual features and inaccurate Chinese word segmentation in the military domain, and they fail to make full use of the symmetrical entity relations in military texts. Building on a pre-trained language model, we present a Chinese military relation extraction method that combines a bi-directional gated recurrent unit (BiGRU) with a multi-head attention mechanism (MHATT). More specifically, we construct an embedding layer that combines word embeddings from the pre-trained language model with position embeddings; the output vectors of the BiGRU network are symmetrically spliced to learn the semantic features of the context, and a multi-head attention mechanism is fused in to improve the expression of semantic information. In extensive experiments on a military text corpus that we built, our method outperforms traditional non-attention, attention, and improved-attention models, improving the comprehensive F1-score by about 4%.
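The multi-head attention stage described above can be sketched as scaled dot-product attention over the (already spliced) BiGRU hidden states. This is a generic NumPy implementation of the mechanism, not the paper's code; the projection matrices are assumed inputs:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(h, wq, wk, wv, num_heads):
    # h: (seq_len, d_model) hidden states, e.g. spliced forward and
    # backward BiGRU outputs. Each head attends within its own
    # d_model // num_heads subspace; head outputs are concatenated.
    seq_len, d_model = h.shape
    d_head = d_model // num_heads
    q, k, v = h @ wq, h @ wk, h @ wv
    heads = []
    for i in range(num_heads):
        s = slice(i * d_head, (i + 1) * d_head)
        scores = softmax(q[:, s] @ k[:, s].T / np.sqrt(d_head), axis=-1)
        heads.append(scores @ v[:, s])
    return np.concatenate(heads, axis=1)  # (seq_len, d_model)
```

The output has the same shape as the input, so it can feed directly into a downstream relation classifier.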
5.
Lee LH, Lu Y. Multiple Embeddings Enhanced Multi-Graph Neural Networks for Chinese Healthcare Named Entity Recognition. IEEE J Biomed Health Inform 2021;25:2801-2810. PMID: 33385314; DOI: 10.1109/jbhi.2020.3048700.
Abstract
Named Entity Recognition (NER) is a natural language processing task that recognizes named entities in a given sentence. Chinese NER is difficult due to the lack of delimiting spaces and of conventional features for determining named entity boundaries and categories. This study proposes the ME-MGNN (Multiple Embeddings enhanced Multi-Graph Neural Networks) model for Chinese NER in the healthcare domain. We integrate multiple embeddings at different granularities, from the radical and character levels to the word level, into an extended character representation, which is fed into multiple gated graph sequence neural networks to identify named entities and classify their types. The experimental datasets were collected from health-related news, digital health magazines, and medical question/answer forums, and a total of 68,460 named entities across 10 entity types (body, symptom, instrument, examination, chemical, disease, drug, supplement, treatment, and time) were manually annotated in 30,692 sentences. Experimental results show that ME-MGNN achieved an F1-score of 75.69, outperforming previous methods, and a series of model analyses indicates that the method is effective and efficient for Chinese healthcare NER.
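The extended character representation can be sketched as a concatenation of radical-, character-, and word-level embedding lookups. The lookup tables, keys, and radical mapping below are toy assumptions for illustration; in the actual model this vector feeds into gated graph sequence neural networks:

```python
import numpy as np

def extended_char_repr(char, word, radical_of, rad_emb, char_emb, word_emb):
    # Build one character's extended representation by concatenating
    # its radical-, character- and word-level embeddings; unknown keys
    # fall back to a zero vector of the table's dimension.
    def lookup(table, key):
        dim = len(next(iter(table.values())))
        return table[key] if key in table else np.zeros(dim)

    parts = [
        lookup(rad_emb, radical_of.get(char)),  # radical level
        lookup(char_emb, char),                 # character level
        lookup(word_emb, word),                 # enclosing-word level
    ]
    return np.concatenate(parts)
```

The result's dimension is the sum of the three embedding sizes, giving each character both sub-character and lexical context.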
6.
Mitra S, Saha S, Hasanuzzaman M. A Multi-View Deep Neural Network Model for Chemical-Disease Relation Extraction From Imbalanced Datasets. IEEE J Biomed Health Inform 2020;24:3315-3325. DOI: 10.1109/jbhi.2020.2983365.
7.
Zhao D, Wang J, Zhang Y, Wang X, Lin H, Yang Z. Incorporating representation learning and multihead attention to improve biomedical cross-sentence n-ary relation extraction. BMC Bioinformatics 2020;21:312. PMID: 32677883; PMCID: PMC7364499; DOI: 10.1186/s12859-020-03629-9.
Abstract
Background: Most biomedical information extraction focuses on binary relations within single sentences, but extracting n-ary relations that span multiple sentences is in high demand. In the cross-sentence n-ary relation extraction task, the current mainstream methods not only rely heavily on syntactic parsing but also ignore prior knowledge.

Results: In this paper, we propose a novel cross-sentence n-ary relation extraction method that utilizes multihead attention and knowledge representations learned from a knowledge graph. Our model is built on self-attention, which can directly capture the relation between two words regardless of their syntactic relation. In addition, our method uses entity and relation information from the knowledge base to assist in predicting the relation. Experiments on n-ary relation extraction show that combining context and knowledge representations can significantly improve performance, and we achieve results comparable to state-of-the-art methods.

Conclusions: We explored a novel method for cross-sentence n-ary relation extraction. Unlike previous approaches, our method operates directly on the sequence and learns how to model the internal structure of sentences. In addition, we introduce knowledge representations learned from the knowledge graph into cross-sentence n-ary relation extraction. Experiments on knowledge representation learning show that entity and relation representations can be learned from the knowledge graph, and encoding this knowledge provides consistent benefits.
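The way knowledge-base information assists prediction can be sketched as concatenating pretrained knowledge-graph entity embeddings onto the self-attention context vector before the relation classifier. The function and table names below are illustrative assumptions, not the paper's API:

```python
import numpy as np

def knowledge_augmented_repr(context_vec, entity_ids, kg_emb):
    # context_vec: pooled self-attention representation of the
    #              (possibly multi-sentence) input span
    # entity_ids:  the candidate entities of the n-ary relation
    # kg_emb:      dict mapping entity id -> pretrained KG embedding
    ents = [kg_emb[e] for e in entity_ids]
    # The joint vector is what a final classifier would consume.
    return np.concatenate([context_vec] + ents)
```

For an n-ary relation the output length grows with n, so the downstream classifier's input size depends on how many candidate entities are encoded.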
Affiliation(s)
- Di Zhao
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Jian Wang
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Yijia Zhang
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Xin Wang
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Hongfei Lin
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Zhihao Yang
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
8.
Bio-semantic relation extraction with attention-based external knowledge reinforcement. BMC Bioinformatics 2020;21:213. PMID: 32448122; PMCID: PMC7245897; DOI: 10.1186/s12859-020-3540-8.
Abstract
Background: Semantic resources such as knowledge bases contain high-quality structured knowledge and therefore require significant effort from domain experts. Using these resources to reinforce information extraction from unstructured text may further exploit the potential of both the text and the curated knowledge.

Results: This paper proposes a novel method that uses a deep neural network model incorporating prior knowledge to improve the automated extraction of biological semantic relations from the scientific literature. The model is based on a recurrent neural network that combines an attention mechanism with semantic resources, namely UniProt and BioModels. Our method is evaluated on the BioNLP and BioCreative corpora, sets of manually annotated biological text. The experiments demonstrate that the method outperforms current state-of-the-art models and that structured semantic information improves biomedical text-mining results.

Conclusion: The experimental results show that our approach can effectively make use of external prior knowledge and improve performance on the protein-protein interaction extraction task. The method should generalize to other types of data, although it has been validated only on biomedical texts.