1. Guo Z, Chen F. Idle-state detection in motor imagery of articulation using early information: A functional near-infrared spectroscopy study. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103369
2. Wang C, Yan H, Huang W, Li J, Wang Y, Fan YS, Sheng W, Liu T, Li R, Chen H. Reconstructing Rapid Natural Vision with fMRI-Conditional Video Generative Adversarial Network. Cereb Cortex 2022; 32:4502-4511. PMID: 35078227. DOI: 10.1093/cercor/bhab498
Abstract
Recent functional magnetic resonance imaging (fMRI) studies have made significant progress in reconstructing perceived visual content, advancing our understanding of the visual mechanism. However, reconstructing dynamic natural vision remains a challenge because of the limited temporal resolution of fMRI. Here, we developed a novel fMRI-conditional video generative adversarial network (f-CVGAN) to reconstruct rapid video stimuli from evoked fMRI responses. In this model, a generator produces spatiotemporal reconstructions, which are assessed by two separate discriminators (spatial and temporal). We trained and tested the f-CVGAN on two publicly available video-fMRI datasets, and the model produced pixel-level reconstructions of eight perceived video frames from each fMRI volume. Experimental results showed that the reconstructed videos were fMRI-related and captured important spatial and temporal information of the original stimuli. Moreover, we visualized the cortical importance map and found that the visual cortex is extensively involved in the reconstruction, with the low-level visual areas (V1/V2/V3/V4) showing the largest contribution. Our work suggests that slow blood-oxygen-level-dependent signals carry neural representations of fast perceptual processes that can be decoded in practice.
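The generator-plus-two-discriminators arrangement described in this abstract can be sketched at the level of tensor shapes. Everything below (the dimensions, and the random linear maps standing in for the trained networks) is an illustrative assumption, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's values)
V = 128      # voxels in one fMRI volume
T = 8        # video frames reconstructed per volume
H = W = 16   # frame height / width

# Generator: one fMRI volume -> T grayscale frames (random linear map as a stand-in)
G = rng.normal(size=(V, T * H * W))
def generator(fmri):
    return np.tanh(fmri @ G).reshape(T, H, W)

# Spatial discriminator: scores each frame independently (appearance realism)
Ds = rng.normal(size=(H * W,))
def spatial_scores(video):
    return video.reshape(T, -1) @ Ds          # one score per frame

# Temporal discriminator: scores frame-to-frame differences (motion realism)
Dt = rng.normal(size=(H * W,))
def temporal_scores(video):
    diffs = np.diff(video, axis=0)            # (T-1, H, W) motion between frames
    return diffs.reshape(T - 1, -1) @ Dt      # one score per transition

fmri = rng.normal(size=(V,))
video = generator(fmri)
print(video.shape, spatial_scores(video).shape, temporal_scores(video).shape)
# (8, 16, 16) (8,) (7,)
```

In adversarial training, the spatial discriminator would push each reconstructed frame toward realistic appearance, while the temporal discriminator, operating on frame differences, would push the sequence toward realistic motion.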
Affiliation(s)
- Chong Wang
- The Clinical Hospital of Chengdu Brain Science Institute, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- MOE Key Lab for Neuroinformation; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu 610054, China
- Hongmei Yan
- The Clinical Hospital of Chengdu Brain Science Institute, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- MOE Key Lab for Neuroinformation; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu 610054, China
- Wei Huang
- The Clinical Hospital of Chengdu Brain Science Institute, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Jiyi Li
- The Clinical Hospital of Chengdu Brain Science Institute, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Yuting Wang
- The Clinical Hospital of Chengdu Brain Science Institute, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Yun-Shuang Fan
- The Clinical Hospital of Chengdu Brain Science Institute, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Wei Sheng
- The Clinical Hospital of Chengdu Brain Science Institute, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Tao Liu
- The Clinical Hospital of Chengdu Brain Science Institute, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Rong Li
- The Clinical Hospital of Chengdu Brain Science Institute, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- MOE Key Lab for Neuroinformation; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu 610054, China
- Huafu Chen
- The Clinical Hospital of Chengdu Brain Science Institute, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- MOE Key Lab for Neuroinformation; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu 610054, China
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China
3. A neural decoding algorithm that generates language from visual activity evoked by natural images. Neural Netw 2021; 144:90-100. PMID: 34478941. DOI: 10.1016/j.neunet.2021.08.006
Abstract
Transforming neural activity into language would be revolutionary for human-computer interaction as well as for the functional restoration of aphasia. The rapid development of artificial intelligence has made it feasible to decode the neural signals of human visual activity. In this paper, a novel Progressive Transfer Language Decoding Model (PT-LDM) is proposed to decode visual fMRI signals into phrases or sentences while natural images are being watched. The PT-LDM consists of an image-encoder, an fMRI-encoder and a language-decoder. The results showed that phrases and sentences were successfully generated from visual activity. Similarity analysis showed that three widely used evaluation metrics, BLEU, ROUGE and CIDEr, averaged 0.182, 0.197 and 0.680, respectively, between the generated texts and the corresponding annotated texts in the testing set, significantly higher than the baseline. Moreover, we found that higher visual areas usually performed better than lower visual areas, and that the contribution of visual response patterns to language decoding varied across successive time points. Our findings demonstrate that the neural representations elicited in visual cortices when scenes are viewed already contain semantic information that can be used to generate human language. Our study shows the potential of language-based brain-machine interfaces, especially for helping people with aphasia communicate more efficiently using fMRI signals.
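The text-similarity scores reported above (BLEU and its relatives) compare n-gram overlap between a decoded sentence and its annotation. A toy clipped unigram precision, the core ingredient of BLEU-1 (simplified: no brevity penalty, a single reference; the sentences are invented for illustration), looks like this:

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision: each candidate word counts as a match
    at most as many times as it appears in the reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    matched = sum(min(c, ref[w]) for w, c in cand.items())
    return matched / sum(cand.values())

decoded   = "a man riding a big surfboard"
annotated = "a man is riding a wave on a surfboard"
score = unigram_precision(decoded, annotated)
print(round(score, 3))  # 5 of 6 candidate tokens match -> 0.833
```

Full BLEU also multiplies in higher-order n-gram precisions and a brevity penalty; this sketch only shows why overlapping vocabulary drives the score up.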
4. Huang W, Yan H, Cheng K, Wang Y, Wang C, Li J, Li C, Li C, Zuo Z, Chen H. A dual-channel language decoding from brain activity with progressive transfer training. Hum Brain Mapp 2021; 42:5089-5100. PMID: 34314088. PMCID: PMC8449118. DOI: 10.1002/hbm.25603
Abstract
When we view a scene, the visual cortex extracts and processes its visual information through various kinds of neural activity. Previous studies have decoded such neural activity into single or multiple semantic category tags, which can caption the scene to some extent. However, these tags are isolated words with no grammatical structure and convey only part of what the scene contains. It is well known that textual language (sentences or phrases) is superior to single words in disclosing the meaning of images and better reflects people's real understanding of them. Here, based on artificial intelligence technologies, we built a dual-channel language decoding model (DC-LDM) to decode the neural activities evoked by images into language (phrases or short sentences). The DC-LDM consists of five modules: Image-Extractor, Image-Encoder, Nerve-Extractor, Nerve-Encoder, and Language-Decoder. In addition, we employed a progressive transfer strategy to train the DC-LDM and improve its language-decoding performance. The results showed that the texts decoded by the DC-LDM described natural image stimuli accurately and vividly. We adopted six indexes to quantitatively evaluate the difference between the decoded texts and the annotated texts of the corresponding visual images, and found that Word2vec-Cosine similarity (WCS) was the best indicator of the similarity between decoded and annotated texts. In addition, among the visual cortices, the text decoded from the higher visual cortex was more consistent with the description of the natural image than that from the lower one. Our decoding model may inform language-based brain-computer interface explorations.
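Word2vec-Cosine similarity, the best-performing index above, scores two texts by the cosine of their mean word vectors. A toy version with made-up 3-dimensional embeddings (real WCS would use vectors from a trained word2vec model):

```python
import numpy as np

# Invented 3-d "word embeddings" for illustration only
emb = {
    "dog":  np.array([0.9, 0.1, 0.0]),
    "cat":  np.array([0.8, 0.2, 0.1]),
    "runs": np.array([0.1, 0.9, 0.0]),
    "sits": np.array([0.2, 0.8, 0.1]),
}

def wcs(text_a, text_b):
    """Cosine similarity between the mean word vectors of two texts."""
    va = np.mean([emb[w] for w in text_a.split()], axis=0)
    vb = np.mean([emb[w] for w in text_b.split()], axis=0)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(round(wcs("dog runs", "cat sits"), 2))  # close to 1: paraphrase-like pair
```

Unlike n-gram overlap metrics, WCS rewards semantically similar wording even when no surface tokens match, which is presumably why it tracked decoding quality best here.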
Affiliation(s)
- Wei Huang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Hongmei Yan
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Kaiwen Cheng
- School of Language Intelligence, Sichuan International Studies University, Chongqing, China
- Yuting Wang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Chong Wang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Jiyi Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Chen Li
- Department of Medical Information Engineering, Sichuan University, Chengdu, China
- Chaorong Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Zhentao Zuo
- State Key Laboratory of Brain and Cognitive Science, Beijing MR Center for Brain Research, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Huafu Chen
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
5. Huang W, Yan H, Wang C, Yang X, Li J, Zuo Z, Zhang J, Chen H. Deep Natural Image Reconstruction from Human Brain Activity Based on Conditional Progressively Growing Generative Adversarial Networks. Neurosci Bull 2021; 37:369-379. PMID: 33222145. PMCID: PMC7954952. DOI: 10.1007/s12264-020-00613-4
Abstract
Brain decoding based on functional magnetic resonance imaging has recently enabled the identification of visual perception and mental states. However, due to limited sample sizes and the lack of an effective reconstruction model, accurate reconstruction of natural images is still a major challenge. The current rapid development of deep learning models offers a way to overcome these obstacles. Here, we propose a deep learning-based framework that includes a latent feature extractor, a latent feature decoder, and a natural image generator to achieve accurate reconstruction of natural images from brain activity. The latent feature extractor extracts the latent features of natural images. The latent feature decoder predicts those latent features from the response signals of the higher visual cortex. The natural image generator then produces reconstructed images from the predicted latent features and the response signals of the visual cortex. Quantitative and qualitative evaluations were conducted on test images. The results showed that the reconstructed images accurately reproduced the presented images in both high-level semantic category information and low-level pixel information. The framework we propose shows promise for decoding brain activity.
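The middle stage of this framework, the latent feature decoder, is essentially a regression from fMRI responses to image latent features. A minimal simulation with a least-squares linear decoder (the sizes, the linear generative assumption, and the noise level are all illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

n, voxels, latent_dim = 200, 50, 8   # illustrative sizes

# Simulated training data: image latent features and linearly related fMRI responses
Z = rng.normal(size=(n, latent_dim))               # "true" latent features
A = rng.normal(size=(latent_dim, voxels))          # latent -> voxel map
X = Z @ A + 0.01 * rng.normal(size=(n, voxels))    # noisy simulated responses

# Latent feature decoder: linear map fMRI -> latent, fit by least squares
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

# Predict latents back from responses (training fit shown;
# a real decoder would be evaluated on held-out data)
Z_hat = X @ W
print(np.allclose(Z_hat, Z, atol=0.1))  # True
```

In the full pipeline, these predicted latents would then condition the image generator; the point of the sketch is only that the fMRI-to-latent step is a learnable regression.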
Affiliation(s)
- Wei Huang
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Hongmei Yan
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Chong Wang
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Xiaoqing Yang
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Jiyi Li
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Zhentao Zuo
- State Key Laboratory of Brain and Cognitive Science, Beijing MR Center for Brain Research, Institute of Biophysics, Chinese Academy of Sciences, Beijing, 100101, China
- Jiang Zhang
- Department of Medical Information Engineering, Sichuan University, Chengdu, 610065, China
- Huafu Chen
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 610054, China
6. Wang H, Liang P, Zheng L, Long C, Li H, Zuo Y. eHSCPr discriminating the cell identity involved in endothelial to hematopoietic transition. Bioinformatics 2021; 37:2157-2164. PMID: 33532815. DOI: 10.1093/bioinformatics/btab071
Abstract
MOTIVATION: Hematopoietic stem cells (HSCs) give rise to all blood cells and, through their pluripotency and self-renewal properties, play a vital role throughout the lifespan. Accurately identifying the stages of early HSC development is extremely important, as it may open up new prospects for extracorporeal blood research. Existing experimental techniques for identifying the early stages of HSC development are time-consuming and expensive. Machine learning excels at processing massive single-cell data, so it is desirable to develop computational models as complements to experimental techniques.
RESULTS: In this study, we present a novel predictor called eHSCPr specifically for predicting the early stages of HSC development. To reveal the distinct genes at each developmental stage, we compared the F-score with three state-of-the-art differential gene selection methods (limma, DESeq2, edgeR) and evaluated their performance. The F-score captured the more critical surface markers of endothelial and hematopoietic cells, and the area under the receiver operating characteristic (ROC) curve was 0.987. Based on an SVM, the 10-fold cross-validation accuracy of eHSCPr reached 94.84% on the independent dataset and 94.19% on the training dataset. Importantly, we performed transcription analysis on the F-score gene set, which further enriched the signal markers of the HSC developmental stages. eHSCPr can be a powerful tool for predicting early stages of HSC development, facilitating hypothesis-driven experimental design and providing crucial clues for in vitro blood regeneration studies.
AVAILABILITY: http://bioinfor.imu.edu.cn/ehscpr
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
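The F-score used above for differential gene selection ranks each feature by between-class separation relative to within-class spread. The formulation below follows the common Chen-and-Lin definition, applied to simulated values rather than real HSC expression data:

```python
import numpy as np

def f_score(xp, xn):
    """F-score of one feature for a two-class problem:
    (separation of class means from the grand mean) / (sum of class variances)."""
    m, mp, mn = np.mean(np.r_[xp, xn]), xp.mean(), xn.mean()
    num = (mp - m) ** 2 + (mn - m) ** 2
    den = xp.var(ddof=1) + xn.var(ddof=1)
    return num / den

rng = np.random.default_rng(2)
noise_pos  = rng.normal(0, 1, 100)   # uninformative feature: same distribution in both classes
noise_neg  = rng.normal(0, 1, 100)
marker_pos = rng.normal(3, 1, 100)   # marker-like feature: shifted in the positive class
marker_neg = rng.normal(0, 1, 100)

print(f_score(marker_pos, marker_neg) > f_score(noise_pos, noise_neg))  # True
```

Features are then ranked by this score, and the top-ranked ones (here, the simulated marker) feed the downstream SVM classifier.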
Affiliation(s)
- Hao Wang
- State Key Laboratory of Reproductive Regulation and Breeding of Grassland Livestock, College of Life Sciences, Inner Mongolia University, Hohhot, 010070, China
- Pengfei Liang
- State Key Laboratory of Reproductive Regulation and Breeding of Grassland Livestock, College of Life Sciences, Inner Mongolia University, Hohhot, 010070, China
- Lei Zheng
- State Key Laboratory of Reproductive Regulation and Breeding of Grassland Livestock, College of Life Sciences, Inner Mongolia University, Hohhot, 010070, China
- ChunShen Long
- State Key Laboratory of Reproductive Regulation and Breeding of Grassland Livestock, College of Life Sciences, Inner Mongolia University, Hohhot, 010070, China
- HanShuang Li
- State Key Laboratory of Reproductive Regulation and Breeding of Grassland Livestock, College of Life Sciences, Inner Mongolia University, Hohhot, 010070, China
- Yongchun Zuo
- State Key Laboratory of Reproductive Regulation and Breeding of Grassland Livestock, College of Life Sciences, Inner Mongolia University, Hohhot, 010070, China
7. Chen H, Lu F, He B. Topographic property of backpropagation artificial neural network: From human functional connectivity network to artificial neural network. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.07.103
8. Wang C, Yan H, Huang W, Li J, Yang J, Li R, Zhang L, Li L, Zhang J, Zuo Z, Chen H. 'When' and 'what' did you see? A novel fMRI-based visual decoding framework. J Neural Eng 2020; 17:056013. DOI: 10.1088/1741-2552/abb691
9. Huang W, Yan H, Wang C, Li J, Yang X, Li L, Zuo Z, Zhang J, Chen H. Long short-term memory-based neural decoding of object categories evoked by natural images. Hum Brain Mapp 2020; 41:4442-4453. PMID: 32648632. PMCID: PMC7502843. DOI: 10.1002/hbm.25136
Abstract
Visual perceptual decoding is an important and challenging topic in cognitive neuroscience. Building a mapping model between visual response signals and visual contents is the key point of decoding. Most previous studies used peak response signals to decode object categories. However, brain activity measured by functional magnetic resonance imaging is a dynamic, time-dependent process, so peak signals cannot fully represent the whole process, which may limit decoding performance. Here, we propose a decoding model based on a long short-term memory (LSTM) network to decode five object categories from multitime response signals evoked by natural images. Experimental results show that the average decoding accuracy across the five subjects using the multitime (2-6 s) response signals is 0.540, significantly higher than that using the peak signals alone (6 s; accuracy: 0.492; p < .05). In addition, the decoding performance for the five object categories is explored in depth from the perspectives of duration, method, and visual area. The analysis of durations and decoding methods reveals that the LSTM-based decoding model, with its ability to model sequences, can fit the time dependence of the multitime visual response signals and achieve higher decoding performance. The comparative analysis of visual areas demonstrates that the higher visual cortex (VC) contains more of the semantic category information needed for visual perceptual decoding than the lower VC.
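A decoder of this kind consumes the fMRI response at successive time points and classifies the final hidden state. A minimal numpy LSTM forward pass over five simulated TRs (the dimensions, random weights, and single-layer design are illustrative assumptions, not the paper's trained model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update: input/forget/output gates i, f, o and candidate g."""
    H = h.size
    z = x @ W + h @ U + b                  # all four gate pre-activations at once
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c = f * c + i * g                      # update cell state
    return o * np.tanh(c), c

rng = np.random.default_rng(3)
D, H, T, K = 30, 16, 5, 5                  # voxels, hidden units, time points, categories

W = rng.normal(scale=0.1, size=(D, 4*H))
U = rng.normal(scale=0.1, size=(H, 4*H))
b = np.zeros(4*H)
Wout = rng.normal(scale=0.1, size=(H, K))

h = c = np.zeros(H)
for x in rng.normal(size=(T, D)):          # response signals at successive TRs
    h, c = lstm_step(x, h, c, W, U, b)

logits = h @ Wout
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the 5 categories
print(probs.shape)  # (5,)
```

Because the hidden state accumulates information across all T time points, the classifier sees the whole response trajectory rather than only the peak signal, which is the core of the multitime-versus-peak comparison above.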
Affiliation(s)
- Wei Huang
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, People's Republic of China
- Hongmei Yan
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, People's Republic of China
- Chong Wang
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, People's Republic of China
- Jiyi Li
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, People's Republic of China
- Xiaoqing Yang
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, People's Republic of China
- Liang Li
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, People's Republic of China
- Zhentao Zuo
- State Key Laboratory of Brain and Cognitive Science, Beijing MR Center for Brain Research, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Jiang Zhang
- Department of Medical Information Engineering, Sichuan University, Chengdu, China
- Huafu Chen
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, People's Republic of China
10. Perception-to-Image: Reconstructing Natural Images from the Brain Activity of Visual Perception. Ann Biomed Eng 2020; 48:2323-2332. DOI: 10.1007/s10439-020-02502-3
11. Liang D, Yin YH, Miao LY, Zheng X, Gao W, Chen XD, Wei M, Chen SJ, Li S, Xin GZ, Li P, Li HJ. Integrating chemical similarity and bioequivalence: A pilot study on quality consistency evaluation of dispensing granule and traditional decoction of Scutellariae Radix by a totality-of-the-evidence approach. J Pharm Biomed Anal 2019; 169:1-10. PMID: 30826486. DOI: 10.1016/j.jpba.2019.02.030
Abstract
There is increasing focus on the quality-consistency evaluation of dispensing granules in traditional Chinese medicines (TCMs). According to the guideline from the Chinese Pharmacopoeia Commission, the substantial equivalence of a dispensing granule and the corresponding traditional decoction should be determined, and chromatographic fingerprinting has been recommended as a comprehensive qualitative approach for assessing their quality consistency. However, a high degree of chemical similarity does not imply bioequivalence. To evaluate quality by integrating chemical consistency and bioequivalence, we propose a totality-of-the-evidence approach based on clustering analysis and equivalence evaluation, taking the dispensing granule and traditional decoction of Scutellariae Radix (SR) as a typical case. Chemical fingerprints were developed by high-performance liquid chromatography coupled with a photodiode array detector and quadrupole time-of-flight mass spectrometry (HPLC-PDA/QTOF-MS). Subsequently, a feature-selection strategy integrating linear and nonlinear correlation analysis was carried out to assess the correlation between chemical profiles and biological activities. Finally, quality consistency between the dispensing granule and the traditional decoction was determined by bioactive-marker-guided hierarchical clustering analysis (HCA), the k-means clustering method, and bioequivalence evaluation. The available evidence suggested that not all SR dispensing granules were sufficiently similar to the traditional decoction. This study provides an applicable methodology for the quality-consistency evaluation of dispensing granules and traditional decoctions in TCMs.
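Fingerprint comparison of the kind underlying this evaluation is often quantified with a cosine (congruence) coefficient between peak-area vectors, which the clustering steps then operate on. A toy illustration with invented fingerprints (the peak values and the 0.9/0.99 cutoffs below are illustrative, not thresholds from the study):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two chromatographic peak-area vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented peak-area fingerprints: each column is one chromatographic peak
decoction = np.array([10.0, 5.0, 2.0, 1.0])
granule_a = np.array([9.5, 5.2, 2.1, 0.9])   # chemically consistent batch
granule_b = np.array([2.0, 9.0, 0.5, 4.0])   # divergent batch

print(cosine_sim(decoction, granule_a) > 0.99)  # True: near-identical profile
print(cosine_sim(decoction, granule_b) < 0.9)   # True: dissimilar profile
```

In the study's pipeline, such similarities (restricted to bioactive-marker peaks) would feed HCA and k-means to decide which granule batches cluster with the traditional decoction.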
Affiliation(s)
- Dan Liang
- State Key Laboratory of Natural Medicines, China Pharmaceutical University, No. 24 Tongjia Lane, Nanjing, 210009, China
- Ying-Hao Yin
- State Key Laboratory of Natural Medicines, China Pharmaceutical University, No. 24 Tongjia Lane, Nanjing, 210009, China
- Lan-Yun Miao
- State Key Laboratory of Natural Medicines, China Pharmaceutical University, No. 24 Tongjia Lane, Nanjing, 210009, China
- Xian Zheng
- State Key Laboratory of Natural Medicines, China Pharmaceutical University, No. 24 Tongjia Lane, Nanjing, 210009, China
- Wen Gao
- State Key Laboratory of Natural Medicines, China Pharmaceutical University, No. 24 Tongjia Lane, Nanjing, 210009, China
- Xiang-Dong Chen
- Guangdong Efong Pharmaceutical Co., Ltd., Foshan, 528244, China
- Mei Wei
- Guangdong Efong Pharmaceutical Co., Ltd., Foshan, 528244, China
- Sheng-Jun Chen
- Jiangyin Tianjiang Pharmaceutical Co., Ltd., Jiangyin, 214400, China
- Song Li
- Jiangyin Tianjiang Pharmaceutical Co., Ltd., Jiangyin, 214400, China
- Gui-Zhong Xin
- State Key Laboratory of Natural Medicines, China Pharmaceutical University, No. 24 Tongjia Lane, Nanjing, 210009, China
- Ping Li
- State Key Laboratory of Natural Medicines, China Pharmaceutical University, No. 24 Tongjia Lane, Nanjing, 210009, China
- Hui-Jun Li
- State Key Laboratory of Natural Medicines, China Pharmaceutical University, No. 24 Tongjia Lane, Nanjing, 210009, China