1.
Deep learning model to predict Ki-67 expression of breast cancer using digital breast tomosynthesis. Breast Cancer 2024. [PMID: 38448777] [DOI: 10.1007/s12282-024-01549-7] [Received: 10/09/2023] [Accepted: 01/24/2024]
Abstract
BACKGROUND To develop a deep learning (DL) model for digital breast tomosynthesis (DBT) images to predict Ki-67 expression. METHODS The institutional review board approved this retrospective study and waived the requirement for informed consent from the patients. Initially, 499 patients (mean age: 50.5 years, range: 29-90 years) referred to our hospital for suspected breast cancer participated; of these, 126 patients with pathologically confirmed breast cancer were selected and their Ki-67 expression was measured. The Xception architecture was used in the DL model to predict Ki-67 expression levels. The diagnostic performance of our DL model in distinguishing high from low Ki-67 expression was assessed by accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC), and by using sub-datasets divided by the radiological characteristics of breast cancer. RESULTS The average accuracy, sensitivity, specificity, and AUC were 0.912, 0.629, 0.985, and 0.883, respectively. The AUCs for the four subgroups separated by radiological findings (mass, calcification, distortion, and focal asymmetric density sub-datasets) were 0.890, 0.750, 0.870, and 0.660, respectively. CONCLUSIONS Our results suggest the potential application of our DL model to predict the expression of Ki-67 using DBT, which may be useful for preoperatively determining the treatment strategy for breast cancer.
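The diagnostic metrics reported in this abstract (accuracy, sensitivity, specificity, AUC) can be computed from a classifier's outputs as in the minimal sketch below. The labels and scores are synthetic stand-ins, not the study's data.

```python
# Evaluate a binary high-vs-low Ki-67 classifier: accuracy, sensitivity,
# specificity, and AUC. Synthetic labels/scores stand in for real DBT results.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                       # 1 = high Ki-67
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 200), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)    # recall for the high-Ki-67 class
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)
print(accuracy, sensitivity, specificity, auc)
```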
2.
Intensive care unit mortality and cost-effectiveness associated with intensivist staffing: a Japanese nationwide observational study. J Intensive Care 2023; 11:60. [PMID: 38049894] [PMCID: PMC10694900] [DOI: 10.1186/s40560-023-00708-w] [Received: 08/24/2023] [Accepted: 11/21/2023]
Abstract
BACKGROUND Japan has four types of intensive care units (ICUs) that are divided into two categories according to the management fee charged per day: ICU management fees 1 and 2 (ICU1/2; equivalent to high-intensity staffing) and 3 and 4 (ICU3/4; equivalent to low-intensity staffing). Although ICU1/2 charges a higher rate than ICU3/4, no cost-effectiveness analysis has been performed for ICU1/2. This study evaluated the clinical outcomes and cost-effectiveness of ICU1/2 compared with those of ICU3/4. METHODS This retrospective observational study used a nationwide Japanese administrative database to identify patients admitted to ICUs between April 2020 and March 2021 and divided them into the ICU1/2 and ICU3/4 groups. The ICU and in-hospital mortality rates were determined, and the incremental cost-effectiveness ratio (ICER), expressed in Japanese yen (JPY) per quality-adjusted life year (QALY) and defined as the difference in medical costs divided by the difference in QALYs, was compared between ICU1/2 and ICU3/4. Data analysis was performed using the Chi-squared test; an ICER of < 5 million JPY/QALY was considered cost-effective. RESULTS The ICU1/2 group (n = 71,412; 60.7%) had lower ICU mortality rates (ICU1/2: 2.6% vs. ICU3/4: 4.3%, p < 0.001) and lower in-hospital mortality rates (ICU1/2: 6.1% vs. ICU3/4: 8.9%, p < 0.001) than the ICU3/4 group (n = 46,330; 39.3%). The average cost per patient was 2,249,270 ± 1,955,953 JPY for ICU1/2 and 1,682,546 ± 1,588,928 JPY for ICU3/4, a difference of 566,724 JPY. The ICER was 718,659 JPY/QALY, below the cost-effectiveness threshold. CONCLUSIONS ICU1/2 is associated with lower ICU patient mortality than ICU3/4. Treatments under ICU1/2 are more cost-effective than those under ICU3/4, with an ICER of < 5 million JPY/QALY.
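The ICER arithmetic in this abstract can be sketched as follows. The average costs are the reported values; the QALY figures are hypothetical, chosen only so the ratio lands near the reported order of magnitude.

```python
# ICER = (difference in mean costs) / (difference in mean QALYs), compared
# against the study's willingness-to-pay threshold of 5 million JPY/QALY.
THRESHOLD_JPY_PER_QALY = 5_000_000

cost_icu12 = 2_249_270   # reported mean cost per patient, ICU1/2 (JPY)
cost_icu34 = 1_682_546   # reported mean cost per patient, ICU3/4 (JPY)
qaly_icu12 = 10.50       # hypothetical mean QALYs, ICU1/2
qaly_icu34 = 9.71        # hypothetical mean QALYs, ICU3/4

icer = (cost_icu12 - cost_icu34) / (qaly_icu12 - qaly_icu34)
cost_effective = icer < THRESHOLD_JPY_PER_QALY
print(round(icer), cost_effective)
```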
3.
Deep learning model for predicting the presence of stromal invasion of breast cancer on digital breast tomosynthesis. Radiol Phys Technol 2023; 16:406-413. [PMID: 37466807] [DOI: 10.1007/s12194-023-00731-4] [Received: 12/23/2022] [Revised: 07/03/2023] [Accepted: 07/03/2023]
Abstract
The aim of this study was to develop a deep learning (DL)-based algorithm to predict the presence of stromal invasion in breast cancer using digital breast tomosynthesis (DBT). Our institutional review board approved this retrospective study and waived the requirement for informed consent from the patients. Initially, 499 patients (mean age 50.5 years; range, 29-90 years) who were referred to our hospital under suspicion of breast cancer and who underwent DBT between March 1 and August 31, 2019, were enrolled in this study. Among the 499 patients, 140 who underwent surgery after being diagnosed with breast cancer were selected for the analysis. Based on the pathological reports, the 140 patients were classified into two groups: those with non-invasive cancer (n = 20) and those with invasive cancer (n = 120). VGG16, ResNet50, DenseNet121, and Xception architectures were used as DL models to differentiate non-invasive from invasive cancer. The diagnostic performance of the DL models was assessed based on the area under the receiver operating characteristic curve (AUC). The AUCs for the four models were 0.56 [95% confidence interval (95% CI) 0.49-0.62], 0.67 (95% CI 0.62-0.74), 0.71 (95% CI 0.65-0.75), and 0.75 (95% CI 0.69-0.81), respectively. Our proposed DL model trained on DBT images is useful for predicting the presence of stromal invasion in breast cancer.
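Reporting an AUC with a 95% CI, as done for each of the four architectures here, is commonly implemented by bootstrap resampling of the test set. The sketch below uses synthetic labels and scores, not the study's data.

```python
# Bootstrap 95% confidence interval for a test-set AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=140)             # 140 patients, as in the study
y_score = y_true * 0.5 + rng.normal(0, 0.4, 140)  # synthetic model scores

auc = roc_auc_score(y_true, y_score)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) < 2:           # resample must contain both classes
        continue
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```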
4.
Deep learning model for breast cancer diagnosis based on bilateral asymmetrical detection (BilAD) in digital breast tomosynthesis images. Radiol Phys Technol 2023; 16:20-27. [PMID: 36342640] [DOI: 10.1007/s12194-022-00686-y] [Received: 06/06/2022] [Revised: 10/23/2022] [Accepted: 10/25/2022]
Abstract
The purpose of this study was to develop a deep learning model to diagnose breast cancer by embedding a diagnostic algorithm that examines the asymmetry of bilateral breast tissue. This retrospective study was approved by the institutional review board. A total of 115 patients who underwent breast surgery and had pathologically confirmed breast cancer were enrolled in this study. Two image pairs per patient [230 pairs of bilateral digital breast tomosynthesis (DBT) images in total: 115 pairs of a malignant tumor with the contralateral tissue (M/N) and 115 pairs of bilateral normal areas (N/N)] were generated from the enrolled patients. The proposed deep learning model, called bilateral asymmetrical detection (BilAD), is a modified Xception convolutional neural network (CNN) with two-dimensional tensors for bilateral breast images. BilAD was trained to classify the differences between pairs in the M/N and N/N datasets. The results of the BilAD model were compared to those of a unilateral control CNN model (uCNN). The results of BilAD and the uCNN were as follows: accuracy, 0.84 and 0.75; sensitivity, 0.73 and 0.58; and specificity, 0.93 and 0.92, respectively. The mean area under the receiver operating characteristic curve of BilAD was significantly higher than that of the uCNN (p = 0.02): 0.90 and 0.84, respectively. The proposed deep learning model, trained by embedding a diagnostic algorithm to examine the asymmetry of bilateral breast tissue, improves the diagnostic accuracy for breast cancer.
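The data-pairing step described here, stacking a lesion-side image with its contralateral counterpart into one two-channel input, can be sketched with plain arrays. The image arrays below are random placeholders; only the pairing logic reflects the abstract.

```python
# Build BilAD-style paired inputs: M/N pairs (tumor + contralateral tissue,
# label 1) and N/N pairs (bilateral normal areas, label 0) as 2-channel tensors.
import numpy as np

n_patients, h, w = 115, 128, 128
rng = np.random.default_rng(0)
tumor_side = rng.random((n_patients, h, w))    # image containing the malignant tumor
contra_side = rng.random((n_patients, h, w))   # contralateral breast tissue
normal_l = rng.random((n_patients, h, w))      # bilateral normal areas
normal_r = rng.random((n_patients, h, w))

mn_pairs = np.stack([tumor_side, contra_side], axis=-1)   # label 1
nn_pairs = np.stack([normal_l, normal_r], axis=-1)        # label 0
X = np.concatenate([mn_pairs, nn_pairs])
y = np.concatenate([np.ones(n_patients), np.zeros(n_patients)])
print(X.shape, y.shape)
```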
5.
Machine learning approach to stratify complex heterogeneity of chronic heart failure: A report from the CHART-2 study. ESC Heart Fail 2023; 10:1597-1604. [PMID: 36788745] [DOI: 10.1002/ehf2.14288] [Received: 09/15/2022] [Revised: 11/26/2022] [Accepted: 01/09/2023]
Abstract
AIMS Current approaches to classify chronic heart failure (HF) subpopulations may be limited by the diversity of pathophysiology and co-morbidities in chronic HF. We aimed to elucidate clusters of patients with chronic HF using data-driven machine learning approaches in a hospital-based registry. METHODS AND RESULTS A total of 4649 patients with a broad spectrum of left ventricular ejection fraction (LVEF) in the CHART-2 (Chronic Heart Failure Analysis and Registry in the Tohoku District-2) study were enrolled. Patients with chronic HF were classified using random forest clustering with 56 multiscale clinical parameters. We assessed the influence of the clusters on cardiovascular death, non-cardiovascular death, all-cause death, and freedom from HF hospitalization. Latent class analysis using random forest clustering identified 10 clusters with four primary components: cardiac function (LVEF, left atrial and ventricular diameters, diastolic blood pressure, and brain natriuretic peptide), renal function (glomerular filtration rate and blood urea nitrogen), anaemia (red blood cell count, haematocrit, haemoglobin, and platelet count), and nutrition (albumin and body mass index). All 11 significant clinical parameters in the four primary components and two disease aetiologies (ischaemic heart disease and valvular heart disease) showed statistically significant differences among the 10 clusters (P < 0.01). Cluster 1 (26.7% of patients), characterized by preserved LVEF (<59%, 37% of the total) with the lowest brain natriuretic peptide (>111.3 pg/mL, 0.9%) and lowest left atrial diameter (>42 mm, 37.4%), showed the best 5 year survival rates: 98.1% for cardiovascular death, 95.9% for non-cardiovascular death, 92.9% for all-cause death, and 91.7% for freedom from HF hospitalization. Cluster 10 (6.0% of the total), characterized by co-morbid disorders across all four primary components, showed the worst rates: 39.1% for cardiovascular death, 68.9% for non-cardiovascular death, 23.9% for all-cause death, and 28.1% for freedom from HF hospitalization. CONCLUSIONS These results suggest the potential applicability of the machine learning approach, providing useful clinical prognostic information to stratify the complex heterogeneity of patients with HF.
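One common way to realize random-forest-based clustering of the kind described here is to embed samples with totally random trees, derive a proximity matrix from shared leaf assignments, and cut a hierarchical tree into clusters. The sketch below uses synthetic data (200 patients, 56 parameters, cut into at most 10 clusters, matching the abstract's dimensions only in spirit); the study's exact clustering procedure may differ.

```python
# Random-forest-style clustering: leaf-sharing proximity + hierarchical cut.
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 56))            # 200 patients x 56 clinical parameters

embed = RandomTreesEmbedding(n_estimators=100, random_state=0)
leaves = embed.fit_transform(X)           # sparse one-hot leaf indicators
proximity = (leaves @ leaves.T).toarray() / 100.0   # fraction of trees sharing a leaf
distance = 1.0 - proximity
np.fill_diagonal(distance, 0.0)

Z = linkage(squareform(distance, checks=False), method="average")
clusters = fcluster(Z, t=10, criterion="maxclust")
print(np.unique(clusters))
```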
6.
Radiomics model of diffusion-weighted whole-body imaging with background signal suppression (DWIBS) for predicting axillary lymph node status in breast cancer. Journal of X-Ray Science and Technology 2023; 31:627-640. [PMID: 37038802] [DOI: 10.3233/xst-230009]
Abstract
BACKGROUND In breast cancer diagnosis and treatment, non-invasive prediction of axillary lymph node (ALN) metastasis can help avoid complications related to sentinel lymph node biopsy. OBJECTIVE This study aims to develop and evaluate machine learning models using radiomics features extracted from diffusion-weighted whole-body imaging with background signal suppression (DWIBS) examination for predicting the ALN status. METHODS A total of 100 patients with histologically proven, invasive, clinically N0 breast cancer who underwent DWIBS examination consisting of short tau inversion recovery (STIR) and DWIBS sequences before surgery were enrolled. Radiomic features were calculated using segmented primary lesions in DWIBS and STIR sequences, and the patients were divided into training (n = 75) and test (n = 25) datasets based on the examination date. Using the training dataset, optimal feature selection was performed using the least absolute shrinkage and selection operator (LASSO) algorithm, and logistic regression and support vector machine (SVM) classifier models were constructed with DWIBS, STIR, or a combination of DWIBS and STIR sequences to predict ALN status. Receiver operating characteristic curves were used to assess the prediction performance of the radiomics models. RESULTS For the test dataset, the logistic regression model using DWIBS, STIR, and a combination of both sequences yielded an area under the curve (AUC) of 0.765 (95% confidence interval: 0.548-0.982), 0.801 (0.597-1.000), and 0.779 (0.567-0.992), respectively, whereas the SVM classifier model using DWIBS, STIR, and a combination of both sequences yielded an AUC of 0.765 (0.548-0.982), 0.757 (0.538-0.977), and 0.779 (0.567-0.992), respectively. CONCLUSIONS Machine learning models incorporating quantitative radiomic features derived from the DWIBS and STIR sequences can potentially predict ALN status.
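The modeling pipeline described here, LASSO-based feature selection followed by logistic regression and SVM classifiers evaluated by AUC, can be sketched as below. The radiomic features and labels are synthetic; the split sizes (75/25) follow the abstract.

```python
# LASSO feature selection (L1-penalized logistic regression) followed by
# logistic regression and SVM classifiers, scored by test-set AUC.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 200))            # 100 patients x 200 radiomic features
y = (X[:, 0] + X[:, 1] + rng.normal(0, 1, 100) > 0).astype(int)   # ALN status
X_tr, X_te, y_tr, y_te = X[:75], X[75:], y[:75], y[75:]  # date-based split in the study

lasso = SelectFromModel(
    LogisticRegression(penalty="l1", C=0.5, solver="liblinear"))
X_tr_sel = lasso.fit_transform(X_tr, y_tr)  # keep features with nonzero coefficients
X_te_sel = lasso.transform(X_te)

for clf in (LogisticRegression(max_iter=1000), SVC(probability=True)):
    clf.fit(X_tr_sel, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te_sel)[:, 1])
    print(type(clf).__name__, round(auc, 3))
```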
7.
Deep learning approach of diffusion-weighted imaging as an outcome predictor in laryngeal and hypopharyngeal cancer patients with radiotherapy-related curative treatment: a preliminary study. Eur Radiol 2022; 32:5353-5361. [PMID: 35201406] [DOI: 10.1007/s00330-022-08630-9] [Received: 07/14/2021] [Revised: 01/15/2022] [Accepted: 02/02/2022]
Abstract
OBJECTIVES This preliminary study aimed to develop a deep learning (DL) model using diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) maps to predict local recurrence and 2-year progression-free survival (PFS) in laryngeal and hypopharyngeal cancer patients treated with various forms of radiotherapy-related curative therapy. METHODS Seventy patients with laryngeal and hypopharyngeal cancers treated by radiotherapy, chemoradiotherapy, or induction-(chemo)radiotherapy were enrolled and divided into training (N = 49) and test (N = 21) groups based on presentation timeline. All patients underwent MRI before and 4 weeks after the start of radiotherapy. The DL models, which extracted imaging features from pre- and intra-treatment DWI and ADC maps, were trained to predict local recurrence within a 2-year follow-up. In the test group, each DL model was analyzed for recurrence prediction. Additionally, Kaplan-Meier and multivariable Cox regression analyses were performed to evaluate the prognostic significance of the DL models and clinical variables. RESULTS The highest area under the receiver operating characteristic curve and accuracy for predicting local recurrence among the DL models were 0.767 and 81.0%, respectively, using intra-treatment DWI (DWIintra). The log-rank test showed that DWIintra was significantly associated with PFS (p = 0.013). DWIintra was an independent prognostic factor for PFS in multivariate analysis (p = 0.023). CONCLUSION DL models using DWIintra may have prognostic value in patients with laryngeal and hypopharyngeal cancers treated by curative radiotherapy. The model-related findings may contribute to determining the therapeutic strategy in the early stage of treatment. KEY POINTS • Deep learning models using intra-treatment diffusion-weighted imaging have prognostic value in patients with laryngeal and hypopharyngeal cancers treated by curative radiotherapy.
• The findings from these models may contribute to determining the therapeutic strategy at the early stage of the treatment.
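The log-rank comparison used above to relate a model's prediction to survival can be implemented directly from its textbook definition. The survival times and events below are synthetic (group 1 standing in for "predicted recurrence"); this is a sketch of the statistic, not the study's analysis.

```python
# Two-group log-rank test: at each event time, compare observed events in
# group 1 with the expectation under equal hazards, then form a chi-square
# statistic from the accumulated observed-minus-expected and its variance.
import numpy as np
from scipy.stats import chi2

def logrank(time, event, group):
    """Return (chi-square statistic, p value) for a two-group log-rank test."""
    time, event, group = map(np.asarray, (time, event, group))
    o1 = e1 = v = 0.0
    for t in np.unique(time[event == 1]):        # each distinct event time
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()   # events at time t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o1 += d1
        e1 += d * n1 / n
        if n > 1:                                # hypergeometric variance term
            v += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = (o1 - e1) ** 2 / v
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(0)
group = np.repeat([0, 1], 35)
time = np.where(group == 1, rng.exponential(12, 70), rng.exponential(30, 70))
event = rng.random(70) < 0.8                     # True = recurrence observed
stat, p = logrank(time, event, group)
print(round(stat, 2), round(p, 4))
```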
8.
[Object Detection Model Utilizing Deep Learning to Identify Retained Surgical Gauze in the Body on Postoperative Radiography: Phantom Study]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2021; 77:821-827. [PMID: 34421070] [DOI: 10.6009/jjrt.2021_jsrt_77.8.821]
Abstract
PURPOSE Foreign bodies such as surgical gauze can be retained in the body after surgery and in some cases cannot be detected by postoperative radiography. The aim of this study was to develop an object detection model capable of detecting gauze retained in the body after surgery. The model was built with deep learning using abdominal radiographs, and a phantom study was performed to evaluate its ability to automatically detect retained surgical gauze. MATERIALS AND METHODS The object detection model was constructed using a Single Shot MultiBox Detector (SSD) 300. In total, 268 abdominal phantom images were used: 180 gauze images as training data, 20 gauze images as validation data, and an additional 34 gauze images and 34 non-gauze images as test data. To evaluate the performance of the object detection model, a confusion matrix was created and the accuracy and sensitivity were calculated. RESULTS The true-positive (TP), true-negative (TN), false-positive (FP), and false-negative (FN) rates were 0.92, 1.00, 0.00, and 0.08, respectively. Accuracy was 0.96, and sensitivity was 0.92. CONCLUSION The object detection model could detect surgical gauze on abdominal phantom images with high accuracy and sensitivity.
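The confusion-matrix evaluation above reduces to simple arithmetic over the four cell counts. The counts below are illustrative, chosen for the reported test-set sizes (34 gauze, 34 non-gauze) and close to the reported rates; they are not the study's raw data.

```python
# Accuracy, sensitivity, and specificity from confusion-matrix counts.
tp, fn = 31, 3    # gauze images: detected vs. missed
tn, fp = 34, 0    # non-gauze images: no false alarms

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(round(accuracy, 2), round(sensitivity, 2), round(specificity, 2))
```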
9.
Deep Learning for the Preoperative Diagnosis of Metastatic Cervical Lymph Nodes on Contrast-Enhanced Computed Tomography in Patients with Oral Squamous Cell Carcinoma. Cancers (Basel) 2021; 13:600. [PMID: 33546279] [PMCID: PMC7913286] [DOI: 10.3390/cancers13040600] [Received: 12/22/2020] [Revised: 01/23/2021] [Accepted: 01/31/2021]
Abstract
Simple Summary: Cervical lymph node (LN) metastasis is one of the important prognostic factors in patients with oral squamous cell carcinoma. Pretreatment cervical nodal staging is performed using computed tomography (CT) as the first-line examination. However, imaging findings focused on morphology are not specific for detecting cervical LN metastasis. In this study, deep learning (DL) analysis of pretreatment contrast-enhanced CT was evaluated and compared with radiologists' assessments at levels I–II, I, and II using an independent test set. The DL model achieved higher diagnostic performance in discriminating between benign and metastatic cervical LNs at levels I–II, I, and II, with significant differences in the areas under the curve between the DL model and the radiologists' assessments at levels I–II and II. Our findings suggest that this approach can add value to treatment strategies.
Abstract: We investigated the value of deep learning (DL) in differentiating between benign and metastatic cervical lymph nodes (LNs) using pretreatment contrast-enhanced computed tomography (CT). This retrospective study analyzed 86 metastatic and 234 benign (non-metastatic) cervical LNs at levels I–V in 39 patients with oral squamous cell carcinoma (OSCC) who underwent preoperative CT and neck dissection. LNs were randomly divided into training (70%), validation (10%), and test (20%) sets. For the validation and test sets, cervical LNs at levels I–II were evaluated. Convolutional neural network analysis was performed using the Xception architecture. Two radiologists evaluated the possibility of metastasis to cervical LNs using a 4-point scale. The areas under the curve of the DL model and the radiologists' assessments were calculated and compared at levels I–II, I, and II. In the test set, the areas under the curve at levels I–II (0.898) and II (0.967) were significantly higher than those of each reader (both p < 0.05). DL analysis of pretreatment contrast-enhanced CT can help classify cervical LNs in patients with OSCC with better diagnostic performance than radiologists' assessments alone. DL may be a valuable diagnostic tool for differentiating between benign and metastatic cervical LNs.
10.
Sequential semi-supervised segmentation for serial electron microscopy image with small number of labels. J Neurosci Methods 2021; 351:109066. [PMID: 33417965] [DOI: 10.1016/j.jneumeth.2021.109066] [Received: 10/06/2020] [Revised: 12/29/2020] [Accepted: 01/02/2021]
Abstract
BACKGROUND Segmentation of electron microscopic serial section images by deep learning has attracted attention as a technique to reduce the cost of annotation for researchers attempting to make observations using 3D reconstruction methods. However, when the observed samples are rare or scanning circumstances are unstable, pursuing generalization performance for newly obtained samples is not appropriate. NEW METHODS We assume a transductive setting that predicts all labels in a dataset from only partially obtained labels while avoiding the pursuit of generalization performance for unknown data. We then propose sequential semi-supervised segmentation (4S), which semi-automatically extracts neural regions from electron microscopy image stacks. This method exploits the fact that adjacent images in a serial stack are strongly correlated. Our 4S repeats training, inference, and pseudo-labeling using a minimal number of teacher labels and performs segmentation on all slices. RESULTS Our experiments using two types of serial section images showed the method's effectiveness both qualitatively and quantitatively. In addition, we experimentally clarified the effect of the number and position of teacher labels on performance. COMPARISON WITH EXISTING METHODS When only a small number of labeled data were available, the proposed method outperformed supervised learning. CONCLUSION Our 4S leverages a limited number of labeled data and a large amount of unlabeled data to extract neural regions from serial image stacks in a transductive setting. We plan to develop this method as a core module of a general-purpose annotation tool in future work.
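The train-infer-pseudo-label loop behind 4S can be sketched with a toy stand-in: the real method uses a segmentation CNN on image stacks, but the control flow, training on the annotated slice, predicting the adjacent slice, and adopting those predictions as pseudo-labels, is the same. Everything below (the logistic-regression classifier, the synthetic drifting "slices") is an illustrative assumption, not the paper's implementation.

```python
# Sequential pseudo-labeling down a serial stack: only slice 0 is annotated;
# each subsequent slice is labeled by a model trained on everything so far.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_slices, n_pix = 10, 500
slices, labels = [], []
w = np.array([1.0, -1.0])
for s in range(n_slices):
    X = rng.normal(size=(n_pix, 2))
    slices.append(X)
    labels.append((X @ w > 0).astype(int))   # ground truth for evaluation only
    w = w + rng.normal(0, 0.05, 2)           # gradual drift between sections

X_lab, y_lab = slices[0], labels[0]          # only slice 0 carries teacher labels
preds = [y_lab]
for s in range(1, n_slices):
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    pseudo = clf.predict(slices[s])          # infer the next slice
    preds.append(pseudo)
    X_lab = np.vstack([X_lab, slices[s]])    # accumulate pseudo-labeled data
    y_lab = np.concatenate([y_lab, pseudo])

acc_last = (preds[-1] == labels[-1]).mean()
print(round(acc_last, 3))
```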
11.
Effects of data count and image scaling on Deep Learning training. PeerJ Comput Sci 2020; 6:e312. [PMID: 33816963] [PMCID: PMC7924688] [DOI: 10.7717/peerj-cs.312] [Received: 07/22/2020] [Accepted: 10/14/2020]
Abstract
BACKGROUND Deep learning using convolutional neural networks (CNNs) has achieved significant results in various fields that use images. Deep learning can automatically extract features from data, and CNNs extract image features by convolution processing. We assumed that increasing the image size using interpolation methods would result in effective feature extraction. To investigate how interpolation methods perform as the amount of data increases, we examined and compared the effectiveness of data augmentation by inversion or rotation with image enlargement by interpolation when the training image data were limited. Further, we clarified whether image enlargement by interpolation was useful for CNN training. To examine the usefulness of interpolation methods for medical images, we used the Gender01 dataset, a sex-classification dataset of chest radiographs. To compare image enlargement using an interpolation method with data augmentation by inversion and rotation, we examined the results of two- and four-fold enlargement using the bilinear method. RESULTS The average classification accuracy improved when the image size was expanded using the interpolation method. The biggest improvement was noted with 100 training images: the average classification accuracy of the model trained with the original data was 0.563, but after increasing the image size four-fold using the interpolation method, it significantly improved to 0.715. Compared with data augmentation by inversion and rotation, the model trained using the bilinear method showed an improvement in average classification accuracy of 0.095 with 100 training data and 0.015 with 50,000 training data. Comparisons of the average classification accuracy on the chest X-ray images showed a stable and high average classification accuracy using the interpolation method.
CONCLUSION Training a CNN with images enlarged by interpolation is a useful approach. In the future, we aim to conduct additional verification using various medical images to further clarify why image size is important.
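The enlargement step studied in this abstract, two- and four-fold bilinear interpolation before training, can be done in one call; `order=1` in `scipy.ndimage.zoom` selects (bi)linear interpolation. The image below is a synthetic placeholder for a radiograph.

```python
# Two- and four-fold bilinear enlargement of an image prior to CNN training.
import numpy as np
from scipy.ndimage import zoom

img = np.arange(64 * 64, dtype=float).reshape(64, 64)   # stand-in radiograph
img_x2 = zoom(img, 2, order=1)    # two-fold bilinear enlargement
img_x4 = zoom(img, 4, order=1)    # four-fold bilinear enlargement
print(img_x2.shape, img_x4.shape)
```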
12.
Putative Neural Network Within an Olfactory Sensory Unit for Nestmate and Non-nestmate Discrimination in the Japanese Carpenter Ant: The Ultra-structures and Mathematical Simulation. Front Cell Neurosci 2018; 12:310. [PMID: 30283303] [PMCID: PMC6157317] [DOI: 10.3389/fncel.2018.00310] [Received: 06/01/2018] [Accepted: 08/27/2018]
Abstract
Ants are known to use a colony-specific blend of cuticular hydrocarbons (CHCs) as a pheromone to discriminate between nestmates and non-nestmates, and these CHCs are sensed in the basiconic type of antennal sensilla (S. basiconica). To investigate the functional design of this type of antennal sensilla, we observed the ultra-structures in 2D and 3D in the Japanese carpenter ant, Camponotus japonicus, using a serial block-face scanning electron microscope (SBF-SEM) and conventional and high-voltage transmission electron microscopes. Based on serial images of 352 cross sections from SBF-SEM, we reconstructed a 3D model of the sensillum, revealing that each S. basiconica houses >100 unbranched dendritic processes, which extend from the same number of olfactory receptor neurons (ORNs). The dendritic processes had characteristic beaded structures and formed a twisted bundle within the sensillum. At the "beads," the cell membranes of the processes were closely adjacent in interdigitated profiles, suggesting functional interactions via gap junctions (GJs). Immunohistochemistry with anti-innexin (invertebrate GJ protein) antisera revealed positive labeling in the antennae of C. japonicus. Innexin 3, one of the five antennal innexin subtypes, was detected as a dotted signal within the S. basiconica, a sensory organ for nestmate recognition. These morphological results suggest that ORNs form an electrical network via GJs between dendritic processes. We were unable to functionally confirm the electrical connections in an olfactory sensory unit comprising such multiple ORNs; however, with the aid of a mathematical model simulation, we examined the putative function of this novel chemosensory information network, which possibly contributes to the distinct discrimination of colony-specific CHC blends or other odor detection.
13.
P1.27 Redundant and non-redundant effects of Ca2+ and Na+ on the activation of p94/calpain 3. Neuromuscul Disord 2010. [DOI: 10.1016/j.nmd.2010.07.042]