1. Su J, Luo Z, Wang C, Lian S, Lin X, Li S. Reconstruct incomplete relation for incomplete modality brain tumor segmentation. Neural Netw 2024; 180:106657. PMID: 39186839. DOI: 10.1016/j.neunet.2024.106657.
Abstract
Different brain tumor magnetic resonance imaging (MRI) modalities provide diverse tumor-specific information. Previous works have enhanced brain tumor segmentation performance by integrating multiple MRI modalities. However, multi-modal MRI data are often unavailable in clinical practice. An incomplete set of modalities leads to missing tumor-specific information, which degrades the performance of existing models. Various strategies have been proposed to transfer knowledge from a full-modality network (teacher) to an incomplete-modality one (student) to address this issue. However, they neglect the fact that brain tumor segmentation is a structured prediction problem that requires voxel-level semantic relations. In this paper, we propose a Reconstruct Incomplete Relation Network (RIRN) that transfers voxel semantic relational knowledge from the teacher to the student. Specifically, we propose two types of voxel relations to incorporate structural knowledge: class-relative relations (CRR) and class-agnostic relations (CAR). The CRR groups voxels into different tumor regions and constructs relations between them. The CAR builds a global relation across all voxel features, complementing the local inter-region relations. Moreover, we use adversarial learning to align the holistic structural predictions of the teacher and the student. Extensive experiments on the BraTS 2018 and BraTS 2020 datasets show that our method outperforms all state-of-the-art approaches.
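For readers who want a concrete picture of the relation-based distillation described above, the following PyTorch sketch shows one common way to build class-agnostic (voxel-to-voxel) and class-relative (region-prototype) relation matrices and match them between teacher and student. It is an illustration only, not the authors' released code; the function names, the cosine-similarity formulation, and the voxel-sampling step are assumptions.

```python
import torch
import torch.nn.functional as F

def voxel_relation(feat, idx=None, num_samples=1024):
    """Class-agnostic relation: pairwise cosine similarities over sampled voxels.

    feat: (B, C, D, H, W) feature map; idx: shared voxel indices so the teacher
    and student relations are computed over the same locations.
    """
    flat = feat.flatten(2)                                    # (B, C, N)
    if idx is None:
        idx = torch.randperm(flat.shape[-1], device=feat.device)[:num_samples]
    sampled = F.normalize(flat[..., idx], dim=1)              # unit-norm voxel features
    return torch.bmm(sampled.transpose(1, 2), sampled), idx   # (B, S, S)

def region_relation(feat, seg_mask, num_classes):
    """Class-relative relation: similarities between per-region prototypes.

    seg_mask: (B, D, H, W) integer tumor-region labels.
    """
    flat = feat.flatten(2)                                               # (B, C, N)
    onehot = F.one_hot(seg_mask.flatten(1).long(), num_classes).float()  # (B, N, K)
    protos = torch.bmm(flat, onehot) / (onehot.sum(dim=1, keepdim=True) + 1e-6)
    protos = F.normalize(protos, dim=1)                                  # (B, C, K)
    return torch.bmm(protos.transpose(1, 2), protos)                     # (B, K, K)

def relation_distillation_loss(feat_student, feat_teacher, seg_mask, num_classes=4):
    """Match the student's voxel relations to the (frozen) teacher's relations."""
    car_s, idx = voxel_relation(feat_student)
    car_t, _ = voxel_relation(feat_teacher, idx=idx)
    crr_s = region_relation(feat_student, seg_mask, num_classes)
    crr_t = region_relation(feat_teacher, seg_mask, num_classes)
    return F.mse_loss(car_s, car_t.detach()) + F.mse_loss(crr_s, crr_t.detach())
```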
Affiliation(s)
- Jiawei Su
  - School of Computer Engineering, Jimei University, Xiamen, China; Department of Artificial Intelligence, Xiamen University, Fujian, China
- Zhiming Luo
  - Department of Artificial Intelligence, Xiamen University, Fujian, China
- Chengji Wang
  - School of Computer Science, Central China Normal University, Wuhan, China
- Sheng Lian
  - College of Computer and Data Science, Fuzhou University, Fujian, China
- Xuejuan Lin
  - Department of Traditional Chinese Medicine, Fujian University of Traditional Chinese Medicine, Fujian, China
- Shaozi Li
  - Department of Artificial Intelligence, Xiamen University, Fujian, China
2. Tan TWK, Nguyen KN, Zhang C, Kong R, Cheng SF, Ji F, Chong JSX, Yi Chong EJ, Venketasubramanian N, Orban C, Chee MWL, Chen C, Zhou JH, Yeo BTT. Evaluation of brain age as a specific marker of brain health. bioRxiv [Preprint] 2024:2024.11.16.623903. PMID: 39605400. PMCID: PMC11601463. DOI: 10.1101/2024.11.16.623903.
Abstract
Brain age is a powerful marker of general brain health. Furthermore, brain age models are trained on large datasets, which may give them an advantage in predicting specific outcomes, much like the success of finetuning large language models for specific applications. However, it is also well accepted in machine learning that models trained to directly predict specific outcomes (i.e., direct models) often perform better than those trained on surrogate outcomes. Therefore, despite their much larger training data, it is unclear whether brain age models outperform direct models in predicting specific brain health outcomes. Here, we compare large-scale brain age models and direct models for predicting specific health outcomes in the context of Alzheimer's disease (AD) dementia. Using anatomical T1 scans from three continents (N = 1,848), we find that direct models outperform brain age models without finetuning. Finetuned brain age models yielded performance similar to direct models but, importantly, did not outperform them, even though the brain age models were pretrained on roughly 1,000 times more data (N = 53,542 vs. N = 50). Overall, our results do not discount brain age as a useful marker of general brain health. However, in this era of large-scale brain age models, our results suggest that small-scale, targeted approaches for extracting specific brain health markers still hold significant value.
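As a purely illustrative picture of the two pipelines being compared, the PyTorch sketch below contrasts a direct model trained from scratch on a small clinical sample with a pretrained brain-age model whose regression head is swapped out and finetuned on the same sample. The tiny 3D backbone, the checkpoint filename, and the learning rates are placeholders, not the study's actual models.

```python
import torch
import torch.nn as nn

def make_backbone(out_dim=64):
    # Tiny 3D-CNN stand-in for a large pretrained brain-age backbone.
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        nn.Flatten(), nn.Linear(8, out_dim), nn.ReLU(),
    )

class BrainModel(nn.Module):
    def __init__(self, out_dim=1):
        super().__init__()
        self.backbone = make_backbone()
        self.head = nn.Linear(64, out_dim)

    def forward(self, x):                      # x: (B, 1, D, H, W) T1 volume
        return self.head(self.backbone(x))

# (a) Direct model: all weights trained on the small clinical sample (e.g. N = 50).
direct = BrainModel()

# (b) Finetuned brain-age model: load weights pretrained to predict age on ~50k scans,
#     replace the head with an outcome head, then finetune on the same N = 50.
brain_age = BrainModel()
# brain_age.load_state_dict(torch.load("brain_age_pretrained.pt"))  # hypothetical checkpoint
brain_age.head = nn.Linear(64, 1)              # new head for the clinical outcome

opt_direct = torch.optim.Adam(direct.parameters(), lr=1e-3)
opt_finetune = torch.optim.Adam(brain_age.parameters(), lr=1e-4)  # smaller lr when finetuning

print(direct(torch.randn(2, 1, 16, 16, 16)).shape)  # (2, 1)
```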
Affiliation(s)
- Trevor Wei Kiat Tan
  - Centre for Sleep and Cognition & Centre for Translational MR Research, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Electrical and Computer Engineering, National University of Singapore, Singapore
  - Department of Medicine, Healthy Longevity Translational Research Programme, Human Potential Translational Research Programme & Institute for Digital Medicine (WisDM), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - N.1 Institute for Health, National University of Singapore, Singapore
  - Integrative Sciences and Engineering Programme (ISEP), National University of Singapore, Singapore
- Kim-Ngan Nguyen
  - Centre for Sleep and Cognition & Centre for Translational MR Research, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Chen Zhang
  - Centre for Sleep and Cognition & Centre for Translational MR Research, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Electrical and Computer Engineering, National University of Singapore, Singapore
  - Department of Medicine, Healthy Longevity Translational Research Programme, Human Potential Translational Research Programme & Institute for Digital Medicine (WisDM), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - N.1 Institute for Health, National University of Singapore, Singapore
- Ru Kong
  - Centre for Sleep and Cognition & Centre for Translational MR Research, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Electrical and Computer Engineering, National University of Singapore, Singapore
  - Department of Medicine, Healthy Longevity Translational Research Programme, Human Potential Translational Research Programme & Institute for Digital Medicine (WisDM), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - N.1 Institute for Health, National University of Singapore, Singapore
- Susan F Cheng
  - Centre for Sleep and Cognition & Centre for Translational MR Research, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Electrical and Computer Engineering, National University of Singapore, Singapore
  - Integrative Sciences and Engineering Programme (ISEP), National University of Singapore, Singapore
- Fang Ji
  - Centre for Sleep and Cognition & Centre for Translational MR Research, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Electrical and Computer Engineering, National University of Singapore, Singapore
- Joanna Su Xian Chong
  - Centre for Sleep and Cognition & Centre for Translational MR Research, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Electrical and Computer Engineering, National University of Singapore, Singapore
- Eddie Jun Yi Chong
  - Memory, Aging and Cognition Centre, National University Health System, Singapore
  - Department of Psychological Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Csaba Orban
  - Centre for Sleep and Cognition & Centre for Translational MR Research, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Electrical and Computer Engineering, National University of Singapore, Singapore
  - Department of Medicine, Healthy Longevity Translational Research Programme, Human Potential Translational Research Programme & Institute for Digital Medicine (WisDM), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - N.1 Institute for Health, National University of Singapore, Singapore
  - Integrative Sciences and Engineering Programme (ISEP), National University of Singapore, Singapore
- Michael W L Chee
  - Centre for Sleep and Cognition & Centre for Translational MR Research, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Medicine, Healthy Longevity Translational Research Programme, Human Potential Translational Research Programme & Institute for Digital Medicine (WisDM), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Christopher Chen
  - Memory, Aging and Cognition Centre, National University Health System, Singapore
  - Department of Psychological Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Juan Helen Zhou
  - Centre for Sleep and Cognition & Centre for Translational MR Research, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Electrical and Computer Engineering, National University of Singapore, Singapore
  - Integrative Sciences and Engineering Programme (ISEP), National University of Singapore, Singapore
- B T Thomas Yeo
  - Centre for Sleep and Cognition & Centre for Translational MR Research, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Electrical and Computer Engineering, National University of Singapore, Singapore
  - Department of Medicine, Healthy Longevity Translational Research Programme, Human Potential Translational Research Programme & Institute for Digital Medicine (WisDM), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - N.1 Institute for Health, National University of Singapore, Singapore
  - Integrative Sciences and Engineering Programme (ISEP), National University of Singapore, Singapore
  - Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
3. Dufumier B, Gori P, Petiton S, Louiset R, Mangin JF, Grigis A, Duchesnay E. Exploring the potential of representation and transfer learning for anatomical neuroimaging: Application to psychiatry. Neuroimage 2024; 296:120665. PMID: 38848981. DOI: 10.1016/j.neuroimage.2024.120665.
Abstract
The perspective of personalized medicine for brain disorders requires efficient learning models for anatomical neuroimaging-based prediction of clinical conditions. There is now a consensus on the benefit of deep learning (DL) for many medical imaging tasks, such as image segmentation. However, for single-subject prediction problems, recent studies have yielded contradictory results when comparing DL with standard machine learning (SML) on top of classical feature extraction. Most existing comparative studies were limited to predicting phenotypes of little clinical interest, such as sex and age, and used a single dataset. Moreover, they conducted only a limited analysis of the image pre-processing and feature selection strategies employed. This paper extensively compares the predictive capacity of DL and SML on five multi-site problems, including three increasingly complex clinical applications in psychiatry, namely the diagnosis of schizophrenia, bipolar disorder, and autism spectrum disorder (ASD). To compensate for the relative scarcity of neuroimaging data in these clinical datasets, we also evaluate three pre-training strategies for transfer learning from brain imaging of the general healthy population: self-supervised learning, generative modeling, and supervised learning with age. Overall, we find similar performance between randomly initialized DL and SML for the three clinical tasks and a similar scaling trend for sex prediction. This was replicated on an external dataset. We also show highly correlated discriminative brain regions between DL and linear ML models on all problems. Nonetheless, we demonstrate that self-supervised pre-training on large-scale healthy population imaging datasets (N ≈ 10k), along with deep ensembling, allows DL to learn robust and transferable representations for smaller-scale clinical datasets (N ≤ 1k). This setup substantially outperforms SML on two of the three clinical tasks on both internal and external test sets. These findings suggest that the improvement of DL over SML in anatomical neuroimaging mainly comes from its capacity to learn meaningful and useful abstract representations of brain anatomy, and they shed light on the potential of transfer learning for personalized medicine in psychiatry.
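The best-performing recipe in this comparison (self-supervised pretraining on large healthy-population data, followed by finetuning and deep ensembling) can be sketched as follows. This is a minimal illustration under assumed names and dimensions, not the released code; in the paper the encoder is a 3D CNN pretrained on roughly 10k scans.

```python
import torch
import torch.nn as nn

class Classifier(nn.Module):
    """Pretrained encoder (e.g. from contrastive SSL) plus a diagnosis head."""
    def __init__(self, encoder, feat_dim=128, n_classes=2):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        return self.head(self.encoder(x))

def deep_ensemble_predict(models, x):
    """Average softmax probabilities over independently finetuned ensemble members."""
    probs = [torch.softmax(m(x), dim=-1) for m in models]
    return torch.stack(probs).mean(dim=0)

# Usage sketch: in practice each encoder would be finetuned from the same SSL
# checkpoint with a different seed; tiny MLPs stand in for them here.
encoders = [nn.Sequential(nn.Flatten(), nn.Linear(32, 128), nn.ReLU()) for _ in range(3)]
ensemble = [Classifier(enc) for enc in encoders]
x = torch.randn(4, 32)                              # stand-in for encoded T1 inputs
print(deep_ensemble_predict(ensemble, x).shape)     # (4, 2)
```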
Affiliation(s)
- Benoit Dufumier
  - Université Paris-Saclay, CEA, CNRS, UMR9027 Baobab, NeuroSpin, Saclay, France; LTCI, Télécom Paris, IPParis, Palaiseau, France
- Pietro Gori
  - LTCI, Télécom Paris, IPParis, Palaiseau, France
- Sara Petiton
  - Université Paris-Saclay, CEA, CNRS, UMR9027 Baobab, NeuroSpin, Saclay, France
- Robin Louiset
  - Université Paris-Saclay, CEA, CNRS, UMR9027 Baobab, NeuroSpin, Saclay, France; LTCI, Télécom Paris, IPParis, Palaiseau, France
- Antoine Grigis
  - Université Paris-Saclay, CEA, CNRS, UMR9027 Baobab, NeuroSpin, Saclay, France
- Edouard Duchesnay
  - Université Paris-Saclay, CEA, CNRS, UMR9027 Baobab, NeuroSpin, Saclay, France
4. Wang R, Chen ZS. Large-scale foundation models and generative AI for BigData neuroscience. Neurosci Res 2024:S0168-0102(24)00075-0. PMID: 38897235. PMCID: PMC11649861. DOI: 10.1016/j.neures.2024.06.003.
Abstract
Recent advances in machine learning have led to revolutionary breakthroughs in computer games, image and natural language understanding, and scientific discovery. Foundation models and large language models (LLMs) have recently achieved human-like intelligence thanks to BigData. With the help of self-supervised learning (SSL) and transfer learning, these models may reshape the landscape of neuroscience research and have a significant impact on its future. Here we present a mini-review of recent advances in foundation models and generative AI models as well as their applications in neuroscience, including natural language and speech, semantic memory, brain-machine interfaces (BMIs), and data augmentation. We argue that this paradigm-shifting framework will open new avenues for many neuroscience research directions and discuss the accompanying challenges and opportunities.
Affiliation(s)
- Ran Wang
  - Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
- Zhe Sage Chen
  - Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA; Department of Neuroscience and Physiology, Neuroscience Institute, New York University Grossman School of Medicine, New York, NY 10016, USA; Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, NY 11201, USA
5. Darvishi-Bayazi MJ, Ghaemi MS, Lesort T, Arefin MR, Faubert J, Rish I. Amplifying pathological detection in EEG signaling pathways through cross-dataset transfer learning. Comput Biol Med 2024; 169:107893. PMID: 38183700. DOI: 10.1016/j.compbiomed.2023.107893.
Abstract
Pathology diagnosis based on EEG signals and decoding of brain activity holds immense importance for understanding neurological disorders. With the advancement of artificial intelligence and machine learning techniques, the potential for accurate data-driven diagnoses and effective treatments has grown significantly. However, applying machine learning algorithms to real-world datasets presents diverse challenges at multiple levels. The scarcity of labeled data, especially in low-data regimes where real patient cohorts are limited by the high cost of recruitment, underscores the need for scaling and transfer learning techniques. In this study, we explore a real-world pathology classification task to highlight the effectiveness of data and model scaling and of cross-dataset knowledge transfer. We observe varying performance improvements from data scaling, indicating the need for careful evaluation and labeling. Additionally, we identify the challenge of possible negative transfer and emphasize the significance of several key components for overcoming distribution shifts and potential spurious correlations and achieving positive transfer. Performance of the target model on the target dataset (NMT) improves when knowledge from the source dataset (TUAB) is leveraged, particularly when only a small amount of labeled data is available. Our findings show that a small, generic model (e.g., ShallowNet) performs well on a single dataset, whereas a larger model (e.g., TCN) performs better in transfer learning when leveraging a larger and more diverse dataset.
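A minimal sketch of the cross-dataset transfer described above (pretrain on the large source corpus, then finetune on the smaller target corpus) is given below. The paper works with braindecode models such as ShallowNet and TCN; the tiny 1D ConvNet here is only a self-contained stand-in, and the freezing and learning-rate choices are assumptions.

```python
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    """Small 1D ConvNet standing in for ShallowNet/TCN-style EEG classifiers."""
    def __init__(self, n_channels=21, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=25, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=11, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        return self.classifier(self.features(x))

def finetune_on_target(model, target_loader, lr=1e-4, epochs=10, freeze_features=False):
    """Transfer step: start from weights pretrained on the source dataset (TUAB),
    optionally freeze the feature extractor, and train on the low-label target (NMT)."""
    if freeze_features:
        for p in model.features.parameters():
            p.requires_grad = False
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in target_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```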
Affiliation(s)
- Mohammad-Javad Darvishi-Bayazi
  - Mila, Québec AI Institute, Montréal, QC, Canada; Faubert Lab, Montréal, QC, Canada; Université de Montréal, Montréal, QC, Canada
- Timothee Lesort
  - Mila, Québec AI Institute, Montréal, QC, Canada; Université de Montréal, Montréal, QC, Canada
- Md Rifat Arefin
  - Mila, Québec AI Institute, Montréal, QC, Canada; Université de Montréal, Montréal, QC, Canada
- Jocelyn Faubert
  - Faubert Lab, Montréal, QC, Canada; Université de Montréal, Montréal, QC, Canada
- Irina Rish
  - Mila, Québec AI Institute, Montréal, QC, Canada; Université de Montréal, Montréal, QC, Canada
6. Zhou D, Xu L, Wang T, Wei S, Gao F, Lai X, Cao J. M-DDC: MRI based demyelinative diseases classification with U-Net segmentation and convolutional network. Neural Netw 2024; 169:108-119. PMID: 37890361. DOI: 10.1016/j.neunet.2023.10.010.
Abstract
Childhood demyelinative disease classification (DDC) from brain magnetic resonance imaging (MRI) is crucial for clinical diagnosis, yet it has received little attention in the past. Accurately differentiating pediatric-onset neuromyelitis optica spectrum disorder (NMOSD) from acute disseminated encephalomyelitis (ADEM) based on MRI is a central challenge in DDC. In this paper, a novel architecture, M-DDC, combining a U-Net segmentation network and a deep convolutional classification network is developed. The U-Net segmentation provides pixel-level structural information, which helps locate lesion areas and estimate their size. The classification branch detects regions of interest in the MRIs, including the white matter regions where lesions appear. The performance of the proposed method is evaluated on MRIs of 201 subjects recorded at the Children's Hospital of Zhejiang University School of Medicine. Comparisons show that the proposed M-DDC achieves the highest accuracy of 99.19% for ADEM/NMOSD classification and a Dice score of 71.1% for segmentation.
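To make the joint segmentation-plus-classification design more concrete, here is a hedged PyTorch sketch: a small encoder-decoder stands in for the U-Net lesion segmenter, and the classification branch consumes the image concatenated with the predicted lesion mask. The coupling via channel concatenation and all layer sizes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU())

class TinyUNet(nn.Module):
    """Two-level encoder-decoder standing in for the full U-Net segmenter."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.out(d))                 # lesion probability map

class MDDCLike(nn.Module):
    """Segmentation branch + mask-guided classification branch (ADEM vs NMOSD)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.segmenter = TinyUNet()
        self.classifier = nn.Sequential(
            conv_block(2, 16), nn.MaxPool2d(2),
            conv_block(16, 32), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):                                 # x: (B, 1, H, W) MRI slice
        mask = self.segmenter(x)
        logits = self.classifier(torch.cat([x, mask], dim=1))
        return logits, mask                               # train with CE + Dice losses

logits, mask = MDDCLike()(torch.randn(2, 1, 64, 64))
print(logits.shape, mask.shape)                           # (2, 2) and (2, 1, 64, 64)
```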
Affiliation(s)
- Deyang Zhou
  - Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China; HDU-ITMO Joint Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Lu Xu
  - Department of Neurology, Children's Hospital, Zhejiang University School of Medicine, 310018, China
- Tianlei Wang
  - Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Shaonong Wei
  - Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China; HDU-ITMO Joint Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Feng Gao
  - Department of Neurology, Children's Hospital, Zhejiang University School of Medicine, 310018, China
- Xiaoping Lai
  - Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Jiuwen Cao
  - Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
7. Steyaert S, Pizurica M, Nagaraj D, Khandelwal P, Hernandez-Boussard T, Gentles AJ, Gevaert O. Multimodal data fusion for cancer biomarker discovery with deep learning. Nat Mach Intell 2023; 5:351-362. PMID: 37693852. PMCID: PMC10484010. DOI: 10.1038/s42256-023-00633-5.
Abstract
Technological advances now make it possible to study a patient from multiple angles with high-dimensional, high-throughput, multi-scale biomedical data. In oncology, massive amounts of data are being generated, ranging from molecular and histopathology data to radiology and clinical records. The introduction of deep learning has significantly advanced the analysis of biomedical data. However, most approaches focus on single data modalities, leading to slow progress on methods that integrate complementary data types. Developing effective multimodal fusion approaches is becoming increasingly important, as a single modality is often neither consistent nor sufficient to capture the heterogeneity of complex diseases, tailor medical care, and improve personalised medicine. Many initiatives now focus on integrating these disparate modalities to unravel the biological processes involved in multifactorial diseases such as cancer. However, many obstacles remain, including the lack of usable data as well as methods for clinical validation and interpretation. Here, we review these challenges and reflect on the opportunities deep learning offers to tackle data sparsity and scarcity, multimodal interpretability, and dataset standardisation.
Affiliation(s)
- Sandra Steyaert
  - Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
- Marija Pizurica
  - Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
- Tina Hernandez-Boussard
  - Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
  - Department of Biomedical Data Science, Stanford University
- Andrew J Gentles
  - Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
  - Department of Biomedical Data Science, Stanford University
- Olivier Gevaert
  - Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University
  - Department of Biomedical Data Science, Stanford University
8. Germani E, Fromont E, Maumet C. On the benefits of self-taught learning for brain decoding. Gigascience 2022; 12:giad029. PMID: 37132522. PMCID: PMC10155221. DOI: 10.1093/gigascience/giad029.
Abstract
Context: We study the benefits of using a large public neuroimaging database composed of functional magnetic resonance imaging (fMRI) statistic maps, in a self-taught learning framework, for improving brain decoding on new tasks. First, we leverage the NeuroVault database to train, on a selection of relevant statistic maps, a convolutional autoencoder to reconstruct these maps. Then, we use this trained encoder to initialize a supervised convolutional neural network to classify tasks or cognitive processes of unseen statistic maps from large collections of the NeuroVault database.

Results: We show that such a self-taught learning process always improves the performance of the classifiers, but the magnitude of the benefit strongly depends on the number of samples available for both pretraining and fine-tuning the models, and on the complexity of the targeted downstream task.

Conclusion: The pretrained model improves classification performance and displays more generalizable features that are less sensitive to individual differences.
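The two-stage self-taught learning procedure described above (unsupervised autoencoder pretraining on NeuroVault maps, then reuse of the trained encoder to initialize a supervised classifier) can be sketched as follows. Layer sizes, map dimensions, and the class count are placeholders rather than the study's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    """Stage 1: learn to reconstruct (unlabeled) statistic maps."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class MapClassifier(nn.Module):
    """Stage 2: supervised classifier initialized with the pretrained encoder."""
    def __init__(self, encoder, n_classes):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, n_classes))

    def forward(self, x):
        return self.head(self.encoder(x))

ae = ConvAutoencoder()
x = torch.randn(2, 1, 32, 32, 32)                 # toy stand-in for statistic maps
recon_loss = F.mse_loss(ae(x), x)                 # unsupervised pretraining objective

clf = MapClassifier(ae.encoder, n_classes=5)      # encoder weights carried over
logits = clf(x)                                   # fine-tune with cross-entropy
print(logits.shape)                               # (2, 5)
```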
Affiliation(s)
- Elodie Germani
  - Univ Rennes, Inria, CNRS, Inserm, IRISA UMR 6074, Empenn ERL U 1228, 35000 Rennes, France
- Elisa Fromont
  - Univ Rennes, IUF, Inria, CNRS, IRISA UMR 6074, 35000 Rennes, France
- Camille Maumet
  - Univ Rennes, Inria, CNRS, Inserm, IRISA UMR 6074, Empenn ERL U 1228, 35000 Rennes, France