Li F, Yao L, Niu W, Li Z, Shi J, Zhang J, Shen C, Chi N. Feature decoupled knowledge distillation enabled lightweight image transmission through multimode fibers. Optics Express 2024;32:4201-4214. PMID: 38297626. DOI: 10.1364/oe.516102.
Received: 12/14/2023. Accepted: 01/13/2024. Indexed: 02/02/2024.
Abstract
Multimode fibers (MMF) show tremendous potential for transmitting high-capacity spatial information. However, the quality of multimode transmission is highly sensitive to the inherent scattering characteristics of MMF and to almost inevitable external perturbations. Previous research has shown that deep learning can overcome this limitation, but the deep neural networks involved are intricately designed and carry substantial computational complexity. In this study, we propose a novel feature decoupled knowledge distillation (KD) framework for lightweight image transmission through MMF. In this framework, a frequency-principle-inspired feature decoupling module significantly improves image transmission quality, and the lightweight student model reaches the performance of the sophisticated teacher model through KD. To the best of our knowledge, this work is the first to successfully apply a KD-based framework to image transmission through scattering media. Experimental results demonstrate that even with a reduction of up to 93.4% in model computational complexity, we still achieve average Structural Similarity Index Measure (SSIM) values of 0.76, 0.85, and 0.90 on Fashion-MNIST, EMNIST, and MNIST images, respectively, which are very close to the performance of the cumbersome teacher models. This work dramatically reduces the complexity of high-fidelity image transmission through MMF and holds broad promise for applications in resource-constrained environments and hardware implementations.
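The abstract does not detail the distillation objective, so the following is only a rough, generic sketch of response-based knowledge distillation for an image-reconstruction student trained against a frozen teacher, written in PyTorch. The model definitions, the MSE losses, and the weighting factor `alpha` are assumptions for illustration and do not reproduce the authors' feature-decoupled method.

```python
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, speckle, target, optimizer, alpha=0.5):
    """One generic KD training step (illustrative sketch, not the paper's method).

    teacher:   large pretrained reconstruction network (kept frozen)
    student:   lightweight network being trained
    speckle:   batch of MMF speckle patterns (network input)
    target:    ground-truth images corresponding to the speckles
    alpha:     weight between the task loss and the distillation loss (assumed)
    """
    teacher.eval()
    with torch.no_grad():
        teacher_out = teacher(speckle)   # teacher's reconstruction, no gradients

    student_out = student(speckle)       # lightweight student's reconstruction

    # Task loss: match the ground-truth image
    loss_task = F.mse_loss(student_out, target)
    # Distillation loss: imitate the teacher's output
    loss_kd = F.mse_loss(student_out, teacher_out)

    loss = alpha * loss_task + (1.0 - alpha) * loss_kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In such a setup, only the student is updated; the frozen teacher simply supplies soft reconstruction targets, which is what allows the much smaller student to approach the teacher's SSIM despite the large reduction in computational complexity reported in the abstract.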