Deep learning-driven adaptive optics for single-molecule localization microscopy. Nat Methods 2023;20:1748-1758. [PMID: 37770712; PMCID: PMC10630144; DOI: 10.1038/s41592-023-02029-0]
[Received: 05/24/2022] [Accepted: 08/23/2023] [Indexed: 09/30/2023]
Abstract
The inhomogeneous refractive indices of biological tissues blur and distort single-molecule emission patterns, generating image artifacts and decreasing the achievable resolution of single-molecule localization microscopy (SMLM). Conventional sensorless adaptive optics methods rely on iterative mirror changes guided by image-quality metrics. However, these metrics respond inconsistently to aberrations and thus fundamentally limit their efficacy for aberration correction in tissues. To bypass this iterative trial-then-evaluate process, we developed deep learning-driven adaptive optics for SMLM that allows direct inference of wavefront distortion and near real-time compensation. Our trained deep neural network monitors the individual emission patterns from single-molecule experiments, infers their shared wavefront distortion, feeds the estimates through a dynamic filter and drives a deformable mirror to compensate for sample-induced aberrations. We demonstrated that our method simultaneously estimates and compensates 28 wavefront deformation shapes, improving the resolution and fidelity of three-dimensional SMLM through >130-µm-thick brain tissue specimens.
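The closed-loop structure described in the abstract (network estimates the shared wavefront from emission patterns, a dynamic filter smooths the estimates, and the deformable mirror is driven to cancel the residual aberration) can be sketched as a simple feedback simulation. This is a minimal illustration, not the paper's implementation: the `infer_wavefront` stub stands in for the trained deep network, an exponential moving average stands in for the paper's dynamic filter, and the 28-mode vector, gains and noise levels are all assumed values.

```python
import numpy as np

N_MODES = 28  # number of wavefront deformation shapes estimated per the abstract

def infer_wavefront(residual, rng):
    # Stand-in for the trained deep network: each batch of single-molecule
    # emission patterns yields a noisy estimate of the shared residual wavefront.
    return residual + rng.normal(0.0, 0.05, size=N_MODES)

def dynamic_filter(prev, new, alpha=0.3):
    # Exponential moving average as a stand-in for the paper's dynamic filter;
    # it smooths per-batch network estimates before they drive the mirror.
    return (1.0 - alpha) * prev + alpha * new

rng = np.random.default_rng(0)
true_aberration = rng.normal(0.0, 0.2, size=N_MODES)  # sample-induced (arb. units)
mirror = np.zeros(N_MODES)     # current deformable-mirror correction
estimate = np.zeros(N_MODES)   # filtered wavefront estimate

for batch in range(50):
    residual = true_aberration + mirror        # wavefront still seen by the camera
    measured = infer_wavefront(residual, rng)  # per-batch network inference
    estimate = dynamic_filter(estimate, measured)
    mirror -= 0.5 * estimate                   # integrate correction onto the mirror

final_rms = float(np.sqrt(np.mean((true_aberration + mirror) ** 2)))
print(f"residual wavefront RMS after correction: {final_rms:.3f}")
```

With an integrating update and a filtered estimate, the mirror converges toward the negative of the sample aberration, so the residual wavefront shrinks to the noise floor of the per-batch estimates; the 0.5 loop gain and 0.3 filter weight here are illustrative stability/smoothing trade-offs, not values from the paper.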