Recover Biological Structure from Sparse-View Diffraction Images with Neural Volumetric Prior

1University of California, Davis, 2California Institute of Technology

Abstract

Volumetric reconstruction of label-free living cells from non-destructive optical microscopic images reveals cellular metabolism in native environments. However, current optical tomography techniques require hundreds of 2D images to reconstruct a 3D volume, precluding intravital imaging of biological samples undergoing rapid dynamics. Reconstructing the entire volume of a semi-transparent biological sample from sparse views is challenging because microscopes have restricted viewing angles and only a limited number of measurements can be acquired. In this work, we develop Neural Volumetric Prior (NVP) for high-fidelity volumetric reconstruction of semi-transparent biological samples from sparse-view microscopic images. NVP integrates explicit and implicit neural representations and incorporates the physical prior of diffractive optics. We validate NVP on both simulated data and experimentally captured microscopic images. Compared to previous methods, NVP reduces the required number of images by nearly 50-fold and the processing time by 3-fold while maintaining state-of-the-art performance. NVP is the first technique to enable volumetric reconstruction of label-free biological samples from sparse-view microscopic images, paving the way for real-time 3D imaging of dynamically changing biological samples.

Overview


Overview of our method for 3D reconstruction.

a, Neural volumetric prior (NVP): Predefined 3D grids are reshaped and processed by MLPs to generate the predicted 3D RI volume $\hat{n}$. b, Multi-slice rendering equation: The multi-slice model calculates light propagation through the volumetric sample from the fluorescence sources (white spots at the bottom) by accounting for light diffraction at each slice. Each illumination configuration of the fluorescence sources (e.g., $F_1, F_i, F_n$) interacts with the volume to produce a corresponding rendered image ($\hat{I}_1, \hat{I}_i, \hat{I}_n$). The illumination configurations are jointly optimized with the RI volume. c, Loss functions: The predicted images $\hat{I}_{\text{pred}}$ are aligned with the ground truth images $I_{\text{GT}}$ and masked based on light coherence, yielding $\hat{I}_{\text{mask}}$ and $I_{\text{mask}}$. Loss functions, including $L1$, $L2$, and SSIM, are calculated over the masked images $\hat{I}_{\text{mask}}$ and $I_{\text{mask}}$, and a total variation (TV) regularizer $R(\hat{n})$ is applied to the volume.
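The multi-slice model in panel b can be sketched as a beam-propagation loop: at each slice the field accumulates a phase delay proportional to the local RI contrast, then diffracts over the slice spacing via the angular spectrum method. The sketch below is a minimal illustration under assumed parameters (slice spacing `dz`, pixel pitch `dx`, scalar field, evanescent components dropped); it is not the paper's exact renderer, and all function names are ours.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a 2D complex field over distance dz via the angular spectrum method."""
    k0 = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz_sq = k0**2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))  # clamp: evanescent components dropped
    H = np.exp(1j * kz * dz)              # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multi_slice_render(ri_volume, n_medium, dz, wavelength, dx, field0):
    """Render one intensity image by pushing an input field through an RI volume.

    ri_volume: (Z, X, Y) refractive indices, slice closest to the source first.
    field0: complex input field from one illumination configuration.
    """
    k0 = 2 * np.pi / wavelength
    field = field0
    for ri_slice in ri_volume:
        # phase delay from the RI contrast within this slice
        field = field * np.exp(1j * k0 * dz * (ri_slice - n_medium))
        # diffraction to the next slice
        field = angular_spectrum_propagate(field, dz, wavelength, dx)
    return np.abs(field) ** 2  # the camera records intensity
```

A quick sanity check: a plane wave traversing a homogeneous volume ($\hat{n} = n_{\text{medium}}$ everywhere) should exit with unit intensity, since every slice contributes only a global phase.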

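The loss in panel c combines pixelwise terms over the coherence-masked region with a TV regularizer on the RI volume. A minimal sketch, assuming a binary mask and illustrative weights (the SSIM term is omitted for brevity, and the weight `tv_weight` is ours, not from the paper):

```python
import numpy as np

def total_variation_3d(vol):
    """TV regularizer R(n_hat): sum of absolute finite differences along each axis."""
    return (np.abs(np.diff(vol, axis=0)).sum()
            + np.abs(np.diff(vol, axis=1)).sum()
            + np.abs(np.diff(vol, axis=2)).sum())

def masked_loss(i_pred, i_gt, mask, n_hat, tv_weight=1e-3):
    """L1 + L2 over the coherence-masked pixels plus TV on the RI volume."""
    diff = (i_pred - i_gt) * mask          # restrict the residual to the mask
    n_valid = mask.sum()
    l1 = np.abs(diff).sum() / n_valid
    l2 = (diff ** 2).sum() / n_valid
    return l1 + l2 + tv_weight * total_variation_3d(n_hat)
```

By construction, a perfect prediction of a spatially constant volume gives zero loss: the masked residual vanishes and a constant volume has zero total variation.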
Representation Methods


Illustrations of different neural representation methods.

a, Explicit neural representation: A non-parametric method in which the refractive index (RI) distribution $\hat{n}$ is directly reconstructed by the projection function $W_{\text{enr}}(x, y, z)$, which provides a one-to-one mapping from spatial coordinates to RI values \cite{Xue:22, FDT}. b, Implicit neural representation: Instead of directly reconstructing $\hat{n}$, this method optimizes the parameters of a multi-layer perceptron (MLP) $F_{\text{inr}}$, which is then used to predict $\hat{n}$. c, Triplane: A hybrid method combining non-parametric and parametric components by jointly solving for the triplane features $\{W_{xy}, W_{xz}, W_{yz}\}$ and the neural model $F_{\text{tri}}$ to reconstruct $\hat{n}$. d, NVP: Our proposed hybrid approach reconstructs $\hat{n}$ by integrating an uncompressed volumetric prior $W_{xyz}$ into the neural model $F_{\text{nvp}}$.
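The contrast between the explicit grid (a) and the NVP hybrid (d) can be sketched in a few lines: the explicit representation stores one RI value per voxel, while NVP stores an uncompressed feature grid $W_{xyz}$ whose per-voxel features are decoded by a small MLP $F_{\text{nvp}}$. All sizes and initializations below are hypothetical placeholders; the paper's actual grid resolution and MLP architecture are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 16     # assumed 3D grid resolution (GRID^3 voxels)
FEAT = 8      # assumed feature channels stored per voxel
HIDDEN = 32   # assumed MLP hidden width
N_MEDIUM = 1.33

# a) Explicit: W_enr directly stores one RI value per voxel (one-to-one mapping).
W_enr = np.full((GRID, GRID, GRID), N_MEDIUM)

# d) NVP: an uncompressed volumetric feature grid W_xyz plus a small MLP F_nvp.
W_xyz = rng.normal(size=(GRID, GRID, GRID, FEAT)) * 0.01
W1 = rng.normal(size=(FEAT, HIDDEN)) * 0.1
W2 = rng.normal(size=(HIDDEN, 1)) * 0.1

def f_nvp(features):
    """Tiny MLP decoding per-voxel features to an RI perturbation around the medium."""
    h = np.maximum(features @ W1, 0.0)   # ReLU hidden layer
    return N_MEDIUM + (h @ W2)[..., 0]   # predicted RI

# Reshape the grid into per-voxel features, decode, and reshape back to a volume.
n_hat = f_nvp(W_xyz.reshape(-1, FEAT)).reshape(GRID, GRID, GRID)
```

In an actual reconstruction, both `W_xyz` and the MLP weights would be optimized jointly against the rendered-image losses; here they are random, so `n_hat` is just a small perturbation around the medium RI.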

Results

BibTeX



@misc{he2025compressivefourierdomainintensitycoupling,
  title={Compressive Fourier-Domain Intensity Coupling (C-FOCUS) enables near-millimeter deep imaging in the intact mouse brain in vivo},
  author={Renzhi He and Yucheng Li and Brianna Urbina and Jiandi Wan and Yi Xue},
  year={2025},
  eprint={2505.21822},
  archivePrefix={arXiv},
  primaryClass={physics.optics},
  url={https://arxiv.org/abs/2505.21822},
}