Reconstructing detailed images through multimode fibers is a significant challenge in medical imaging: existing approaches typically demand vast training datasets and still struggle with image distortion. Jawaria Maqbool and M. Imran Cheema, from the Lahore University of Management Sciences, together with their colleagues, address this problem with HistoSpeckle-Net, a novel deep learning architecture that reconstructs complex anatomical images, captured via perturbed multimode fibers, with high fidelity and reduced data requirements. Unlike existing methods, HistoSpeckle-Net incorporates the underlying statistics of both the speckle patterns and the reconstructed images, employing a histogram-based mutual information loss and a unique feature refinement module. The team demonstrates that HistoSpeckle-Net outperforms established models, even with limited training data and under varying fiber conditions, bringing practical, high-resolution medical imaging with multimode fibers closer to reality.
Deep Learning Corrects Multimode Fiber Distortion
This research introduces HistoSpeckle-Net, a deep learning framework that significantly improves the robustness and clarity of images captured through multimode optical fibers. Multimode fiber imaging holds promise for minimally invasive procedures, but image distortion caused by speckle patterns and fiber bending has historically limited accuracy. HistoSpeckle-Net overcomes these challenges through innovative architectural design and a novel loss function that analyzes image histograms. Key components include a Three-Scale Feature Refinement Module, which enhances the network’s ability to capture both fine details and broader contextual information, and a histogram-based loss function that aligns the statistical properties of the reconstructed image with the original, addressing the randomness inherent in speckle patterns.
The framework demonstrates improved performance even with limited training data and under fiber bending and other perturbations, validated on the OrganAMNIST dataset, a complex biomedical imaging benchmark. Because the histogram-based loss aligns the statistical properties of reconstructions with their targets, and the Three-Scale Feature Refinement Module captures both fine details and broader context, HistoSpeckle-Net outperforms existing methods in reconstructing images through multimode fibers, particularly in challenging conditions. This advance could bring multimode fiber imaging closer to clinical deployment, enabling more accurate and reliable minimally invasive diagnostics and therapies, with potential applications in other imaging problems involving scattering media.
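The paper's exact Three-Scale Feature Refinement Module is not reproduced here, so the following is only a minimal PyTorch sketch of the general idea: parallel branches with growing receptive fields refine the same decoder features and emit reconstructions at three resolutions, so that a structural-similarity loss can later be applied per scale. The module name ThreeScaleRefinement, the dilation rates, and the channel counts are illustrative assumptions.

```python
# Minimal sketch (not the authors' published code) of a three-scale feature
# refinement head: it turns decoder features into reconstructions at three
# resolutions so an SSIM-style loss can be applied per scale. Branch widths,
# dilation rates, and the module name are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ThreeScaleRefinement(nn.Module):
    def __init__(self, in_ch: int = 64, out_ch: int = 1):
        super().__init__()
        # Parallel branches with growing dilation capture fine detail
        # and progressively broader context from the same feature map.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, in_ch, 3, padding=d, dilation=d),
                          nn.ReLU(inplace=True))
            for d in (1, 2, 4)
        ])
        # One reconstruction head per scale (full, 1/2, 1/4 resolution).
        self.heads = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1) for _ in range(3)]
        )

    def forward(self, feats: torch.Tensor):
        outputs = []
        for i, (branch, head) in enumerate(zip(self.branches, self.heads)):
            refined = branch(feats)
            if i > 0:  # downsample context branches to coarser scales
                refined = F.avg_pool2d(refined, kernel_size=2 ** i)
            outputs.append(torch.sigmoid(head(refined)))
        return outputs  # [full-res, half-res, quarter-res] reconstructions


if __name__ == "__main__":
    x = torch.randn(2, 64, 28, 28)           # decoder features (batch of 2)
    scales = ThreeScaleRefinement()(x)
    print([tuple(s.shape) for s in scales])  # (2,1,28,28), (2,1,14,14), (2,1,7,7)
```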
Speckle Reconstruction with Deep Learning and Organ Images
Scientists engineered a novel deep learning architecture, HistoSpeckle-Net, to reconstruct detailed medical images from speckle patterns generated within multimode fibers. To create a clinically relevant training dataset, the team built an optical setup that directs laser light through a spatial light modulator into a multimode fiber and captures the speckle patterns corresponding to input images from the OrganAMNIST dataset, which is substantially more complex than the datasets used in previous work. The core innovation is a distribution-aware learning strategy that exploits the underlying statistical properties of speckle patterns and reconstructed images: a histogram-based mutual information loss improves model robustness and reduces reliance on large training datasets. A dedicated histogram computation unit estimates smooth marginal and joint histograms, supplying the statistics needed to calculate the mutual information loss. To further enhance image quality, the team integrated a unique Three-Scale Feature Refinement Module, which also enables a multiscale Structural Similarity Index Measure (SSIM) loss. Experiments on the complex OrganAMNIST dataset demonstrate that HistoSpeckle-Net achieves higher-fidelity reconstructions than the baseline U-Net and Pix2Pix models, even when trained with limited samples and under varying fiber bending conditions, bringing practical clinical deployment closer to reality.
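To make the distribution-aware loss concrete, the sketch below estimates smooth marginal and joint histograms with Gaussian kernels over fixed bin centers and turns them into a (negative) mutual information between reconstruction and ground truth; everything is differentiable, so gradients reach the network. The bin count, kernel bandwidth, and function names are assumptions rather than the authors' implementation.

```python
# Hedged sketch of a histogram-based mutual information loss: smooth
# (differentiable) marginal and joint histograms are estimated with Gaussian
# kernels over fixed bin centers, then MI is computed from them. Bin count,
# bandwidth, and function names are assumptions, not the paper's exact code.
import torch


def soft_histograms(pred, target, bins: int = 64, sigma: float = 0.02):
    """Return smooth joint and marginal histograms of two images in [0, 1]."""
    centers = torch.linspace(0.0, 1.0, bins, device=pred.device)
    # Kernel responses: (N_pixels, bins) soft assignment of each pixel to bins.
    wp = torch.exp(-0.5 * ((pred.reshape(-1, 1) - centers) / sigma) ** 2)
    wt = torch.exp(-0.5 * ((target.reshape(-1, 1) - centers) / sigma) ** 2)
    wp = wp / (wp.sum(dim=1, keepdim=True) + 1e-10)
    wt = wt / (wt.sum(dim=1, keepdim=True) + 1e-10)
    joint = wp.t() @ wt                      # (bins, bins) joint histogram
    joint = joint / joint.sum()
    return joint, joint.sum(dim=1), joint.sum(dim=0)


def mutual_information_loss(pred, target, bins: int = 64):
    """Negative MI between reconstruction and ground truth (to be minimised)."""
    joint, p_pred, p_true = soft_histograms(pred, target, bins)
    eps = 1e-10
    mi = (joint * (torch.log(joint + eps)
                   - torch.log(p_pred.unsqueeze(1) + eps)
                   - torch.log(p_true.unsqueeze(0) + eps))).sum()
    return -mi


if __name__ == "__main__":
    recon = torch.rand(1, 1, 28, 28, requires_grad=True)
    truth = torch.rand(1, 1, 28, 28)
    loss = mutual_information_loss(recon, truth)
    loss.backward()                          # gradients flow through the soft bins
    print(float(loss))
```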
HistoSpeckle-Net Reconstructs Images From Fiber Optics
Scientists have achieved significant advancements in medical image reconstruction using multimode fiber imaging with the development of HistoSpeckle-Net, a novel deep learning architecture. This work addresses limitations in existing methods by focusing on reconstructing structurally rich images from multimode fiber speckles, a challenge complicated by real-world imaging tasks and the need for large datasets. The team constructed a specialized setup coupling laser light through a spatial light modulator into a multimode fiber, capturing speckle patterns corresponding to input OrganAMNIST images, creating a clinically relevant dataset. Experiments demonstrate that HistoSpeckle-Net outperforms baseline models, U-Net and Pix2Pix, achieving an average Structural Similarity Index Measure (SSIM) of 0.7240 on unseen test images. This improvement stems from a distribution-aware learning strategy incorporating a histogram-based mutual information loss and a unique Three-Scale Feature Refinement Module, enhancing both structural fidelity and statistical alignment. The model’s ability to preserve fine details without excessive smoothing is particularly noteworthy. Further testing revealed HistoSpeckle-Net’s robustness even with limited data, maintaining an average SSIM of 0.6652 when trained on only 15,000 samples.
To simulate real-world conditions, the team combined data from different fiber configurations, and HistoSpeckle-Net consistently achieved an average SSIM above 0.64 for each test dataset, demonstrating its resilience to fiber perturbations. These results highlight the potential for practical deployment of multimode fiber imaging in clinical environments, even when acquiring large annotated datasets is challenging, paving the way for future applications in diverse fields like turbid fluid analysis and low-light microscopy.
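For readers who want to run a similar evaluation, here is a minimal sketch of averaging SSIM over a held-out set with scikit-image; the array shapes, the [0, 1] scaling, and the use of skimage are assumptions and not necessarily the authors' evaluation pipeline.

```python
# Hedged sketch of how an average SSIM over unseen test images could be
# computed; the placeholder arrays and [0, 1] scaling are assumptions.
import numpy as np
from skimage.metrics import structural_similarity as ssim


def average_ssim(reconstructions: np.ndarray, ground_truths: np.ndarray) -> float:
    """Mean SSIM over a stack of (N, H, W) images scaled to [0, 1]."""
    scores = [
        ssim(gt, rec, data_range=1.0)
        for gt, rec in zip(ground_truths, reconstructions)
    ]
    return float(np.mean(scores))


if __name__ == "__main__":
    # Placeholder arrays standing in for model outputs and OrganAMNIST targets.
    rng = np.random.default_rng(0)
    preds = rng.random((10, 28, 28))
    truth = rng.random((10, 28, 28))
    print(f"average SSIM: {average_ssim(preds, truth):.4f}")
```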
HistoSpeckle-Net Reconstructs Images From Speckle Patterns
This work demonstrates a significant advancement in multimode fiber imaging through the development of HistoSpeckle-Net, a deep learning architecture that effectively reconstructs complex medical images from speckle patterns. By incorporating distribution-aware learning with mutual information loss and a novel Three-Scale Feature Refinement Module, the model achieves higher fidelity reconstructions than existing methods, even when trained with limited data and under challenging fiber bending conditions. The researchers successfully preserved fine structural details within complex OrganAMNIST images, bringing practical clinical deployment of multimode fiber imaging closer to reality. The team’s approach combines histogram-based loss functions, grounded in the physical characteristics of multimode fiber speckles, with architectural enhancements to improve robustness in challenging imaging scenarios. While the results demonstrate improved performance under typical perturbations, the authors acknowledge the need for further research to explore the model’s limitations under extreme conditions and its generalizability to other medical images and fiber geometries. Future work may extend HistoSpeckle-Net to other scattering-media imaging applications, such as turbid fluids and biological tissues, and the architectural strategies proposed could benefit diverse fields including low-light microscopy and remote sensing.
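As a final illustration of how the two ingredients could be weighted against each other, the sketch below combines a simplified uniform-window SSIM term, applied at each of the three output scales, with the histogram-based mutual information term sketched earlier; the loss weights, window size, and function names are assumptions, not the paper's exact recipe.

```python
# Hedged sketch of a combined training objective: a simplified SSIM term
# (uniform window, applied at each of the three output scales) weighted
# against a histogram-based mutual information term. The weights alpha/beta,
# window size, and function names are assumptions, not the paper's recipe.
import torch
import torch.nn.functional as F


def simple_ssim(x, y, win: int = 5, c1: float = 0.01 ** 2, c2: float = 0.03 ** 2):
    """Mean SSIM with a uniform local window (a simplification of MS-SSIM)."""
    mu_x = F.avg_pool2d(x, win, stride=1)
    mu_y = F.avg_pool2d(y, win, stride=1)
    var_x = F.avg_pool2d(x * x, win, stride=1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, stride=1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, stride=1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()


def combined_loss(preds_3scale, target, mi_loss_fn, alpha=1.0, beta=0.5):
    """Weighted sum of per-scale (1 - SSIM) terms and a mutual-information term."""
    ssim_term = 0.0
    for pred in preds_3scale:
        # Resize the ground truth to each prediction's resolution.
        tgt = F.interpolate(target, size=pred.shape[-2:], mode="bilinear",
                            align_corners=False)
        ssim_term = ssim_term + (1.0 - simple_ssim(pred, tgt))
    ssim_term = ssim_term / len(preds_3scale)
    return alpha * ssim_term + beta * mi_loss_fn(preds_3scale[0], target)


if __name__ == "__main__":
    target = torch.rand(2, 1, 28, 28)
    preds = [torch.rand(2, 1, s, s, requires_grad=True) for s in (28, 14, 7)]
    # L1 acts here only as a stand-in for the histogram-based MI loss above.
    loss = combined_loss(preds, target, lambda p, t: F.l1_loss(p, t))
    loss.backward()
    print(float(loss))
```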
👉 More information
🗞 HistoSpeckle-Net: Mutual Information-Guided Deep Learning for high-fidelity reconstruction of complex OrganAMNIST images via perturbed Multimode Fibers
🧠 ArXiv: https://arxiv.org/abs/2511.20245
