Researchers Fuse X-ray Data for Faster, More Accurate Diagnosis

Spectral computed tomography, a new imaging technique that captures X-ray data at multiple energy levels, promises improved tissue and material differentiation for more accurate diagnosis, but generates complex, multi-volume datasets that are difficult to interpret. Mohit Sharma, Emma Nilsson, and Martin Falk, from Linköping University, along with colleagues, address this challenge by presenting a novel method for fusing these datasets into a single, representative volume. Their approach leverages the relationships between different energy-level scans, using 2D histograms to identify prominent structural features and constructing an ‘extremum graph’ that captures the topological distribution of information. This topology-aware fusion significantly reduces the complexity of analysis, retaining crucial structural and material characteristics while enabling more effective visualization and segmentation of multi-energy CT scans, ultimately streamlining diagnostic workflows.

Photon-counting computed tomography (PCCT) is a novel imaging technique that simultaneously acquires volumetric data at multiple X-ray energy levels. Conventional methods often struggle to fully separate all features within the individual volumes, limiting detailed analysis. The researchers therefore investigate techniques to better characterise this complex data, focusing on the underlying structure of 2D histogram density fields. This involves analysing the relationships between maxima, saddles, and minima within the data, represented as a topological landscape. To capture the major ridge structures, the team constructs a network from these features and identifies the longest path, which traverses key data points. A smooth curve is then fitted to this extracted path, and the data are projected onto it to generate a fused representation that reveals the most important features in a simplified view.
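As a rough illustration of the first steps described above, the following Python sketch builds a 2D histogram density field from two synthetic volumes and locates its local maxima. The data, bin counts, and smoothing parameters are all illustrative stand-ins, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(0)

# Two hypothetical energy-bin volumes (stand-ins for PCCT data).
vol_a = rng.normal(size=(32, 32, 32))
vol_b = 0.7 * vol_a + 0.3 * rng.normal(size=(32, 32, 32))

# A 2D histogram over the pair of volumes gives a density field
# whose maxima mark prominent material/tissue combinations.
hist, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=64)
density = gaussian_filter(hist, sigma=2.0)  # smooth before analysing topology

# A bin is a local maximum if it equals the max of its neighbourhood
# and carries non-trivial mass.
local_max = (density == maximum_filter(density, size=5)) & (density > density.mean())
peaks = np.argwhere(local_max)
print(f"{len(peaks)} candidate maxima in the histogram density field")
```

In a full pipeline the saddles and minima of this density field would be extracted as well, to connect the maxima into the topological landscape the text describes.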

Visualising High-Dimensional Data with Topology

Current research focuses on effectively visualising data with more than three dimensions, such as that generated by spectral CT scans and complex scientific simulations. Many approaches aim to reduce the number of dimensions while preserving important features for interpretation. A significant trend is the use of topological data analysis (TDA) to understand and visualise the structure of data, employing concepts like persistent homology and Morse theory to characterise features. The goal extends beyond simply rendering data; researchers seek to reveal underlying patterns and relationships. Designing effective transfer functions, which map data values to colours and opacities, is a central challenge in direct volume rendering, and many papers focus on automating or improving this process, often in conjunction with dimensionality reduction or topological analysis.
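As a minimal illustration of what a transfer function does, the sketch below maps normalised scalar values to RGBA colours with piecewise-linear interpolation. The control points and tissue labels are hypothetical, invented for this example rather than taken from any of the surveyed papers.

```python
import numpy as np

# Illustrative control points: scalar value -> RGBA.
control_x = np.array([0.0, 0.3, 0.6, 1.0])
control_rgba = np.array([
    [0.0, 0.0, 0.0, 0.0],   # air: fully transparent
    [0.8, 0.5, 0.3, 0.1],   # soft tissue: faint, mostly transparent
    [0.9, 0.9, 0.8, 0.4],   # denser tissue
    [1.0, 1.0, 1.0, 0.9],   # bone: bright, nearly opaque
])

def transfer_function(values):
    """Interpolate each RGBA channel independently over the control points."""
    values = np.clip(values, 0.0, 1.0)
    return np.stack(
        [np.interp(values, control_x, control_rgba[:, c]) for c in range(4)],
        axis=-1,
    )

rgba = transfer_function(np.linspace(0, 1, 5))
```

Automating the choice of such control points, rather than hand-tuning them, is exactly the challenge the surveyed work addresses.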

The majority of this work applies to scientific visualisation problems, particularly in medical imaging and astrophysics, with a strong emphasis on gaining insights from complex datasets. Key techniques include parallel coordinates for visualising high-dimensional data, principal component analysis for dimensionality reduction, and modern non-linear methods like t-distributed Stochastic Neighbor Embedding (t-SNE) and UMAP. Topological data analysis employs persistent homology to identify and track topological features, Morse theory to analyse data shape, and Reeb graphs to summarise data topology. Volume rendering relies on direct volume rendering, often using ray casting, and spectral volume rendering leverages spectral information to improve visualisation.
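To make the dimensionality-reduction step concrete, here is a minimal scikit-learn sketch that projects hypothetical per-voxel spectral signatures onto two principal components; the data shapes are invented for illustration, and t-SNE or UMAP could be substituted for PCA in the same spot.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Hypothetical spectral CT data: one attenuation value per energy bin
# for each voxel (here 8 bins, 10,000 voxels).
n_voxels, n_bins = 10_000, 8
spectral = rng.normal(size=(n_voxels, n_bins))

# PCA projects each voxel's 8-dimensional spectral signature onto the
# two directions of greatest variance, a common first step before
# scatter-plot visualisation or a non-linear embedding.
embedding = PCA(n_components=2).fit_transform(spectral)
print(embedding.shape)  # (10000, 2)
```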

Applications span medical imaging, particularly CT and spectral CT, where the focus is on visualising anatomical structures and detecting abnormalities. Research also extends to astrophysics, where scientists visualise the large-scale structure of the universe, and materials science, where the structure and properties of granular materials are investigated. Several tools and frameworks support this work, including Inviwo, a visualisation system with abstraction levels for customisation, and the Topology ToolKit, a library for performing topological data analysis. Widely used open-source libraries like VTK and Python libraries such as scikit-learn, NumPy, and SciPy also play a crucial role. Emerging trends include combining TDA with machine learning, developing interactive visualisation tools, leveraging GPU acceleration, and utilising cloud-based visualisation for large-scale data and collaboration.

Fusing Volumes Simplifies Tomography Data Analysis

Researchers have developed a new method for processing data from photon-counting computed tomography (PCCT), an advanced imaging technique that captures detailed information about materials by measuring their interaction with X-rays at multiple energy levels. While this technique generates rich datasets, interpreting the resulting volumes can be challenging due to their complexity and redundancy. The new approach streamlines the process by fusing multiple volumes into a single, representative volume, simplifying visualisation and analysis. The method begins by examining the relationships between different energy-level volumes using two-dimensional histograms, which reveal structural similarities and variations.
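One simple way to quantify similarity between energy-level volumes is pairwise correlation. The sketch below, using synthetic volumes, flags near-duplicate pairs so that weakly correlated, complementary pairs can be preferred for fusion; this is only an assumed stand-in for the histogram-based comparison the authors use.

```python
import numpy as np

rng = np.random.default_rng(2)

# Four hypothetical energy-bin volumes; bin 3 is deliberately made
# almost redundant with bin 0.
base = rng.normal(size=(16, 16, 16))
volumes = [
    base,
    rng.normal(size=(16, 16, 16)),
    0.5 * base + 0.5 * rng.normal(size=(16, 16, 16)),
    0.98 * base + 0.02 * rng.normal(size=(16, 16, 16)),
]

# Pairwise Pearson correlation between flattened volumes. Pairs with
# low correlation carry complementary information and are better
# fusion candidates than near-duplicates.
flat = np.stack([v.ravel() for v in volumes])
corr = np.corrcoef(flat)

pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
best = min(pairs, key=lambda p: abs(corr[p]))
print("most complementary pair:", best)
```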

By focusing on volumes that exhibit complementary information, those that aren’t strongly correlated but still share features, the technique maximises the amount of useful data retained in the fused image. A key step involves constructing a network from these histograms, mapping prominent structural features and their connections. This network captures the underlying topology of the data, essentially creating a simplified representation of the complex relationships within the original volumes. The researchers then extract a path through this network, prioritising connections between high-density regions and key features.
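The path-extraction step can be sketched with a toy weighted graph standing in for the extremum graph. Node names and edge weights below are invented, and the brute-force search is only practical for small graphs like this one; it picks the simple path carrying the most total weight.

```python
import itertools
import networkx as nx

# Toy extremum graph: nodes are prominent histogram maxima, edge
# weights approximate the density mass along the ridge connecting them.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 5.0), ("B", "C", 3.0), ("C", "D", 4.0),
    ("B", "D", 1.0), ("D", "E", 2.0), ("A", "E", 0.5),
])

def heaviest_path(graph):
    """Brute-force maximum-weight simple path (fine for small graphs)."""
    best_path, best_weight = None, -1.0
    for u, v in itertools.combinations(graph.nodes, 2):
        for path in nx.all_simple_paths(graph, u, v):
            w = sum(graph[a][b]["weight"] for a, b in zip(path, path[1:]))
            if w > best_weight:
                best_path, best_weight = path, w
    return best_path, best_weight

path, weight = heaviest_path(G)
print(path, weight)  # A-B-C-D-E, total weight 14.0
```

The interactive refinement described below would then amount to adding or dropping nodes from this extracted path.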

This path serves as a guide for fusing the data, ensuring that important structural information is preserved in the final volume. The team found that the longest path through the network often captures a significant portion of the data, but interactive refinement allows for the inclusion of additional features or the exclusion of less relevant ones, letting users tailor the fused volume to their specific analytical needs. The resulting fused volume offers a significantly simplified yet information-rich representation of the original multi-energy CT data, enabling more effective visualisation and segmentation of tissues and materials and potentially improving diagnostic accuracy. By reducing the complexity of the data, the method facilitates a clearer understanding of the underlying structures and characteristics captured by photon-counting computed tomography, opening new avenues for research and application in fields such as materials science and medical imaging.
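The final projection step might look like the following NumPy sketch, assuming the extracted path is available as a 2D polyline through the histogram domain (the control points here are invented): each voxel's pair of energy-bin values is mapped to its normalised arc-length position along the path, yielding one fused scalar per voxel.

```python
import numpy as np

# Hypothetical smoothed path through the 2D histogram domain, given as
# a polyline of (x, y) control points.
path = np.array([[0.0, 0.0], [0.3, 0.4], [0.7, 0.6], [1.0, 1.0]])

seg_vec = np.diff(path, axis=0)                # per-segment direction
seg_len = np.linalg.norm(seg_vec, axis=1)      # per-segment length
cum_len = np.concatenate([[0.0], np.cumsum(seg_len)])

def fuse(points):
    """Project 2D points onto the polyline; return normalised arc length.

    Each voxel's pair of energy-bin values becomes one fused scalar:
    its position along the path, in [0, 1].
    """
    pts = np.atleast_2d(points)
    # Parametric position of each point's projection on every segment.
    rel = pts[:, None, :] - path[None, :-1, :]
    t = np.clip((rel * seg_vec).sum(-1) / (seg_len ** 2), 0.0, 1.0)
    proj = path[:-1] + t[..., None] * seg_vec
    dist = np.linalg.norm(pts[:, None, :] - proj, axis=-1)
    best = dist.argmin(axis=1)                 # nearest segment per point
    arc = cum_len[best] + t[np.arange(len(pts)), best] * seg_len[best]
    return arc / cum_len[-1]

fused = fuse([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
```

A caveat the authors themselves note applies here: the fused scalar is a path coordinate, so it no longer corresponds to a standard radiological measurement.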

Fusing PCCT Volumes with Topological Graphs

This research introduces a new method for processing data acquired through photon-counting computed tomography (PCCT), an imaging technique that captures volumetric data at multiple X-ray energy levels. The team developed a pipeline that fuses these multiple volumes into a single, representative volume, simplifying visualisation and analysis. This is achieved by identifying prominent structural features within pairs of volumes and constructing a topological graph that captures their relationships, ultimately reducing the complexity of the data while preserving key characteristics. The resulting fused volume is well-suited for volume rendering and segmentation.

Tests on both synthetic and real datasets demonstrate the effectiveness of this approach in creating a unified representation of complex PCCT data. The authors acknowledge that the process currently prevents direct interpretation of scalar values in the fused volume, as these values no longer directly correspond to standard radiological measurements. Future work will focus on addressing this limitation, generalising the pipeline to handle higher dimensions, and automating the fusion process to reduce user interaction, potentially through optimisation methods that maximise feature coverage.

👉 More information
🗞 Topology-Aware Volume Fusion for Spectral Computed Tomography via Histograms and Extremum Graph
🧠 ArXiv: https://arxiv.org/abs/2508.14719

Quantum News


As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in the field of technology, whether AI or the march of robots. But quantum occupies a special space. Quite literally a special space. A Hilbert space, in fact, haha! Here I try to provide some of the news that might be considered breaking news in the Quantum Computing space.

Latest Posts by Quantum News:

IBM Remembers Lou Gerstner, CEO Who Reshaped Company in the 1990s

December 29, 2025
Optical Tweezers Scale to 6,100 Qubits with 99.99% Imaging Survival

December 28, 2025
Rosatom & Moscow State University Develop 72-Qubit Quantum Computer Prototype

December 27, 2025