Radio astronomy faces a growing challenge: next-generation telescopes generate enormous volumes of data requiring rapid, energy-efficient processing, a task that pushes conventional computing to its limits. Nicholas J. Pritchard, Andreas Wicenec, and Richard Dodson, from the International Centre for Radio Astronomy Research at the University of Western Australia, together with Mohammed Bennamoun and Dylan R. Muir, address this problem by pioneering the use of deep Spiking Neural Networks (SNNs) deployed on specialised hardware. The team built an end-to-end pipeline that partitions a large pre-trained network onto SynSense Xylo chips, demonstrating instrument-scaled inference at a remarkably low power consumption of 100 mW. Importantly, their experiments reveal that smaller, un-partitioned models outperform larger, split versions, showing that careful hardware co-design is crucial for optimal performance and establishing a practical blueprint for future radio astronomy applications of this technology.
The team partitioned large, pre-trained SNNs onto SynSense Xylo hardware using a novel algorithm called 'maximal splitting', which distributes a network across multiple chips within the hardware's resource limits and enabled instrument-scaled inference at 100 mW. Evaluation of this method revealed a crucial trade-off between flexibility and efficiency: partitioning makes larger models deployable, but smaller, un-partitioned models significantly outperformed the larger, split versions, underscoring the importance of hardware co-design for optimal performance.
Through these efforts, the study also validated Radio Frequency Interference (RFI) detection as a demanding benchmark for advancing neuromorphic computing, highlighting the need for hardware-aware training methodologies.
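To make the partitioning idea concrete, here is a minimal Python sketch of how a wide pre-trained layer could be carved into per-chip blocks. It is illustrative only: the function name, the neuron budget of 1000, and the greedy column-wise packing are assumptions, not the paper's actual 'maximal splitting' algorithm or the Xylo chip's exact specifications.

```python
import numpy as np

def maximal_split(weights, max_neurons_per_chip=1000):
    """Illustrative sketch of a 'maximal splitting' scheme: greedily pack
    as many output neurons as possible onto each chip, so a wide trained
    layer is carved into per-chip weight blocks.

    weights: dense (in_features, out_features) array of a trained layer.
    max_neurons_per_chip: hypothetical per-chip neuron budget; the real
    Xylo constraints and the paper's exact algorithm may differ.
    """
    _, out_features = weights.shape
    partitions, start = [], 0
    while start < out_features:
        stop = min(start + max_neurons_per_chip, out_features)
        # Each block holds the incoming weights of one chip's neurons;
        # their outputs are concatenated off-chip to reassemble the layer.
        partitions.append((start, stop, weights[:, start:stop]))
        start = stop
    return partitions

# Example: a 256 -> 2048 hidden layer needs three chips of 1000 neurons.
rng = np.random.default_rng(0)
layer = rng.normal(size=(256, 2048))
for lo, hi, block in maximal_split(layer):
    print(f"chip gets neurons {lo}-{hi - 1}, weight block {block.shape}")
```

Splitting in this way makes arbitrarily wide layers deployable, but, as the evaluation shows, the resulting split models underperform smaller networks trained directly within a single chip's limits.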
The team developed a robust formulation of RFI detection as a time-series segmentation problem, combining latency encoding with hardware fan-in considerations during SNN training and employing a second-order Leaky Integrate-and-Fire (LIF) neuron model. The loss function combines a supervised term with a fan-in penalty, ensuring that trained networks remain compatible with the hardware's connectivity limits. The 'maximal splitting' algorithm then partitions the large pre-trained networks for deployment, demonstrating interoperability between software frameworks and neuromorphic platforms. Experiments show that while partitioning allows larger models to be deployed, their performance degrades significantly, and smaller, directly trained models outperform split networks, again underscoring the importance of hardware co-design. Although the original, full-width models achieved state-of-the-art accuracy on a synthetic dataset, the authors acknowledge limitations, including the performance degradation caused by splitting and the specificity of the approach to one hardware platform. Future work should validate these methods on real-world astronomical data and explore network architectures designed for modularity, reinforcing the potential of SNNs and neuromorphic computing for processing spectral-temporal data in demanding fields such as radio astronomy.
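A combined loss of this kind can be sketched as follows in PyTorch. This is a hypothetical rendering, not the paper's exact formulation: the binary cross-entropy term, the soft nonzero count, the fan-in cap of 64, and the weighting `lam` are all assumptions chosen for illustration.

```python
import torch
import torch.nn.functional as F

def rfi_loss(spike_logits, targets, layer_weights, max_fan_in=64, lam=1e-3):
    """Sketch of a supervised segmentation loss plus a fan-in penalty.

    spike_logits:  (batch, time, channels) readout of the SNN.
    targets:       (batch, time, channels) binary RFI mask.
    layer_weights: list of dense (in_features, out_features) tensors.
    """
    # Supervised term: per-time-step, per-channel binary cross-entropy,
    # treating RFI detection as time-series segmentation.
    supervised = F.binary_cross_entropy_with_logits(spike_logits, targets)

    # Fan-in penalty: for each neuron, penalise incoming connections in
    # excess of the hardware cap, nudging the network towards deployable
    # connectivity during training rather than pruning afterwards.
    penalty = spike_logits.new_zeros(())
    for w in layer_weights:
        # Soft, differentiable count of nonzero inputs per output neuron.
        fan_in = torch.tanh(w.abs() * 100.0).sum(dim=0)
        penalty = penalty + F.relu(fan_in - max_fan_in).mean()

    return supervised + lam * penalty
```

Penalising excess fan-in during training, rather than cutting connections at deployment time, is what keeps the trained network compatible with the chip's connectivity limits without post-hoc surgery.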
👉 More information
🗞 Neuromorphic Astronomy: An End-to-End SNN Pipeline for RFI Detection Hardware
🧠 ArXiv: https://arxiv.org/abs/2511.16060
