DeLiVR Achieves Efficient Video Deraining Via Spatiotemporal Lie Bias and Differential Geometric Transformations

Rain streaks and distortions frequently plague videos captured in real-world conditions, and even minor camera movements can worsen these issues, creating noticeable visual artifacts. Shuning Sun, Jialang Lu, and Xiang Chen, working with colleagues at the University of Chinese Academy of Sciences, Nanjing University of Science and Technology, and Shandong Normal University, now present a new approach to address these challenges. Their research introduces DeLiVR, a method that leverages the mathematical principles of Lie groups to enforce consistency across video frames, achieving efficient and robust video deraining. By directly incorporating spatiotemporal Lie-group biases into the network’s attention mechanisms, DeLiVR accurately aligns frames and focuses on the direction of rain streaks, ultimately delivering clearer, more visually appealing video. This innovative technique represents a significant step forward in video restoration, offering a computationally efficient solution for removing rain and improving overall video quality.

Existing methods often struggle with accurately capturing streak direction, particularly in complex scenes, leading to blurry or incomplete results. DeLiVR addresses this challenge by representing rain streak orientations using the Lie group SO(2), ensuring mathematically valid rotations and a geometrically meaningful way to model streak direction. The method disentangles spatial and temporal biases, capturing the orientation of rain streaks within each frame as a spatial bias and modeling the consistency of these orientations across frames as a temporal bias. These biases are then injected into the attention mechanism of a transformer-based network, guiding it to focus on relevant features for rain removal. This approach achieves competitive or superior results on benchmark datasets, demonstrating its effectiveness in removing rain streaks from videos.
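Why SO(2) is a natural choice here: 2D in-plane rotations form a Lie group, so composing two rotations always yields another valid rotation, and every rotation is invertible. A minimal numpy sketch (not the authors' code) checking these group properties:

```python
import numpy as np

def so2(theta: float) -> np.ndarray:
    """Rotation matrix in SO(2) for an in-plane angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

a, b = 0.3, 0.5
# Closure under composition: R(a) @ R(b) == R(a + b)
assert np.allclose(so2(a) @ so2(b), so2(a + b))
# Orthogonality with unit determinant: R^T R = I, det R = 1
R = so2(a)
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)
```

These properties are what make the representation "mathematically valid": any angle the network predicts maps to a legitimate rotation, never a shear or reflection.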

Lie Group Attention for Video Deraining

The study introduces DeLiVR, a video deraining method that injects spatiotemporal Lie-group differential biases directly into the attention scores of a neural network. Recognizing the limitations of existing methods when dealing with dynamic rain streaks and slight camera movements, the team engineered a system grounded in Lie group theory to enforce spatial and temporal consistency. The core of this approach lies in representing continuous geometric transformations, allowing precise alignment even in challenging conditions. To achieve geometry-consistent alignment, the team developed a rotation-bounded Lie relative bias, which predicts the in-plane angle of each frame using a compact prediction module.

This module rotates normalized coordinates and compares them to base coordinates, effectively aligning frames before feature aggregation. Complementing this, a differential group displacement computes angular differences between adjacent frames to estimate velocity, combining temporal decay and attention masks to focus on inter-frame relationships and precisely match the direction of rain streaks. By leveraging the mathematical properties of Lie groups, the network accurately models rotations and corrects for subtle camera movements, distinguishing true correspondences from rain noise during cross-frame aggregation and improving deraining performance.
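The two mechanisms above can be illustrated with a numpy sketch. This is a schematic under stated assumptions, not the released implementation: the spatial bias is modeled as the misalignment between rotated and base normalized coordinates, and the per-frame angles below are hypothetical placeholders for the output of the paper's compact prediction module.

```python
import numpy as np

def normalized_grid(h: int, w: int) -> np.ndarray:
    """Pixel coordinates normalized to [-1, 1] on each axis, shape (h, w, 2)."""
    ys = np.linspace(-1.0, 1.0, h)
    xs = np.linspace(-1.0, 1.0, w)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    return np.stack([gx, gy], axis=-1)

def rotate_coords(coords: np.ndarray, theta: float) -> np.ndarray:
    """Rotate every (x, y) coordinate by theta via an SO(2) matrix."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return coords @ R.T

def spatial_bias(theta: float, h: int = 8, w: int = 8) -> np.ndarray:
    """Compare rotated coordinates to base coordinates; larger
    misalignment yields a lower (more negative) bias."""
    base = normalized_grid(h, w)
    rotated = rotate_coords(base, theta)
    return -np.linalg.norm(rotated - base, axis=-1)

# Differential group displacement: difference the predicted per-frame
# angles (hypothetical values) to estimate angular velocity across frames.
thetas = np.array([0.00, 0.05, 0.11, 0.18])
angular_velocity = np.diff(thetas)
```

With a zero predicted angle the bias vanishes everywhere, so an already-aligned frame is left untouched; growing angular differences between adjacent frames signal camera rotation that the temporal bias can exploit.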

Lie Group Theory Enables Robust Video Deraining

Scientists have developed DeLiVR, a new video deraining method that incorporates Lie group theory to address the rain streaks, blur, and noise that degrade outdoor videos. The work introduces a differential spatiotemporal Lie bias, a mechanism that leverages geometric priors to improve feature alignment in dynamic scenes without relying on traditional optical flow. Two complementary bias components encode inter-frame geometric transformations. A rotation-bounded Lie relative bias module employs a compact prediction network to estimate the in-plane rotation angle of each frame, achieving geometry-consistent coordinate alignment within a Lie-group framework. A differential group displacement component computes angular differences between adjacent frames, estimating angular velocity and capturing motion trends. These components are merged into a unified attention bias, combined with temporal decay and an attention mask, guiding the network to accurately estimate the intensity and direction of rain streaks. Experiments on multiple synthetic and real-world benchmarks demonstrate the effectiveness of DeLiVR, significantly boosting the spatiotemporal modeling capability of networks on complex rainy videos and offering a new paradigm for feature alignment in dynamic scenes.
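How a unified bias of this kind could enter attention can be sketched as adding a geometric bias term and a log-domain temporal decay to the pre-softmax scores. This is a minimal assumption-laden sketch, not the paper's architecture: `lam` stands in for a learned decay rate, and the random `lie_bias` stands in for the computed Lie-group bias.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
T, N, d = 4, 16, 32                # reference frames, tokens per frame, channels
q = rng.normal(size=(N, d))        # queries from the current frame
K = rng.normal(size=(T * N, d))    # keys gathered from all reference frames

scores = q @ K.T / np.sqrt(d)      # (N, T*N) raw attention scores

# Temporal decay: keys from older frames are down-weighted (lam is hypothetical).
lam = 0.5
decay = np.repeat(np.exp(-lam * np.arange(T)), N)  # one weight per key

# Stand-in for the spatiotemporal Lie bias injected into the scores.
lie_bias = rng.normal(scale=0.1, size=(N, T * N))

# Adding log(decay) before the softmax multiplies the attention weights by
# the decay and then renormalizes, so distant frames contribute less.
attn = softmax(scores + lie_bias + np.log(decay), axis=-1)
```

Because the bias is added before the softmax rather than applied to the features, the mechanism steers where attention looks without changing what it aggregates, which is why it composes cleanly with standard transformer blocks.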

Lie Group Attention For Video Restoration

DeLiVR represents a significant advancement in video restoration, specifically addressing the challenge of removing rain streaks and noise from videos captured in real-world conditions. By leveraging Lie group theory, DeLiVR achieves geometry-consistent alignment and incorporates motion-aware temporal modeling, resulting in improved performance on challenging video datasets. Experimental results demonstrate that DeLiVR not only surpasses existing state-of-the-art methods in video deraining but also enhances the performance of downstream tasks such as object detection and semantic segmentation, highlighting the practical value of integrating geometric principles into attention-based networks for reliable video restoration. Future research will focus on extending the framework to encompass richer transformation groups and addressing computational considerations, potentially broadening the applicability and efficiency of the method.

👉 More information
🗞 DeLiVR: Differential Spatiotemporal Lie Bias for Efficient Video Deraining
🧠 ArXiv: https://arxiv.org/abs/2509.21719

Rohail T.

A quantum scientist exploring the frontiers of physics and technology, I focus on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
