WTEFNet, a novel object detection framework, improves perception in low-light conditions for advanced driver-assistance systems. Integrating low-light enhancement, wavelet-based feature extraction, and adaptive fusion, it achieves state-of-the-art accuracy on benchmarks including BDD100K and nuScenes. Validation on a Jetson AGX Orin confirms real-time capability, and a new manually annotated dataset, GSN, supports training and evaluation.
The efficacy of advanced driver-assistance systems (ADAS) hinges on reliable environmental perception, a capability severely compromised by low-light conditions. Current object detection algorithms, typically reliant on standard RGB camera input, experience substantial performance declines when illumination is poor. Researchers are now addressing this limitation with frameworks designed specifically for nocturnal and dimly lit environments. Hao Wu, Junzhou Chen, Ronghui Zhang, Nengchao Lyu, Hongyu Hu, Yanyong Guo, and Tony Z. Qiu detail their work in ‘WTEFNet: Real-Time Low-Light Object Detection for Advanced Driver-Assistance Systems’, presenting a system incorporating low-light image enhancement, wavelet-based feature extraction, and adaptive fusion detection. The team also introduces a new manually annotated dataset, GSN, to facilitate training and evaluation of such systems, and demonstrates the framework’s performance on established benchmarks and an embedded platform.
WTEFNet: Real-Time Object Detection for Low-Light Conditions
Object detection is fundamental to environmental perception within advanced driver-assistance systems (ADAS), enabling vehicles to understand their surroundings and navigate safely. Current methodologies frequently rely on standard RGB cameras, which experience substantial performance reductions in low-light environments due to diminished image quality and increased noise. Researchers address this limitation with WTEFNet, a novel, real-time object detection framework specifically engineered for challenging low-light scenarios and designed for compatibility with existing detectors, promising improved safety and reliability for autonomous vehicles.
WTEFNet operates through three interconnected modules. First, a Low-Light Enhancement (LLE) module improves visibility by brightening dark regions while suppressing overexposed areas, optimising contrast and recovering detail that would otherwise be lost in darkness. Next, the Wavelet-based Feature Extraction (WFE) module applies multi-level discrete wavelet transforms; wavelet transforms are mathematical tools that decompose a signal, in this case an image, into different frequency bands, allowing essential structure to be isolated while high-frequency noise is filtered out. Finally, the Adaptive Fusion Detection (AFFD) module integrates semantic and illumination features into a comprehensive representation of the scene, enabling more accurate object identification.
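To make the wavelet step concrete, the sketch below decomposes an image into a low-frequency approximation and high-frequency detail bands, then soft-thresholds the detail bands to suppress noise. This is an illustrative reconstruction using the PyWavelets library, not the authors’ WFE implementation; the Haar basis, decomposition depth, and threshold are assumed choices.

```python
# Illustrative multi-level wavelet feature extraction (NOT the paper's
# exact WFE module). Assumes NumPy and PyWavelets; wavelet choice,
# decomposition depth, and threshold are hypothetical.
import numpy as np
import pywt

def wavelet_features(gray: np.ndarray, levels: int = 2, thresh: float = 0.1):
    """Split an image into frequency bands and attenuate noisy detail.

    Returns the low-frequency approximation (coarse structure) plus
    soft-thresholded high-frequency bands (edges with noise reduced).
    """
    # Multi-level 2D discrete wavelet transform with the Haar basis.
    coeffs = pywt.wavedec2(gray, wavelet='haar', level=levels)
    approx, detail_levels = coeffs[0], coeffs[1:]

    denoised_details = []
    for (cH, cV, cD) in detail_levels:
        # Small detail coefficients in dark frames are mostly sensor
        # noise; large ones carry edges, so soft-threshold each band.
        denoised_details.append(tuple(
            pywt.threshold(band, thresh * np.abs(band).max(), mode='soft')
            for band in (cH, cV, cD)
        ))
    return approx, denoised_details

# Example: a random stand-in for a dark 256x256 grayscale frame.
frame = np.random.rand(256, 256).astype(np.float32) * 0.2
low_freq, details = wavelet_features(frame)
print(low_freq.shape, len(details))  # (64, 64) after two levels; 2 detail levels
```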
To facilitate training and evaluation, the authors introduce GSN, a manually annotated dataset designed to capture the complexities of nighttime driving, including both clear and rainy conditions. Extensive experimentation on established benchmarks (BDD100K, SHIFT, and nuScenes) alongside GSN confirms that WTEFNet achieves state-of-the-art performance in low-light object detection, surpassing existing methods in accuracy and robustness.
Furthermore, the framework’s practicality extends to real-world deployment: testing on an embedded NVIDIA Jetson AGX Orin platform confirms WTEFNet’s capacity for real-time operation, making it a viable candidate for integration into current and future ADAS applications. Processing each frame quickly enough to react to changing road conditions is essential for preventing accidents and protecting occupants.
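For readers who want to gauge real-time capability on similar hardware, the following generic benchmarking harness shows the usual pattern: warm-up runs, GPU synchronisation, then averaged per-frame latency. It is not the authors’ evaluation code; the stand-in torchvision detector and the input resolution are assumptions.

```python
# Generic GPU latency harness of the kind used to verify real-time
# operation on embedded devices such as the Jetson AGX Orin. The model
# is a placeholder, not WTEFNet. Assumes PyTorch with CUDA available.
import time
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)
model = model.eval().cuda()
dummy = [torch.rand(3, 640, 640, device='cuda')]  # one 640x640 RGB frame

with torch.no_grad():
    # Warm-up iterations let CUDA kernels compile and caches settle.
    for _ in range(10):
        model(dummy)
    torch.cuda.synchronize()

    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        model(dummy)
    torch.cuda.synchronize()  # wait for all GPU work before stopping the clock

elapsed = time.perf_counter() - start
print(f"{runs / elapsed:.1f} FPS ({1000 * elapsed / runs:.1f} ms/frame)")
```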
WTEFNet presents a comprehensive solution to the persistent challenge of reliable object detection in low-light conditions, a critical requirement for advanced driver-assistance systems. The framework demonstrably improves performance over existing methods by directly addressing the limitations imposed by poor image quality in darkness. Through the integration of low-light enhancement, wavelet-based feature extraction, and adaptive fusion detection, WTEFNet effectively mitigates the detrimental effects of both insufficient illumination and image noise. This holistic approach ensures that the system can operate reliably in a wide range of challenging conditions, enhancing safety and improving the overall driving experience.
A key contribution lies in the Adaptive Fusion Detection module, which combines semantic information, the ‘what’ of an object, with illumination features, which capture where light falls in the scene and how well lit each region is. This fusion allows the system to discern objects even when they are partially obscured or poorly illuminated, improving detection accuracy and reducing false positives. By integrating these complementary cues, WTEFNet builds a more comprehensive understanding of the scene and can make more informed decisions in challenging lighting, a capability crucial to the safety of both the vehicle and its surroundings.
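The paper’s exact fusion operator is not reproduced here, but a minimal PyTorch sketch of one plausible gated fusion conveys the idea: an illumination-derived attention map re-weights the semantic features before the two streams are merged. All layer names and channel counts are hypothetical.

```python
# Minimal sketch of gated semantic/illumination fusion in the spirit of
# the description above (NOT the paper's AFFD module). Assumes PyTorch.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels: int = 256):
        super().__init__()
        # Predict a per-pixel gate in [0, 1] from the illumination features.
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, semantic: torch.Tensor, illum: torch.Tensor) -> torch.Tensor:
        g = self.gate(illum)                     # where is the scene well lit?
        attended = semantic * g                  # emphasise well-lit semantics
        fused = torch.cat([attended, illum], 1)  # keep both cues available
        return self.proj(fused)                  # back to the detector's width

fusion = GatedFusion(256)
sem = torch.rand(1, 256, 80, 80)   # semantic feature map (e.g. one FPN level)
ill = torch.rand(1, 256, 80, 80)   # illumination feature map
print(fusion(sem, ill).shape)      # torch.Size([1, 256, 80, 80])
```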
👉 More information
🗞 WTEFNet: Real-Time Low-Light Object Detection for Advanced Driver-Assistance Systems
🧠 DOI: https://doi.org/10.48550/arXiv.2505.23201
