A Physically Grounded Dataset for Nighttime Video Deraining
600K paired frames rendered in Unreal Engine 5.3 with physically grounded rain simulation.
Video diffusion model achieving 94% preference rate on real nighttime rain videos.
Nighttime rain exhibits four unique properties that make it fundamentally harder to remove than daytime rain.
1. Chromatic streaks. Raindrops refract colored artificial light sources — neon signs, streetlamps, vehicle headlights — producing vivid, spectrally diverse rain streaks absent in daytime.
2. Light-dependent visibility. Rain visibility depends on proximity to light sources. Streaks appear bright near lamps but vanish in dark regions, creating spatially non-uniform degradation.
3. Specular flashes. High-intensity flashes occur when rain passes through focused beams, creating transient bright spots that confuse motion estimation and temporal models.
4. Rain veiling. Dense, wind-driven sheets of rain form volumetric curtains that occlude large regions, requiring models to hallucinate missing content behind the veil.
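The light-dependent visibility effect can be illustrated with a toy additive degradation model in which streak brightness is modulated by proximity to light sources. This is a minimal sketch for intuition only — the function name, the exponential falloff, and the additive composition are illustrative assumptions, not the dataset's rendering pipeline.

```python
import numpy as np

def rain_degradation(background, streaks, light_positions, falloff=20.0):
    """Toy nighttime rain model (illustrative, not the paper's formulation):
    rainy = clip(background + light_map * streaks), so streaks are visible
    near lamps and fade toward zero in unlit regions."""
    h, w = background.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Light map: sum of isotropic exponential falloffs around each source.
    light = np.zeros((h, w))
    for ly, lx in light_positions:
        d = np.hypot(ys - ly, xs - lx)
        light += np.exp(-d / falloff)
    light = np.clip(light, 0.0, 1.0)
    # Streak intensity is gated by local illumination before compositing.
    return np.clip(background + light * streaks, 0.0, 1.0)
```

With a single lamp in one corner, the same streak layer produces bright rain near the lamp and almost none in the opposite corner — the spatially non-uniform degradation described above.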
Physically grounded rain simulation with 3D particle systems — proper occlusion, chromatic effects, and temporal coherence. No 2D overlays.
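Temporal coherence in a 3D particle system comes from advancing the same particle states frame to frame rather than re-sampling streaks per frame. A minimal sketch of such an update step, under assumed units and an assumed y-up coordinate convention (the dataset itself uses Unreal Engine's particle system, not this code):

```python
import numpy as np

def step_particles(pos, vel, dt=1 / 30, gravity=9.8, wind=(2.0, 0.0)):
    """One Euler step for 3D rain particles (toy sketch).
    pos, vel: (N, 3) arrays in (x, y, z) with y pointing up.
    wind: horizontal acceleration (x, z); gravity pulls along -y.
    Reusing the returned state next frame yields coherent streak motion."""
    accel = np.array([wind[0], -gravity, wind[1]])
    vel = vel + dt * accel
    pos = pos + dt * vel
    return pos, vel
```

Because each frame integrates the previous frame's state, consecutive rendered frames show the same drops displaced consistently — the property a 2D overlay cannot provide.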
By replacing 2D overlays with physically grounded simulation, UENR-600K narrows the synthetic-to-real gap, so models trained on it generalize better to real nighttime rain.
Wan 2.2 video diffusion transformer adapted with LoRA fine-tuning, flow matching objective, and unidirectional attention for single-pass video deraining.
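The flow matching objective, in its generic conditional form, regresses the model's predicted velocity onto the constant velocity of the straight path from noise to data. The sketch below is the standard rectified-flow loss, not the actual Wan 2.2 training code; the `model` callable and its `(x_t, t)` signature are assumptions.

```python
import numpy as np

def flow_matching_loss(model, x0, x1, rng):
    """Generic conditional flow matching loss (illustrative sketch):
    sample t, form the linear interpolant x_t = (1 - t) x0 + t x1,
    and regress model(x_t, t) onto the path's velocity x1 - x0."""
    # One t per sample, broadcast over the remaining dimensions.
    t = rng.uniform(size=(x0.shape[0],) + (1,) * (x0.ndim - 1))
    xt = (1.0 - t) * x0 + t * x1   # point on the straight noise-to-data path
    target = x1 - x0               # time derivative of that path
    pred = model(xt, t)
    return np.mean((pred - target) ** 2)
```

A model that exactly predicts the path velocity drives this loss to zero; at inference, integrating the learned velocity field carries noise to a derained video in a single forward pass per step.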
All models are evaluated on real nighttime rain videos collected from Pexels.
@article{UENR600K,
  title={UENR-600K: A Physically Grounded Dataset for Nighttime Video Deraining},
  author={Pei Yang and Hai Ci and Beibei Lin and Yiren Song and Mike Zheng Shou},
  year={2026},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
}