UENR-600K

A Physically Grounded Dataset for Nighttime Video Deraining

Pei Yang*1 Hai Ci*1 Beibei Lin2 Yiren Song1 Mike Zheng Shou✉1
1Show Lab, National University of Singapore   
2National University of Singapore
Paper coming soon · Code coming soon · Dataset coming soon
A New Dataset

600K paired frames rendered in Unreal Engine 5.3 with physically grounded rain simulation.

A New Baseline

Video diffusion model achieving 94% preference rate on real nighttime rain videos.

Nighttime Rain Is Different

Nighttime rain exhibits four unique properties that make it fundamentally harder to remove than daytime rain.

Chromatic

Raindrops refract colored artificial light sources — neon signs, streetlamps, vehicle headlights — producing vivid, spectrally diverse rain streaks absent in daytime.

Localized

Rain visibility depends on proximity to light sources. Streaks appear bright near lamps but vanish in dark regions, creating spatially non-uniform degradation.

Glimmer Effect

High-intensity flashes occur when rain passes through focused beams, creating transient bright spots that confuse motion estimation and temporal models.

Rain Curtains

Dense, wind-driven sheets of rain form volumetric curtains that occlude large regions, requiring models to hallucinate missing content behind the veil.
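The Localized property above admits a toy model: a streak's visibility is dominated by nearby light sources and falls off with distance. The sketch below is purely illustrative (the function, scene, and numbers are hypothetical, not part of the dataset pipeline), using inverse-square attenuation plus a small ambient term:

```python
def streak_brightness(streak_xy, lights, ambient=0.02):
    """Toy model of the 'Localized' property: a rain streak's apparent
    brightness is an ambient level plus inverse-square contributions
    from each light source. Illustrative only."""
    x, y = streak_xy
    b = ambient
    for lx, ly, intensity in lights:
        d2 = (x - lx) ** 2 + (y - ly) ** 2
        b += intensity / max(d2, 1.0)  # clamp to dodge the singularity at d = 0
    return b

# Hypothetical scene: one bright streetlamp at (100, 50).
lights = [(100.0, 50.0, 5000.0)]
near = streak_brightness((105.0, 55.0), lights)   # streak beside the lamp
far = streak_brightness((400.0, 300.0), lights)   # streak in a dark region
```

Under this model the streak next to the lamp is orders of magnitude brighter than the distant one, which is exactly the spatially non-uniform degradation described above.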

600,000 Frames. Rendered in Unreal Engine 5.3.

Physically grounded rain simulation with 3D particle systems — proper occlusion, chromatic effects, and temporal coherence. No 2D overlays.
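As a rough illustration of what 3D particles buy over 2D overlays (true world-space drop positions that can be occluded by geometry and advected coherently frame to frame), here is a minimal per-frame particle update; the volume size, wind, and fall speed are made-up numbers, not the actual UE 5.3 configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rain volume: 50 m x 50 m footprint, 20 m tall.
N = 10_000
pos = rng.uniform([0.0, 0.0, 0.0], [50.0, 50.0, 20.0], size=(N, 3))
vel = np.array([1.5, 0.0, -9.0])  # wind along x, terminal fall speed in z

def step(pos, dt=1.0 / 30.0):
    """Advance every drop one frame; drops that reach the ground
    respawn at the top so rain density stays constant over time."""
    pos = pos + vel * dt
    pos[:, 2] = np.where(pos[:, 2] < 0.0, pos[:, 2] + 20.0, pos[:, 2])
    return pos
```

Because every drop has a persistent 3D position, consecutive rendered frames see the same drops displaced by one timestep, which is where the temporal coherence of the streaks comes from.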

600K frame pairs · 1080p resolution · UE 5.3 render engine · 3D particle rain

UENR-600K helps models generalize better to real nighttime rain.

Our Baseline

Wan 2.2 video diffusion transformer adapted with LoRA fine-tuning, a flow-matching objective, and unidirectional attention for single-pass video deraining.

Model architecture diagram
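The flow-matching objective mentioned above can be sketched as follows; `model(x_t, t, cond)` stands in for the adapted video transformer, and the whole snippet is an illustrative NumPy sketch under assumed conventions (straight-line probability path, velocity regression), not the actual training code:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(model, clean, rainy):
    """Minimal sketch of a conditional flow-matching objective:
    interpolate linearly between a Gaussian sample x0 and the clean
    target x1, then regress the model's predicted velocity onto the
    path velocity (x1 - x0). Conditioning on the rainy input plays
    the role of the degraded video."""
    x1 = clean
    x0 = rng.standard_normal(x1.shape)      # Gaussian source sample
    t = rng.uniform(size=(x1.shape[0], 1))  # per-sample timestep in (0, 1)
    x_t = (1.0 - t) * x0 + t * x1           # straight-line path
    v_target = x1 - x0                      # velocity along that path
    v_pred = model(x_t, t, rainy)
    return np.mean((v_pred - v_target) ** 2)

# Toy stand-in model that ignores its inputs and predicts zeros.
toy = lambda x_t, t, cond: np.zeros_like(x_t)
loss = flow_matching_loss(toy, rng.standard_normal((4, 8)), None)
```

In the actual baseline only LoRA adapter weights would receive gradients from such a loss, leaving the pretrained backbone frozen.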

All models tested on real nighttime rain videos from Pexels.

BibTeX

@misc{UENR600K,
    title={UENR-600K: A Physically Grounded Dataset for Nighttime Video Deraining},
    author={Pei Yang and Hai Ci and Beibei Lin and Yiren Song and Mike Zheng Shou},
    year={2026},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
}