BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields

CVPR 2023

1Zhejiang University 2Westlake University

Given a set of severely motion-blurred images, BAD-NeRF jointly learns the neural radiance field and recovers the camera motion trajectories within exposure time. It synthesizes novel images of higher quality than prior works.


Neural Radiance Fields (NeRF) have received considerable attention recently, due to their impressive capability in photo-realistic 3D reconstruction and novel view synthesis given a set of posed camera images. Earlier work usually assumes the input images are of good quality. However, image degradation (e.g. motion blur in low-light conditions) can easily happen in real-world scenarios, which in turn degrades the rendering quality of NeRF.

In this paper, we present a novel bundle adjusted deblur Neural Radiance Fields (BAD-NeRF), which is robust to severely motion-blurred images and inaccurate camera poses. Our approach models the physical image formation process of a motion-blurred image, and jointly learns the parameters of NeRF and recovers the camera motion trajectories during exposure time.

In experiments, we show that by directly modeling the real physical image formation process, BAD-NeRF achieves superior performance over prior works on both synthetic and real datasets.



Motion Blur Image Formation Model

The mathematical modeling of the motion blur process involves integrating over a set of virtual sharp images within the exposure time, as shown by the green line in the pipeline figure.
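This integral can be discretized by averaging virtual sharp images rendered along the camera trajectory. A minimal sketch (not the paper's implementation; `render_fn` is a hypothetical stand-in for a NeRF render call):

```python
import numpy as np

def synthesize_blurred_image(render_fn, poses):
    """Approximate the motion-blur integral by averaging virtual sharp
    images rendered at poses sampled within the exposure time.

    render_fn: maps a 4x4 camera pose to an HxWx3 image
               (e.g. a NeRF volume-rendering call).
    poses:     list of 4x4 camera-to-world matrices along the trajectory.
    """
    sharp_images = [render_fn(T) for T in poses]
    # The blurred image is the (discretized) average over the exposure.
    return np.mean(sharp_images, axis=0)
```

Because averaging is differentiable, the photometric loss on the synthesized blurred image can back-propagate both to the NeRF weights and to the pose parameters.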

Linear Camera Motion Trajectory Modeling

We approximate the camera motion with a linear model during the exposure time, which is usually short. Specifically, two camera poses in SE(3) space are parameterized: one at the beginning of the exposure, Tstart, and one at the end, Tend. Between these two poses, we linearly interpolate in the Lie algebra of SE(3), as shown by the blue line in the pipeline figure.
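Linear interpolation in the Lie algebra can be sketched with the matrix logarithm and exponential; the following is an illustrative implementation under that formulation, not the paper's code:

```python
import numpy as np
from scipy.linalg import expm, logm

def interpolate_pose(T_start, T_end, t):
    """Pose at normalized time t in [0, 1] within the exposure,
    interpolated linearly in the Lie algebra se(3).

    T_start, T_end: 4x4 SE(3) matrices at the start/end of exposure.
    """
    # Relative motion from start to end, mapped to se(3) via the matrix log.
    xi = np.real(logm(np.linalg.inv(T_start) @ T_end))
    # Scale in the tangent space, then map back with the exponential map.
    return T_start @ expm(t * xi)
```

At t = 0 this recovers Tstart and at t = 1 it recovers Tend; intermediate t values give the virtual camera poses used to render the virtual sharp images.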

Complex Camera Motion Trajectory Modeling


Cubic B-Spline Formulation

Compared to a linear spline, a more complex camera trajectory within the exposure time can be represented by a cubic B-spline controlled by four control knots in SE(3) space, denoted as T0, T1, T2 and T3. More details can be found in our supplementary materials.
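A common way to evaluate such a spline on SE(3) is the cumulative cubic B-spline formulation (as in Lovegrove et al.'s spline fusion); the sketch below assumes that formulation and is only an illustration, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import expm, logm

def cubic_bspline_pose(T0, T1, T2, T3, u):
    """Pose at normalized time u in [0, 1] on a cumulative cubic
    B-spline in SE(3) defined by four control knots (4x4 matrices)."""
    # Cumulative basis functions of the uniform cubic B-spline at u.
    b1 = (5.0 + 3.0 * u - 3.0 * u**2 + u**3) / 6.0
    b2 = (1.0 + 3.0 * u + 3.0 * u**2 - 2.0 * u**3) / 6.0
    b3 = u**3 / 6.0
    T = T0.copy()
    for b, Ta, Tb in ((b1, T0, T1), (b2, T1, T2), (b3, T2, T3)):
        # Relative motion between consecutive knots, scaled in se(3).
        xi = np.real(logm(np.linalg.inv(Ta) @ Tb))
        T = T @ expm(b * xi)
    return T
```

Unlike the linear model, the cubic spline can represent accelerating or curving camera motion within the exposure while remaining differentiable with respect to the four control knots.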


We present the rendered novel view videos and pose estimation results. The experimental results demonstrate that our method can effectively deblur images, render novel view images, and accurately recover the camera motion trajectories within the exposure time.

Novel View Synthesis Comparison

BAD-NeRF delivers superior novel view synthesis performance over prior methods when the input images are motion-blurred.

Trajectory Visualization


Qualitative comparisons of estimated camera poses on the Deblur-NeRF dataset, on the Cozy2room, Factory, Pool, Tanabata and Trolley sequences respectively. The results demonstrate that BAD-NeRF delivers reasonable camera pose estimates and performs better than both COLMAP and BARF.


@InProceedings{Wang_2023_CVPR,
      author    = {Wang, Peng and Zhao, Lingzhe and Ma, Ruijie and Liu, Peidong},
      title     = {{BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields}},
      booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      month     = {June},
      year      = {2023},
      pages     = {4170-4179}
}