Neural Radiance Fields (NeRF) have recently received considerable attention due to
their impressive capability for photo-realistic 3D reconstruction and novel view synthesis
from a set of posed camera images. Earlier work usually assumes the input images are of good quality.
However, image degradation (e.g., motion blur in low-light conditions) easily occurs in
real-world scenarios and degrades the rendering quality of NeRF.
In this paper, we present a novel bundle-adjusted deblur Neural Radiance Field (BAD-NeRF),
which is robust to severely motion-blurred images and inaccurate camera poses.
Our approach models the physical image formation process of a motion-blurred image,
jointly learning the parameters of NeRF while recovering the camera motion trajectories
during exposure time.
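The physical blur model referred to above treats a motion-blurred image as the average of the virtual sharp images seen along the camera trajectory during exposure. A minimal sketch of this idea follows; the `render_fn` callback and the linear pose interpolation are illustrative simplifications (a NeRF renderer and interpolation on SE(3) would be used in practice), not the paper's actual implementation:

```python
import numpy as np

def interpolate_poses(pose_start, pose_end, n):
    # Simplified linear interpolation between the poses at the start and
    # end of exposure; a real system would interpolate on SE(3).
    ts = np.linspace(0.0, 1.0, n)
    return [(1.0 - t) * pose_start + t * pose_end for t in ts]

def render_blurred(render_fn, pose_start, pose_end, n_virtual=8):
    # Blur formation model: average the sharp images rendered at virtual
    # poses sampled along the exposure-time trajectory.
    virtual_images = [render_fn(p)
                      for p in interpolate_poses(pose_start, pose_end, n_virtual)]
    return np.mean(virtual_images, axis=0)

# Toy stand-in for a NeRF forward pass (hypothetical): image brightness
# depends on the camera's x-translation, so motion along x causes blur.
def toy_render(pose):
    return np.full((4, 4), pose[0, 3])

start = np.eye(4)
end = np.eye(4)
end[0, 3] = 1.0  # camera translates 1 unit along x during exposure
blurred = render_blurred(toy_render, start, end)
```

In training, the synthesized blurred image would be compared against the captured blurry photograph, with gradients flowing into both the scene representation and the per-image trajectory parameters.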
In experiments, we show that by directly modeling the real physical image formation process,
BAD-NeRF achieves superior performance over prior work on both synthetic and real datasets.