Introducing the HDR+ Burst Photography Dataset

Burst photography is the key idea underlying the HDR+ software on Google's recent smartphones, and a fundamental computational photography technique for improving image quality. Every photo taken with HDR+ is actually a composite, generated by capturing and merging a short burst of full-resolution photos. HDR+ has helped the Pixel and the Pixel 2 earn DxO's highest mobile camera ranking for two years in a row. The new portrait mode on the Pixel 2 also relies on HDR+, both for its basic image quality and to improve the quality of its depth estimation.

Today we're pleased to announce the public release of an archive of image bursts to the research community. This provides a way for others to compare their methods to the results of Google's HDR+ software running on the same input images. This dataset consists of 3,640 bursts of full-resolution raw images, made up of 28,461 individual images, along with HDR+ intermediate and final results for comparison. The images cover a wide range of photographic situations, including variation in subject, level of motion, brightness, and dynamic range.
Using bursts to improve image quality. HDR+ starts from a burst of full-resolution raw images (left). Depending on conditions, between 2 and 10 images are aligned and merged into an intermediate raw image (middle). This merged image has reduced noise and increased dynamic range, leading to a higher quality final result (right).
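As a concrete starting point, here is a minimal Python sketch of loading the raw frames of one burst using the third-party rawpy library. The per-burst directory layout and the .dng filenames below are assumptions for illustration; the actual layout and filenames are documented in the dataset's detailed description.

```python
import glob

import rawpy  # third-party DNG/raw decoder (pip install rawpy)

def load_burst(burst_dir):
    """Load every raw frame of one burst as a Bayer-mosaic array.

    Assumes a hypothetical layout of one directory per burst,
    with the frames stored as DNG files.
    """
    frames = []
    for path in sorted(glob.glob(f"{burst_dir}/*.dng")):
        with rawpy.imread(path) as raw:
            # Copy, because raw_image is a view into a buffer that
            # is freed when the file is closed.
            frames.append(raw.raw_image.copy())
    return frames

frames = load_burst("bursts/0001")  # hypothetical path
print(len(frames), frames[0].shape, frames[0].dtype)
```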
Better Images with Burst Photography
Burst photography provides the benefits associated with collecting more light, including reduced noise and improved dynamic range, but it avoids the motion blur that would come from increasing exposure times. This is particularly important for small smartphone cameras, whose size otherwise limits the amount of light they can capture.
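To make the light-collection argument concrete, here is a toy NumPy sketch (not the HDR+ algorithm) showing that averaging N short, independently noisy exposures of a static scene reduces noise by roughly a factor of √N, without the motion blur that a single N-times-longer exposure would risk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static scene: a smooth horizontal gradient in [0, 1].
scene = np.tile(np.linspace(0.2, 0.8, 256), (256, 1))

def capture(read_noise=0.05):
    """Simulate one short exposure with additive Gaussian noise."""
    return scene + rng.normal(0.0, read_noise, scene.shape)

for n in (1, 4, 16):
    merged = np.mean([capture() for _ in range(n)], axis=0)
    print(f"{n:2d} frames -> residual noise {np.std(merged - scene):.4f}")
```

Plain averaging assumes perfectly aligned frames; real handheld bursts contain motion, which is why HDR+ devotes most of its effort to robust alignment and merging.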

Since HDR+ was first released on the Nexus 5 and 6, we've been busy improving the system. As described in our recent SIGGRAPH Asia paper, HDR+ now starts from raw images, which helps improve image quality; it also means that the image processing pipeline is implemented entirely in our own software. Next, we eliminated shutter lag, so that the HDR+ photo you get corresponds to the exact moment the shutter button was pressed, making photography feel instantaneous. Finally, we reduced processing time and power consumption by implementing HDR+ on accelerators such as the Qualcomm Hexagon DSP and the new Pixel Visual Core.
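The zero-shutter-lag behavior described above can be pictured as a ring buffer of viewfinder frames. The sketch below is a toy illustration of that idea, not the production implementation:

```python
from collections import deque

class ZeroShutterLagCamera:
    """Toy model: frames stream continuously into a ring buffer, so a
    shutter press selects frames captured just *before* the press."""

    def __init__(self, depth=10):
        self.ring = deque(maxlen=depth)  # oldest frames are evicted

    def on_frame(self, frame):
        self.ring.append(frame)  # called for every viewfinder frame

    def on_shutter(self, burst_size=8):
        # The burst ends at the press, so the merged photo matches
        # the moment the user saw -- no perceived lag.
        return list(self.ring)[-burst_size:]
```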
Mosaic of thumbnails illustrating the size and diversity of the HDR+ dataset.

Putting a computational photography system like HDR+ into production, where users capture millions of photos per day, means that odd photographic corner cases must be handled in a robust way.
Using the Dataset
The scale and diversity of the HDR+ dataset also open up the opportunity to apply modern machine learning methods. The dataset has already been incorporated into a recent research paper that uses a neural network to approximate part of the HDR+ pipeline, constrained to a representation suitable for fast image processing. Several more papers that apply learning to the HDR+ dataset are currently under review.
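As a hedged sketch of how such a learning setup can be framed, one might pair each burst's HDR+ intermediate (merged raw) with its final result as a supervised (input, target) example. The filenames merged.dng and final.jpg below are placeholders, not the dataset's actual naming:

```python
import glob

import numpy as np
import rawpy
from PIL import Image

def load_pair(burst_dir):
    """One (input, target) pair: merged raw in, HDR+ final out."""
    with rawpy.imread(f"{burst_dir}/merged.dng") as raw:  # placeholder name
        # astype() copies, so the data survives closing the file.
        x = raw.raw_image.astype(np.float32) / raw.white_level
    y = np.asarray(Image.open(f"{burst_dir}/final.jpg"), np.float32) / 255.0
    return x, y

pairs = [load_pair(d) for d in sorted(glob.glob("bursts/*"))[:8]]
```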

Inspired by the Middlebury archive of stereo data, we hope that a shared dataset will enable the community to concentrate on comparing results. This approach is intrinsically more efficient than expecting researchers to configure and run competing techniques themselves, or to implement them from scratch if the code is proprietary. The HDR+ dataset is released under a Creative Commons license (CC-BY-SA). This license is largely unencumbered; however, our main intention is that the dataset be used for scientific purposes. For information about how to cite the dataset, please see the detailed description. We look forward to seeing what else researchers can do with the HDR+ dataset!

Acknowledgments
Special thanks to the photographers and subjects of the HDR+ dataset.