Exposure Fusion
27 Jul 2008 00:57

Since trinary has been experimenting with artificial photo enhancement, I spent the day experimenting with the enfuse program in the newer snapshots of hugin. According to the paper that accompanies the program, one feeds it a bunch of photos, and it blends the "best" parts of each picture together to create the final image. The exact formulae for weighting the "best" image at a given pixel and for blending the image data together are given in that paper.
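For the curious, the weight in that paper is the product of three per-pixel quality measures: contrast (absolute Laplacian of the grayscale), saturation (standard deviation across the color channels), and well-exposedness (how close each channel sits to mid-gray). Here's a minimal numpy sketch of the idea; the function names are mine, and it does a naive per-pixel weighted average, whereas enfuse blends the weighted images with Laplacian pyramids to hide seams:

```python
import numpy as np
from scipy.ndimage import laplace

def fusion_weights(img, wc=1.0, ws=1.0, we=1.0, sigma=0.2):
    """Per-pixel weight: contrast * saturation * well-exposedness.
    img: float array in [0, 1] with shape (height, width, 3)."""
    gray = img.mean(axis=2)
    contrast = np.abs(laplace(gray))                 # local contrast: |Laplacian| of grayscale
    saturation = img.std(axis=2)                     # spread across R, G, B
    well_exposed = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)
    return (contrast ** wc) * (saturation ** ws) * (well_exposed ** we) + 1e-12

def naive_fuse(images):
    """Per-pixel weighted average of the input exposures (no pyramid blending)."""
    w = np.stack([fusion_weights(im) for im in images])
    w /= w.sum(axis=0, keepdims=True)                # weights sum to 1 at each pixel
    return (w[..., None] * np.stack(images)).sum(axis=0)
```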
Of course, that leaves the question--how do we get different-looking photos and line them up? The first technique is an easy one--my camera has a mode that will automatically bracket an exposure, taking one normal photo, one underexposed photo, and one overexposed photo. These can be fed into enfuse, which picks out the most pleasing bits and puts them into the output image. At that level it probably doesn't matter whether the images are taken in RAW or JPEG, but one ought to start with the highest quality image possible. That said...
The second technique is to take one RAW image, not three. A RAW image records exactly what the CCD senses, and the CCD records 12 bits per channel per pixel, compared to 8 in a JPEG or on screen. RAW images are not heavily processed (unlike JPEGs) or compressed, which means the photographer can mess with the image as he chooses. In this case, we can create the over- and underexposed images by running the single RAW image through exposure filters into "cooked" outputs. These photos are then fed into enfuse.
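The "filters" here amount to an exposure shift. A hedged sketch of the idea, assuming you already have the linear 12-bit sensor values in a numpy array (the function name and the gamma-2.2 "cooking" are my own simplification; a real raw converter also does demosaicing and white balance):

```python
import numpy as np

def fake_bracket(linear, ev):
    """Simulate an exposure shift on linear sensor data.
    linear: float array scaled to [0, 1] (e.g. 12-bit values / 4095).
    ev: shift in stops; each stop doubles or halves the light."""
    shifted = np.clip(linear * (2.0 ** ev), 0.0, 1.0)      # scale, clip blown highlights
    return (shifted ** (1 / 2.2) * 255).astype(np.uint8)   # crude gamma-2.2 "cooking" to 8 bits

# linear = raw.astype(np.float64) / 4095.0                 # hypothetical 12-bit raw data
# under, normal, over = (fake_bracket(linear, ev) for ev in (-2, 0, +2))
```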
Actually, I don't feed them into enfuse directly; in the case of camera auto-bracketing that would leave them horridly misaligned. I feed them into hugin+autopano so that it can find control points between the images without me having to do it. (Note: TIFF files must end in '.tif', not '.tiff'.) However, hugin's default settings work badly for this application. The next step is to select all images in the "Images" tab and zero the yaw, pitch, and roll, as hugin has likely set them to ridiculous values. Then select all images in "Camera and Lens" and zero the barrel distortion, center shift, and image shearing; here too hugin often supplies ridiculously large numbers. If you did a good job of auto-bracketing (or were just messing with exposure values) then zeroing the last two won't hurt the output. Also make sure that the EV values differ between the three images.
Next, hop over to "Stitcher", change the projection to "Rectilinear", and recalculate the Field of View. Then go to the "Optimizer" tab and run the "Positions (incremental, starting from anchor)" optimizer. Do NOT add barrel distortion optimization; more often than not it messes up the output. You can also try some of the "Exposure" tab optimizations, though I've noticed that they mostly just alter the vignetting curves unfavorably. Then go back to "Stitcher", calculate the Optimal Size, and make sure that "Blended panorama (enfuse)" is checked. Finally, stitch the image together!
So here are three images taken of my backyard after spreading bark this afternoon:
Original image
Camera's automatic bracketing and compositing
Bracketing by mucking with the RAW file exposure levels and compositing
Of the three, I think the third has the best fidelity--because there was a breeze, the leaves on the trees were blowing around, and that motion was recorded in the under/over exposures that compose the second image. The third image's source was exposed only once, and is thus sharper. On the other hand, the camera bracketing system actually uses the camera's optics to vary the exposure, rather than messing around with curves in the RAW data; note the discoloration in the sky on the upper left side of the third image. Still, it is plainly evident that both processed images have more detail... at least at the levels that our eyes are best at detecting.
Though Steven raised a good point--this doesn't seem like it should be much different from altering the curves to enhance the image; a simple modification pushing the RGB histograms toward the center seems like it should suffice(?). (Gimp does not seem to support this assertion....)
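My take on why a single curve can't reproduce the effect: a curve is one global mapping, so two pixels with identical values must come out identical, while fusion weights each pixel by its local quality and can pull the same value from different source frames. A toy illustration (all numbers hypothetical):

```python
# Two pixels both read 0.9 in the normal frame: one is blown sky (better in
# the underexposed frame, where it reads 0.6), one is a bright leaf that is
# actually correct. A global tone curve maps 0.9 to one output value for both:
curve = lambda v: v ** 1.5          # any single monotonic tone curve
print(curve(0.9), curve(0.9))       # identical outputs, necessarily

# Fusion instead weights per pixel: the sky pixel leans on the underexposed
# frame, the leaf keeps (mostly) the normal frame's value.
normal, under = [0.9, 0.9], [0.6, 0.55]
weights = [(0.2, 0.8), (0.8, 0.2)]  # hypothetical (normal, under) weights per pixel
fused = [wn * n + wu * u for (wn, wu), n, u in zip(weights, normal, under)]
print(fused)                        # different outputs for the same input value
```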