August 9, 2006 4:00 AM PDT
Researchers take the blur out of shaky photos
Specifically, photos of real scenes all share a similar mix of contrast gradients: a characteristic proportion of sharp transitions between bright and dark pixels on the one hand, and of smooth transitions between similar neighboring pixels on the other, Fergus said.
Blurry images, though, have a different collection of such contrast gradients. "All those crisp transitions have been smeared out," Fergus said.
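The statistic the researchers rely on can be illustrated with a toy example. The sketch below (my own illustration, not the researchers' code) counts strong intensity jumps in a synthetic image before and after a simulated horizontal camera-shake blur; the threshold of 0.2 and the image sizes are arbitrary choices:

```python
import numpy as np
from scipy.signal import fftconvolve

def strong_gradient_fraction(img, threshold=0.2):
    """Fraction of pixels with a strong horizontal intensity jump --
    a crude stand-in for the heavy tail of a sharp image's
    gradient distribution."""
    gx = np.abs(np.diff(img, axis=1))
    return (gx > threshold).mean()

sharp = np.zeros((64, 64))
sharp[:, 32:] = 1.0                      # one crisp vertical edge
psf = np.ones((1, 9)) / 9.0              # horizontal shake blur kernel
blurred = fftconvolve(sharp, psf, mode="same")
# Blurring smears the single crisp transition into many small steps,
# so the strong-gradient fraction drops.
```

Running this, the sharp image has a nonzero fraction of strong jumps while the blurred version has almost none, which is exactly the "smeared out" signature Fergus describes.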
The heart of the process is estimating how the camera moved based on the missing contrasts. At first, a very coarse estimate of the camera motion is calculated from a low-resolution version of the original image. The process is then repeated at progressively higher resolutions.
"We use the distribution of sharp gradients to guide what the true, sharp image should look like," Fergus said. "By breaking it up into smaller steps, we can successfully get out really complicated patterns that are characteristic of real camera shake."
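The coarse-to-fine loop can be sketched schematically. In the snippet below (a structural sketch only; the `refine` callback stands in for the far more involved per-scale estimation the researchers actually perform), the kernel estimate from each scale is upsampled to seed the next, finer scale:

```python
import numpy as np

def upsample(kernel, new_size):
    """Nearest-neighbor upsampling of a kernel estimate to the next
    (finer) scale, renormalized so it stays a distribution over
    camera positions."""
    idx = np.arange(new_size) * kernel.shape[0] // new_size
    k = kernel[np.ix_(idx, idx)]
    return k / k.sum()

def coarse_to_fine(image, kernel_sizes, refine):
    """Start with a tiny kernel estimated at low resolution, then
    repeatedly upsample and refine it at each higher resolution."""
    kernel = np.ones((kernel_sizes[0], kernel_sizes[0]))
    kernel /= kernel.sum()
    for size in kernel_sizes:
        if size != kernel.shape[0]:
            kernel = upsample(kernel, size)
        kernel = refine(image, kernel)   # hypothetical per-scale step
    return kernel
```

Breaking the estimation into these small steps is what lets the method recover the complicated motion paths Fergus mentions, rather than solving for a large kernel all at once.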
The result is called a blur kernel, a grid that shows where the camera spent its time pointing.
That blur kernel is then used as the basis for the second phase of the technique, a process developed in the early 1970s called "deconvolution." This attempts to reverse the specific blurring effect.
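Richardson-Lucy deconvolution, published in the early 1970s, is a classic example of such an algorithm. Below is a minimal NumPy/SciPy sketch of it (my own illustration of the general technique, not the researchers' implementation), which iteratively refines an estimate of the sharp image given the blur kernel:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, num_iters=30):
    """Richardson-Lucy deconvolution: multiplicatively update an
    image estimate so that, re-blurred by the kernel (PSF), it
    matches the observed blurry image."""
    est = np.full_like(blurred, 0.5)     # flat initial guess
    psf_mirror = psf[::-1, ::-1]         # flipped kernel for the adjoint step
    for _ in range(num_iters):
        reblurred = fftconvolve(est, psf, mode="same")
        ratio = blurred / (reblurred + 1e-12)   # avoid divide-by-zero
        est *= fftconvolve(ratio, psf_mirror, mode="same")
    return est
```

With a correctly estimated kernel and little noise, each iteration moves the estimate closer to the sharp image; with a wrong kernel or heavy noise, it amplifies artifacts instead, which is why the kernel-estimation phase matters so much.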
Ups and downs
Overall, the process takes about 10 to 15 minutes to fix typical digital camera images. However, Fergus hasn't spent time trying to make the process as efficient as possible, so there is significant room to reduce that lag.
"In Photoshop, you want a plug-in that runs in maybe a minute, tops," Fergus said. With more time spent "doing a really careful, efficient implementation, I'm sure it would become much faster and nearer to the speed you need," he added--perhaps even fast enough to run in a camera.
A little manual input is required, as well: A person must select a rectangular region of the original image where there are edges. Too small a patch yields poor results, but too large a patch takes too long, Fergus said.
In addition, the process doesn't work well for pictures with extremely bright or dark patches--a problem known as "clipping" in image-editing parlance. For that reason, the algorithm works better on "raw" images from higher-end digital cameras, which possess a greater range of light intensities.
Another hurdle comes from noisy images, such as those taken from lower-end cameras with small image sensors--especially when they've been shot in dark conditions. The speckles of noise can look like edges to the deblurring process, Fergus said.
People should not expect to see the technology in software or cameras soon. Fergus has no pretensions about the maturity of the process. "This is a first effort. There's quite a bit of work to be done before it gets into a real application," he said.