It's one of the oldest, most common problems in photography: that picture you thought would be the prize shot is out of focus.
Refocus Imaging, a Silicon Valley start-up, thinks its technology can be used to make cameras that can fix that problem--after you take the photo.
By fitting a camera's image sensor with a special lens and then processing the resulting data with new methods, Refocus Imaging's technology will let photographers fix their photos and exercise new creative control after the shutter is released, founder and Chief Executive Ren Ng said.
"There's a lot of physical stuff in the camera that is limiting its performance," Ng said. "What we're doing is to capture much more than a two-dimensional photograph inside the camera...By collecting the light, we can process it in software to do what the hardware usually has to do."
And the technology boosts some aspects of camera performance in the process, he said. Ng said he hopes to license it to camera companies, and boasts that Refocus Imaging's patent portfolio is "very, very good."
The technology, which stems from Ng's research at Stanford University, is an example of computational photography, which augments traditional image capture with computers--either in the camera or on a PC--to achieve new possibilities.
Examples from Refocus Imaging included here show how the technology works. The slider at the right of each graphic shifts the point of focus from foreground elements to those in the background, and clicking on a different area of the image brings it into focus.
Ng also showed the technology off at the 6sight digital-imaging conference in November.
The way Ng sees it, the Refocus Imaging technique has several possible advantages. For one thing, being able to focus images after the fact means that cameras could take a picture sooner without waiting for an autofocus mechanism to lock in. For another, because the depth of field also is adjustable along with focus, a pro photographer could fine-tune a picture to properly blur a background or get just the right amount of a subject in focus.
"One way to think of it is just a raw image, except to the nth degree," Ng said, referring to the raw images that higher-end cameras can record directly from the image sensor, leaving processing choices to the photographer. "It contains a ton more information than a raw picture today. There are all kinds of creative controls you couldn't even conceive of now."
Another advantage is that the technology works better in low light, he said. And by transforming the light computationally rather than relying solely on the camera's lenses, a computing system can correct aberrations to improve lens sharpness, heighten lens contrast, and lower manufacturing costs.
Sounds swell, right? Well, there's still no such thing as a free lunch.
A lot more image processing is required, for one thing, though Ng rightly points out that camera processors are steadily improving. A bigger drawback is that the final image the camera produces doesn't use the image sensor's full resolution.
At this stage in the company's research, Ng isn't willing to discuss exactly how much resolution is lost in the process. "You can get gorgeous 4x6 prints or (larger), and take those much more dependably," he said.
Refocus Imaging's ideas are related, in a sense, to an image sensor that can see in 3D, which another Stanford researcher, Keith Fife, demonstrated earlier this month. That sensor also uses tiny lenses, but his are built directly into the sensor, with each lens dedicated to a particular subarray of sensor pixels.
Ng's company is one of several organizations researching the idea of the "light field," which describes all the light entering a camera, not just the subset that gets photographed with a particular camera setting. Ng offers an analogy: where a photograph is like an X-ray image, the light field is like a three-dimensional CT scan that lets a doctor effectively look at the interior of a person from any direction.
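To make the light-field idea concrete, here is a minimal NumPy sketch of the shift-and-sum approach commonly used to refocus a light field after capture. This is an illustration of the general technique, not Refocus Imaging's actual algorithm; the `refocus` function, the toy data, and the `alpha` parameter are all assumptions for the example. A captured light field is treated as a grid of sub-aperture images, one per position on the lens aperture; refocusing shifts each view in proportion to its aperture offset and averages them.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetically refocus a 4D light field by shift-and-sum.

    light_field: array of shape (U, V, H, W) -- a grid of sub-aperture
    images, one per (u, v) position on the lens aperture.
    alpha: refocus parameter; each view is shifted in proportion to its
    offset from the aperture center, then all views are averaged.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Integer shifts via np.roll, for simplicity; a real
            # implementation would interpolate fractional shifts.
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Toy light field: a single out-of-focus point that drifts one pixel
# per aperture step, so alpha = -1 brings it back into sharp focus.
U = V = H = W = 5
lf = np.zeros((U, V, H, W))
for u in range(U):
    for v in range(V):
        lf[u, v, (2 + (u - 2)) % H, (2 + (v - 2)) % W] = 1.0

sharp = refocus(lf, alpha=-1.0)   # all views align: one bright pixel
blurry = refocus(lf, alpha=0.0)   # views disagree: energy smeared out
```

Sweeping `alpha` over a range of values is what produces the "focus slider" effect: each value corresponds to a different virtual focal plane, all computed from the same single exposure.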
One light field research project at Stanford in the 1990s used an array of 100 cameras, all taking a photo of the same subject, then compressed the resulting image data into a representation of the light field. With Refocus Imaging's technology, Ng said, "we can make that compression in a single device."