September 11, 2006 4:00 AM PDT

Are fake videos next?

Dartmouth Professor Hany Farid has already devised software tools to detect when someone has tampered with digital photos. His next challenge: determining whether video or audio files have been retouched.

"I thought, 'This is going to be so much easier,' but it turns out to be much harder," Farid said. "In a minute (of) video, you are talking about thousands of images. Just the sheer mass of data that you have to contend with is challenging. You have memory and run-time issues that you don't have with (still) images."

[Photo: Hany Farid, Dartmouth professor]

The Dartmouth Image Science Group is also releasing a series of tools that will enable law enforcement officials, scientists and media outlets to detect photo fraud more easily, he said.

Faster processors, enhanced editing software and a worldwide audience have made fake and retouched photographs into a major phenomenon.

And the fakery hasn't been limited to small-time pranksters. Reuters, an international news wire service, caught heat for publishing a Beirut battle photo that contained an extra plume of smoke for dramatic effect. (Farid's software helped reveal that enhancement.) During the 2004 presidential election, fake photos disparaging both the John Kerry and George Bush camps made the rounds on the Internet. Child pornographers also employ photo retouching to skirt felony laws.

Photo trickery also has a supporting role in the controversial movie "Death of a President," which opened Sunday at the Toronto Film Festival. Criticism of the fictional film has centered in part on a publicity still that purports to show President Bush being shot--digital techniques were used to superimpose the president's head on an actor's body--along with digital melding of movie actors into actual footage of the president and his staff.

Although media fraud has centered mostly on photos, there's no reason it won't migrate on a larger scale to digital audio and video streams.

"Audio is not that difficult to tamper with. Our auditory system is fairly forgiving," Farid said. "Video is very hard to tamper with. The tools to tamper with video are not as sophisticated as those for photos, but we might as well get a jump on it."

His work with video and audio files is, so far, fairly preliminary. Farid and graduate student Weihong Wang have published a paper on video forensics, and they have three more papers in the pipeline. It may take two years or so before software emerges that can conduct forensic tests on video.

Devil in the details
Software for detecting fraud in video or audio will likely be similar to the kind employed to smoke out photo hoaxes. Roughly speaking, the software will look for unnatural anomalies in the digital transcript. Video, for instance, is interlaced: Individual images contain only half the horizontal lines that make up a picture, and the succeeding frame contains the missing lines. When the two are run together rapidly, the brain perceives a cohesive whole.

Software that highlights hiccups in the interleaving pattern could reveal edits. Different tools could conceivably be created to ferret out inexplicable light patterns, chromatic anomalies, duplication of scenes or images, or even inconsistencies with the underlying metadata. (Was a night-shot function used? Is it consistent with the image? Was the original data subsequently altered?)
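To make the idea concrete, here is a minimal sketch of what an interlacing-consistency test might look like. This is an illustration only, not Farid's actual method: the function names, the field-difference metric, and the threshold are all hypothetical.

```python
import numpy as np

def field_difference(frame):
    """Mean absolute difference between the even and odd scan lines
    (the two fields) of one interlaced frame. In untampered video the
    fields were captured a fraction of a second apart, so this metric
    tracks scene motion smoothly; a spliced or re-rendered frame can
    break that pattern."""
    even = frame[0::2, :].astype(float)   # even scan lines (field 1)
    odd = frame[1::2, :].astype(float)    # odd scan lines (field 2)
    n = min(len(even), len(odd))          # guard against odd row counts
    return float(np.mean(np.abs(even[:n] - odd[:n])))

def flag_anomalies(frames, threshold=3.0):
    """Flag frames whose field difference jumps far from the running
    median -- a crude stand-in for the statistical tests described in
    the article. Returns the indices of suspicious frames."""
    diffs = np.array([field_difference(f) for f in frames])
    median = np.median(diffs)
    mad = np.median(np.abs(diffs - median)) + 1e-9  # robust spread
    return [i for i, d in enumerate(diffs)
            if abs(d - median) / mad > threshold]
```

A real tool would model inter-field motion far more carefully, but even this toy version flags a frame whose odd field was replaced while its neighbors were left alone.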

Scene discontinuity--that is, small but inexplicable jumps--in video streams may also prove handy in detecting fraud, but so far it's been difficult to quantify continuity from one scene to another.

Similarly, unexpected patterns in background noise and duplication detection could be employed in examining audio transcripts.
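One simple version of such a background-noise test is sketched below, purely as an illustration: windows of audio whose noise level jumps abruptly relative to their neighbors could indicate a splice from a different recording. The function names and the 2x ratio threshold are hypothetical.

```python
import numpy as np

def window_rms(signal, window=1024):
    """RMS energy of each non-overlapping window of the signal."""
    n = len(signal) // window
    x = np.asarray(signal[: n * window], dtype=float).reshape(n, window)
    return np.sqrt(np.mean(x * x, axis=1))

def splice_points(signal, window=1024, ratio=2.0):
    """Indices of windows where the level jumps by more than `ratio`
    relative to the previous window -- a crude proxy for the
    noise-consistency tests the article alludes to."""
    rms = window_rms(signal, window)
    eps = 1e-12  # avoid division by zero on silent windows
    return [i for i in range(1, len(rms))
            if max(rms[i], rms[i - 1]) / (min(rms[i], rms[i - 1]) + eps) > ratio]
```

In practice one would work on the residual after subtracting the foreground speech, but the principle is the same: two recordings made in different rooms rarely share the same noise floor.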

Probability plays a significant role in fraud detection, but so does an underlying understanding of the hardware. Digital still cameras from different manufacturers, and often different cameras from the same manufacturer, use different JPEG quantization tables, Farid said. These tables determine the rate at which a camera will drop data in compressing a photograph. Farid's group has come up with software for examining the quantization tables among different cameras.

Adobe Photoshop, meanwhile, has its own distinct quantization table. As a result, the software can tell if a photo has been run through Photoshop or came from a source other than claimed.

"I can't tell you the serial number of the camera, but I can tell you this did not come from a Canon PowerShot. It came from a Nikon," he said. "You can also tell if it came through Photoshop. It won't tell you what happened to the image, but it tells you it did not directly come out of the camera."
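The quantization tables Farid describes are stored directly in the JPEG file, in DQT (0xFFDB) segments, so extracting and comparing them is mechanical. The sketch below is an illustrative, minimal parser for baseline JPEGs with 8-bit tables -- not Farid's software -- and `same_source` compares against a hypothetical reference table one might record from a known camera or from Photoshop's save-quality presets.

```python
def extract_quant_tables(jpeg_bytes):
    """Scan a JPEG byte stream for DQT (0xFFDB) segments and return
    {table_id: [64 values]}. Minimal illustrative parser: handles
    baseline JPEGs with 8-bit tables and skips 16-bit ones."""
    tables = {}
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            i += 1
            continue
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start-of-scan: stop
            break
        length = (jpeg_bytes[i + 2] << 8) | jpeg_bytes[i + 3]
        if marker == 0xDB:  # DQT segment: one or more tables inside
            seg = jpeg_bytes[i + 4 : i + 2 + length]
            j = 0
            while j < len(seg):
                precision, table_id = seg[j] >> 4, seg[j] & 0x0F
                if precision == 0:             # 8-bit entries
                    tables[table_id] = list(seg[j + 1 : j + 65])
                    j += 65
                else:                          # 16-bit entries: skip
                    j += 129
        i += 2 + length
    return tables

def same_source(tables, reference):
    """True if every table in `reference` (e.g. one recorded from a
    known camera model) matches the extracted tables."""
    return all(tables.get(k) == v for k, v in reference.items())
```

A mismatch between a photo's tables and those of the claimed camera model is exactly the kind of "did not come out of the camera" evidence Farid describes.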

In a recent court case, the police submitted photos that were taken with a security camera. An analysis of a photo submitted for evidence revealed that it had been run through Photoshop, according to Farid, who served as an expert witness. Farid said that it did not appear that the police tampered with the images in an objectionable way--often photos get cropped slightly with Photoshop--but the incident underscored how fraud could enter into a court case.

In a civil case involving personal injuries, an analysis of the pictures submitted by the plaintiff revealed that the photos were all produced under different quantization tables. "That's weird," he said.

Examining those photos
In the meantime, the Dartmouth group is porting its photo forensic tools to Java. This should enable a larger number of organizations to exploit them. So far, six of the tools have been ported to Java while the group is finalizing the port on two others: one that detects anomalous lighting and one that looks for unusual color distortions. The software porting work should be complete toward the end of the year.

Starting in 2007, the group will then likely begin to train police agencies and selected media outlets to use the software. The FBI forensics lab in Quantico may help conduct the training sessions, which will likely last a few days. The tools run on ImageJ, a freely distributed application.

"You really have to understand the algorithms to understand the code. If you are going to run the JPEG quantization table, you really have to know what a JPEG quantization table is," he said. "In the hands of someone who doesn't understand the algorithms, it can be dangerous, because they could make incorrect inferences."

Distribution remains a problem. Broadly disseminating the technology could help crack down on photo fraud; on the other hand, it could also help potential fraudsters spoof the safeguards. Most likely, distribution will be limited: Photo editors, but not freelance photographers, at mainstream media outlets may get the software.

"You do diminish the power of the software if you make it completely, widely available," he said.

Safeguards to prevent copying will also likely be employed. Farid, however, emphasizes that neither he nor Dartmouth is seeking royalties or patents on the software.

3 comments
Lessons learned
Don't use JPEG if you're going to fake photos!

(Of course trying to fake it out at the raw level will be even tougher to do undetected.)

The real lesson is that you shouldn't fake photos and expect to go undetected. Experts will almost always be able to detect meaningful alterations -- they're usually a lot smarter and more experienced in image and signal processing than the people doing the faking.

(Speaking as someone with 20 years of experience in that field, I doubt you could get a fake past me, assuming I had a few hours to examine the image closely.)
Posted by iameline (5 comments )
Problem is...
You don't need to fool the experts. Most of the time, fooling the general populace is enough to achieve your objective. Publish a faked picture of Bush torturing an Arab and 90% of the people (the same ones that forward chain mails believing they are for real, for instance) are going to believe it. And even if they later read that the picture was fake (they probably won't), their sentiments will already be altered (the brain has an ability for self-delusion and self-justification that can be very useful in spreading and maintaining FUD). So the real lesson is: teach your kids not to trust what they see, be skeptical, and treat every source of information as unreliable until proven true.
Posted by herby67 (144 comments )
What part of "Wag the Dog" don't we understand?
Having just watched Part I of ABC's dramatization "The Path to 9/11," with its disclaimers, I would say that faking videos isn't necessary. Distorted dramatizations are good enough to drive opinions. Shame on Harvey Keitel for participating.
Posted by dsherr1 (28 comments )
 
