How Close Is That Photo to the Truth? What to Know in the Age of AI

AI can let you lie with photos. But you don't want a photo untouched by digital processing.

It's a thorny question I've faced with thousands of my own photos, and now it's become even thornier: How much can you edit a photo before it stops being true?

Lightroom includes a handy AI-powered tool to select the sky, letting me darken it for more color and drama. Topaz Labs' Photo AI uses a different form of AI to zap the noise speckles that are degrading a photo of a dancing child I took inside a dark Alaskan lodge. With a sweep of my mouse, Photoshop could generate a nice patch of blue sky to replace an annoying dead tree branch cluttering my shot of luscious yellow autumn leaves. Smartphones are now making similar decisions on their own as you tap the shutter button.

My own preference, shaped by my appreciation for history and part-time work as a photojournalist, is to stick closer to reality. But even that involves a huge amount of processing.

It's tempting to think of photography as an exercise in capturing the truth, turning a fleeting moment's light into a record we can store in an album or share online. In reality, photography has always been more complex.

Decades ago, photographers steered the process with film chemistry, lens selection, shot framing and darkroom alterations. Now Photoshop, smartphone image processing and generative AI make those analog-era alterations look primitive.

These days, you'd be right to question how much truth there is in a photo. When launching the iPhone 15 in September, Apple detailed the multistage processing technology it uses to build each photo. Samsung phones recognize when they're taking a picture of the moon and make heavy modifications to the image to try to show it off. Google, a pioneer in computational photography, now boasts how its Pixel 8 Pro Magic Editor software lets you zap unwanted people out of a photo's background or how its Best Take feature lets you pluck the most flattering faces from a burst of shots to create a group photo where nobody looks like a dork. Beyond your smartphone, generative AI can quickly fabricate convincing images of, say, the pope in a puffy jacket.

But before you despair that fakery has sucked the fun and utility out of photography, take a step back, because when you're judging photos, context matters.

It's true that you need to exercise more skepticism these days, especially for emotionally charged social media photos of provocative influencers and shocking warfare. At the same time, the photos you're more likely to care about personally — those from your friends, family and co-workers — are far more likely to be anchored in reality. And for many photos that matter, like those in an insurance claim or published by the news media, technology is arriving that can digitally build some trust into the photo itself.

Jeremy Garretson, a professional photographer in New York, is acutely aware of these context differences as he shifts among photojournalism, event photography, portraiture and landscapes. For him, truth in photography is on a sliding scale.

"To say that photography should be trusted as a whole is a disservice to photography and art," Garretson said. "If you're looking at a portrait of somebody, you should expect there to be some truth in that, but it's probably been retouched — maybe there are blemishes removed. On the photojournalism side, there's more trust. When I'm wearing my photojournalist hat, ethically I have a standard that I'm held to. And on the art side, there's no trust. Art is meant to be interpreted, not trusted."

Photos are an immensely important part of our digital lives, and after talking to dozens of experts, I'm convinced they'll remain so despite the trust problem. You probably don't want the complete rejection of AI photo processing any more than you want fakery to swamp your social media feed. So take a moment to consider some of the subtleties in this era when photography technology is in such rapid flux.

Digital photography 101: From light to JPEG

First, let's get something important out of the way. There is no such thing as a photo, digital or film, that hasn't been processed. Your camera never captured the objective truth of some scene. Every photograph is the product of decisions engineers made to try to produce optimal photos.

To appreciate this point, let's take a deeper look at how digital photos are actually made.

The very first moment of capture occurs when photons of light reach a digital image sensor, the special-purpose chip tasked with converting that light into pixel data. Each pixel can capture either red, green or blue, but when you see a photo, each pixel must have components of all three colors. That means cameras construct the rest with "demosaicking" algorithms that make their best guess at the missing color data — for example the red and blue information in a pixel that only captured green light.

"Two thirds of the pixels are completely made up — generated by your device, not recorded," said Hany Farid, a University of California, Berkeley professor who has studied photo authenticity for decades and who once helped launch an image authenticity startup.


Each pixel on most image sensors captures only red, green or blue color information; a digital camera has to invent extra data so each pixel has data for all three colors. This diagram of an Apple iPhone sensor shows another level of complexity, the ability either to use 2x2 pixel groups as one larger pixel through "pixel binning" or to use each pixel individually for maximum resolution, with even more processing required.

Apple; illustration by Zooey Liao/CNET

Demosaicking has been around for decades, with gradual refinements to cope with difficult subjects like hair tangles or fabric patterns.

And demosaicking has become even more complex with pixel binning technology that can group 2x2, 3x3 or even 4x4 pixel patches together into larger virtual pixels. Google Pixel phones "remosaic" these larger pixel patches to generate finer detail, then demosaic them to produce a high-resolution photo. If you take a 48-megapixel shot on an iPhone 14 Pro or 15 Pro, it's doing the same thing.

Samsung's Galaxy S23 Ultra, when producing 200-megapixel photos from its 200-megapixel sensor, goes even further. It uses AI algorithms to turn 4x4 pixel patches of uniform color into 16 individual pixels that each have red, green and blue color information.
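Binning itself is simple arithmetic. Here's a deliberately simplified sketch, assuming a bare single-channel sensor readout rather than a real quad-Bayer color layout, of how four physical pixels become one virtual pixel:

```python
# A simplified sketch of 2x2 pixel binning: four physical pixels combine
# into one larger virtual pixel, trading resolution for light gathering.
# Real quad-Bayer sensors bin same-color groups before demosaicking; this
# toy version just averages 2x2 blocks of a single-channel readout.
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of sensor values into one virtual pixel."""
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "sensor dimensions must be even"
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A 4x4 readout becomes a 2x2 binned image; averaging four samples
# cuts random noise by roughly a factor of two.
readout = np.random.poisson(lam=10, size=(4, 4)).astype(float)
print(bin_2x2(readout).shape)  # (2, 2)
```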

The smaller the pixels on an image sensor, the worse it does at distinguishing detail from noise, and the worse it handles scenes with both shadows and bright areas. That's why smartphones today composite several frames — up to 15 in the case of Google's Pixel 8 Pro's HDR technology — into one photo. Stacking multiple frames lets the camera capture shadow detail better, reduce noise and show blue skies as blue, not washed-out white. But it also means that one photo is already a composite of multiple moments.
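The statistics behind frame stacking are easy to demonstrate. This toy sketch, with made-up numbers, shows how averaging 15 noisy frames cuts random noise by roughly the square root of the frame count; real pipelines also align moving subjects and merge varied exposures:

```python
# A toy illustration of frame stacking, the core idea behind smartphone HDR:
# averaging N aligned frames reduces random sensor noise by about sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 50.0)          # the "true" light level
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(15)]

single = frames[0]
stacked = np.mean(frames, axis=0)          # one photo built from 15 moments

print(f"noise in one frame:   {np.std(single - scene):.2f}")   # ~10
print(f"noise after stacking: {np.std(stacked - scene):.2f}")  # ~10/sqrt(15), about 2.6
```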

On top of that, cameras also make assumptions about how much to sharpen edges, pump up contrast, boost color saturation, reduce noise and compress files so they take up less storage space. Vivid, bright colors can make a photo more appealing, but plenty of phones produce almost surreally blue skies and green grass. Check out Apple log-format video, a much-lauded iPhone 15 Pro ability, to see just how much editing is needed to convert what the camera sees into something that looks good.

"There's not one answer. It's whatever appeals to you," said Aswin Sankaranarayanan, a Carnegie Mellon University engineering professor specializing in imaging technology. "And every company obviously believes they do a better job than the others."

Smartphone processor maker Qualcomm spends a lot of time on photo processing, for example with AI accelerators that recognize different elements of a scene, then hand off that information to a signal processor to adjust the pixels accordingly, dozens of times a second.

"We're able to determine where it's skin, where it's hair, where it's fabric, where it's grass, where it's sky," said Judd Heape, leader of Qualcomm imaging work. The latest Snapdragon 8 Gen 3 processor can identify 12 different subject categories, including pets, eyes, teeth, hair, and sky. "We can bring out texture in fabric. We can smooth skin. We can bring out more detail in hair, make the grass greener and make the sky bluer, all in real time for photos or video."

AI refers to systems that are trained to recognize patterns in real-world data, a dramatically more powerful technology for processing photos than earlier methods like mathematically analyzing images to detect edges. AI can help a camera focus on the eye of a bird or help a smartphone ensure faces are bright enough in a photo. But it also can reproduce problematic patterns in training data, like believing all roses should be bright red.

Google, which designed its Tensor line of smartphone processors so it could accelerate AI tasks like image processing, also uses AI almost immediately when taking a photo. The camera will quickly decide whether a photo has people in it or not and send it to different AI-enriched processing pipelines as a result.

The human interpretation cameras add to photos

Color is a particularly fraught subject for cameras' automated photo processing. A subject in the shade can appear blue, since it's lit chiefly by a blue sky and not direct sunlight. Should a camera compensate for that? The famous internet debate over whether a dress was white and gold or black and blue shows how hard it is to interpret scene colors.
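One classic, deliberately simple way cameras compensate for such color casts is the "gray world" assumption: presume the scene averages out to neutral gray, then rescale each color channel until it does. Here's a toy sketch; real cameras use far more elaborate heuristics:

```python
# A toy "gray world" white balance, one classic way to neutralize a color
# cast like bluish shade: assume the scene averages to gray and scale each
# channel accordingly.
import numpy as np

def gray_world_balance(img: np.ndarray) -> np.ndarray:
    """Scale R, G and B so their means match, removing an overall cast."""
    means = img.reshape(-1, 3).mean(axis=0)   # average of each color channel
    scale = means.mean() / means              # per-channel correction factors
    return np.clip(img * scale, 0, 255)

# A scene in bluish shade: the blue channel runs hot relative to red and green.
shady = np.random.rand(64, 64, 3) * np.array([180.0, 200.0, 255.0])
balanced = gray_world_balance(shady)
print(balanced.reshape(-1, 3).mean(axis=0))   # roughly equal channel means
```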

Those spectacular photos from the James Webb Space Telescope? They're human interpretations of different layers of light data, including infrared light that's shifted to visible colors our eyes can see.

Some people have a keener appreciation than most of just how arbitrary color can be. Kevin Gill, a developer at NASA's Jet Propulsion Laboratory, is also an expert at processing photos from Mars, Jupiter, Saturn and other parts of the solar system. He has no choice but to tweak photos, since some of them are based on light humans can't see, like the infrared light that reveals Saturn's storms and bands.

"I let the data show what it wants to show," Gill said, editing raw imaging data from the spacecraft "to tell the story of what is there, as opposed to what you'd see."

Your own smartphone camera automates the same kind of storytelling job, though it starts with more ordinary light.

When you snap a selfie with your friends or photograph a beautiful landscape, it's no fun if faces are blobs of image sensor noise or if a bright sky reduces the foreground to a muddle of shadowy murk. HDR (high dynamic range) technology now baked into every smartphone combines many frames into one photo, synthesizing an acceptable version of a scene.

"Cellphone cameras used to be crappy. The noise was so large," Sankaranarayanan said.


This Apple illustration shows many of the steps an iPhone will take to convert raw image sensor data into a finished photo. Such steps include "demosaicking" sensor data to create necessary colors for each pixel, stacking multiple frames into one HDR shot for better tonal range, recognizing faces, adjusting color balance and contrast, sharpening edges and trying to eliminate noise speckles.

Apple; illustration by Zooey Liao/CNET

But HDR introduced new problems.

"In all this movement of the industry toward HDR technology, all this boosting of the shadows turned out great for light skin. You want it to glow," said Isaac Reynolds, who leads Google's Pixel camera work. "But because the industries that make smartphones are not the most diverse, we didn't have enough people internally telling us, hang on, maybe dark skin should retain some of that darkness and color and richness."

Google has spent years trying to tune its color and exposure to improve that representation, working directly with people with darker skin to hear their complaints about Google's camera technology. "What changed this year is we actually sent the engineers to the photographers," Reynolds said.

So purists decrying the heavy processing built into smartphones should be careful. It turns out that processing often produces exactly what you want.

How much processing is too much? Is that moon photo real?

Photoshop, generative AI and other image editing technology can take a photo far beyond a camera's starting point. Just how far to go can be a contentious issue.

Google arguably goes the farthest, with AI-powered editing tools like its new Magic Editor that lets you tap on people to erase them, enlarge them or move them around a scene. Generative AI can fill in the gaps, add new skies or stylize photos with entirely new tones and moods. When you take a collection of group photos, another feature called Best Take shows you all the faces for each person and lets you pick your favorite for a newly created composite image.


I deliberately blurred a photo of the full moon and put it on my laptop screen, then took these photos of it with Samsung's Galaxy S23 Ultra, which is able to recognize the moon and apply special AI processing. At left is Samsung's photo without special processing. At right is the version modified by the phone's Scene Optimizer AI technology, showing much more detail than was in the original blurred photo.

Stephen Shankland/CNET; Illustration by Zooey Liao/CNET

Google explicitly reserves these heavy AI modifications for Google Photos editing actions you have to initiate yourself after taking the shot. "We wanted to make sure that people were in control of AI like this," Reynolds said.

Such modifications have long been possible with Photoshop and other image editors, but AI makes them easier, and Google's decision to build them into its Google Photos app puts them within easy reach of millions of us.

That's raised hackles. For example, my colleague Sareena Dayaram frets that Google's AI "blurs the line between reality and fantasy."

That's similar to the response that greeted Samsung's Galaxy S23 Ultra, a phone that amplifies the native abilities of its 10x camera with AI-powered image processing that kicks in when you're photographing the moon.

Sleuths detected a suspicious level of image enhancement, for example the addition of lunar texture to photos taken of a deliberately blurred photo of the moon. (I reproduced the phenomenon in my own tests.) Samsung's approach even meant that textures appeared when a patch of the moon was replaced by a featureless blank.

Samsung denied it was simply copying a higher-resolution photo of the moon. Instead, the texture stems from the camera's attempt to spot details.


"The entire object has been recognized as the moon, which then was processed according to whether information from each pixel was a noise or a moon texture component. Within this process, there is a possibility for AI to have recognized the patch as a noise pixel," the company said in a statement about the photo with the blank patch. Samsung is working to improve Scene Optimizer, the phone feature that spots the moon, to reduce "confusion that may occur between the act of taking a picture of the real moon and an image of the moon."

The concern about processing and fabrication indicates that many of us have limits to how much processing we're willing to accept.

Taking photos beyond their original pixels

But zero processing is not the right answer. That would ban panoramic stitching, composites like a single shot capturing an entire footrace, HDR photos that blend multiple exposures, and photos with artistic expression.

Shaun Davey, an amateur but serious photographer who enjoys scenes of Exmoor National Park in the UK, is willing to zap distracting litter or tree branches for what he sees as a better shot. He edits his photos for color and tone, and observes that color choices are inherently somewhat arbitrary for night shots since humans see only in black and white when it's dark out.

"I like my photos to remind me of my perception of a place or thing. I want them to instill the mood of a place and how I felt at the time I stood there," he said. Photos often require editing to match a scene that's based on the superior dynamic range of our eyes and our brain's own processing.

If you want to use AI to manipulate your images, we're entering a golden age, because AI can be used to identify portions of an image the same way a human would, pinpointing hair, faces, skies and other subject matter. Google Photos is just one among many tools.


Topaz Labs' Photo AI software uses artificial intelligence to reduce noise and sharpen details. That can clean up photos taken at high ISO sensitivity levels that are saddled with lots of noise speckles, but it also can make up detail that wasn't in the original photo.

Photo by Stephen Shankland/CNET; Illustration by Zooey Liao/CNET

Adobe's Lightroom and Photoshop offer extensive AI tools to help you with noise reduction, selecting specific people or even just parts of them like faces or teeth, and erasing elements of a scene. Skylum's Luminar Neo image editing software uses AI to let photographers swap out skies, add fog, enhance the appearance of lakes and rivers, and otherwise dramatically change photos. Retouch4me sells one-click AI plugins to smooth skin, whiten teeth and zap blood vessels from the whites of subjects' eyes. And the ability to synthesize entirely new subjects and backgrounds with generative AI tools like Dall-E, Midjourney and Adobe Firefly adds an entirely new dimension.

Topaz Labs rose to prominence with AI tools that reduce the image noise that degrades bird photos shot at very high shutter speeds and the high ISO settings they require. The company's Photo AI software also sharpens photos, expands their resolution and gets rid of some blur. Photo AI is designed to stay true to the original photo, but it also makes up image data based on an analysis of the photo and its AI training data.

"We have to generate a bit of detail for it to look natural," Chief Executive Eric Yang said. 

Nobody wants to see a blank patch on the side of a bird where there should be some feather texture, so Photo AI adds it even if the camera initially couldn't discern it. But in Yang's view, the essence of the photo — the scene the photographer saw — remains intact.

"None of our products will meaningfully change your photo in any way," Yang said. "I view our software, and eventually AI in general, as being able to remove distractions from the core purpose of enhancing the reality and the memory of the photo."


High-end lenses on traditional cameras can blur backgrounds for a better portrait photo, as in the case of the shot at left taken with a Canon R5 camera and an f/1.4 lens. At right, an example of Apple's iPhone 14 Pro blurring backgrounds artificially with its AI-powered portrait mode.

Stephen Shankland/CNET; Illustration by Zooey Liao/CNET

Trusting photos from friends and family

To hear some tell it, AI photo editing on Google's Pixel is "destroying humanity" because "we are waging a war right now to defend the very concept of truth from those who would obliterate it."

But such fears miss a major practical point: Who is taking these photos, and who is looking at them?

When you're sharing photos with people you know in real life, there's still a strong social contract among you. Sharing a faked photo with your friends can be a form of lying. The same ethical rules against it apply, and you'll face the same consequences if you're caught. Pants on fire and all that. Photographer Garretson dislikes fake photos purporting to be real so strongly that he calls the perpetrators of such shots "sociopaths."

How far you take your photo fiddling depends on this kind of context. If you delete some distracting people from a photo's background, the morality police probably won't come after you. Blurring a background with AI for a more focused portrait conveys the same intent as blurring a background with a high-end camera lens with a shallow depth of field. A photo is a form of communication, and many photos simply convey that you and your friends and family were together on some occasion that was notable, at least to you. Striking a flattering pose is nothing new, nor is picking the group photo in which you look your best. Nobody expects every photo to capture you warts and all.

But for some more substantial alteration — moving that grizzly bear you saw in Yellowstone dangerously closer to your sister, say, or dropping yourself into a photo of the Eiffel Tower when you weren't really in France — think about how you'd have to explain yourself. If you're conveying something shocking, dramatic or gossip-worthy, be aware that the truth of a photo matters a lot more.

The same applies to your friends and family members sending photos to you. It's entirely fair to pass judgment if somebody isn't being straight with their photography. Maybe you should think twice before using generative AI to add new scenery to family photos, as Adobe suggests.

And for your own photos, you can always just be honest about what you did.

"I don't have a problem with those that use AI and or other more extreme ways of enhancing photos, but it would be great if we could simply have that declared," said Tropical Birding wildlife tour guide and photographer Keith Barnes.

Strangers on social media can easily create and share photos that are convincing but fake.

Zooey Liao/CNET

Beware social media photos

Things change dramatically when photo sharing is among strangers. That's why photos on social media are so much more fraught.

"People that post on social media — it's not their true life. It's the life that they want others to perceive," said Rise Above Research analyst Ed Lee. "It's a story."

When you're scrolling through photos on "for you" pages that are algorithmically generated, beware that there's very little accountability. And there are strong incentives to create viral posts that often pack an emotional punch, are surprising or are edited to get a lot of attention. Influencers gonna influence. Sometimes that means a shocking war image from Gaza, and sometimes that means a person looking like a celebrity.

Sure, you can try to vet the account to assess a photo's veracity, but plenty of viral posts are copied from elsewhere with little or no attribution or authenticity check. And the hassle of evaluating posts can outweigh the benefit.

"What's going to happen is at some point, that cognitive load becomes unbearable and we'll just say, 'You know what? I don't trust anything,'" Berkeley's Farid said. His advice: "Delete Twitter, delete Facebook, delete Instagram, delete TikTok."

That may not be practical or desirable for you. But at least try to employ more skepticism when you're looking at that shot of an explosion in some war zone or a stunning nature scene. Google's new "about this image" service, which delves into the company's own years-deep records of the entire web, can help.

AI makes things worse. Deepfakes — videos or photos that can convincingly reproduce celebrities, politicians, even schoolmates — are becoming steadily better. And social media is where they spread.

Some tools, like OpenAI's Dall-E and Adobe's Firefly, explicitly prohibit the creation of images of known celebrities and politicians, a safeguard that works best when such imagery is excluded from the AI training data in the first place. But open-source AI models now can bypass such restrictions, and some services are more liberal. One, Midjourney, was used to fabricate mostly convincing images of the pope blinged out in a puffy jacket and former President Donald Trump being arrested.

Facebook will require political ads to disclose generative AI use, and tools already exist to try to cut down on deepfakes. Even as Skylum CEO Ivan Kutanin sells AI-powered photo editing software, he's also aware of the dangers. His company is based in Ukraine, where bogus imagery can be a matter of life and death after Russia's invasion.

"The amount of fake news and fake photos during the war is just enormous," he said. He knows of three Ukrainian companies that offer algorithmic tools to show how likely it is that photos are true or fake. OpenAI also announced in October it has an internal tool designed to spot images made by Dall-E.

Of course, as AI gets better at spotting fakes, other AI will get better at evading the checkers. "These products will be helpful, but you've got to remember for every tool that comes out, there's somebody reverse engineering it and saying, well, they didn't check for this loophole," said Karen Panetta, a fellow of the Institute of Electrical and Electronics Engineers (IEEE) and dean for graduate education at Tufts University.


Building trust into photos when you need it

As with spam, malware and other forms of computer abuse, it'll be tough to stop bad actors from creating misleading photos. But serious efforts are underway to help good actors offer some assurances that their photos can be trusted. It takes the form of "content credentials" that can be digitally attached to a photo like an easy-to-read nutrition label.

That's important for situations like photojournalism, crime scene photos or other evidence used in court cases, and pictures you might send to your insurance company when making a claim. It won't necessarily validate that quick snap someone takes of a plane crash, but it can help.

The effort, founded at Adobe, includes the Coalition for Content Provenance and Authenticity (C2PA) that's developing the content credentials technology and the Content Authenticity Initiative (CAI) that's encouraging its adoption. Companies involved include camera makers like Canon, Nikon and Sony; media companies like the BBC, The Associated Press, The Wall Street Journal and The New York Times; chip designers like Arm and Intel; and tech companies like Microsoft, Fastly and Akamai. That's a solid foothold among organizations with enough clout to help the technology catch on in the real world.

"We believe the answer to combating misinformation is empowering trust and transparency," said Santiago Lyon, lead advocate for CAI at Adobe. "Instead of trying to catch everything that's fake, give everyone a way to show their work so they can prove what's real."

The CAI technology builds a log tracking changes to a photo, like brightening exposure, smoothing a subject's skin or compositing in an AI-generated flock of birds. The changes along the way are signed by the parties making them: one cryptographic signature for the camera, perhaps, another for the image editing software and another for a newspaper that cropped the photo before publishing.

If you're evaluating a photo, you can upload it to the content credentials website to see its editing history, but backers hope we'll eventually get a little "cr" tag on the photo that'll reveal the credentials with a single tap or click.

Because those signatures are embedded by supporting software like Photoshop, any change somebody makes outside that chain is evident in the credentials.
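The underlying cryptographic idea can be sketched in a few lines. This is an illustration of the signing concept only, not the actual C2PA data format; the key names and edit-log structure here are invented for the example:

```python
# A minimal sketch of the signing idea behind content credentials, NOT the
# real C2PA format: each party signs a hash of the image bytes plus the
# edit log, so any unsigned modification breaks verification.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_edit(image_bytes: bytes, edit_log: list[str], key: Ed25519PrivateKey) -> bytes:
    """Sign a digest covering the current image state and its edit history."""
    digest = hashlib.sha256(image_bytes + json.dumps(edit_log).encode()).digest()
    return key.sign(digest)

def verify_edit(image_bytes: bytes, edit_log: list[str], signature: bytes, public_key) -> bool:
    """Check that the image and edit log match what was signed."""
    digest = hashlib.sha256(image_bytes + json.dumps(edit_log).encode()).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

camera_key = Ed25519PrivateKey.generate()    # hypothetical per-camera key
photo = b"...raw image bytes..."
log = ["captured by camera"]
sig = sign_edit(photo, log, camera_key)

print(verify_edit(photo, log, sig, camera_key.public_key()))             # True
print(verify_edit(photo + b"tamper", log, sig, camera_key.public_key())) # False: unsigned change
```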

Leica in October announced the first camera that can write content credentials directly into a photo file at the moment of capture, the $9,195 M11-P. In November, Sony announced its A9 III camera, a $5,999 but still more mainstream product, also will support C2PA credentials. A firmware update will retrofit C2PA to the existing A1 and A7S III cameras, too.

Truepic, which has developed photo authenticity software for years, worked with smartphone chipmaker Qualcomm to run content credentials directly in a Snapdragon 865's trusted execution environment, an indication that the technology could someday be an option on ordinary smartphones.

Truepic's technology is used in a web app you can use to take photos for insurance claims. It logs associated data like the time, location and directional orientation of each photo as it's taken and uploaded. By controlling the capture technology, it can vouch that a photo is authentic.


Content credentials technology is now built into a few cameras, including the Sony A9 III, helping photojournalists or others prove their photos are authentic.

Zooey Liao/CNET

In the future, cameras will simply have a content credentials option built in, easy to toggle on and off, predicts Truepic public affairs leader Mounir Ibrahim. It won't be easy building the necessary ecosystem to add, maintain and display content credentials, but it'll be worth it: "This is the best and most scalable option we have for authenticity," he said.

Some potential allies aren't on board. Google adds disclosures to image files when its Google Photos app or Bard generative AI service edits or creates images. But it skips content credentials, instead writing the information as textual metadata. That could change.

"We'll continue to fine-tune our approach over time to help create more transparency," Reynolds said.

Old-school photo trust techniques

Even without high-tech tracking, there are old-school ways of verifying photos and videos, and that's important to news organizations for whom trust is critical. Publications like The New York Times often work hard to verify imagery purportedly posted from the war zone in Ukraine, and CBS News Chief Executive Wendy McMahon reviewed thousands of photos and videos from the armed conflict between Israel and Hamas in Gaza. Only 10% were reliable enough to use, she said.

National Geographic, famed for decades of photojournalism work, goes to great lengths to ensure authenticity. Photo editors review every single photo taken in raw form — usually 30,000 to 50,000 shots for each assignment, but sometimes as many as 120,000 — said Sadie Quarrier, deputy director of photography.

"That's not to say that we actually question our National Geographic photographers and their truthfulness, but it allows us to also see how a photographer works," Quarrier said. And for something like a panoramic image stitching together multiple frames, the magazine discloses what's been done to create the photo. "We have spent more than 135 years maintaining this trusted brand."

The company is conservative about editing latitude, especially when it comes to shifting colors in a photo, but generally is guided by the idea that it's OK to reveal information originally captured in the raw file.

And she knows a thing or two about photographers pushing things too far. While judging a wildlife photography contest in 2022, she and other judges requested raw files to see how far entrants had taken their edits.

"We did have to disqualify some pictures," she said. "It was clear that they photo-manipulated once we saw the raw."

It's great when a news organization invests time in validating imagery, but we'll inevitably encounter other photos that'll force us to exercise similar judgment, especially on social media, where trust can be in short supply.

Critically assessing that photo

Assessing a photographer's motives can help you determine a photo's trustworthiness. Figuring out that motivation is easier with personal shots from people you know and from professionals publishing photos in newspapers.

If you don't or can't know the motives, it's time to treat the photo more skeptically before believing it or sharing it yourself. That's especially true for social media photos that are shocking, emotionally punchy or outrageous.

That's not all you can do to help push back against fake photos. Exercise some restraint with your own photo editing. If you do something significant, like swapping in a dramatic new sky or using generative AI to expand a landscape, don't be afraid of mentioning your artistic license in your caption. If you get that option for using content credentials, consider using them for photos with documentary value.

And you can complain to social media sites — and the regulators who govern them — if they aren't doing enough to keep misinformation and fakery at bay.

"I want to value truth and honesty and integrity and decency and civility," Farid said. "We need for the public to start saying you know what? I'm sick of being lied to."

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.


Visual Designer | Zooey Liao

Video | Chris Pavey, John Kim, Celso Bulgatti

Senior Project Manager | Danielle Ramirez

Director of Content | Jonathan Skillings