
AI and You: No to OpenAI Scraping, Don't Eat Those Mushrooms, Prompt Jobs

Get up to speed on the rapidly evolving world of AI with our roundup of the week's developments.

Connie Guglielmo, SVP, AI Edit Strategy, CNET

The New York Times isn't the only publisher, or company, saying no to OpenAI scraping its websites to help train the large language model, or LLM, that powers ChatGPT. 

In August, the Times updated its terms of service to say outsiders can't scrape any of its copyrighted content to train a machine learning or AI system without permission. Like many copyright owners, the Times is justifiably concerned that chatbots like ChatGPT, Google Bard and Microsoft Bing might be trained on its work without permission or compensation. That situation has been described as the copyright "sword" hanging over AI software companies.

Now add CNN, Reuters, the Chicago Tribune and a few news sites in Australia to the list of publishers that also opted in August to block OpenAI's web crawler, known as GPTBot, from scanning their pages, The Guardian reported.

"Because intellectual property is the lifeblood of our business, it is imperative that we protect the copyright of our content," a Reuters spokesperson told The Guardian. 

Why is this all happening now, if copyright owners have been concerned about OpenAI and other AI companies for a while? Because in August, OpenAI started letting website operators block its web crawler from slurping up their information. OpenAI made that offer even as it said, "Allowing GPTBot to access your site can help AI models become more accurate and improve their general capabilities and safety."
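For site operators, the mechanics are straightforward: OpenAI documents GPTBot's user agent token and says the crawler honors the standard robots.txt convention, so blocking it entirely takes two lines in that file:

```
User-agent: GPTBot
Disallow: /
```

Per OpenAI's documentation, operators can also mix Allow and Disallow rules to open some directories to the crawler while fencing off others.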

Interestingly, it isn't just media companies that don't want to be crawled. OriginalityAI, a company The Guardian said "checks for the presence of AI content," is tracking which of the world's top 1,000 websites are blocking OpenAI's GPTBot. As of Aug. 29, the list of companies saying no includes Amazon, Shutterstock, Quora, Wikihow and Indeed. 
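If you're curious whether a particular site has opted out, Python's standard library can run the same kind of check Originality.ai does at scale. A minimal sketch, assuming the site publishes a standard robots.txt (the domain here is just a placeholder):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical example site; swap in any domain you want to check.
site = "https://www.example.com"

parser = RobotFileParser(site + "/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

# can_fetch() applies the parsed rules to a given user agent and URL
if parser.can_fetch("GPTBot", site + "/"):
    print("GPTBot is allowed to crawl this site")
else:
    print("GPTBot is blocked")
```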

Here are some other doings in AI worth your attention.

ChatGPT leads in AI share of voice

Even with concerns over how OpenAI may be training its model – and despite or because of an investigation by the US Federal Trade Commission into how ChatGPT handles individuals' privacy – ChatGPT "dominates" conversations about LLMs on social media platforms, according to GlobalData. 

The research firm says ChatGPT has an "impressive 89.9% share of voice on platforms like Twitter and Reddit." The other six most mentioned LLMs were Google's Bard (5.7%); Meta's LLaMA (1.6%); Anthropic's Claude (1.1%); and Google's PaLM (0.8%), BERT (0.5%) and LaMDA (0.4%).  

Social sentiment around ChatGPT was also generally positive, even though influencers "highlighted the need for ethical oversight, fact-checking, and input from social scientists and ethicists to align AI systems with human values. Meanwhile, some influencers argue that hallucinations in AI could lead to creativity and understanding human conversation," GlobalData said. "Influencers have discussed how the GPT has the potential to redefine productivity and squeeze the bell curve, as it can bridge the gap between cognition and expression, allowing for creative expression."  

Google's Workspace gets an AI boost in challenge to Microsoft  

Enterprise or business customers interested in paying for AI tools have a few choices.

Google this week said users who rely on its popular productivity tools at work – including Gmail, Google Docs, Slides, Sheets and Meet – can now get an AI boost with Duet AI tools for Google Workspace.

After a 14-day free trial, the toolset costs $30 per user per month for big businesses, with pricing details for consumers and smaller businesses to be shared in coming months, CNET's Stephen Shankland reports: "The AI tools are designed to build new smarts into some of Google's most widely used services. With a text prompt, you can instruct Duet to prepare a resume template in Google Docs, draft a birthday party invitation in Gmail, add illustrations to a presentation in Slides or create a custom form in Sheets."

Duet AI was announced in May at Google's I/O Developer Conference and has been tested by more than 1 million people, the company said in a blog post. It's available to all 3 billion Google Workspace accounts.

At $30 per user for the monthly enterprise license, Google's offering is on par with Microsoft's rival Microsoft 365 Copilot, which brings AI tech from ChatGPT maker OpenAI into Office applications like Word and Excel. 

OpenAI this week also announced a version of ChatGPT to license to large businesses (even as it will compete with Microsoft's offering). Called ChatGPT Enterprise, it "offers unlimited access to GPT-4 at faster speeds. It also includes extended context windows for processing longer texts, encryption, enterprise-grade security and privacy, and group account management features," Ars Technica reports. Pricing info is still to come. OpenAI said ChatGPT is already being used "in over 80% of Fortune 500 companies." 

AI-powered misinformation is warping reality

Though many of us know you can't believe everything you see on the internet, polls and surveys showing how many people buy into misinformation and disinformation (like the lie that the 2020 presidential election was rife with fraud) suggest it's hard to parse fact from fiction online. That problem is only going to get worse, thanks to how AI tools are already being used to inexpensively create misleading and outright false content, according to reporting by CNET's Oscar Gonzalez.

"Political ads aren't the only place we're seeing misinformation pop up via AI-generated images and writing. And they won't always carry a warning label," Gonzalez found. "Fake images of Pope Francis wearing a stylish puffer jacket, for instance, went viral in March, suggesting incorrectly that the religious leader was modeling an outfit from luxury fashion brand Balenciaga. A TikTok video of Paris streets littered with trash amassed more than 400,000 views this month, and all the images were completely fake." 

So what can you do to spot AI fakery? It starts by being skeptical of the content you're looking at. It also means looking for weird AI artifacts, which could be "odd phrases, irrelevant tangents or sentences that don't quite fit the overall narrative," Gonzalez said. "With images and videos, changes in lighting, strange facial movements or odd blending of the background can be indicators that it was made with AI."

But most importantly, it means doing a periodic reality check outside your filter bubble – fact-check across multiple, reliable sources and vet the publisher. Otherwise, deepfakes, propaganda and disinformation will continue to "manipulate public opinion and disrupt democratic processes," Wasim Khaled, CEO and co-founder of Blackbird.AI, a company that provides artificial intelligence-powered narrative and risk intelligence to businesses, told Gonzalez. 

Khaled added: "This warping of reality threatens to undermine public trust and poses significant societal and ethical challenges." 

Please don't eat those mushrooms

Speaking of bad information, foragers and mycologists – aka mushroom hunters – are raising alarm that AI-generated books on foraging "could actually kill people if they eat the wrong mushroom because a guidebook written by an AI prompt said it was safe," according to reporting by Samantha Cole for 404 Media. Cole found many AI-generated books on mushroom foraging on Amazon that target beginners, and she heard from experts that these books could end up leading to someone's death.

How real a concern is it? The New York Mycological Society, or NYMS, warned on social media that the proliferation of AI-generated foraging books could "mean life or death," Cole said. She tried to track down whether the authors of these fungi guides were real people or AIs and found, using the AI text detection tool ZeroGPT, that much of the content appeared to be written by an AI.  

Amazon deleted some of the suspicious AI books after being contacted by Cole and gave her this statement: "All publishers in the store must adhere to our content guidelines, regardless of how the content was created. We invest significant time and resources to ensure our guidelines are followed, and remove books that do not adhere to these guidelines. We're committed to providing a safe shopping and reading experience for our customers and we take matters like this seriously."

OK.

In the meantime, if you're interested in mushrooms, I highly recommend a book my family owns: Mushrooms Demystified by David Arora. First published in 1979, with a second edition from Ten Speed Press in 1986, it's been described as an "encyclopedia of mushroom facts and lore." 

Sit, stay and chase that squirrel

Who says you can't teach a robot dog new tricks? Apparently not the researchers at Google DeepMind, who designed a large language model system called SayTap that's "capable of translating a variety of commands a human might give to a dog into a format a quadrupedal dog-like robot can understand," Gizmodo reports. "That model resulted in the dog-like robot being able to understand basic commands like walking forward and backward as well as more situational, complex concepts like catching a squirrel or running quickly over a hot surface."

The takeaway – other than the fact that cleaning up after a robot dog should be less messy than it is with a real dog – is that the researchers were able to have SayTap process unstructured and vague instructions, which is apparently very hard to do and marks a step forward for machine learning. 

Once again, I'll turn to Gizmodo to translate: "By just providing the model with a brief hint, the researchers were able to successfully command the robotic dogs to jump up and down when ... told 'we are going on a picnic.' In another test, the robot dog knew to start running quickly after the prompter told it to 'act as if the ground is very hot.' In maybe the funniest example, the dog even slowly backpedaled after being told to get away from a squirrel. Many real dog owners would beg for that level of obedience."

AI resources worth a bookmark 

If you're anxious to learn even more about how generative AI is changing the world, let me point you to AI hubs set up by prominent research firms. You'll find plenty of reading material on new tools and the issues surrounding AI adoption, including how jobs will change, at McKinsey, Forrester, IDC and Gartner.  

If you're looking for a guide to AI tools, check out the AI Tool Directory.

And if you're interested in learning how to talk to a generative AI chatbot, try this guide from CNET sister site ZDNET on how to write better ChatGPT prompts.

AI word of the week: Prompt engineering

Having a meaningful conversation with a genAI engine, one that gives you productive and effective answers, takes practice. It starts with asking the right questions; those questions are known as prompts. If your prompts aren't great, chances are the answers you get back won't be either, or you'll find the whole interaction a bit frustrating. That's why prompt engineering has been called a key job. This week's definition comes courtesy of NBC News and its AI glossary of "the words and terms to know about the booming industry."

"Prompt Engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI's ChatGPT, describing the tasks users feed into the algorithm (e.g. "Give me five popular baby names")."

Of course, the Harvard Business Review has already challenged the conventional wisdom that prompt engineering will be a big deal and instead calls out the need for AI experts to be good at "problem formulation." In its story "AI Prompt Engineering Isn't the Future," it describes problem formulation as "the ability to identify, analyze, and delineate problems."

Happy reading.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.