
Portable AI Devices Aren't Ready for Prime Time

Commentary: Buzzy new products like the Rabbit R1 and Humane AI Pin aren't yet worth your money, CNET experts say. Plus, more developments from the world of AI.

Connie Guglielmo SVP, AI Edit Strategy

The Rabbit R1 is a gen-AI device that makes big claims.

John Kim/CNET

Personal devices that aim to bring generative AI into our lives have met with limited success so far. The Humane AI Pin is a Star Trek-inspired communicator that uses gen-AI software, a laser and a camera to answer questions and help you get things done. It's been criticized by CNET's Scott Stein and other reviewers for its limited functionality and high price (it starts at $699, with an additional $24-a-month service plan). "It's too frustrating for everyday use," says Stein.

Ray-Ban and Meta last year teamed up to design a pair of AI glasses that are now in public beta, according to a Meta blog post with details of the new Ray-Ban options you can choose from. (The Skyler frames offer an "iconic" cat-eye design.) Stein has been living with the glasses for a few months and says that while they're half the price of the Humane AI Pin (the Ray-Ban/Meta glasses start at $299), they're not quite ready for prime time.

"The generative AI features, which use what's called 'multimodal' AI, can react to voice prompts and also take snapshots with the onboard camera to analyze real-world things," Stein said in an updated hands-on last week. "Based on my attempts over the last few months, it's a hit-and-miss process to find when the AI is helpful and when it's not."


Now comes the Rabbit R1, a $199 handheld that's "about half the size of a phone. It has a 2.8-inch screen, a scroll wheel for navigation, an 8-megapixel camera, 128GB of storage, GPS and an accelerometer and gyroscope sensors for motion sensing," says CNET reviewer Lisa Eadicicco.

"The Rabbit R1, despite its tiny size and simple design, claims to do a lot of things. It can call an Uber, order dinner from DoorDash, translate conversations, record voice memos, play songs from Spotify and more. Your phone can already do all of those things, but [the company] is promoting the Rabbit R1 as a faster and more natural way to do so," Eadicicco says after spending a day with the device. The company's website says you can order it now and get it in June.

But the bottom line is that the device "also feels a bit like a novelty so far," Eadicicco says. "The Rabbit R1 feels fun, fresh and interesting, but also frustrating at times. It intrigues me, but it also hasn't convinced me yet that there's room for another gadget in my life."

So if you're in the market for an AI device, the takeaway is you may want to save your money -- at least for now. But as a longtime tech reporter, I know there are many early adopters out there who can't wait to be beta testers for tech companies. If that's the case for you, all I can say is have fun, play nice.

On a different note, if you're interested in getting CNET's expert take on AI products already on the market, including reviews of Microsoft's Copilot, OpenAI's ChatGPT and Google's Gemini, check out CNET's AI Atlas, a new consumer hub that also offers news, how-tos, explainers and other resources to get you up to speed on gen AI. Plus, you can sign up at AI Atlas to get this column via email every week. 

Here are the other doings in AI this week worth your attention.

Microsoft's AI video tool too powerful to share with the public

When OpenAI demoed Sora, its text-to-video generator, in February, the results were so photorealistic that they prompted one commentator to ask, "What is real anymore?" Noted Wall Street Journal reviewer Joanna Stern cautioned that the results are "good enough to freak us out." Why? In part because the tech portends the elimination of jobs for many visual creators, but also because it showcases how easy it will be for bad actors to create deepfakes.

Sora, which means "sky" in Japanese, can "create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions." See for yourself in this 10-minute demo reel that OpenAI posted on YouTube and that's already been viewed more than 2.8 million times. 

Microsoft has demonstrated a "framework for creating videos of people talking from a still image, audio sample, and text script, and claims -- rightly -- it's too dangerous to be released to the public," notes The Register. These fake AI videos "in which people can be convincingly animated to speak scripted words in a cloned voice, are just the sort of thing the US Federal Trade Commission warned about last month, after previously proposing a rule to prevent AI technology from being used for impersonation fraud."

Microsoft described the framework in an online post introducing VASA-1, which it describes as "lifelike audio-driven talking faces generated in real time." 

The tech "is capable of not only producing lip movements that are exquisitely synchronized with the audio, but also capturing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness," the Microsoft researchers write. They say they have no plans to release VASA-1, citing ethical concerns. 

"We are exploring visual affective skill generation for virtual, interactive characters, not impersonating any person in the real world. This is only a research demonstration and there's no product or API release plan," they said. "Our research focuses on generating visual affective skills for virtual AI avatars, aiming for positive applications. It is not intended to create content that is used to mislead or deceive. However, like other related content generation techniques, it could still potentially be misused for impersonating humans."

Watch the minute-long, real-time demo to determine for yourself if this is another thing we should all be freaking out about. 

Sora, by the way, remains in development, with no release date yet announced, according to the company. But an OpenAI exec told the WSJ that Sora would become available to the public later this year.

Hollywood agencies creating digital clones of top-name actors

Speaking of AI fakes, "attack of the clones" may soon be a phrase that doesn't just refer to Star Wars: Episode II. Creative Artists Agency, the talent organization that manages stars including Brad Pitt, Tom Hanks and Reese Witherspoon, has been working on AI clones of its top-tier clients, according to tech insider site The Information.

"Last fall, the talent agency started testing a new initiative called CAA Vault with certain A-listers that allows them to make a digital double of themselves," The Information reports. That includes a 3D scan of their body and face, as well as capturing their voice, to make an AI clone. 

In fact, the Wall Street Journal in December did a video deep dive into what the CAA Vault is doing to create those digital twins. "There's no way to imagine a future world without this," CAA CEO Bryan Lourd told the WSJ. You should watch the 7.5-minute video on YouTube.

This work comes after Hollywood studios and creatives reached an agreement last year outlining ways that film and production companies might use AI, while addressing concerns that they might exploit actors and writers by taking their intellectual property and replacing them with AI-generated works.

While the terms of their agreement put "historic AI guardrails in place," Wired noted, there's also "a provision allowing for the creation of digital replicas and synthetic performers [that] could, critics argue, decrease the number of jobs available to both performers and crew. This, in turn, could allow big-name stars -- and their AI-generated clones -- to feature in multiple projects at once, pushing out emerging actors as Hollywood becomes awash with synthetic performers."

CAA isn't the only company experimenting with ways to prepare its most valuable clients for an AI future. James Earl Jones, the voice of Darth Vader, signed over digital rights to his Vader voice two years ago so it could be generated by AI in future productions.

And AI has been touted as a way for moviemakers to streamline content creation by automating production instead of relying on humans, such as animators, to do the work, as DreamWorks co-founder Jeffrey Katzenberg has already noted. In February, Tyler Perry paused a planned $800 million expansion of his studio in Atlanta. He told The Hollywood Reporter that AI technology, like OpenAI's Sora text-to-video model, may cause him to rethink how his productions are made, suggesting that "productions might not have to travel to locations" since they can build digital sets with the AI tech.

Are digital twins of famous actors a good or bad thing? Like anything, I think it depends on how those clones are being used, which comes down to the creative work itself. A badly written film is a badly written film, no matter who stars in it.  

Apple details new AI models that can run on smartphones

AI will definitely be one of the most talked about subjects at Apple's Worldwide Developers Conference in June, based on research papers, startup acquisitions and even comments shared by Apple CEO Tim Cook, according to CNET's previews of the event.

But for developers and researchers interested in the nitty gritty, let's take a look at two recent research reports. Last month, Apple published MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training, a paper describing the company's work training large language models, the systems that power gen-AI chatbots. 

This past week, the company "introduced a set of tiny source-available AI language models called OpenELM that are small enough to run directly on a smartphone. They're mostly proof-of-concept research models for now, but they could form the basis of future on-device AI offerings from Apple," reports Ars Technica.

OpenELM refers to "a family of Open-source Efficient Language Models," Apple's researchers say.  

The company "also released the code for CoreNet, a library it used to train OpenELM -- and it also included reproducible training recipes that allow the weights (neural network files) to be replicated, which is unusual for a major tech company so far," Ars Technica added. 

WWDC runs from June 10 through June 14.

The Olympics will lean into AI in a big way

The International Olympic Committee this month shared its "AI Agenda" for how the technology could help sports at the upcoming games in Paris and beyond, including "being used to help identify promising athletes, personalize training methods and make the games fairer by improving judging," according to the Associated Press.

"Unlike other sectors of society, we in sport are not confronted with the existential question of whether AI will replace human beings. In sport, the performances will always have to be delivered by the athletes," the IOC said in an April 19 statement endorsed by IOC President Thomas Bach. 

"AI can help to identify athletes and talent in every corner of the world. AI can provide more athletes with access to personalised training methods, superior sports equipment and more individualised programmes to stay fit and healthy," the IOC said. "Beyond sporting performance, AI can revolutionise judging and refereeing, thereby strengthening fairness in sport. AI can improve safeguarding in sport. AI will make organising sporting events extremely efficient, transform sports broadcasting and make the spectator experience much more individualised and immersive."

The IOC's AI plans also include using AI to help protect athletes from online harassment, the AP added. 

As for how it can help identify athletes in overlooked countries, the IOC paired up with Intel and visited "five villages and analyzed the athletic ability of a thousand children, by measuring how high they could jump and how fast they could react," the AP said. AI was used to analyze the results and identify 40 kids with athletic potential. "The shortlisted kids' results were then run through an algorithm that recommended what sports they'd be good at," Christoph Schell, Intel's chief commercial officer, said.

I'm not sure the AI takes into account personalities, work ethic, views about sports and other non-biomechanical attributes that may contribute to athletic success. I guess we'll see.

The Summer Olympics will be held in Paris from July 26 through Aug. 11. Police in France will also be using AI as part of their surveillance efforts, including tracking people moving into designated areas and searching for unattended bags.

Consumers embrace AI at work, social buzz is on ChatGPT

If you're looking for the latest research on AI, let me point you to three reports.

First, the Boston Consulting Group surveyed 21,000 consumers from 21 countries and found that most were receptive to AI at work, with 70% excited about gen AI in the workplace compared with 43% enthusiastic about its impact on daily life.

BCG also identified three reasons why consumers said they were leaning into gen AI:

  • "Comfort, which improves personal well-being by helping with health, financial and other goals."
  • "Customization, meaning assistance in finding the right product or service or meeting personal objectives."
  • "Convenience, to reduce friction and effort."

"The survey found that 28% of respondents have used AI-powered visual search to find products that match or resemble items they want to buy."

The second set of research comes from Adobe, which said it found that "almost half of consumers surveyed are more inclined to shop with brands that use generative AI on their website and 58 percent believe generative AI has already improved their online shopping experience." 

Last, GlobalData turned to social media to see which of today's popular gen AI chatbots generated the most buzz. Using data from October 2023 through March 2024, the research firm said OpenAI's ChatGPT garnered the largest share of voice among the Top 10 Large Language Models, with a 69.8% share.

Not surprising, since ChatGPT became the face of natural language chatbots when it was released in November 2022. But what's surprising is how large a lead it still holds. The remaining nine most-mentioned LLMs were Google's Gemini (14.0%), Anthropic's Claude (5.3%), Grok (3.4%), Meta's LLaMA (2.9%), Mixtral (2.9%), Google's PaLM (0.6%), LaMDA (0.5%), Falcon (0.4%) and Qwen (0.3%), GlobalData said.

Dove's AI playbook combats harmful AI images of women, girls 

Dove this month conducted a study to "understand the state of beauty around the world" and found that nearly 9 in 10 women and girls said they've been exposed to "harmful beauty content online." Why? Artificial intelligence.

"With 90% of the content online expected to be AI-generated by 2025, the rise of AI is a threat to women's wellbeing, with one in 3 women [feeling] pressure to alter their appearance because of what they see online, even when they know it's fake or AI generated," Dove found.   

"While AI has the potential to foster creativity and access to beauty, with 1 in 4 women (24%) and almost 2 in 5 girls (41%) in the US agreeing that being able to create different versions of yourself using AI is empowering, there is still an urgent need for greater representation and transparency in content created by AI." 

To address the issue, Dove has pledged that it won't ever "use AI to create or distort women's images." But it aims to educate those who are using AI because "when prompting to generate images of women and female identifying individuals, the results are often over-sexualised, lacking diversity, non-inclusive and a reflection of narrow definitions of beauty."

To help address those problematic images, the company created a 72-page guide, the Real Beauty Prompt Playbook, and is sharing the PDF as a free download. It says the guide is designed for creators, parents and anyone interested in understanding how to write AI prompts that are inclusive and generate realistic imagery.

"To help set new digital standards of representation, Dove has worked together with AI experts to create the Real Beauty Prompt Playbook, sharing easy-to-use guidance on how to create images that are representative of Real Beauty on the most popular generative-AI (GenAI) tools."  

Check it out.  

Editors' note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you're reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.