This Week in AI: Apple won’t say how the sausage gets made

Hiya, folks, and welcome to TechCrunch’s regular AI newsletter.

This week in AI, Apple stole the spotlight.

At the company's Worldwide Developers Conference (WWDC) in Cupertino, Apple unveiled Apple Intelligence, its long-awaited, ecosystem-wide push into generative AI. Apple Intelligence powers a whole host of features, from an upgraded Siri to AI-generated emoji to photo-editing tools that remove unwanted people and objects from photos.

The company promised that Apple Intelligence is being built with safety at its core, along with highly personalized experiences.

"It has to know you and be grounded in your personal context, like your routine, your relationships, your communications and more," CEO Tim Cook noted during the keynote on Monday. "All of this goes beyond artificial intelligence. It's personal intelligence, and it's the next big step for Apple."

Apple Intelligence is classically Apple: It conceals the nitty-gritty tech behind obviously, intuitively useful features. (Not once did Cook utter the phrase "large language model.") But as someone who writes about the underbelly of AI for a living, I wish Apple were more transparent, just this once, about how the sausage was made.

Take, for instance, Apple's model training practices. Apple revealed in a blog post that it trains the AI models that power Apple Intelligence on a mixture of licensed datasets and the public web. Publishers have the option of opting out of future training. But what if you're an artist curious whether your work was swept up in Apple's initial training? Tough luck: mum's the word.

The secrecy is likely for competitive reasons in part. But I suspect it's also to shield Apple from legal challenges, specifically challenges pertaining to copyright. The courts have yet to decide whether vendors like Apple have a right to train on public data without compensating or crediting the creators of that data; in other words, whether fair use doctrine applies to generative AI.

It's a bit disappointing to see Apple, which often paints itself as a champion of commonsensical tech policy, implicitly embrace the fair use argument. Shrouded behind the veil of marketing, Apple can claim to be taking a responsible and measured approach to AI while it may very well have trained on creators' works without permission.

A little explanation would go a long way. It's a shame we haven't gotten one, and I'm not hopeful we will anytime soon, barring a lawsuit (or two).

News

Apple's top AI features: Yours truly rounded up the top AI features Apple announced during the WWDC keynote this week, from the upgraded Siri to deep integrations with OpenAI's ChatGPT.

OpenAI hires execs: OpenAI this week hired Sarah Friar, the former CEO of hyperlocal social network Nextdoor, to serve as its chief financial officer, and Kevin Weil, who previously led product development at Instagram and Twitter, as its chief product officer.

Mail, now with more AI: This week, Yahoo (TechCrunch's parent company) updated Yahoo Mail with new AI capabilities, including AI-generated summaries of emails. Google recently introduced a similar generative summarization feature, but it's behind a paywall.

Controversial views: A recent study from Carnegie Mellon finds that not all generative AI models are created equal, particularly when it comes to how they treat polarizing subject matter.

Sound generator: Stability AI, the startup behind the AI-powered art generator Stable Diffusion, has released an open AI model for generating sounds and songs that it claims was trained exclusively on royalty-free recordings.

Research paper of the week

Google thinks it can build a generative AI model for personal health, or at least take preliminary steps in that direction.

In a new paper featured on the official Google AI blog, researchers at Google pull back the curtain on Personal Health Large Language Model, or PH-LLM for short, a fine-tuned version of one of Google's Gemini models. PH-LLM is designed to give recommendations to improve sleep and fitness, in part by reading heart and respiratory rate data from wearables like smartwatches.

To test PH-LLM's ability to give useful health suggestions, the researchers created close to 900 case studies of sleep and fitness involving U.S.-based subjects. They found that PH-LLM gave sleep recommendations that were close to, but not quite as good as, recommendations given by human sleep experts.

The researchers say that PH-LLM could help to contextualize physiological data for "personal health applications." Google Fit comes to mind; I wouldn't be surprised to see PH-LLM eventually power some new feature in a fitness-focused Google app, Fit or otherwise.
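
For a rough sense of what "contextualizing physiological data" could look like, here's a minimal sketch of my own (not code from the paper) that turns wearable readings into a prompt a health-tuned language model could respond to; ask_model is a hypothetical placeholder:

```python
# Hypothetical illustration only: the paper describes PH-LLM's actual inputs
# and prompting; this just shows the general shape of the idea.
from statistics import mean

# Stand-in overnight readings from a smartwatch.
heart_rate_bpm = [52, 49, 47, 50, 55]
respiratory_rate = [14, 13, 13, 15, 14]
sleep_hours = 6.2

prompt = (
    "You are a sleep coach. Last night the user slept "
    f"{sleep_hours} hours, with an average resting heart rate of "
    f"{mean(heart_rate_bpm):.0f} bpm and an average respiratory rate of "
    f"{mean(respiratory_rate):.0f} breaths per minute. "
    "Suggest one concrete way to improve their sleep."
)

# ask_model() is a placeholder for querying whatever fine-tuned model you use.
# print(ask_model(prompt))
print(prompt)
```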

Model of the week

Apple devoted quite a bit of blog copy to detailing the new on-device and cloud-bound generative AI models that make up its Apple Intelligence suite. Yet despite how long this post is, it reveals precious little about the models' capabilities. Here's our best attempt at parsing it:

The nameless on-device model Apple highlights is small in size, no doubt so it can run offline on Apple devices like the iPhone 15 Pro and Pro Max. It contains 3 billion parameters ("parameters" being the parts of the model that essentially define its skill on a problem, like generating text), making it comparable to Google's on-device Gemini model, Gemini Nano, which comes in 1.8-billion-parameter and 3.25-billion-parameter sizes.
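
For a rough sense of why 3 billion parameters counts as phone-sized, here's some back-of-the-envelope arithmetic of my own (not figures from Apple's post) on how weight precision translates into memory:

```python
# Rough memory footprint of a 3-billion-parameter model at different weight
# precisions. These are my own estimates, not numbers from Apple's blog post.
params = 3_000_000_000

for label, bytes_per_param in [("fp32", 4.0), ("fp16", 2.0), ("4-bit", 0.5)]:
    gigabytes = params * bytes_per_param / 1e9
    print(f"{label}: ~{gigabytes:.1f} GB of weights")

# fp32: ~12.0 GB, fp16: ~6.0 GB, 4-bit: ~1.5 GB. Aggressive quantization is
# part of what makes running a model this size on a phone plausible at all.
```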

The server model, meanwhile, is larger (how much larger, Apple won’t say precisely). What we do know is that it’s more capable than the on-device model. While the on-device model performs on par with models like Microsoft’s Phi-3-mini, Mistral’s Mistral 7B and Google’s Gemma 7B on the benchmarks Apple lists, the server model “compares favorably” to OpenAI’s older flagship model GPT-3.5 Turbo, Apple claims.

Apple also says that both the on-device model and the server model are less likely to go off the rails (i.e., spout toxicity) than models of comparable sizes. That may be so, but this author is reserving judgment until we get a chance to put Apple Intelligence to the test.

Grab bag

This week marked the sixth anniversary of the release of GPT-1, the progenitor of GPT-4o, OpenAI's latest flagship generative AI model. And while deep learning might be hitting a wall, it's incredible how far the field has come.

Consider that it took a month to train GPT-1 on a dataset of 4.5 gigabytes of text (the BookCorpus, containing ~7,000 unpublished fiction books). GPT-3, which is nearly 1,500x the size of GPT-1 by parameter count and significantly more sophisticated in the prose that it can generate and analyze, took 34 days to train. How's that for scaling?
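
That "nearly 1,500x" figure checks out against the commonly cited parameter counts, roughly 117 million for GPT-1 versus 175 billion for GPT-3:

```python
# Quick sanity check of the scale-up, using commonly cited parameter counts.
gpt1_params = 117_000_000        # GPT-1 (2018)
gpt3_params = 175_000_000_000    # GPT-3 (2020)

print(f"GPT-3 is ~{gpt3_params / gpt1_params:,.0f}x the size of GPT-1")  # ~1,496x
```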

What made GPT-1 groundbreaking was its approach to training. Previous techniques relied on vast amounts of manually labeled data, limiting their usefulness. (Manually labeling data is time-consuming and laborious.) But GPT-1 didn't; it trained primarily on unlabeled data to "learn" how to perform a range of tasks (e.g., writing essays).
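
The trick, now standard, is to let raw text supply its own labels: the model simply learns to predict each next token. Here's a toy version of that objective in PyTorch; it's a minimal illustration of the technique, not OpenAI's code or GPT-1's transformer architecture:

```python
import torch
import torch.nn as nn

vocab_size, dim = 100, 32

# Toy "model": embed each token, then score the whole vocabulary.
# (GPT-1 used a 12-layer transformer; only the objective matters here.)
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))

tokens = torch.randint(0, vocab_size, (1, 16))   # stand-in for unlabeled text
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # the "label" is just the next token

logits = model(inputs)                           # shape: (1, 15, vocab_size)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()                                  # no human annotation required
```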

Many experts believe that we won't see a paradigm shift as meaningful as GPT-1's anytime soon. But then again, the world didn't see GPT-1 coming, either.
