
Apple Intelligence .1 Review: A small start of something big?


With the release of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, Apple is hopping aboard the generative AI train. Apple Intelligence is a suite of disparate features, first announced earlier this year at the company’s Worldwide Developers Conference, that it is gradually rolling out over the course of several software updates in the coming months.

The first round of these features includes a few different capabilities, most prominently a systemwide set of Writing Tools; summaries of notifications and email messages; minor changes to Siri (with more coming later); and tools in Photos that let you remove unwanted elements or create themed movies with just a text prompt.

It’s unquestionable that Apple is putting its weight behind these efforts, but what’s been less clear is just how effective and useful these tools will be. Perhaps unsurprisingly, for anybody who has used similar generative AI tools, the answer is a definite maybe.

A collection of Writing Tools

One of the more complete and functional pieces of the first wave of Apple Intelligence is Writing Tools, a set of language-model-based tools that are designed to refine, summarize, and reformat things you’ve written.

Writing Tools Proofreading
Writing Tools’ Proofread can be useful, but it’s also inconsistent.

The Proofread feature is meant to offer a set of intuitive corrections that go beyond a standard spelling or grammar checker (both of which are already offered by Apple’s built-in text system). It’s sort of an inbuilt version of something like Grammarly Pro, which costs $144 a year. When you choose Proofread, an animation represents Apple Intelligence “scanning” your document. (Apple Intelligence is full of this sort of razzle-dazzle. You can decide for yourself if it’s whimsical or phony.) When it’s done, a floating palette appears, letting you navigate between suggested corrections and choose whether to approve or undo each change.

Unfortunately, I found Proofread’s interface to be pretty inconsistent. Sometimes the “scanning” animation never ended. Other times, it would say that it had found numerous corrections, but didn’t show them to me or let me navigate to them. When it did work, the results were also spotty. I introduced five common mistakes (ones I make myself, all the time, like dropped words) to a document and Proofread found three. Better than nothing, but Grammarly would’ve aced the same test. Building a feature like Proofread into the OS is a great idea, but this particular feature needs more polish.

Writing Tools dialog
All Writing Tools are collected in a single view.

The Rewrite command lets you use the power of Apple’s LLM to change your writing into one of three different styles: Friendly, Professional, and Concise. Now, let me be honest: I am not the target audience for this feature, because I’m someone who writes stuff for a living. Rewrite is aimed at people who struggle with writing, with getting their points across, with striking the right tone; its goal is to produce output better than what they could manage on their own. When I wrote sample paragraphs that were halting and a bit confused and messy, Rewrite did a pretty good job of making them serviceable.

When I turned it loose on my own writing, I saw it do what LLMs tend to do—change a bunch of the words to synonyms, add two-dollar words when simpler ones would do, and drain the entire thing of personality. Look, LLMs are all about finding the middle ground, embracing clichés, and stamping out individualism. If you’re a professional writer, your job is to avoid writing samey stuff. But if you’re someone who struggles to get your point across in writing, the middle ground is exactly where you dream of being. This set of tools will be a boon for people who struggle with writing. If you’re already an average writer, though, they’ll be of no use to you.

LLMs are great at churning through content and then summarizing it, and Apple has embraced that by adding two different summarization features to Writing Tools: Summary and Key Points. Summary takes your selected text and generates a one-paragraph summary in a floating box. You can choose to have the summary replace your text, or put it on your Clipboard to use elsewhere.

Key Points is a little more expansive. It creates a bulleted list of the points in your content—basically it’s an executive summary, for the kind of people who prefer to view the world as a series of bulleted lists. Your boss might be one of those people. Key Points will be able to help you out—though I found some of its suggestions a little iffy. As with so many LLM-generated bits of content, my advice is to use Key Points and then edit it a bit to match your own understanding of the key points. LLMs are best used in conjunction with a human brain, as an aid, not as a replacement.

The last two Writing Tools are really reformatting tools, and I like them unreservedly. List takes your selection and turns it into a list—great for quickly converting a sentence that lists a bunch of stuff (separated by commas) into a bulleted list. And Table will take text that would probably be better formatted as a fancy table and turn it into that table! Computers exist to remove drudgery from our lives, not create more of it. If an LLM can be harnessed to save you a minute of formatting, or four minutes of remembering how to make a table in your text editor, that’s the right thing to do. —Jason Snell

Photos gets cleaned up

Clean Up before and after images
Background items to be cleaned up will be highlighted (left), and then intelligently replaced with generated background imagery.

After a bunch of major interface changes in iOS 18 and macOS Sequoia, Apple’s Photos app gains a few new features courtesy of Apple Intelligence.

The major addition is Clean Up, which allows you to remove unwanted items from your photographs. To use it, you need to enter editing mode and then click or tap on the Clean Up icon. Apple Intelligence will scan your photo for potentially distracting items and will highlight them with a shimmering effect. Tapping shimmering items will remove them, but you can try to remove anything at all in a given image. The best way to remove an item is probably to swipe across it as if you’re blotting it out, but you can also try just circling an item to indicate that you want it gone.

Apple Intelligence will look at the context of your entire photo and then attempt to replace the erased item with something that matches the rest of its surroundings. In my tests, it worked quite well, though occasionally it failed. Most of the failures seem to have been when there was a complex background and just not enough additional information for it to make a good guess about what should be back there.

I just wish Apple hadn’t waited until 2024 to launch this feature—and hadn’t limited it to devices powerful enough to run Apple Intelligence—since you could use the Photomator app to do this on the iPad five years ago. It’s great that it’s here, but I don’t understand the wait.

The other major Apple Intelligence feature in Photos is the ability to build a Memory—a movie and collection of photos—based entirely on a typed prompt. (This feature doesn’t appear to be available on macOS yet, but Apple says it’s coming later.) Click or tap to enter the Memories collection, tap Create to bring up an Apple Intelligence prompt, and then just type in whatever you’d like to see. Apple Intelligence will then scan your entire Photos collection—and all the while, a fun animation gives you the impression that you’re watching it do its magical A.I. work. All of these Memories are saved and are editable, just like any other Memory.

When I prompted Photos to create a Memory of “Evie on walks on trails,” it obliged—every photo and video it showed me in the resulting Memory was of my (dear departed) dog walking on various hiking trails. A prompt of “Myke smiling” generated a very smile-forward montage of my friend Myke and various times he smiled for the camera. The results aren’t perfect, but they can be impressive. —JS

Siri: New look, mostly same old intelligence

Siri's new look

Despite Apple’s marketing of a new and improved Siri, the voice assistant hasn’t changed that much with this first set of Apple Intelligence capabilities. The most obvious “new feature” is actually a new look: on iOS and iPadOS, instead of the little glowing orb that used to indicate Siri had been activated, you’ll now see a colorful wash over the entire screen, accompanied by a “ripple” effect. When you speak, the rainbow colors around the edge will pulse with the rhythm of your speech, a pseudo-waveform. As Siri recognizes your words, you’ll see the transcription appear in a gray bubble at the top of the screen.

The effect is in your face and utterly unmissable. Whether that makes it better, well, that’s in the eye of the beholder. Personally, I think it looks fresh and new, but there is something distracting about it, as if it’s suggesting that rather than Siri being something that lives in your phone, your entire device is nothing more than a vessel for the assistant. (The effect is similar on iPad and even CarPlay, though it’s more subtle on the Mac.)

That said, one actual advantage of the new design is that you can continue to use your device while the Siri interface is active, in case you need to open an app to refer to something. Siri, in other words, is more persistent. Similarly, it’s also more patient. If you stumble over your words, or you realize you’ve made a mistake and hasten to correct it, Siri will now be far more forgiving. So, for example, if you say “What’s the weather going to be in Cupertino, I mean San Francisco” you’ll get the San Francisco weather, not Cupertino or, worse, some message about how it didn’t understand you.

In general, Siri isn’t significantly more knowledgeable in this incarnation than previously. Don’t expect it to suddenly be able to answer queries that it didn’t before. There is, however, one exception: information about Apple’s own products. Apple has fed the virtual assistant its help documents, allowing you to ask questions about how to use features—within reason. You can say “How do I turn on Screen Sharing” and it will print out a list of steps. But that’s limited by the information Apple has given it—a question about how to turn on Screen Sharing via the command line yielded the same steps. Asking it how to find out how much RAM my Mac had gave me instructions for finding out how much RAM my Mac was currently using—close, but not quite right. “How to set up Time Machine” fared worse: Siri just told me it couldn’t do that.

All of that functionality raises a separate question: it’s great that Siri can tell you how to do these things, but why can’t it take things a step further and actually do them? Why do I have to ask Siri how to turn on Screen Sharing when it should be able to at least open System Settings to the right pane and show me the checkbox? (The ability to perform actions in apps is one that Apple is promising for the future, though it’s unclear when it will arrive.)

Siri Suggestions on Mac
macOS Sequoia’s Type to Siri option.

Finally, Siri also gains easier-to-access “type to Siri” options across the line. On the iPhone and iPad, you can quickly trigger this by double-tapping the Home bar at the bottom of the screen. This will pop up a keyboard and dialog box, backed by the same undulating colors. On the Mac, you can configure a keyboard shortcut to bring up a glowing dialog box that you can drag around the screen. As you type, it’ll suggest actions to you, including shortcuts you’ve created.

I can see the appeal of a multimodal approach for Siri, but in the weeks that I’ve had this feature available, I’m not sure I’ve ever used it beyond testing. To my mind, Siri is still a primarily voice-driven interface; if there’s information that I want to retrieve by typing, I’m still more likely to type it into a web browser or open the relevant app. Some of that is muscle memory, to be sure, but some of it is just a general uncertainty that remains with Siri: whether it will understand the query, whether it will return the information I’m looking for, or whether it can handle the task at all. Why waste my time with that when I know exactly where to go for the information I want? Or, frankly, when I could type the same query into Google?

Apple has promised a smarter Siri as part of Apple Intelligence, though it seems likely those improvements will not arrive until next year at the earliest. But utility remains Siri’s biggest hurdle: as the old expression goes, if you want something done right, do it yourself. Whether Apple’s changes can overcome more than a decade of users getting accustomed to Siri’s limitations remains to be seen. —Dan Moren

Check what’s in the Mail

Try as we might, we haven’t been able to get rid of email. As someone who still wades through a not insignificant number of messages every day, I was excited that Apple seemed to actually be bringing some attention to bear on Mail with three Apple Intelligence features: message prioritization, summaries, and Smart Reply. Unfortunately, they’ve been mostly underwhelming in my experience.

Apple Intelligence Priority Message
Messages from my mom are, of course, high priority…
Apple Intelligence Priority Message
…but it’s hilarious when Mail thinks messages about my Dungeons & Dragons campaign also reach that threshold.

Priority Messages has been the most disappointing for me. The promise is that Apple’s algorithms analyze incoming messages to let you know when there’s something important you need to see; Mail then moves those messages into a special highlighted section at the top of your inbox so you can’t miss them. Unfortunately, the analysis has been less than great; most days I don’t even see a priority message in my inbox (which I guess means maybe my mail isn’t important enough?). The messages it has shown me have included an emailed verification code for a login, which I guess is important, but given that Mail will already autofill that code for me, it doesn’t really seem like something that needs highlighting. I’ve also seen reports of other users getting spam messages prioritized.

Meanwhile, actual messages from, say, my literary agent or guests on my podcasts, get mixed in with all the other mail. So what are we doing here? Right now this feature feels tremendously uneven, and rather than helpfully highlighting mail you really should see, it’s mostly providing opportunities for eye-rolling and occasional amusement. It’s my hope that the results will get better as Apple improves its model.

Smart Reply, meanwhile, is an extension of autocomplete, offering you automatically generated replies to an email. Not unlike Priority Messages, this feature appears to be highly contextual: I’ve mainly seen it offered on occasions when the original message asked a specific question. (A similar feature appears in Messages.) For example, a request for my availability for a meeting might offer prompts for “Yes…” and “No…”; tapping either would quickly generate a reply along these lines:

“Hi Jason,
Yes, I am available for a meeting at that time.

Best,”

Or

“Hi Jason,
Thanks for reaching out. No, I am not available for a meeting at that time.

Best,”

The responses are perhaps a bit anodyne, likely best suited for professional interactions. I appreciate that it lets me at least sign my name, as though waiting for my imprimatur.

As with the Writing Tools, I find myself at a bit of a loss when it comes to evaluating this feature. Frankly, I’m not sure that I will ever use it—I balk even at the cases when Messages wants to offer me a “Sounds good!” reply, a thing I might actually type!—because of some inherent resistance to letting a machine speak for me. Not to mention that if I am ultimately going to have to edit an automatically generated reply anyway, am I really saving any time?

But I also know that my opinions aren’t necessarily universal. I like writing. It’s what I do for a living. I’m not saying I handcraft each email I write with the same attention to detail that I put into a novel or a review like this, but I do take pride in how I convey myself in writing. Not everybody wants to spend their time and energy on that, and definitely not everybody enjoys it. If there are people who are stressed or harried by having to write responses to lots of email, Smart Replies can certainly be a timesaver, but like any generative AI, it probably bears a human eye reviewing it—we’re not yet in “hit a button and have it send the reply for you” territory.

Finally, summaries. Much has already been written about this frequently used aspect of AI technology, so let me…sum up.

An Apple Intelligence summary
Sometimes Apple Intelligence’s summary of mail messages is useful…
Apple Intelligence summarizes a spam message
…and sometimes it summarizes spam messages.

Apple’s got a strong argument going for it that showing a summary of a message in the inbox is more useful than just a preview of the first line or two. Of course, that’s highly dependent on the contents of the message. I’ve had the summary feature summarize a two-line message, in which case I would ask: is that actually more valuable than seeing the first one or two lines of the message itself? On the other end of the spectrum, it’s gamely tried to provide a summary of the New York Times morning newsletter, which…why? For me personally, a summary has little impact on whether or not I’m going to read a message. Sometimes messages don’t get summarized at all, or the summaries show up later.

I’ve also had it summarize spam messages with the same level of enthusiasm as actual mail, because—and this is key for all of these AI language models, not just Apple’s—it has no way of telling the difference between what’s real and what’s a scam, any more than it can detect humor, tone, or subtext.

Similarly, much of the challenge with these summaries, as with their appearances elsewhere, is whether or not you can trust them. I have a daily report that gets emailed to me for an automated task running on my Synology, and on at least one occasion, the summary interpreted part of that report in exactly the opposite fashion, saying no new update was available when one was. Fortunately, it was not a critical situation by any means, but it doesn’t exactly make me confident that the summaries are going to reflect reality—and at that point, again, I’m not sure that makes this feature better than not having it.

Can summaries save you time? Sure, they can. I would argue that Apple could save me more time by doing a better job of identifying spam and not letting it get to my inbox, but ah well. Perhaps the forthcoming mail categorization features that Apple announced back at WWDC will improve upon the mail triage experience. —DM

Summary judgment

Apple Intelligence Newsroom summary
Thanks, Apple, for summarizing your web page where you post press releases.

Apple Intelligence summaries pop up in a few other places around the operating systems in these updates, including in Safari, where they’re accessible via what used to be the controls for Reader mode in the address bar. That means Reader is a little trickier to access, since it’s now just a button within that menu, though you can also tap or click and hold on the button to jump right to the mode.

The summaries are, like summaries elsewhere in the OSes, of mixed utility. One can argue they do exactly what it says on the tin: they summarize. And summarization is one thing that generative AI is reasonably good at.

Here, for example, is the summary of Jason’s Wikipedia page:

The article discusses Jason Snell, an American writer, editor, and podcaster. Snell has had a successful career in technology and pop culture journalism, covering Apple products and services extensively. He is known for his early Internet publishing work, including creating the fiction journal InterText and editing several other early Internet magazines and websites.

As a summary this is…fine. It’s straightforward, identifying broad highlights, but it’s ultimately bloodless, and I’d argue it elides certain important details (such as, for example, which magazine he spent most of his career working for). It remains unclear to me under what circumstances most people would want this summary, and it’s hard to shake the feeling that if Apple felt this feature was actually compelling, it wouldn’t be buried in Reader mode, a feature designed to enhance people’s experience of reading an article. More than anything, this feature seems to exist as a preemptive measure against people turning to summary tools from other generative AI services.

Apple Intelligence summary error
Sometimes the summarize feature warns you it’s not designed for this, but you can still go ahead anyway.

The feature has also been somewhat inconsistent in terms of when it is available. I’ve had web pages of articles where the Summarize option isn’t present at all. Sometimes summaries are automatically generated, albeit in potentially humorous places, such as Apple’s Newsroom site. Other times, I’ve tried to get a summary only to have the system warn me that the feature “isn’t designed to handle this type of content.”

Summaries are also utilized by notifications in order to collect groups of alerts that you might get from an app: for example, if you receive several text messages while you’re away from your devices, it theoretically provides a quick way to catch up when you return. Sometimes this feature is very handy, but it also often yields hilarious and strange results, misinterpreting who is commenting or even, in some cases, providing a summary that is the opposite of what actually happened. Other times the summaries seem to highlight the “machine” nature of machine learning, such as in the now-viral tale of the man who got dumped via text message: the summary may have been accurate, but it was delivered in an inhumane fashion that belied any actual understanding of the message’s import.

Apple Intelligence summarizes Home notifications
Summaries of Home notifications are more useful and less prone to error than, say, text messages.

That said, there is still some utility. For example, repeated messages from the Home app about the status of certain sensors can be grouped together with the most recent information highlighted. For these kinds of rote messages, summaries seem much handier; one gets the most relevant detail at a glance, which is what summaries should do, with little risk of misinterpreting someone’s meaning.

The idea behind notification summaries is a sound one; their usefulness comes down entirely to the accuracy of Apple’s models. Given how endemic that issue is amongst all LLMs, the industry in general has an uphill climb ahead of it. —DM

Conclusion

Of course, these updates represent only the first set of Apple Intelligence-powered tools. The second wave, including Apple’s image-focused features, is already rolling out in developer betas and is likely to arrive in public betas in the not-too-distant future. It’s probably fair to say that the features get more ambitious as they go; Apple was slow to ramp up its generative AI efforts, and this first set is more about table stakes than anything truly transformative.

The other big question is how Apple Intelligence will improve over time. Rivals like OpenAI have made a point of regularly rolling out better models; it seems reasonable to assume Apple will do the same, though that ultimately depends on how important the company truly considers these features to be. There’s certainly every possibility that what we’re seeing now represents only the earliest days for these features. But just as you should never buy hardware based on the promise of future software updates, you shouldn’t trust that future updates will improve software either: there’s always the chance that what you see now is what you get.



