One year with the Vision Pro – Six Colors


It’s been a year since the Vision Pro arrived, and its impact on the world has been nearly nonexistent. Is this a surprise? At the time, I wrote that it was “speculative and impractical,” and that judgment still stands. The Vision Pro is simultaneously one of the most impressive and most impractical products Apple has ever developed. A year on, I can’t in good conscience recommend that anyone buy one. It’s a glimpse of a possible future and a developer kit for future Apple platforms, but that’s about it.

Since the Vision Pro is a product about the future, let’s talk about the future. Where is this thing going, if anywhere? But first, it’s also worth considering how we got here.

It’s better than before

I’ve got plenty of complaints about how Apple has handled the Vision Pro rollout. There was too much hype for a product like this; Apple hasn’t produced enough immersive video content, despite immersive video being the product’s most eye-popping feature; and in general, it has failed to attract enough eager developers ready to build the next big thing.

However, I have to compliment Apple on doggedly improving the product itself. visionOS updates brought a bunch of improvements, including new environments, useful new gestures, and a keyboard breakthrough feature that keeps your physical keyboard visible in immersive environments. Apple also added Spatial Personas, which are perhaps the product’s most astounding feature. Personas were, when the product shipped, kind of a disaster. But after a bunch of upgrades and the introduction of free-floating personas in 3-D space in FaceTime, they’ve gone from being a joke to being brilliant.

Another great feature, Mac Virtual Display, also got a major update with the addition of support for virtual widescreen views that let you use a Mac with an enormous virtual monitor wherever you go.

Over the last year, visionOS has felt like a platform that is moving forward… slowly, but still moving. That’s good, because there’s much more to be done.

What Vision Pro is good at

Let’s consider our assets, shall we?

Vision Pro is a tremendous video player. I’ve watched college football games on enormous floating TV screens while in a tiny hotel room. I’ve watched 3-D movies in environments superior to any dimly lit 3-D screening room. I’ve watched Star Trek episodes playing on a virtual television sitting on a table as if it were a real object. And I’ve watched every single second of immersive video Apple has released.

Of course, all of these things I’ve done alone, not with my family, because Vision Pro is a solo device. That’s just part of the deal, and so for me it’s more of an entertainment device for travel or when I’m home alone, not part of any routine. If you live alone, you may feel differently.

3-D movies can be good. Immersive video is great, though, and has the potential to be a real game-changer. And yet it seems that a shortage of devices like the Vision Pro wasn’t the only thing holding back the creation of next-generation video content. Apple and its partners have struggled mightily to produce anything of substance in the 180-degree immersive video format. Cameras like Blackmagic’s, which can shoot immersive video, should open things up, but it’s clear that shooting and doing post-production on immersive footage is just brutally hard, or expensive, or slow, or maybe all of those things.

If there’s a single feature that would actually sell Vision Pros, it would be the creation of some sort of killer immersive video content. It could be a series of recordings of live theater performances, or sporting events, or… I don’t know what else. But I know that a motivated theater fan or sports fan with a comfy bank account wouldn’t think twice about paying $3,500 for a device that actually enabled an ongoing series of immersive experiences. None of that stuff exists right now, though. What does exist feels like a series of tech demos, though I really did like Edward Berger’s short immersive film “Submerged.”

I’ve also appreciated Sandwich Video’s experiments with the platform. The Television app is really fun, since it plays videos on augmented-reality TV sets you can place in your environment. I had hoped there would be more apps like this, that allow me to populate my world with things that look like physical objects but are actually just software. (Maybe someday.) And the Theater app is trying to bring a broad spectrum of internet content into a virtual movie theater environment.

Beyond video, I’ve found Vision Pro to be an excellent tool for shifting my own personal context. When I’m feeling frustrated or distracted and need to buckle down and get to work, I have frequently put on the Vision Pro, popped in my AirPods Pro, and dialed in an immersive environment (Joshua Tree is my favorite) so I can work with zero distractions.

I wish there were more productivity options in visionOS—I’m mostly still writing using Markdown in the fairly simple Runestone editor—but it gets the job done, and I can always break out my Mac and use a virtual display while sitting on a bed or couch and getting out of my usual desk routine. (I wrote a good deal of this article sitting on the couch, typing into a MacBook Pro, while inside Joshua Tree.)

And, yes, Mac Virtual Display is a winner. It’s not perfect—the video quality of the Vision Pro display is a little fuzzier than a real Retina Display—but it lets me use my laptop in any context, in any space. Laptops are actually kind of bad for you ergonomically since the keyboard is physically close to the display. In Virtual Display mode, I can float the display higher up, allowing me to view it at a more comfortable angle. And thanks to Universal Control, I can operate visionOS apps from my Mac keyboard and trackpad as well. The new wide and ultrawide settings add even more screen space, which is a huge deal for many users of multiple monitors.

And Vision Pro excels as a remote collaboration tool. While so few people have these devices that it’s hard to test, I’m fortunate to know a few people in my line of work who have bought them. We set up an every-two-weeks meeting inside FaceTime using Spatial Personas, and it is legitimately the next best thing to being there in person. Spatial Personas exist in real space, casting shadows. They’ve got heads and shoulders and arms and hands so that you can see facial expressions and hand gestures. When someone gets up and walks around in real space, their persona does so in virtual space. The audio is perfectly spatial and adds to the effect.

Throw in SharePlay, which also works surprisingly well in this context, and you’ve got the start of a real remote collaboration opportunity for people who work together but are very far apart. And just on a personal level, it’s special to be able to spend time with my far-flung group of friends and feel as if we’re just hanging out and shooting the breeze. That’s a thread for Apple to follow here as it envisions the future of this platform.

I also want to note how good I think visionOS is as a computing platform. This is Apple trying to build an entirely new “spatial computing” metaphor on the back of the work it has done in other areas, and I think it’s a great start. There’s more refinement to be done everywhere, but I legitimately love moving windows around, resizing things, and opening apps in visionOS. It’s delightful. I just wish I were doing more with it. I’ve got a sleek designer hammer; now I need nails.

What I’ve learned

In the early days of trying out the Vision Pro, I used it a lot. I wrote in my original review that “I have been able to use it for around six hours a day without discomfort.” It’s true! It can be done! But I never use it for marathon sessions anymore. Thanks to the Belkin Head Strap that Apple should’ve included in the original Vision Pro package, I’m more comfortable in the headset than I’ve ever been. But really, it’s more of a two- or three-hour session at most.

The problem is that I rarely find myself needing to use the Vision Pro. It’s not that I don’t enjoy using it… in fact, every time I put it on, I find myself wanting to give myself additional reasons to keep on using it because it’s so much fun in there! But the impetus to find a safe place to sit, take off my glasses, slip on a VR headset, and jack into cyberspace doesn’t come along that often. There’s enough of a barrier there that it only happens maybe once or twice a week, at most. (This is also true of my poor, neglected Meta Quest headset, which is super fun to use to play games… but which I also use way too rarely.)

This is the current challenge of Vision Pro: It needs to give users reasons to put it on, and enough of them to keep them wearing it. Right now, putting on the Vision Pro tends to lead to a short session in which I do the one thing I wanted to do in there, and then I cast about trying to find reasons to stay… and then I give up. I need more reasons to stay—more apps, more experiences, more use cases. And I need more reasons to put it on in the first place.

What needs to happen next

I have no idea how long this Vision Pro hardware is going to remain current, but I hope Apple keeps pushing its software forward. Once Siri has been upgraded and App Intents support has been added to the package, Apple Intelligence could potentially be a major upgrade to visionOS. I’m surprised how little I use Siri in visionOS, despite the fact that it’s responsive and always understands what I’m saying (since I’m wearing the device I’m talking to).

Virtual environments are a winning part of the Vision Pro experience, and Apple needs to keep pushing on that front. Environments built for individual apps should be usable anywhere in the OS. (I’d love to write an article while sitting atop Avengers Tower from the Disney+ app, for example.) Apple should also build some more boring environments, ones that might be more conducive to focus. They can’t all be Bora Bora, folks.

Most obviously, Apple needs to reach out to developers and encourage visionOS as a platform for experimentation. That probably means throwing out the rulebook and doing things like funding the development of apps, supplying free developer kits, and generally investing in the platform if it wants visionOS to succeed. I still believe there’s a killer app or two out there to be discovered, but Apple needs to use some of its plentiful financial resources to encourage developers to spend their time on a platform that’s still in its infancy. It’ll be worth it.

I also keep wondering how much more useful the Vision Pro would be if it could run (for example) all iPad apps, all iPhone apps, and maybe even all Mac apps without an intermediary. The lack of compatible apps is a black eye for the platform, and Apple needs to make greater efforts to get developers on board, even if it’s via compatibility with their existing apps.

On the hardware side, Apple needs to push as hard as it can to get the price of this device down. I couldn’t believe the early rumors that it would cost over $2,000, and look at where we ended up. I actually don’t have any complaints about the specs of the existing device—Apple really did spare no expense. The issue is that it’s all so expensive. So Apple needs to figure out what hardware isn’t actually necessary and, above all else, get the price down. Cutting the price of the Vision Pro in half won’t make it a hit or even a big seller, but every single dollar that gets lopped off the price of a future version will increase the number of devices sold.

Is this the future?

Here’s the big question: Does the Vision Pro represent a product that’s on a pathway to the future of computing, or is it a dead end?

The Vision Pro represents something inevitable: At some point, we will wear things over our eyes that annotate the world around us. The Vision Pro is Apple’s first attempt to create a product that will lead in that direction. It’s big, clumsy, and expensive because that’s the best anyone can do right now. That’s okay, so long as this is just the beginning of a longer process of evolution.

I don’t know if Apple’s vision of “spatial computing,” of a multi-windowed gestural interface, will be the final form such a device takes, or if it’ll be more of a voice-guided interface, or if everything will be intuited by an AI that’s tracking your eyes and monitoring your heart rate, or what. I do think it’s a fruitful path to explore for now, but there will probably come a time when Apple has to decide if the spatial computer that does everything an iPad, iPhone, or Mac can do is the same product as the wearable appliance that annotates your world as you walk around in it. That might be two products, or it might be that only one of those is a product people want.

Still, I can’t believe that in 2040, there won’t be something you can optionally wear on your face, like a pair of glasses that will mark up the world, whisper in your ear, listen to your speech, and shoot video of everything around you. The Vision Pro is the first step down that path. Apple needs to keep walking it. Let’s see where it leads.

If you appreciate articles like this one, support us by becoming a Six Colors subscriber. Subscribers get access to an exclusive podcast, members-only stories, and a special community.


