The Better Apple In My Eye? – VR, AR or Digital Twin

[Image: Surreal image of VR goggles on an earth-like planet]

With their new Vision Pro goggles, has Apple placed too large a bet in the consumer Augmented Reality (AR) space? Is there really a consumer market for AR, and can Apple make this one happen?

AR vs VR

First, let’s talk about AR. We see some big differences between AR and Virtual Reality (VR) solutions. Warren and I discussed the potentially vast benefits of AR versus the unreality of VR many times on the old TechTop6 podcast. Simplistically, VR takes you out of this world and inserts you into a generated one, whereas AR layers constructs onto your real world. Augmentation can be anything from slight image annotations to overlaid guidance to a complete embedding of application interfaces in 3D.

Apple Vision Pro

With Apple Vision Pro, the obvious “sell” starts primarily with thrilling 3D entertainment experiences that could be seen as “passive” VR immersion in some respects, but the Vision Pro approach does not place you as an avatar in a generated world. It really overlays phone-style (i.e., iOS) 3D apps and views onto your actual world view. By our definitions, this is more AR than VR.

Now the underlying technology approach is still questionable, as it technically inserts a computer screen between your face and the outside world. The augmentation is not so much “overlaid” as it is “computer generated” in both directions – so technically bidirectional VR. Outward-facing 3D cameras project the world “through” onto internal screens in your vision, while inward-facing cameras project your face (well, your eyes) back out into the world on external screens (both on the goggles externally here in meatspace and, if you are on FaceTime, through your FaceTime avatar). Still, if you pretend you don’t see the screens themselves, the net psychological (and perceived physiological?) effect is one of augmentation versus virtualization.

This will likely lead to arguments between AR defenders, who see good AR as building upon and evolving past VR-only solutions, and VR defenders, who will point to the total freedom to create anything within a completely virtual space. But both technologies will evolve, and someday we will probably have holographic display technology coming back around as a direct-overlay AR solution, and perhaps complete VR generation indistinguishable (at least visually) from this real world.

I do think Apple has thought about timing and made a good bet now on the AR approach, eschewing the total VR immersion path (hey Meta, bet’ya wish you’d thought this out a bit past video gaming!). I will refrain from rehashing our crotchety old-man critiques of VR, but there are plenty of relevant dystopian SciFi stories out there warning of its potential ills. Still, Apple’s current Vision Pro is quite immersive in practice, in many of the bad ways we project for VR. You have to be plugged in. You only get a couple of hours of charge. You will get goggle face. When immersed in 3D entertainment, you will be more total couch potato than participant. In the goggles, you won’t really be looking someone in the eye. Online, you will be a projected avatar while others are using “real” cameras (avatars are so VR!).

Don’t get me wrong, I really want to try them out (ignoring the price for the moment – I’m sure that will come down over time). But even Apple’s flashy, splashy launch video finishes with a quick shot of someone putting down their Vision Pro and picking up a jacket to go play outside with a child IRL. Apple fully knows the societal dangers here and is already laying the groundwork for future damage control (“see, we told you right up front”). Much like the Surgeon General’s warning on cigarettes, I think.

Digital Twin vs AR

Now if I were making my own future bed to lay in (lie in?), I’d be looking at an approach to augmenting reality rooted not in hacked VR tech, but in enhancing in-the-world IoT things so they actively interact with my online worlds. In other words, give real-world things an online presence; don’t bring generated online things into my real world. (Isn’t that the very definition of hallucination? Self-induced or not, how can that lead to a better real world?)

Consider the automobile. One type of AR would be to add a heads-up display (HUD) to cars, like in fighter jets. This has been tried with less than glorious success (e.g., displaying your current speed on the windshield). But another approach is to model your car into your online app, as in Waze, where you see your little car icon rolling across an online map as you drive down the real road in real life.

Maybe this is better described as a type of reverse AR – reality augmenting virtual space. Maybe we call it “mirror AR”? The reality augmentation here is real-life IoT instrumentation of a thing, reflected back into some kind of digital twin application. We can then interact with both the real thing and its digital twin in a parallel way that augments our real-world experience without requiring immersion in the generated twin space.

What actually got me thinking about this was an online gaming aid I just ran across. No, not a toy gun with laser sights, but an instrumented physical d20 [1] (from Particula) that you can physically roll in real life, with the resulting number automatically uploaded into your online shared gaming platform. Here we see reality augmented in a way that preserves the real-life experience (kinesthetically rolling dice) while enhancing and enriching our online presence. Instead of mashing everything down to the least common denominator as envisioned through Vision Pro avatar eyes, perhaps here we align the best of both worlds?
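The mirror-AR pattern behind that die is simple enough to sketch. Everything below is my own hypothetical illustration, not Particula’s actual SDK or event model: the physical roll is the authoritative event, and the online session is just a digital twin that mirrors it.

```python
class GameSession:
    """Minimal stand-in for a shared online gaming platform (hypothetical)."""

    def __init__(self):
        self.rolls = []  # the digital twin's record of real-world rolls

    def record(self, face):
        self.rolls.append(face)


class SmartD20:
    """Hypothetical instrumented physical die. The roll happens in real
    life; this object only reflects the result into the online twin."""

    def __init__(self, session):
        self.session = session

    def on_physical_roll(self, face):
        # Reality is the source of truth; we never generate a roll here,
        # we only validate and mirror what the physical sensor reported.
        if not 1 <= face <= 20:
            raise ValueError("a d20 has faces 1-20")
        self.session.record(face)
        return face


session = GameSession()
die = SmartD20(session)
die.on_physical_roll(17)  # pretend the sensor just reported a real roll
print(session.rolls)      # the twin now mirrors reality: [17]
```

The design point is the direction of flow: data moves from the real thing outward into the virtual space, the opposite of AR goggles projecting generated content inward onto your view.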

Augment Reality or Reality Augmentation?

Let’s face it (Do you get it, Apple? Real faces!), people evolved in the real world and aren’t going to easily adapt to living inside a digital one, no matter how seamless we try to make it. From overt “doom sickness” to what I imagine could be far more subtle long-term induced psychosis, we simply haven’t evolved to thrive in a generated space. How many senses does Vision Pro engage? Even with haptic feedback body suits coming along, we still don’t have smell or taste (much less digestive feedback) at the ready. And the feeling of pressure on a suit spot isn’t going to reproduce anytime soon all the sensations of motion, temperature, and touch we experience in the real world, from the soft caress of a light wind to the cold onrush of a storm front to falling off a log to getting poked in the eye, bitten by a mosquito, or suffering the itch of poison ivy (yes, I’ve had a tough, although very real, week!).

So I’ll just be staying right here in the real world for a bit longer.

[1] This is an affiliate link of sorts, and if you buy one, I’ll likely get something for the referral. Feel free to edit the affiliate code out of the URL and refresh if you don’t want to share the love.