Amid the Animoji and face recognition fanfare of this week’s iPhone X launch, the announcement that got me really excited was the augmented reality engine baked into iOS 11. With 400 million iOS users worldwide, this gives Apple a mass-market, AR-ready customer base overnight.
iPhone X also has a dual rear camera that sits vertically – this will allow AR experiences when the phone is positioned in landscape, whether held in the hands or mounted in a headset. Pimped-up edge-to-edge display, portrait lighting and ‘wireless’ (hmmm) charging aside, I think the real winner is Apple’s ability to roll out updates to devices, especially when compared with Android. No one I know with an Android phone ever seems to be running the latest OS, whereas everyone except your Gran keeps their iOS updated.
Apple’s platform for AR development is called ARKit, a toolkit for creating AR experiences on iPhone and iPad. Unlike WebVR, ARKit seems complicated to me, with lots of SDKs and bits to download that I don’t really understand. As part of my journey I need to take some time to unpick this and find out how it works. There are some fun, useful and just plain awesome examples of AR on the Made with ARKit Twitter feed.
Even with my limited knowledge of AR, however, it’s clear that Apple is onto a winner. An AR engine that is built into the OS, ready to be called by iOS apps, makes for an attractive development option.
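From what I can tell so far, the basic shape of calling that built-in engine looks something like the sketch below – a minimal world-tracking session rendered through SceneKit. This is my rough understanding only (assuming iOS 11 and an ARKit-capable device; the class name `ARViewController` is just illustrative), not a tested app:

```swift
import UIKit
import ARKit

// Illustrative sketch: a view controller that runs an ARKit
// world-tracking session inside an ARSCNView (SceneKit-backed AR view).
class ARViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking follows the device's position and orientation
        // via the camera and motion sensors; plane detection finds
        // horizontal surfaces to anchor virtual content on.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Pause tracking when the view goes away to save battery.
        sceneView.session.pause()
    }
}
```

If that really is all the ceremony needed to get camera-registered 3D content on screen, it explains why the Made with ARKit feed filled up so quickly.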
The only thing that sort of intrigues me is that all the AR applications highlighted in the iPhone launch involve holding your phone up and seeing some sort of augmented novelty. For AR to have any value for me, it has to be viewed through a lens that takes your phone out of the interaction. Holding your phone screen up to view something through it seems like such a broken form of HCI; for me, the whole point of AR is to free up my hands and stop me falling over in the street!
Away, to unpick the complexities of ARKit. I’m starting with this Apple Developers video (despite only really understanding about 20% of it :()