• Augmented Reality in your pocket, overnight

    Amid the Animoji and face recognition fanfare of this week’s iPhone X launch, the announcement that got me really excited was the augmented reality engine baked into iOS 11. With 400 million iOS users worldwide, this gives Apple a mass-market, AR-ready customer base overnight.

    iPhone X also has a dual rear camera that sits vertically – this will allow AR experiences when the phone is positioned in landscape, whether held in your hands or mounted in a headset. Pimped-up edge-to-edge display, portrait lighting and ‘wireless’ (hmmm) charging aside, I think the real winner is Apple’s ability to roll out updates to devices, especially when compared with Android. No one I know with an Android phone ever seems to be running the latest OS, whereas everyone except your Gran keeps their iOS updated.

    Apple’s platform for AR development is ARKit, a toolkit for creating AR experiences for iPhone and iPad. Unlike WebVR, ARKit seems complicated to me, with lots of SDKs and bits to download that I don’t really understand. As part of my journey I need to take some time to unpick this and find out how it works. There are some fun, useful and just plain awesome examples of AR on the Made with ARKit Twitter feed.

    Even with my limited knowledge of AR, however, it’s clear that Apple is onto a winner. An AR engine built into the OS, ready to be called by iOS apps, makes for an attractive development option.

    The only thing that sort of intrigues me is that all the AR applications highlighted in the iPhone launch involve holding your phone up and seeing some sort of augmented novelty. For AR to have any value for me it has to be viewed through a lens which takes your phone out of the interaction. Holding your phone screen up to view something through it seems like such a broken form of HCI; for me the whole point of AR is to free up my hands and stop myself falling over on the street!

    Away, to unpick the complexities of ARKit. I’m starting with this Apple Developers video (despite only really understanding about 20% of it :()

  • Where we’re going, we won’t need radio buttons

    This week I’ve been playing with this fun adaptation of Google’s Material Design by Etienne Pinchon. Lovely work, looks great through a headset and scales well.

    Material Design goes VR.

    The more I played with this though, the more I wondered what sort of application I would find for it. Will we really need old-school radio buttons in VR? Do they need to look like the Google UI we’ve grown up with? As a culture that still uses a floppy disc icon to visually represent ‘save’, this is not as silly as it sounds. Applications will need this sort of functionality (sliders, selectors, buttons) but perhaps in VR they should be represented by a 3D artefact that is more suited to the environment?
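    Just to make that concrete, here’s a rough sketch of what I mean – not anything from Etienne’s demo, just my own guess at a 3D ‘toggle’ built from plain A-Frame primitives and a gaze cursor (the positions, colours and lever shape are arbitrary values to play with):

    <script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>
    <a-scene>
      <!-- a lever-style stand-in for a radio button: a small tilted cylinder -->
      <a-cylinder id="toggle" position="0 1.2 -2" radius="0.15" height="0.5" rotation="0 0 30" color="#FFC65D"></a-cylinder>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
      <!-- gaze-based cursor so the lever can be 'clicked' in Google Cardboard -->
      <a-entity camera look-controls position="0 1.6 0">
        <a-cursor></a-cursor>
      </a-entity>
    </a-scene>
    <script>
      // flip the lever and change its colour each time it is clicked
      var toggle = document.querySelector('#toggle');
      var on = false;
      toggle.addEventListener('click', function () {
        on = !on;
        toggle.setAttribute('rotation', on ? '0 0 -30' : '0 0 30');
        toggle.setAttribute('color', on ? '#7BC8A4' : '#FFC65D');
      });
    </script>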

    Food for thought, and certainly a fun play!

  • Intro to VR reading

    I’m just back from a summer holiday where I’ve spent my down-time reading about VR, rather than making. The best find was this site, a list of VR and AR resources curated by Max Glenister. It’s the motherlode of top resources:

    uxofvr.com

    I’ve picked out my favourites:

    Get started with VR: user experience design by Adrienne Hunter
    Excellent starting point for a complete beginner.

    Practical VR Design by Ryan Betts
    One of the most comprehensive guides I’ve read on starting out in VR design. Links out to just about everything you need to get started. I’ll be revisiting this.

    From product design to virtual reality – Jean-Marc Denis
    A product designer’s journey into designing for VR. I would love to eventually make this leap; this is a thoughtful read.

    Immersive Design – Learning to let go of the screen by Matt Sundstrom
    How to think outside the square, rather than porting traditional 2D screen-based designs to VR. I’m regularly overwhelmed by VR jargon and liked the way this article explained concepts with simple illustrations.

    Must-watch vids by Mike Alger:

    I’m slowly accumulating so many tweets and interesting reads – I’ve set up a page to keep track of them here: AR/VR links

  • Can I use A-Frame for AR?

    While A-Frame WebVR was fun and certainly easier to code than I expected, my ultimate aim is to create an augmented reality (AR) experience. Could A-Frame help with this?

    After a few false starts I stumbled upon this GitHub repo with a JS file promising to turn my A-Frame experiment into augmented reality with just “10 lines of code”. Surely too good to be true? In addition to the basic A-Frame code, you need to reference the aframe-ar.js file and change the scene tag to specify ARToolKit (an open-source library which I presume targets your phone’s camera?). Lastly you need to add a Hiro marker, which acts as a real-world anchor on which your AR creation will appear.

    Below is the standard Hiro marker but you can apparently design your own and ‘train’ ARToolKit to recognise it. Super cool! Hiro markers have specific criteria: they must be flat and have a certain amount of contrast (the first one I experimented with was cut out too small and it didn’t work).

    Standard issue Hiro marker

    Here’s the code you end up with:

    <script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>
    <script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"></script>
    <body style="margin: 0px; overflow: hidden;">
      <a-scene embedded artoolkit="sourceType: webcam;">
        <!-- any colour you fancy -->
        <a-box position="0 0 0.5" color="#4CC3D9" material="opacity: 0.5;"></a-box>
        <a-marker-camera preset="hiro"></a-marker-camera>
      </a-scene>
    </body>

    I was amazed it worked first time: a magical 3D cube appeared on the Hiro marker. Small children gathered to look, impressed with my bogus AR wizardry. Connect to your iPhone (as in previous post), pop your phone into a Google Cardboard and you have a 3D cube balanced in your hand. I revisited my file, played with the colour and opacity then got different shapes working.
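    For anyone following along, the ‘different shapes’ bit is just a case of swapping the box for other primitives inside the same scene – something along these lines (the colours, sizes and positions are guesses I tweaked by trial and error):

    <script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>
    <script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"></script>
    <body style="margin: 0px; overflow: hidden;">
      <a-scene embedded artoolkit="sourceType: webcam;">
        <!-- a semi-transparent sphere and a cylinder instead of the single box -->
        <a-sphere position="0 0 1" radius="0.4" color="#EF2D5E" material="opacity: 0.6;"></a-sphere>
        <a-cylinder position="0 0 0.3" radius="0.3" height="0.6" color="#FFC65D"></a-cylinder>
        <a-marker-camera preset="hiro"></a-marker-camera>
      </a-scene>
    </body>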

    A-Frame AR – magical!

    Still not much closer to working out how to make my AR prototype, but certainly learning lots along the way. I’m appreciating that this process will be more about the journey than the destination (and I’m cool with that).

  • ‘Imperfect VR’ with A-Frame

    I decided to start my journey into the VR unknown with A-Frame. A-Frame is an open-source web framework for building VR experiences, maintained by Mozilla and the WebVR Community.

    With my current skillset this seemed to offer the lowest barrier to entry as A-Frame can be developed from plain HTML files without having to install anything. I can code basic front-end so this seemed a logical first step; the WebVR community also seemed a kind and welcoming place for a clueless newb.

    To kick-start my making I did a short workshop called Imperfect VR, an A-Frame course by games designer and creative coder Michael Straeubig. The session offered the chance to make “a small, quirky VR experience with A-Frame and Google Cardboard” aimed at complete beginners. Perfect! The most valuable part for me was the setup and how to get started: downloading sample code, viewing it on a live server and – the magical bit – getting it to work on your phone to experience through Google Cardboard. Michael provided a basic HTML setup which we quickly transformed into our first ‘hello VR world’ experience.

    I thought I’d document the process here before I forget all the steps. Not sure if anyone else actually cares; after a sunny afternoon spent inside a dark gallery space even I feel like I’ve hit peak geek.

    1. Install and set up:

    • Download some sample code like this ‘hello world’ boilerplate from Github.
    • Install a code editor if you don’t have one already. We used Atom.io which is free and easy to use.
    • Drag your boilerplate folder into Atom and open the index.html file.
    • Next you need to add live server functionality to Atom by installing a package. Stay with me, it only sounds technical. Go to:
      Atom > Preferences > + Install and search for live-server.
      Select atom-live-server from the list. Then in the top nav choose
      Packages > atom-live-server > Start server. This will open your index.html file in your browser.
    Installing live server with Atom

    Start live server to view your experience in the browser

    2. Get tinkering:

    If you know some HTML then A-Frame is fairly muppet-proof. Here’s the basic setup –

    <a-scene>

      <a-sphere></a-sphere>      <!-- gives you a sphere -->
      <a-box></a-box>            <!-- you guessed it: a box -->
      <a-cylinder></a-cylinder>  <!-- righto -->
      <a-plane></a-plane>        <!-- the ground objects sit on -->
      <a-sky></a-sky>            <!-- the background that wraps around the whole 360 -->
      <a-entity position=" "><a-camera></a-camera></a-entity>  <!-- where you (the camera) are initially viewing the scene from -->

    </a-scene>

    A-Frame elements. Source: aframe.io

    You then build this out with various attributes as you would with HTML. For example –
    <a-cylinder position="1 0.7 1" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>

    You can also add images which wrap around your 3D objects or background. To add a background to the whole piece, download an image into the same folder and add a src like you would for a regular image –
    <a-sky src="clouds.jpg"></a-sky>
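    The same trick works on the shapes themselves. Assuming a texture.jpg file sitting in the same folder, something like this wraps the image around a box –
    <a-box position="0 0.5 -3" src="texture.jpg"></a-box>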

    Basic code to get started
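    Putting the pieces together, a minimal index.html ends up looking something like this (the colours and positions are just example values to play with):

    <!DOCTYPE html>
    <html>
      <head>
        <script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>
      </head>
      <body>
        <a-scene>
          <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
          <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
          <a-cylinder position="1 0.75 -3" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>
          <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
          <a-sky src="clouds.jpg"></a-sky>
          <a-entity position="0 1.6 0"><a-camera></a-camera></a-entity>
        </a-scene>
      </body>
    </html>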

    3. View it on your phone:

    • Ensure both your phone and computer are on the same wifi network.
    • Find your IP address. On a Mac, hold down Alt and click the wifi icon in the menu bar; your IP address will display in the wifi dropdown.
    • Type your IP address into your phone’s browser, followed by :3000. The 3000 needs to correspond to the port number in your browser’s address bar. If the number is different, change accordingly.
    • Hang around for a bit, this can take a few minutes to work.
    View on your phone

    Our objective was ‘VR face’ like the dude at the bottom

    Et voilà, VR on your phone! Pop into a Google Cardboard and admire your handiwork.

    View on Google cardboard

    Feel like a real coder (rather than a shabby VR imposter).

    A brilliant workshop, would highly recommend. Now to get experimenting. Onwards!

  • Setting parameters

    ‘Making something in virtual reality’ sounds sort of vague so I decided to set my project some parameters. While I can see a fully immersive VR experience getting serious traction in gaming and entertainment, it’s too isolating an experience to hold much appeal for me.

    I can’t wait to go on dates like this in the future. Source: Shitty stock photo

    What interests me is Augmented Reality (AR), where a computer-generated image is displayed over a user’s view of the real world through glasses or a lens.

    AR has some potentially exciting uses. This mind-map provided a good starting point for my thinking:

    How we might use Augmented Reality. Source: Mogens Skjold

    The problem I would like to solve is accessing information safely and easily while on the move. Throughout the day I extract my phone from my bag to read emails and messages, check my calendar, locate things and navigate around town. This typically involves me rummaging through my giant work bag, locating my phone, unlocking it and then trying to read from the small screen while walking. Living in central London I’ve fallen over countless times and had three theft attempts by the guys on mopeds who ride onto the pavement and make a grab for your phone (try harder next time, losers). If this information could be displayed in front of my face it would improve my safety, my work-life and my commute.

    Augmented reality would involve wearing a lens or headset, which practically speaking would need to be remembered, charged and worn each day. I’m excited to see a number of AR glasses currently in development (though these are the only pair I’ve seen so far that don’t make you look like a glasshole).

    Luke Wroblewski recently posted on the ‘value to pain’ ratio of wearing an AR headset to establish at which point the headset’s value outweighed the pain of charging and wearing one. He suggested some examples which could prove compelling enough to make wearing an AR headset worthwhile for the user.

    What would augment reality? Source: Luke Wroblewski

    Twitter responded with some cool suggestions for AR prompts which provided me with more inspiration. Some made me laugh (I love you internets):

    With this in mind, here are the top five things I would be willing to wear an AR headset for:

    1. Email/WhatsApp/text notifications. These could either disappear after a couple of seconds or display full text when prompted by a tap or voice command. I receive message notifications on my Apple Watch but it doesn’t allow me to tap through to full text (I find this frustrating!).
    2. Display calendar notifications, with reminders set to my preference.
    3. Contextual commute notifications telling me my tube station is closed or suggesting a faster route.
    4. Maps and directions to the nearest [whatever]. Usually the bathroom in my case.
    5. Location-based prompts that tell me to do something at a particular place (ie. buy milk when I’m standing outside the shop).

    There are also a few other ‘nice to haves’ which would improve my quality of life:

    • Estimated waiting time/availability in coffee shops/restaurants.
    • Product reviews while buying from high street stores. I regularly pull out my phone in shops and check Amazon reviews before purchasing in-store.
    • Price comparisons to show whether I am getting a ‘good deal’ on goods/services. From petrol to currency exchange it would be nice to see if the store a few doors down offered you a better deal.
    • Finding where you need to go in a crowd, whether your seat in the theatre or your friends at a festival.

    Think I need to learn to run before I can walk though!

    Who is the user?

    My starting point for any product design process is to define the end user – in this case it’s ME. It’s rare to be able to design something for myself; this seems gloriously self-indulgent. To profile myself:

    • Tech level: High, early adopter. Macbook for work, iPad on the sofa, iPhone on the move.
    • Channels: Twitter for work/tech, Facebook for friends/family.
    • Motivations: Making commute and work-time fast and efficient to free up more time to spend with my family.
    • Frustrations: Freelancer/contractor so always on the go. Often travel to client offices and project meetings. Impatient and time-poor; I appreciate fast service and am willing to pay more for it.
    • Needs: Efficient access to latest information on the go.

    I’m all set. Now just to work out where on earth to start…

  • Hello virtual world!

    I’ve started this blog to document my adventures learning to design for virtual reality.

    I’ve been designing online for the past 15 years, starting as a generalist web designer then niching down into UI/UX as design roles became more specialised. I’ve seen the web through the horrors of skeuomorphism, animated Flash pre-loaders and long shadows, but always viewed through a rectangular screen.

    Through my early career screens got bigger, then, with the advent of the smartphone, they got smaller. Liberating the internet from our desks was an exciting development, but in doing so we have inadvertently created a terrible form of human-computer interaction. Phones were designed to be held to your ear and spoken into, not for inputting a 50-character password with your finger while walking down the street. Thousands die each year distracted by their smartphones, yet our generation of compulsive over-sharers seems completely accepting of the awkward exchange between human and screen.

    Surely a lens that sits over your face and displays the information you need, without compromising your safety or security, is the solution to this?

    That’s why I’m excited to learn to design for VR, more specifically augmented reality (where a computer-generated image is displayed over a user’s view of the real world).

    But where to start?

    • Do I need to buy new kit, or can I experiment using my regular MacBook and iPhone?
    • Proper headset or Google cardboard?
    • What platform should I use?
    • What is Mozilla A-Frame? Daydream VR? Unity VR? Can I access these tools with my limited coding knowledge? Help! 
    • What input methods can I rely on?
    • Can my regular UI design tools like Sketch help me?
    • What is my canvas size? Can I even assign 360 degrees a pixel value? Am I really stupid for asking?

    I thought I’d start a blog to document my journey, or at least to form a repository for the articles and guides I keep finding and forgetting. Interested in designing outside the rectangle? Come along for the ride!

     
    I have no idea what I'm doing