We’re living in the future. Isn’t it great? A lot of the sci-fi fantasies of the past are now becoming a reality, and we can start to enjoy them today. And, perhaps more importantly, our kids will get to enjoy them as they grow up too! The problem is that while technology has advanced at an exponential pace in the last five or so years, user experience design has not kept up with it.

It’s been stuck in a time warp where designs from ten years ago are considered modern and innovative today. We have all this wonderful technology available but are still using similar design patterns to those used decades ago on desktops and laptops. That’s why it was exciting to see Unity recently launch their AR/VR UI project – because it was another step towards realizing just how much UI is still stuck in the past when compared to other fields like hardware design or even web development.

Over the last few months I have been helping out on a project for a client, developing user interfaces in Unity 3D for smart devices built around AR/VR technologies. While creating these interfaces I became aware of several key issues that will have to be addressed if we want these smart UI devices to become mainstream. The first is interaction methods: there is no standard method of interaction for these devices. In the case of AR, we have a variety of options – gaze, voice, hand gestures and more.

VR has also made big advances in this area, with motion controllers like Touch and Wand now available to developers (or soon to be). This means we will need to find new ways to incorporate these additional methods into UI design solutions. It’s also possible that new “standards” could emerge over time – AR/VR-native mice, in the same way that touch-sensitive trackpads are now standard on laptops.
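One way to cope with this lack of a standard is to keep UI code ignorant of *how* the user selected something. As a thought experiment, here is a minimal sketch of that idea as a tiny event bus – plain Python for brevity (a Unity project would use C#), and all the names (`IntentBus`, `"select"`) are illustrative, not any real API. Each input back-end, whether gaze, voice or gesture, translates its raw events into shared “intents” that widgets subscribe to:

```python
from collections import defaultdict

class IntentBus:
    """Routes abstract UI intents to whoever subscribed to them."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def on(self, intent, handler):
        # A widget registers interest in an intent, e.g. "select".
        self.handlers[intent].append(handler)

    def emit(self, intent, **details):
        # Any input back-end fires the same intent, regardless of modality.
        for handler in self.handlers[intent]:
            handler(**details)

bus = IntentBus()
bus.on("select", lambda target: print(f"selected {target}"))

bus.emit("select", target="menu")   # could come from a gaze dwell
bus.emit("select", target="close")  # could come from the voice command "close"
```

If a new input method (or one of those hypothetical “AR/VR-native mice”) appears later, it only needs to emit the same intents – none of the widget code changes.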

The majority of modern-day UI design patterns were created with traditional 2D interfaces in mind – desktops and laptops used by someone sitting at a desk or table, driving a mouse or trackpad while looking directly at a screen from a comfortable, fixed distance.

Even many 3D UIs follow similar principles, because they are designed for relatively flat, immovable display surfaces rather than ones that can be placed anywhere within our field of vision regardless of where we ourselves are positioned. We need to start thinking about how to turn this on its head and apply user experience design methods to VR/AR, taking into account the possibility of a user moving freely around their environment while still being able to interact with the UI – or even having it follow them around. Interaction models need to be rethought for these interfaces.
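That “having it follow them around” behaviour is worth making concrete. A common pattern is a “lazy follow” panel: the UI stays where it is until the user’s gaze drifts too far, then glides back into view rather than snapping rigidly to the head. Below is a minimal sketch of the idea on a flat 2D ground plane – plain Python with made-up names (`FollowPanel` is not a Unity component), purely to show the dead-zone logic:

```python
import math

class FollowPanel:
    """A UI panel that lazily follows the user's gaze direction."""
    def __init__(self, distance=1.5, max_angle=30.0, speed=0.2):
        self.distance = distance    # metres in front of the user
        self.max_angle = max_angle  # degrees of drift tolerated before moving
        self.speed = speed          # fraction of the gap closed per update
        self.pos = (0.0, 0.0)       # panel position on the ground plane

    def update(self, head_pos, head_yaw_deg):
        # Where the panel *should* sit: straight ahead of the user's gaze.
        yaw = math.radians(head_yaw_deg)
        target = (head_pos[0] + self.distance * math.sin(yaw),
                  head_pos[1] + self.distance * math.cos(yaw))
        # How far (in degrees) the panel has drifted from the gaze direction.
        dx, dy = self.pos[0] - head_pos[0], self.pos[1] - head_pos[1]
        panel_yaw = math.degrees(math.atan2(dx, dy))
        drift = abs((panel_yaw - head_yaw_deg + 180) % 360 - 180)
        # Only chase the target once drift exceeds the dead zone, so the
        # panel doesn't jitter with every small head movement.
        if drift > self.max_angle:
            self.pos = (self.pos[0] + (target[0] - self.pos[0]) * self.speed,
                        self.pos[1] + (target[1] - self.pos[1]) * self.speed)
        return self.pos
```

The dead zone (`max_angle`) is the interesting design choice: without it, the UI is glued to your face; with it, the panel feels anchored in the world yet never gets lost behind you.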

Most traditional 2D UIs are based on the idea that we “click” on an item of interest using an input device like a mouse or touchpad. This either does something directly (like closing an image) or opens another screen that allows further interaction and decision making (like choosing options from a drop-down list). AR/VR interfaces will not allow for direct interaction in this sense, because there is no display surface for the interface element to appear on. It might not even be possible to “click” anywhere in the environment, as it could be too dangerous or awkward to do so (e.g. swinging a sword at your boss isn’t really recommended).
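One well-known replacement for the click is dwell-based gaze selection: hold your gaze on a target for long enough and it counts as a “click”, with no physical input device at all. A minimal sketch of the timer logic, assuming a per-frame update loop that tells us what the gaze ray currently hits (plain Python; a Unity version would live in a C# `Update` method):

```python
class DwellSelector:
    """Turns sustained gaze on one target into a selection event."""
    def __init__(self, dwell_time=1.0):
        self.dwell_time = dwell_time  # seconds of sustained gaze required
        self.current = None           # target the user is currently looking at
        self.elapsed = 0.0

    def update(self, gazed_target, dt):
        """Feed in whatever the gaze ray hits each frame (or None).
        Returns the target once it has been stared at long enough."""
        if gazed_target != self.current:
            # Gaze moved to a new target (or away): restart the timer.
            self.current = gazed_target
            self.elapsed = 0.0
            return None
        if self.current is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell_time:
            # Reset so we don't re-fire every frame while gaze lingers.
            self.elapsed = 0.0
            self.current = None
            return gazed_target
        return None
```

The tuning problem is the same one UX research will have to settle: too short a dwell time and users trigger things just by looking around (the “Midas touch” problem); too long and the interface feels sluggish.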

So we probably need a different approach here – one that relies less on physical interaction with an input device and more on gaze and voice recognition.

The device needs some time alone with you when you first use it. The majority of current smart devices require us to go through some kind of onboarding process, helpfully walking us through all their features via menus or tutorials before allowing us any real freedom to use them how we want.

But this isn’t really what we want from our technology; we just want it “on hand”, ready for when we need it without any additional fuss – especially for devices that are always with us and stay visible as part of our daily routine, e.g. a smart watch or similar. This suggests a different interaction model: perhaps only minimal instructions need to be provided, and most of the UI is learnt by using it rather than by reading about how it works.
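That “learnt by using it” idea can be made mechanical: instead of a front-loaded tutorial, the device tracks which features the user has discovered on their own and only surfaces a hint for the ones still untouched after a few sessions. A minimal sketch, with invented names (`ProgressiveHints` and its `patience` parameter are illustrative, not any shipping API):

```python
class ProgressiveHints:
    """Hints only at features the user hasn't discovered by themselves."""
    def __init__(self, features, patience=3):
        # patience: how many sessions to wait before hinting at a feature.
        self.unused = {f: 0 for f in features}
        self.patience = patience

    def record_use(self, feature):
        # Once a feature has been used, never nag about it again.
        self.unused.pop(feature, None)

    def end_session(self):
        """Call at the end of each session; returns features worth hinting at."""
        hints = []
        for feature in self.unused:
            self.unused[feature] += 1
            if self.unused[feature] >= self.patience:
                hints.append(feature)
        return hints
```

The effect is the inverse of a tutorial: the user who explores freely never sees a hint at all, while the user who misses something gets a gentle nudge only about that one thing.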

So these are all issues that need to be addressed if we want AR/VR UIs to become mainstream, and not just something confined to science fiction novels and movies like Minority Report (which was set in the year 2054…). The good news is that many of these problems can probably be solved with a little UX research, user testing and iterative design as we gain more experience with these devices. Perhaps even more importantly, it means there are interesting times ahead for user experience designers! So let’s go forward, embrace the future, and see where we end up!

Marco Lopes

Excessive Crafter of Things

