First time VR challenges
A room-scale VR hardware setup introduces a lot of new parts that many people have never used before, including hand-held controllers. Minimizing the shock of using these new devices is key to having a positive experience.
When we first started doing demos, we were relatively new to VR ourselves, but had become used to the hardware through using it for development on a daily basis. The problem was, the things that had become trivial to us – such as putting the headset on and grabbing the controllers – were challenging for new users.
Because of this, we started developing new methods for navigating and interacting with VR scenes. While those new methods include external tablet controllers and hand tracking integration, in this post I’m going to focus on an input method that presented the biggest challenge: hand-held controllers.
The 567 Clarke & Como VR Experience
Our most recent VR project is 567 Clarke & Como. It’s an experience that showcases a penthouse layout in an upcoming development, built for potential purchasers in a Vancouver sales environment. This project is the culmination of two years of VR development experience, and it provides a variety of features now standard in our room-scale VR applications:
Multiple Movement Controls
Our movement controls allow users to teleport to a predefined list of locations on demand, or to move freely anywhere in the scene by aiming an “Arc” teleport at a spot.
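An arc teleport is typically implemented by sampling a ballistic curve from the controller until it intersects the floor, then placing the landing pad at that point. The sketch below is a minimal, engine-agnostic illustration of that idea; the function name, parameters, and constants are all assumptions for this example, not our production code.

```python
def sample_arc(origin, direction, speed=6.0, gravity=9.8, step=0.05,
               max_steps=200, floor_y=0.0):
    """Sample points along a ballistic arc until it crosses the floor plane.

    origin, direction: (x, y, z) tuples; direction should be roughly normalized.
    Returns the sampled points; the last one is the landing-pad position.
    """
    x, y, z = origin
    vx, vy, vz = (c * speed for c in direction)
    points = [(x, y, z)]
    for _ in range(max_steps):
        x += vx * step
        y += vy * step
        z += vz * step
        vy -= gravity * step   # gravity bends the arc downward
        points.append((x, y, z))
        if y <= floor_y:       # the arc hit the floor: valid teleport target
            break
    return points
```

In an engine you would render the sampled points as the visible arc each frame while the trigger is held, and teleport the player rig to the final point on release.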
Adjustable Environment Lighting
This feature allows users to switch the interior lighting and the unit’s exterior view together, in harmony, between day and night imagery.
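Keeping interior lighting and the exterior view in harmony is easiest when both are driven from a single time-of-day preset rather than toggled independently. A minimal sketch of that pairing is below; the preset names and keys are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical presets: interior lighting and the exterior imagery
# always change together, so they can never fall out of sync.
LIGHTING_PRESETS = {
    "day":   {"interior": "daylight_fill",   "exterior": "day_skybox"},
    "night": {"interior": "warm_artificial", "exterior": "night_skybox"},
}

def set_time_of_day(mode):
    """Return the paired interior/exterior settings for a given mode."""
    if mode not in LIGHTING_PRESETS:
        raise ValueError(f"unknown time-of-day mode: {mode}")
    return LIGHTING_PRESETS[mode]
```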
Material Palette Options
Users can switch between a “light” and “dark” material palette for the space.
These may sound simple, but when you’ve never held a VR controller before, and you’re being told “the following eight buttons do the following things”, it can be pretty overwhelming.
After some in-house user testing, it became clear we needed to significantly simplify the input method. Our client wanted to keep controllers in the mix, so we scaled them back to do only one thing each:
- Left Controller: List teleport. Select from a list of locations where you’d like to go in the scene.
- Right Controller: Arc teleport. Hold the trigger and aim the landing pad where you’d like to go in the scene.
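The one-action-per-controller design can be expressed as a flat lookup: any input event that isn’t one of the two mapped triggers is simply ignored. This is a hedged sketch with made-up event names, not our actual input layer.

```python
# Hypothetical mapping: each controller exposes exactly one action,
# so new users never face a wall of buttons.
CONTROLLER_ACTIONS = {
    "left_trigger":  "open_teleport_list",  # pick a destination from a list
    "right_trigger": "aim_arc_teleport",    # hold and aim a landing pad
}

def handle_input(event):
    """Resolve a controller event to its single action; everything else is a no-op."""
    return CONTROLLER_ACTIONS.get(event)
```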
That’s it: no more input options on the controllers themselves! That sounds simple enough, but what about the rest of the scene options, like materials and lighting? And what if someone didn’t want to use the controllers to navigate the space? We realised we needed a new, intuitive way of interacting with the scene.
We had used voice commands before in our HoloLens scenes, but in VR they had always taken a backseat to controller-based input. We felt voice commands might be the key to simplifying the experience without going back to the controllers’ input options, so we ported our HoloLens voice commands over. Those commands had been developed as quick toggles for scene options, but they were never intuitive: our internal team members often forgot the exact verbiage needed to change a value. It didn’t help that the commands weren’t standardised. Using “go to” for movement and “change to” for lighting controls led to confusion. We needed something that worked just as well and was easy to remember.
We settled on a common and simple phrase: “Show me”. With just those two words, users can bring up a helper UI that lists all the voice commands available to them, allowing control of every aspect of the scene, including teleporting!
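One way to structure this is a small command registry: every phrase is registered with a description, and the bare “show me” phrase returns the full list for the helper UI. The sketch below is a hypothetical illustration; the class, phrases, and handlers are invented for this example and are not the actual commands from the 567 Clarke & Como project.

```python
class VoiceCommands:
    """Hypothetical registry mapping spoken phrases to scene actions."""

    def __init__(self):
        self.commands = {}  # lowercase phrase -> (description, handler)

    def register(self, phrase, description, handler):
        self.commands[phrase.lower()] = (description, handler)

    def dispatch(self, utterance):
        utterance = utterance.lower().strip()
        if utterance == "show me":
            # The helper UI: list every available phrase and what it does.
            return [f"{phrase} - {desc}"
                    for phrase, (desc, _) in sorted(self.commands.items())]
        if utterance in self.commands:
            _, handler = self.commands[utterance]
            return handler()
        return None  # unrecognised speech is ignored
```

Because every registered phrase appears in the “show me” listing, users never have to memorise exact verbiage, which was the failure mode of our earlier HoloLens commands.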
“Show me” was a hit with the team, so much so that people elected not to use the controllers at all. In the future, we plan to expand and streamline the voice input system, and tie it in more closely to other input methods like hand tracking and tablet guides. More on that to come in future blog posts.