Scale for VR
At Designstor we pride ourselves on the level of immersion experienced in our VR solutions. Our walkable VR in particular gives the end user a very real sense that they are not just looking at an image but that they are actually present in a space. There are many things we do to try to make our virtual experiences as natural and immersive as possible, and the foundation of every experience is correct scale.
When scale is implemented correctly it is not noticed. An end user will just feel that they are in a space and not question it much beyond that. It does not matter if we are creating a massing model or a photoreal scene like 567 Clarke & Como; if the scale is incorrect it throws the entire experience off.
What determines scale?
There are many factors that determine scale and the perception of scale in 3D. A large number of the ways we all determine scale are heavily ingrained in us and we tend not to be aware of them.
Everything we make at Designstor is built to real-world size. A table is the right height, a mug has the proper dimensions and a ceiling is exactly the distance above your head that it should be. This is not unique in the architectural visualisation industry, but it is still quite rare in the wider world of computer graphics and video games, and virtual reality devices were initially targeted at video games. A few years ago, when we first started testing spaces in VR using the HTC Vive, the very first thing we noticed was that nothing was the right scale. It was close but just a little…off. We knew our source models were correct, so something had been lost along the way to Unity. Luckily, in VR there is a really simple way to fix this. We got out a real tape measure and held our VR controllers, which exist in both the virtual and physical worlds, against virtual chairs, door frames and countertops, then compared those measurements to their real-world counterparts. From this we quickly derived a scale factor, applied it to our scene, and everything 'snapped' into place.
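The tape-measure calibration described above can be sketched in a few lines. The feature names and measurements below are purely illustrative (not our actual numbers); the idea is simply to average the real-to-virtual ratios of a few known features into one uniform scale factor for the scene.

```python
# Illustrative sketch of the calibration: compare real-world tape-measure
# readings against the same features measured in the virtual scene, then
# derive a single uniform scale factor. All values here are made up.

REAL_HEIGHTS_M = {"chair_seat": 0.45, "door_frame": 2.03, "countertop": 0.91}
VIRTUAL_HEIGHTS_M = {"chair_seat": 0.42, "door_frame": 1.90, "countertop": 0.85}

def scale_factor(real, virtual):
    """Average the per-feature real/virtual ratios into one scale factor."""
    ratios = [real[k] / virtual[k] for k in real]
    return sum(ratios) / len(ratios)

factor = scale_factor(REAL_HEIGHTS_M, VIRTUAL_HEIGHTS_M)
# Apply `factor` uniformly to the scene root, e.g. in Unity (C#):
#   sceneRoot.transform.localScale = Vector3.one * factor;
```

Averaging several features rather than trusting a single measurement helps smooth out controller-placement error.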
It is not uncommon for us to test VR games (purely for research purposes, of course), and it can be jarring when things are not quite the right size. One huge advantage of working in architectural visualisation is that we essentially get the base scale for free; it is something everyone here works with every day. Scale is also the cornerstone of our physically based workflow. We have been doing this for more than 15 years and know intimately all the traps one can fall into when trying to massage the size of something, especially for artistic effect.
Familiar scale is not just limited to relative object sizes. It can include anything that we gather context from in the real world and take visual cues from. We can tamper with these known quantities in order to change perceived scale.
This 3D render demonstrates how adjusting the simple properties we use to gather familiar scale can make an image look like a shot of a physical model.
- Camera placement – The camera is not inside the suite or at a typical photographer's perspective. It is beyond the bounds of the suite and looking down. This angle is the way we would naturally look at a physical model.
- Camera Lens – This is shot with a 35 mm lens but the field of view still encompasses most of the suite.
- Lighting – Studio lighting has been employed in this scene. Light does not come from the windows and potlights but from sources outside the bounds of the suite.
- Materials – A soft, semi-translucent plastic material has been used throughout. Light is absorbed into the material and naturally softens the edges, giving the appearance of a significantly smaller scale.
- Depth of field – The middle of the image is in focus and the edges are very blurred. This is very common in macro photography, where a photographer is trying to get close to the subject matter.
Adjusting all of these parameters creates a convincing representation of a scale model even though our source 3D data is the same that is used in the walkable VR.
Micro movements and Motion Parallax
There is a common misconception that humans determine distance solely from having two eyes, relying on stereopsis to perceive depth. In reality, as demonstrated above, our brains process many more visual cues to ascertain perceived object distance and scale.
Surprisingly, even when standing still in one of our walkable VR scenes, there is a sense of presence. A person standing still is never truly motionless: breathing in and out, shifting weight from one foot to the other, tilting the head ever so slightly when looking at an object. The specifics of these movements are unique to every individual, and all of them are captured by a walkable VR headset.
When you move around, objects in the foreground appear to move more quickly than objects in the background. This is called motion parallax, and it has been used for a very long time in everything from old cartoons to video games to fake an object's distance from the camera.
In VR as in real life, micro movements help map out a 3D space and quickly determine what is far away and what is very close to you.
Video demonstrating motion parallax
In this example the camera is rotated back and forth. This movement gives a strong sense of 3D space.
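The geometry behind motion parallax is simple: the apparent angular shift of a static object is, to a first approximation, the sideways head movement divided by the object's distance. A minimal sketch with illustrative distances:

```python
import math

def parallax_shift_deg(head_move_m, distance_m):
    """Angular shift of a static object when the viewpoint translates
    sideways by head_move_m (simple triangle geometry)."""
    return math.degrees(math.atan2(head_move_m, distance_m))

# The same 5 cm sway of the head shifts near and far objects very differently:
near = parallax_shift_deg(0.05, 0.5)    # a mug on a desk: ~5.7 degrees
far = parallax_shift_deg(0.05, 50.0)    # a building across the street: ~0.06 degrees
```

That hundred-fold difference in apparent shift is exactly what lets the brain, and a tracked headset, map out which objects are close and which are far.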
Stereopsis is the perception of depth inferred from having two eyes. From two points of vision it is possible to judge how far away an object is from a static position in 3D space. Interestingly, once the viewing distance becomes great enough, stereopsis no longer works. Although there is debate on the subject, generally after 200m to 300m it becomes incredibly difficult to discern an object’s distance based on stereopsis alone. Once this distance threshold is reached, other techniques such as familiar scale will take over.
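That distance threshold falls out of simple geometry: the disparity angle between the two eyes shrinks in proportion to distance. A quick sketch, assuming a typical interpupillary distance of about 64 mm, shows the disparity at a few hundred metres dropping to tens of arcseconds, around the commonly cited limit of human stereoacuity:

```python
import math

IPD_M = 0.064  # assumed typical interpupillary distance

def disparity_arcsec(distance_m, ipd=IPD_M):
    """Angle subtended at a point by the two eyes, in arcseconds
    (small-angle approximation: ipd / distance)."""
    return math.degrees(ipd / distance_m) * 3600

at_2m = disparity_arcsec(2.0)      # ~6600 arcsec: easily resolved
at_250m = disparity_arcsec(250.0)  # ~53 arcsec: near the stereoacuity limit
```

At a couple of metres the disparity is thousands of arcseconds; at 250 m it is comparable to the finest angular differences the visual system can resolve, which is consistent with the 200 m to 300 m range mentioned above.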
When stereopsis breaks down, we do not lose our ability to gauge distance; we simply switch to other methods. If you know the rough height of a human and see one a few hundred meters away, you can extrapolate the size of the objects around them. We all do this without thinking about it. The technique is used all the time in traditional painting: adding a human figure instantly gives everything around it an implied scale.
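This familiar-scale extrapolation is just triangle geometry: a known height and an apparent angular size imply a distance, and that shared distance in turn implies the sizes of nearby objects. A sketch with illustrative numbers, assuming an average human height of 1.7 m:

```python
import math

HUMAN_HEIGHT_M = 1.7  # assumed average height, our familiar reference

def infer_distance(known_height_m, angular_height_deg):
    """Distance implied by a known object size and its apparent angular size."""
    return known_height_m / math.tan(math.radians(angular_height_deg))

def infer_size(distance_m, angular_height_deg):
    """Size of a nearby object once the shared distance is known."""
    return distance_m * math.tan(math.radians(angular_height_deg))

# A person subtending half a degree implies a distance of roughly 195 m...
dist = infer_distance(HUMAN_HEIGHT_M, 0.5)
# ...so a doorway beside them subtending 0.6 degrees must be about 2 m tall.
door = infer_size(dist, 0.6)
```

The brain performs this chain of inference effortlessly, which is why a single figure in a painting or render anchors the scale of the whole scene.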
These are just some of the techniques we use regularly – often without really thinking about them – to help convey the proper sense of scale in our VR work. There are many more, such as colour and relative motion, that perhaps can be covered in a future blog post if there is interest. The key to making convincing scenes that observe correct scale is knowing these embedded mechanisms and utilising them to their full effect. It is a topic that we have a passion for here at Designstor and one that we are still continuously learning!