Stereo reflections

This section of the guide shows you how to implement stereo planar reflections in Unity VR. To do this in your VR game, you must make a few adjustments to the non-VR game code.

In a non-VR game, there is only one camera viewpoint. In VR, there is one camera viewpoint for each eye. This means that the reflection must be computed individually for each eye.

If both eyes are shown the same reflection, users quickly notice that there is no depth in the reflections. This lack of depth is inconsistent with their expectations and can break their sense of immersion, negatively affecting the quality of the VR experience.

To correct this problem, two reflections must be calculated. These two reflections must be shown with the correct adjustment for the position of each eye while the user looks around in the game.

To implement stereo reflections, the Ice Cave demo uses:

  • Two reflection textures for planar reflections from dynamic objects
  • Two different local corrected reflection vectors, to fetch the texture from a single local cubemap for static object reflections

Reflections can be for either dynamic objects or static objects. Each type of reflection requires a different set of changes to work in VR.

Implement stereo planar reflections in Unity VR

Before following the code in this section, you must ensure that you have enabled support for virtual reality in Unity. To check this, follow these steps:

  1. Select Edit.
  2. Select Project Settings.
  3. Select Player.
  4. Select XR Settings.
  5. Select the checkbox for Virtual Reality Supported.
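
You can also verify at runtime that an XR device is active. The following is a minimal sketch, assuming Unity 2017.2 or later, where the UnityEngine.XR namespace is available (in earlier versions the equivalent class is UnityEngine.VR.VRSettings); CheckVRSupport is a hypothetical name:

using UnityEngine;
using UnityEngine.XR;

// Sketch: log whether an XR device is active. Attach to any GameObject.
public class CheckVRSupport : MonoBehaviour
{
	void Start()
	{
		// XRSettings.enabled is true when Virtual Reality Supported is
		// selected and an XR device has been initialized
		Debug.Log("VR enabled: " + XRSettings.enabled +
			", device: " + XRSettings.loadedDeviceName);
	}
}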

Dynamic stereo planar reflections

Dynamic reflections require some changes to produce a correct result for two eyes.

Set up two new reflection cameras, one for each eye, each with its own target texture to render to. You must disable both cameras so that their rendering is executed programmatically. Attach the following script to each reflection camera:

void OnPreRender(){ 
	SetUpReflectionCamera(); 
	// Invert winding 
	GL.invertCulling = true; 
} 
void OnPostRender(){ 
	// Restore winding 
	GL.invertCulling = false; 
} 
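
The target texture and disabled state can be set in the Editor or from code. A minimal setup sketch, assuming a hypothetical helper that you call once for each reflection camera during initialization (the texture size is an arbitrary example):

// Sketch: give a reflection camera its own render target and disable it,
// so that it renders only when Render() is called from a script.
void SetUpReflectionTarget(Camera reflCamera)
{
	reflCamera.targetTexture = new RenderTexture(256, 256, 16);
	reflCamera.enabled = false; // rendered programmatically
}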

The OnPreRender() script places and orients the reflection camera using the position and orientation of the main camera. To do this, it calls the SetUpReflectionCamera() function just before the left and right reflection cameras render. The following code shows how SetUpReflectionCamera() is implemented:

public GameObject reflCam;
public float clipPlaneOffset;
…
private void SetUpReflectionCamera(){
	// Find out the reflection plane: position and normal in world space
	Vector3 pos = gameObject.transform.position;

	// Reflection plane normal in the direction of Y axis
	Vector3 normal = Vector3.up;
	float d = -Vector3.Dot(normal, pos) - clipPlaneOffset;
	Vector4 reflPlane = new Vector4(normal.x, normal.y, normal.z, d);
	Matrix4x4 reflection = Matrix4x4.zero;
	CalculateReflectionMatrix(ref reflection, reflPlane);

	// Update reflection camera considering main camera position and orientation
	// Set view matrix
	Matrix4x4 m = Camera.main.worldToCameraMatrix * reflection;
	reflCam.GetComponent<Camera>().worldToCameraMatrix = m;

	// Set projection matrix
	reflCam.GetComponent<Camera>().projectionMatrix = Camera.main.projectionMatrix;
}

SetUpReflectionCamera() calculates the view and projection matrices of the reflection camera: it applies the reflection transformation to the worldToCameraMatrix of the main camera to obtain the reflection camera's view matrix, and reuses the projection matrix of the main camera unchanged.
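
CalculateReflectionMatrix() is not listed in this section. For reference, here is a minimal sketch of a typical implementation, which builds the standard matrix that reflects a point p about the plane dot(n, p) + d = 0, where reflPlane stores (n.x, n.y, n.z, d):

static void CalculateReflectionMatrix(ref Matrix4x4 reflectionMat, Vector4 plane)
{
	// Reflection of point p about the plane: p' = p - 2 * (dot(n, p) + d) * n
	reflectionMat.m00 = 1f - 2f * plane[0] * plane[0];
	reflectionMat.m01 =     -2f * plane[0] * plane[1];
	reflectionMat.m02 =     -2f * plane[0] * plane[2];
	reflectionMat.m03 =     -2f * plane[3] * plane[0];

	reflectionMat.m10 =     -2f * plane[1] * plane[0];
	reflectionMat.m11 = 1f - 2f * plane[1] * plane[1];
	reflectionMat.m12 =     -2f * plane[1] * plane[2];
	reflectionMat.m13 =     -2f * plane[3] * plane[1];

	reflectionMat.m20 =     -2f * plane[2] * plane[0];
	reflectionMat.m21 =     -2f * plane[2] * plane[1];
	reflectionMat.m22 = 1f - 2f * plane[2] * plane[2];
	reflectionMat.m23 =     -2f * plane[3] * plane[2];

	reflectionMat.m30 = 0f;
	reflectionMat.m31 = 0f;
	reflectionMat.m32 = 0f;
	reflectionMat.m33 = 1f;
}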

To set the position of the cameras for each eye, add the following code after the line Matrix4x4 m = Camera.main.worldToCameraMatrix * reflection:

Left eye:   m[12] += stereoSeparation;
Right eye:  m[12] -= stereoSeparation;

The shift value stereoSeparation is half of the eye separation value; in the Ice Cave demo it is 0.011.

Attach another script to the main camera to control the rendering of the left and right reflection cameras. The following code shows the Ice Cave implementation of this script:

public class RenderStereoReflections : MonoBehaviour
{
	public GameObject reflectiveObj;
	public GameObject leftReflCamera;
	public GameObject rightReflCamera;
	int eyeIndex = 0;

	void OnPreRender(){
		if (eyeIndex == 0){
			// Render left camera
			leftReflCamera.GetComponent<Camera>().Render();
			reflectiveObj.GetComponent<Renderer>().material.SetTexture("_DynReflTex",
				leftReflCamera.GetComponent<Camera>().targetTexture);
		}
		else{
			// Render right camera
			rightReflCamera.GetComponent<Camera>().Render();
			reflectiveObj.GetComponent<Renderer>().material.SetTexture("_DynReflTex",
				rightReflCamera.GetComponent<Camera>().targetTexture);
		}
		eyeIndex = 1 - eyeIndex;
	}
}

This script handles the rendering of the left and right reflection cameras in the OnPreRender() callback function of the main camera. The callback is called once for the left eye and once for the right eye. The eyeIndex variable assigns the correct render order for each reflection camera and applies the correct reflection texture to each eye of the main camera. The first time the callback function is called, eyeIndex specifies that rendering is performed for the left eye. This matches the order in which Unity calls the OnPreRender() method.

Check that different textures are in use for each eye

It is important to check that the script produces a different render texture for each eye.

To test whether the correct texture is being shown for each eye, follow these steps (a sketch of the script change follows the list):

  1. Change the script so that it passes the eyeIndex value to the shader as a uniform.
  2. Use two colors for the reflection textures, one for each eyeIndex value.
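
A minimal sketch of step 1, assuming a hypothetical integer shader property named _EyeIndex; add this line next to each SetTexture() call in RenderStereoReflections:

// Sketch: pass the current eyeIndex to the reflective object's material.
// In the fragment shader, return a solid per-eye debug color instead of
// sampling the reflection texture, for example red for 0 and green for 1.
reflectiveObj.GetComponent<Renderer>().material.SetInt("_EyeIndex", eyeIndex);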

If your script is working correctly, you see two different, well-defined left and right textures on the platform. This means that when the shader renders with the left camera, the correct left reflection texture is used, and likewise when the shader renders with the right camera.

Static stereo reflections

You can use cubemaps to efficiently create stereo reflections from static objects. In this case, you must use two different reflection vectors to fetch the texels from the cubemap, one for the left camera and one for the right camera. Unity provides a built-in value to access the camera position in world coordinates, in the shader: _WorldSpaceCameraPos.

However, in VR, the position of the left and right cameras is required. _WorldSpaceCameraPos cannot provide the positions of the left and right cameras. This means that you must use a script to calculate the position of the left and right cameras, and to pass the results to the shader as a single uniform.

The following code shows how to declare a new uniform in the shader to pass the camera position for the current eye:

uniform float3 _StereoCamPosWorld; 

The best place to calculate the left and right camera positions is in the script that is attached to the main camera. This script gives easy access to the main camera view matrix. The following code shows how to do this for the eyeIndex = 0 case.

The code modifies the view matrix of the main camera to apply the left eye offset. The eye position is required in world coordinates, so the code takes the inverse of the modified matrix to transform the position back to world space. The left eye camera position is then passed to the shader through the uniform _StereoCamPosWorld:

Matrix4x4 mWorldToCamera = gameObject.GetComponent<Camera>().worldToCameraMatrix;
mWorldToCamera[12] += stereoSeparation;
Matrix4x4 mCameraToWorld = mWorldToCamera.inverse;
Vector3 mainStereoCamPos =
	new Vector3(mCameraToWorld[12], mCameraToWorld[13], mCameraToWorld[14]);
reflectiveObj.GetComponent<Renderer>().material.SetVector("_StereoCamPosWorld",
	new Vector3(mainStereoCamPos.x, mainStereoCamPos.y, mainStereoCamPos.z));

The code is the same for the right eye, except that the stereo separation is subtracted from mWorldToCamera[12], instead of being added to mWorldToCamera[12].
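
The left-eye and right-eye cases can be folded into a single branch. A minimal sketch, assuming the eyeIndex field from the earlier script and a stereoSeparation field holding the half eye separation value:

// Sketch: compute the position of the eye currently being rendered, in
// world coordinates, and pass it to the shader. eyeIndex is 0 for the
// left eye and 1 for the right eye.
Matrix4x4 mWorldToCamera = gameObject.GetComponent<Camera>().worldToCameraMatrix;
mWorldToCamera[12] += (eyeIndex == 0) ? stereoSeparation : -stereoSeparation;
Matrix4x4 mCameraToWorld = mWorldToCamera.inverse;
Vector3 eyePosWorld =
	new Vector3(mCameraToWorld[12], mCameraToWorld[13], mCameraToWorld[14]);
reflectiveObj.GetComponent<Renderer>().material.SetVector("_StereoCamPosWorld",
	eyePosWorld);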

In the vertex shader, you must find the following line of code, which is responsible for calculating the view vector:

output.viewDirInWorld = vertexWorld.xyz - _WorldSpaceCameraPos; 
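
Replace _WorldSpaceCameraPos with the new uniform, so that the view vector, and therefore the reflection vector, is computed from the position of the eye that is currently being rendered:

output.viewDirInWorld = vertexWorld.xyz - _StereoCamPosWorld;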

When stereo reflections are implemented, the effect is visible when the application runs in the Editor: the reflection texture flickers, because it repeatedly changes between the left-eye and right-eye textures. This flickering is not visible in the VR device, because there each eye is shown its own texture.

Optimize stereo reflections

Without further optimizations, the stereo reflection implementations run all the time. This means that processing time is wasted on reflections when they are not visible.

The following code checks whether a reflective surface is visible, before any work is performed on the reflections themselves:

public class IsReflectiveObjectVisible : MonoBehaviour 
{
	public bool reflObjIsVisible;
 
	void Start(){ 
		reflObjIsVisible = false; 
	}
 
	void OnBecameVisible(){ 
		reflObjIsVisible = true; 
	} 

	void OnBecameInvisible(){
		reflObjIsVisible = false; 
	}
} 

After defining the IsReflectiveObjectVisible class, use the following if statement in the script that is attached to the main camera. This statement allows the calculations for stereo reflections to only execute when the reflective object is visible:

void OnPreRender(){ 
	if (reflectiveObj.GetComponent<IsReflectiveObjectVisible>().reflObjIsVisible){
	… 
	} 
} 

The rest of the OnPreRender() code goes inside the if statement, which uses the IsReflectiveObjectVisible component to check whether the reflective object is visible. If the object is not visible, the reflections are not calculated. Note that Unity only calls OnBecameVisible() and OnBecameInvisible() on scripts attached to a GameObject with a Renderer component, so attach IsReflectiveObjectVisible to the reflective object itself.
