
Stereo reflections

In a non-VR game, there is only one camera viewpoint. In VR, there is one camera viewpoint for each eye. This means that the reflection must be computed for each eye individually.

If both eyes are shown the same reflection, then users quickly notice that there is no depth in the reflections. This is inconsistent with their expectations and can break their sense of immersion, negatively affecting the quality of the VR experience.

To correct this problem, two reflections must be calculated and shown with the correct adjustment for the position of each eye as the user looks around in the game.

To implement these reflections, the Ice Cave demo uses two reflection textures for planar reflections from dynamic objects, and two different local corrected reflection vectors to fetch the texels from a single local cubemap for static object reflections.

Reflections can be of either dynamic or static objects. Each type of reflection requires a different set of changes to work in VR.

Implementing stereo planar reflections in Unity VR

Implementing stereo reflections in your VR game requires a few adjustments to the non-VR game code.

Before starting, ensure that you have enabled support for virtual reality in Unity. To do this, select Build Settings > Player Settings > Other Settings and select the checkbox for Virtual Reality Supported.

Dynamic stereo planar reflections

Dynamic reflections require some changes to produce a correct result for two eyes.

You must create two cameras, and a target texture for each camera to render to. Disable both cameras so that their rendering is executed programmatically. Then attach the following script to both cameras.

void OnPreRender(){
        SetUpReflectionCamera();
        // Invert winding
        GL.invertCulling = true;
}
void OnPostRender(){
        // Restore winding
        GL.invertCulling = false;
}
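
If you prefer to set up the reflection cameras and their target textures from a script instead of in the editor, the following is a minimal sketch. The class and object names, the texture size, and the ReflectionCameraSetup name used for the OnPreRender()/OnPostRender() script above are illustrative assumptions, not part of the Ice Cave demo.

public class CreateReflectionCameras : MonoBehaviour
{
    public Camera leftReflCamera;
    public Camera rightReflCamera;

    void Awake(){
        leftReflCamera = CreateReflectionCamera("LeftReflectionCamera");
        rightReflCamera = CreateReflectionCamera("RightReflectionCamera");
    }

    Camera CreateReflectionCamera(string name){
        GameObject go = new GameObject(name);
        Camera cam = go.AddComponent<Camera>();
        // One target texture per camera, so each eye gets its own reflection texture
        cam.targetTexture = new RenderTexture(512, 512, 16);
        // Disable the camera so that it only renders when Render() is called explicitly
        cam.enabled = false;
        // Attach the OnPreRender()/OnPostRender() script shown above (class name assumed)
        // go.AddComponent<ReflectionCameraSetup>();
        return cam;
    }
}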

This script places and orients the reflection camera using the position and orientation of the main camera. To do this, it calls the SetUpReflectionCamera() function just before the left and right reflection cameras render. The following code shows how this function is implemented.

public GameObject reflCam;
public float clipPlaneOffset;
…
private void SetUpReflectionCamera(){
        // Find out the reflection plane: position and normal in world space
        Vector3 pos = gameObject.transform.position;

        // Reflection plane normal in the direction of Y axis
        Vector3 normal = Vector3.up;
        float d = -Vector3.Dot(normal, pos) - clipPlaneOffset;
        Vector4 reflPlane = new Vector4(normal.x, normal.y, normal.z, d);
        Matrix4x4 reflection = Matrix4x4.zero;
        CalculateReflectionMatrix(ref reflection, reflPlane);

        // Update reflection camera considering main camera position and orientation
        // Set view matrix
        Matrix4x4 m = Camera.main.worldToCameraMatrix * reflection;
        reflCam.GetComponent<Camera>().worldToCameraMatrix = m;

        // Set projection matrix
        reflCam.GetComponent<Camera>().projectionMatrix = Camera.main.projectionMatrix;
}

This function calculates the view and projection matrices of the reflection camera. It determines the reflection transformation to apply to the view matrix of the main camera, worldToCameraMatrix.
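
SetUpReflectionCamera() relies on a helper, CalculateReflectionMatrix(), that is not listed here. A typical implementation, shown below as a sketch, fills in the standard reflection matrix for a plane given as (normal.x, normal.y, normal.z, d). Because the caller passes Matrix4x4.zero, every element, including the bottom row, must be set.

private static void CalculateReflectionMatrix(ref Matrix4x4 reflectionMat, Vector4 plane)
{
    // Reflection about the plane n.x*x + n.y*y + n.z*z + d = 0:
    // the rotation part is I - 2*n*n^T, the translation part is -2*d*n
    reflectionMat.m00 = 1f - 2f * plane[0] * plane[0];
    reflectionMat.m01 =     -2f * plane[0] * plane[1];
    reflectionMat.m02 =     -2f * plane[0] * plane[2];
    reflectionMat.m03 =     -2f * plane[3] * plane[0];

    reflectionMat.m10 =     -2f * plane[1] * plane[0];
    reflectionMat.m11 = 1f - 2f * plane[1] * plane[1];
    reflectionMat.m12 =     -2f * plane[1] * plane[2];
    reflectionMat.m13 =     -2f * plane[3] * plane[1];

    reflectionMat.m20 =     -2f * plane[2] * plane[0];
    reflectionMat.m21 =     -2f * plane[2] * plane[1];
    reflectionMat.m22 = 1f - 2f * plane[2] * plane[2];
    reflectionMat.m23 =     -2f * plane[3] * plane[2];

    reflectionMat.m30 = 0f;
    reflectionMat.m31 = 0f;
    reflectionMat.m32 = 0f;
    reflectionMat.m33 = 1f;
}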

To set the position of the cameras for each eye, add the following code after the line Matrix4x4 m = Camera.main.worldToCameraMatrix * reflection;:

For the left eye:
m[12] += stereoSeparation;

For the right eye:
m[12] -= stereoSeparation;

The shift value stereoSeparation is 0.011, which is half the eye separation value.

Attach another script to the main camera to control the rendering of the left and right reflection cameras. The following code shows the Ice Cave implementation of this script.

public class RenderStereoReflections : MonoBehaviour
{
    public GameObject reflectiveObj;
    public GameObject leftReflCamera;
    public GameObject rightReflCamera;
    int eyeIndex = 0;
 
    void OnPreRender(){
        if (eyeIndex == 0){
            // Render Left camera
            leftReflCamera.GetComponent<Camera>().Render();
            reflectiveObj.GetComponent<Renderer>().material.SetTexture(
                "_DynReflTex", leftReflCamera.GetComponent<Camera>().targetTexture);
        }
        else{
            // Render right camera
            rightReflCamera.GetComponent<Camera>().Render();
            reflectiveObj.GetComponent<Renderer>().material.SetTexture(
                "_DynReflTex", rightReflCamera.GetComponent<Camera>().targetTexture);
        }
        eyeIndex = 1 - eyeIndex;
    }
}

This script handles the rendering of the left and right reflection cameras in the OnPreRender() callback function of the main camera. In VR, this callback is called once for the left eye and once for the right eye. The eyeIndex variable tracks which eye is being rendered, so that the correct reflection camera is rendered and the correct texture is applied for each eye of the main camera. The script assumes that the first call to the callback is for the left eye, because this is the order in which Unity calls the OnPreRender() method.

Checking that different textures are in use for each eye

It is important to check that the script correctly produces a different render texture for each eye.

To test whether the correct texture is being shown for each eye:

Procedure

  1. Change the script so that it passes the eyeIndex value to the shader as a uniform (a minimal sketch of this change follows the list).
  2. Use two colors for the reflection textures, one for each eyeIndex value.
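
The script side of step 1 can be a one-line change in the OnPreRender() callback shown earlier. The _EyeIndex uniform name is an assumption made for this check; in the shader, output one solid color when _EyeIndex is 0 and a different color when it is 1.

// Debug-only sketch: pass the current eye index to the reflective object's
// material so the shader can output a different solid color per eye.
// "_EyeIndex" is an assumed uniform name.
reflectiveObj.GetComponent<Renderer>().material.SetInt("_EyeIndex", eyeIndex);
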
If your script is working correctly, the output is similar to the following figure, which shows a screenshot where the two different stable reflections are visible.

Figure 6-6 An example of a correct reflection texture output check


Static stereo reflections

You can create stereo reflections from static objects efficiently by using cubemaps. The only difference is that you must use two reflection vectors to fetch the texels from the cubemap, one for each eye.

Unity provides a built-in value for accessing the camera position in world coordinates in the shader:

_WorldSpaceCameraPos

However, in VR, the positions of the left and right cameras are required, and _WorldSpaceCameraPos cannot provide them. You must therefore use a script to calculate the position of each eye camera and pass the result to the shader through a single uniform.

Declare a new uniform in the shader that can pass the information for the camera positions:

uniform float3 _StereoCamPosWorld;

The best place to calculate the left and right camera positions is in the script that is attached to the main camera, because this gives easy access to the main camera view matrix. The following code shows how to do this for the eyeIndex = 0 (left eye) case.

The code shifts the view matrix of the main camera to obtain the view matrix of the left eye. Because the left eye position is required in world coordinates, the inverse matrix is computed and the position is read from its translation components. The left eye camera position is then passed to the shader through the uniform _StereoCamPosWorld.

// Shift the main camera view matrix to the left eye position
Matrix4x4 mWorldToCamera = gameObject.GetComponent<Camera>().worldToCameraMatrix;
mWorldToCamera[12] += stereoSeparation;

// Invert to get the camera-to-world matrix; its translation column holds
// the left eye position in world coordinates
Matrix4x4 mCameraToWorld = mWorldToCamera.inverse;
Vector3 mainStereoCamPos = new Vector3(mCameraToWorld[12], mCameraToWorld[13],
          mCameraToWorld[14]);

// Pass the left eye world-space position to the reflective object's shader
reflectiveObj.GetComponent<Renderer>().material.SetVector("_StereoCamPosWorld",
          new Vector3(mainStereoCamPos.x, mainStereoCamPos.y, mainStereoCamPos.z));

The code is the same for the right eye, except the stereo separation is subtracted from mWorldToCamera[12] instead of added.
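
If you prefer not to duplicate this code for each eye, both cases can be folded into a single helper driven by the same eyeIndex that the dynamic reflection script already maintains. The helper name SetStereoCamPosWorld is illustrative, not part of the Ice Cave demo.

// Illustrative helper (name assumed): compute the per-eye world-space camera
// position and pass it to the reflective object's shader.
void SetStereoCamPosWorld(int eyeIndex)
{
    Matrix4x4 mWorldToCamera = gameObject.GetComponent<Camera>().worldToCameraMatrix;

    // Left eye (eyeIndex == 0) shifts by +stereoSeparation, right eye by -stereoSeparation
    mWorldToCamera[12] += (eyeIndex == 0) ? stereoSeparation : -stereoSeparation;

    Matrix4x4 mCameraToWorld = mWorldToCamera.inverse;
    Vector3 eyePosWorld = new Vector3(mCameraToWorld[12], mCameraToWorld[13], mCameraToWorld[14]);

    reflectiveObj.GetComponent<Renderer>().material.SetVector("_StereoCamPosWorld", eyePosWorld);
}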

In the vertex shader, find the following line, which is responsible for calculating the view vector:

output.viewDirInWorld = vertexWorld.xyz - _WorldSpaceCameraPos;

Replace it with the following line, which uses the new per-eye camera position in world coordinates:

output.viewDirInWorld = vertexWorld.xyz - _StereoCamPosWorld;

When the stereo reflections are implemented, you can see that they are working when the application runs in the Unity Editor, because the reflection texture flickers as it alternates between the left eye and right eye versions. This flickering is not visible on the VR device, because each eye is shown its own texture.

Optimizing stereo reflections

Without further optimization, the stereo reflection implementations run all the time, which wastes processing time on reflections even when they are not visible.

Insert code that checks whether a reflective surface is visible, before any work is performed on the reflections themselves. To do this, attach code similar to the following code example to the reflective object.

public class IsReflectiveObjectVisible : MonoBehaviour
{
        public bool reflObjIsVisible;
 
        void Start(){
                reflObjIsVisible = false;
        }
 
        void OnBecameVisible(){
                reflObjIsVisible = true;
        }
 
        void OnBecameInvisible(){
                reflObjIsVisible = false;
        }
}

After defining this class, use the following if statement in the script attached to the main camera so that the calculations for stereo reflections are only executed when the reflective object is visible.

void OnPreRender(){
        if (reflectiveObj.GetComponent<IsReflectiveObjectVisible>().reflObjIsVisible){
        …
        }
}

The rest of the code goes inside this if statement, which uses the IsReflectiveObjectVisible class to check whether the reflective object is visible. If the object is not visible, the reflections are not calculated.
