Combine reflections

This section shows you how to combine reflections. Local cubemap techniques enable you to render high-quality, efficient reflections from a static environment. However, if objects are dynamic, the static local cubemap is no longer valid and the technique does not work. You can solve this problem by combining the static reflections with dynamically generated reflections. This is shown in the following image:

If the reflective surface is planar, you can generate dynamic reflections with a mirrored camera. To create a mirrored camera, at runtime calculate the position and orientation of the camera that renders the reflections by mirroring the position and orientation of the main camera relative to the reflective plane.

The following image shows the mirrored camera technique:

In the mirroring process, the new reflection camera ends up with its axes in the opposite orientation. In the same way as a physical mirror, reflections are inverted left to right. Therefore, the reflection camera renders the geometry with the opposite winding.

To render the geometry correctly, you must invert the winding of the geometry before rendering the reflections. When you have finished rendering the reflections, restore the original winding.
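The winding flip can be checked numerically: any mirror transformation has determinant -1, so it changes the handedness of the coordinate system, which is why back faces must be treated as front faces while rendering the reflections. A minimal Python sketch, for the 3x3 case of a plane through the origin with unit normal n, where the mirror matrix is R = I - 2nnT:

```python
# Mirror matrix about a plane through the origin with unit normal n:
# R = I - 2 * n * n^T. Its determinant is -1, which is why mirrored
# geometry ends up with the opposite triangle winding.

def mirror_matrix(n):
    nx, ny, nz = n
    return [
        [1 - 2 * nx * nx, -2 * nx * ny, -2 * nx * nz],
        [-2 * ny * nx, 1 - 2 * ny * ny, -2 * ny * nz],
        [-2 * nz * nx, -2 * nz * ny, 1 - 2 * nz * nz],
    ]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

R = mirror_matrix((0.0, 1.0, 0.0))  # mirror about the y = 0 plane
print(det3(R))  # -1.0: handedness is flipped
```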

The following steps show what is required to set up the mirrored camera and render the reflections:

  1. Calculate the reflection matrix ReflMat relative to the reflection plane
  2. Calculate the position of the reflection camera:
    reflCam.Pos= mainCam.Pos * ReflMat;
  3. Build the world-to-camera matrix for the reflection camera:
    reflCam.WorldToCam = mainCam.WorldToCam * ReflMat;
  4. Set projection matrix for reflection camera:
    reflCam.ProjMat = mainCam.ProjMat;
  5. Set render texture:
    reflCam.SetRenderTex(reflTex);
        reflMat.SetTex(_ReflTex, reflTex);
  6. Render reflections:
    GL.ReverseBackFacing(true);
    reflCam.Render();
    GL.ReverseBackFacing(false);

Build the mirror reflection transformation matrix. Use this matrix to calculate the position and the world-to-camera transformation matrix of the reflection camera.

The following maths equation shows the mirror reflection transformation matrix:

You can apply the reflection matrix transformation to the position and world-to-camera matrix of the main camera. This provides you with the position and world-to-camera matrix of the reflection camera.
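As a sketch of the maths, not Unity code, the following Python builds the mirror reflection matrix for a plane n·x + d = 0 with unit normal n, in row-vector convention to match the pos * ReflMat form used in the steps above, and mirrors a camera position across it. The plane and the camera position are illustrative values:

```python
# Mirror reflection matrix for the plane n.x + d = 0 (n a unit normal),
# in row-vector convention so that reflCam.Pos = mainCam.Pos * ReflMat.

def reflection_matrix(n, d):
    nx, ny, nz = n
    return [
        [1 - 2 * nx * nx, -2 * nx * ny, -2 * nx * nz, 0.0],
        [-2 * ny * nx, 1 - 2 * ny * ny, -2 * ny * nz, 0.0],
        [-2 * nz * nx, -2 * nz * ny, 1 - 2 * nz * nz, 0.0],
        [-2 * nx * d, -2 * ny * d, -2 * nz * d, 1.0],
    ]

def transform(pos, m):
    # Row-vector convention: (x, y, z, 1) * ReflMat, keeping xyz.
    x, y, z = pos
    v = (x, y, z, 1.0)
    return tuple(sum(v[i] * m[i][j] for i in range(4)) for j in range(3))

# Mirror plane y = 1 (normal (0, 1, 0), d = -1); main camera at (0, 3, -5).
refl_mat = reflection_matrix((0.0, 1.0, 0.0), -1.0)
print(transform((0.0, 3.0, -5.0), refl_mat))  # (0.0, -1.0, -5.0)
```

The mirrored position sits as far below the plane y = 1 as the main camera sits above it, which is exactly the behaviour the reflection camera needs.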

The projection matrix of the reflection camera must be the same as the projection matrix of the main camera.

The reflection camera renders reflections to a texture.

For good results, you must set up this texture properly before rendering:

  • Use mipmaps
  • Set the filtering mode to trilinear
  • Use multisampling

Ensure the texture size is proportional to the area of the reflective surface. The larger the texture is, the less pixelated the reflections are.

Note: Here is an example script for a mirrored camera.

Combine reflections shader implementation

This section shows you how to combine reflections in the shaders. You can combine static environment reflections with dynamic planar reflections in a shader. To combine reflections in the shaders, you must modify the shader code that we provided in Shader implementation.

The shader must incorporate the planar reflections that are rendered at runtime with the reflection camera. Therefore, the texture _ReflectionTex from the reflection camera is passed to the fragment shader as a uniform. The fragment shader then combines it with the static reflection result using the lerp() function.

In addition to the data related to the local correction, the vertex shader also calculates the screen coordinates of the vertex using the built-in function ComputeScreenPos(). It passes these coordinates to the fragment shader, as you can see in the following code:

vertexOutput vert(vertexInput input)
{
	vertexOutput output;
	output.tex = input.texcoord;
	// Transform vertex coordinates from local to world.
	float4 vertexWorld = mul(_Object2World, input.vertex);
	// Transform normal to world coordinates.
	float4 normalWorld = mul(float4(input.normal,0.0), _World2Object);
	// Final vertex output position.
	output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
	// ----------- Local correction ------------
	output.vertexInWorld = vertexWorld.xyz;
	output.viewDirInWorld = vertexWorld.xyz - _WorldSpaceCameraPos;
	output.normalInWorld = normalWorld.xyz;
	// ----------- Planar reflections ------------
	output.vertexInScreenCoords = ComputeScreenPos(output.pos);
	return output;
}

Because the planar reflections are rendered to a screen-space texture, the fragment shader needs the screen coordinates of the fragment to sample it. To provide them, pass the vertex screen coordinates to the fragment shader as a varying.
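Numerically, ComputeScreenPos remaps the clip-space position so that the later projective divide by w yields [0, 1] screen-space UVs. The following Python sketch shows that path under that assumption; it ignores the platform-dependent y flip that the real Unity function also handles:

```python
# Sketch of the screen-coordinate path: ComputeScreenPos keeps the
# perspective divide for the fragment stage. Given a clip-space position
# (x, y, z, w), it returns (x*0.5 + w*0.5, y*0.5 + w*0.5, z, w);
# dividing xy by w afterwards yields [0, 1] screen-space UVs.

def compute_screen_pos(clip):
    x, y, z, w = clip
    return (x * 0.5 + w * 0.5, y * 0.5 + w * 0.5, z, w)

def proj_coord(screen):
    # The projective divide done by tex2Dproj / UNITY_PROJ_COORD.
    x, y, _, w = screen
    return (x / w, y / w)

# A clip-space point at the centre of the screen (x = y = 0):
print(proj_coord(compute_screen_pos((0.0, 0.0, 1.0, 2.0))))  # (0.5, 0.5)
```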

In the fragment shader:

  • Apply the local correction to the reflection vector.
  • Retrieve the color of the environment reflections staticReflColor from the local cubemap.

The following code shows how to combine static environment reflections, using the local cubemap technique, with dynamic planar reflections that are rendered at runtime using the mirrored camera technique:

float4 frag(vertexOutput input) : COLOR
{
	float4 staticReflColor = float4(1, 1, 1, 1);
	
	// Find reflected vector in WS.
	float3 viewDirWS = normalize(input.viewDirInWorld);
	float3 normalWS = normalize(input.normalInWorld);
	float3 reflDirWS = reflect(viewDirWS, normalWS);

	// Working in World Coordinate System.
	float3 localPosWS = input.vertexInWorld;
	float3 intersectMaxPointPlanes = (_BBoxMax - localPosWS) / reflDirWS;
	float3 intersectMinPointPlanes = (_BBoxMin - localPosWS) / reflDirWS;

	// Look only for intersections in the forward direction of the ray.
	float3 largestParams = max(intersectMaxPointPlanes, intersectMinPointPlanes);

	// Smallest value of the ray parameters gives us the intersection.
	float distToIntersect = min(min(largestParams.x, largestParams.y), largestParams.z);

	// Find the position of the intersection point.
	float3 intersectPositionWS = localPosWS + reflDirWS * distToIntersect;

	// Get local corrected reflection vector.
	float3 localCorrReflDirWS = intersectPositionWS - _EnviCubeMapPos;

	// Lookup the environment reflection texture with the right vector.
	staticReflColor = texCUBE(_Cube, localCorrReflDirWS);

	// Lookup the planar runtime texture
	float4 dynReflColor = tex2Dproj(_ReflectionTex,
		UNITY_PROJ_COORD(input.vertexInScreenCoords));

	// Revert the blending with the background color of the reflection camera.
	dynReflColor.rgb /= (dynReflColor.a < 0.00392) ? 1.0 : dynReflColor.a;

	// Combine static environment reflections with dynamic planar reflections.
	float3 combinedRefl = lerp(staticReflColor.rgb, dynReflColor.rgb, dynReflColor.a);

	// Lookup the texture color.
	float4 texColor = tex2D(_MainTex, input.tex);
	return _AmbientColor + texColor * _ReflAmount * float4(combinedRefl, 1.0);
}

The code performs the following operations:

  • Extract the texture color dynReflColor from the planar run-time reflection texture _ReflectionTex.
  • Declare _ReflectionTex as a uniform in the shader. If you also declare _ReflectionTex in the Property Block, you can see how it looks at runtime. This can assist you with debugging while you are developing your game.
  • For the texture lookup, project the texture coordinates. You can use the Unity built-in function UNITY_PROJ_COORD(). This function divides the texture coordinates by the last component of the coordinate vector.
  • Use the lerp() function to combine the static environment reflections with the dynamic planar reflections. The final output color combines:
    • The reflection color
    • The texture color of the reflective surface
    • The ambient color component
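The local correction in the fragment shader can also be checked on the CPU. The following Python sketch mirrors the same ray-versus-bounding-box steps: intersect the reflected ray with the bounding box of the local environment, then build the corrected lookup vector from the cubemap position to the intersection point. The box, positions, and direction are illustrative values, and the direction components are assumed nonzero (GPU hardware handles the divisions by zero via infinities; this sketch does not):

```python
# Same steps as the shader: intersect the reflected ray with the AABB
# of the local environment, then point the lookup vector from the
# cubemap position to the intersection.

def local_corrected_dir(pos, refl_dir, bbox_min, bbox_max, cubemap_pos):
    # Ray parameters to the min and max planes on each axis.
    t_max = [(bmax - p) / d for p, d, bmax in zip(pos, refl_dir, bbox_max)]
    t_min = [(bmin - p) / d for p, d, bmin in zip(pos, refl_dir, bbox_min)]
    # Keep only intersections in the forward direction of the ray...
    largest = [max(a, b) for a, b in zip(t_max, t_min)]
    # ...and the nearest of those gives the exit point of the box.
    t = min(largest)
    intersect = [p + d * t for p, d in zip(pos, refl_dir)]
    return [i - c for i, c in zip(intersect, cubemap_pos)]

# Cube from (-1,-1,-1) to (1,1,1), fragment on the floor, reflection
# heading up and diagonally towards a corner of the box.
v = local_corrected_dir(pos=(0.0, -1.0, 0.0),
                        refl_dir=(1.0, 1.0, 1.0),
                        bbox_min=(-1.0, -1.0, -1.0),
                        bbox_max=(1.0, 1.0, 1.0),
                        cubemap_pos=(0.0, 0.0, 0.0))
print(v)  # [1.0, 0.0, 1.0]
```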

Combine reflections from a distant environment

This section shows you how to combine reflections from a distant environment. When you render reflections from static and dynamic objects, you might also have to consider reflections from a distant environment. An example of a distant reflection is the sky, visible through a window in your local environment.

Using the preceding example, you must combine three different types of reflections:

  • Reflections from the static environment using the local cubemap technique.
  • Planar reflections from dynamic objects using the mirrored camera technique.
  • Reflections from the skybox using the standard cubemap technique. The reflection vector does not require a correction before fetching the texture from the cubemap.

Ensure that the skybox is only visible through the windows. To do this, render the transparency of the scene in the alpha channel when you are baking the static cubemap for reflections. Assign a value of one to opaque geometry and a value of zero where there is no geometry, or the geometry is fully transparent. For example, render the pixels that correspond to the windows with zero in the alpha channel.

In the shader code, pass the skybox cubemap texture to the shaders as a uniform. The skybox cubemap texture is called _Skybox in the following code.

To incorporate the reflections from a skybox, use the reflection vector reflDirWS to fetch the texel from the skybox cubemap.

Note: Do not apply a local correction.

In the fragment shader code that we show in Combine reflections shader implementation, find the following comment:

// Lookup the planar runtime texture

Insert the following lines immediately before the preceding comment:

float4 skyboxReflColor = texCUBE(_Skybox, reflDirWS);
staticReflColor.rgb = lerp(skyboxReflColor.rgb, staticReflColor.rgb, staticReflColor.a);

This code combines the static reflections with reflections from the skybox.
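Putting the three reflection types together, the blending reduces to two lerp() steps: skybox versus static reflections, driven by the baked alpha channel, then that result versus the dynamic planar reflections, driven by the reflection camera's alpha. A Python sketch with illustrative RGBA values, not taken from a real scene:

```python
def lerp(a, b, t):
    # Component-wise linear interpolation, as in the shader lerp().
    return tuple(x + (y - x) * t for x, y in zip(a, b))

# Illustrative RGBA colours.
skybox_refl = (0.2, 0.4, 0.9, 1.0)   # sky seen through the window
static_refl = (0.5, 0.5, 0.5, 0.0)   # baked alpha 0 -> window pixel
dyn_refl    = (0.8, 0.1, 0.1, 0.0)   # alpha 0 -> no dynamic object here

# Step 1: skybox vs. static, driven by the baked alpha channel.
rgb = lerp(skybox_refl[:3], static_refl[:3], static_refl[3])
# Step 2: result vs. dynamic planar reflections, driven by their alpha.
rgb = lerp(rgb, dyn_refl[:3], dyn_refl[3])
print(rgb)  # (0.2, 0.4, 0.9): only the skybox is visible at this pixel
```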

The following image shows the result of combining different types of reflections:
