Archive for Renderman Shaders

3D Surfaces and Light (Part 2)

Continuing my series on 3D Surfaces:

In 3D Surfaces and Light (Part 1), I talked about the physics that the diffuse, specular, and ambient settings in a surface shader are trying to simulate. In this portion of my discussion on 3D Surfaces and Light, I’ll talk about how the settings and dials you see are actually used in the shader and rendering code.

It Depends!

Ok, first I have to issue a caveat. Since much of how a surface is defined in a 3D program is dependent on the way that the shader code is written, the actual math of how things work can vary widely. You don’t have to look any further than the difference in how Poser and DAZ Studio handle the ambient channel to see an example of this.

In the default surface shaders, Poser treats the calculation of the ambient contribution to the color of the surface independent of the diffuse and specular settings. In DAZ Studio, the default shader blends ambient and diffuse together. This means that although both programs can use the same definition of the surface settings, the results that each program creates can be significantly different.

I’ve seen some folks call this a problem between Poser and DAZ Studio. This is inaccurate. The difference lies in the shader code for the default surfaces, not in the rendering engine itself. And neither approach is “correct” or “wrong”; they are simply different.

Shady shaders?

A brief primer on what a ‘shader’ is. In order to make general 3D rendering engines as flexible as possible, very little about how 3D objects and surfaces are handled is hard coded into the engine. Some rendering engines (especially real time engines such as for games) may break this rule in the interest of speed, but most general purpose rendering engines use shaders to define how an object looks in the final result.

Shaders are bits of code which tell the rendering engine things like “When light strikes this surface, this is how you should calculate the effect that it has on the image.” Most things in a 3D engine are actually defined as shaders. This includes surfaces and lights, even cameras.

Affecting the Effect

Ok, enough caveats and general thoughts, let’s get to the meat of things. For this discussion, I had to pick a basic shader to use as my framework. I’ve chosen Renderman’s “plastic” shader. This is a standard reference which is often used as the basis for other, more advanced surfaces. For example, DAZ Studio’s standard surface shader is an advanced version of this shader.

Warning: I’m about to get into some math and programming discussions, but I’ll do my best to make it easy to follow!

The code for Pixar’s reference “plastic” shader would look something like this…

Color =
(
Dcolor *
(
(
Astrength * Acolor
) +
Diffuse(N, (1,1,1), Dstrength)
)
) +
Specular(N, -I, Scolor, Sstrength, Sroughness)

Where the following are true…

  • Dcolor = the color setting for the diffuse channel
  • Dstrength = the strength setting for the diffuse channel
  • Acolor = the color setting for the ambient channel
  • Astrength = the strength setting for the ambient channel
  • Scolor = the color setting for the specular channel
  • Sstrength = the strength setting for the specular channel
  • Sroughness = the roughness setting for the specular channel (if the shader uses the term “glossiness”, then roughness is usually 1 – glossiness)
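Translated out of pseudocode, the same formula can be sketched in plain Python. This is an illustrative sketch, not Pixar’s implementation: the `diffuse_term` and `specular_term` arguments stand in for the built-in `Diffuse()` and `Specular()` calls (which depend on the lights in the scene), and colors are simple RGB tuples.

```python
# Sketch of the plastic formula in Python. The diffuse_term and
# specular_term arguments stand in for the RSL built-ins Diffuse()
# and Specular(), which depend on the scene's lights.

def plastic(Dcolor, Acolor, Astrength, diffuse_term, specular_term):
    # Ambient contribution: strength * color
    Acontrib = tuple(Astrength * c for c in Acolor)
    # "Washed out" ambient: ambient plus the white diffuse value
    DAcontrib = tuple(a + d for a, d in zip(Acontrib, diffuse_term))
    # Diffuse contribution: diffuse color times the combined term
    Dcontrib = tuple(c * da for c, da in zip(Dcolor, DAcontrib))
    # Final color: diffuse contribution plus specular contribution
    return tuple(d + s for d, s in zip(Dcontrib, specular_term))
```

For example, with a mid-grey diffuse color, a faint white ambient (strength 0.1), a white-diffuse value of 0.8, and no specular, each channel works out to 0.5 * (0.1 + 0.8) = 0.45.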

To work it from the inside parts out…

Acontrib = (Astrength * Acolor) – multiply the Ambient strength by the Ambient color to get the contribution that the Ambient channel is providing.

Color =
(
Dcolor *
(
Acontrib +
Diffuse(N, (1,1,1), Dstrength)
)
) +
Specular(N, -I, Scolor, Sstrength, Sroughness)

Dwhite = Diffuse(N, (1,1,1), Dstrength) – Call the built-in Diffuse function to calculate a diffuse value at the current location based on a pure white surface and the provided Diffuse Strength value. This gets used to “wash out” the ambient setting (see the next step). If there is no ambient setting, this will also provide the shader with what is needed to calculate the strength of the Diffuse component in the surface later in the function.
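Under the hood, `Diffuse()` loops over the lights in the scene and accumulates the familiar N·L Lambert term. As a minimal single-light stand-in (my own simplification, not the actual Pixar implementation, and assuming unit-length vectors):

```python
def lambert_diffuse(N, L, color, strength):
    """Single-light stand-in for RSL's Diffuse():
    strength * color * max(0, N . L), assuming N (the surface
    normal) and L (the direction to the light) are unit vectors."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(N, L)))
    return tuple(strength * c * n_dot_l for c in color)
```

A light shining straight down the normal gives the full strength; a light behind the surface contributes nothing, which is why shadowed areas leave only the ambient term.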

Color =
(
Dcolor *
(
Acontrib + Dwhite
)
) +
Specular(N, -I, Scolor, Sstrength, Sroughness)

DAcontrib = Acontrib + Dwhite – This basically “washes out” the ambient contribution by adding the ambient contribution to the white diffuse value calculated above. This is why when you have anything above a zero in the Diffuse Strength setting, the ambient component seems to be lessened. If ambient strength had been set to zero, this factor would simply equal the white diffuse value.
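A quick numeric illustration of the wash-out, using made-up channel settings:

```python
# Made-up settings: a dim orange ambient glow.
Astrength, Acolor = 0.2, (1.0, 0.5, 0.0)
Acontrib = tuple(Astrength * c for c in Acolor)   # (0.2, 0.1, 0.0)

# In shadow the white diffuse value is black, so the ambient
# contribution comes through at full strength:
in_shadow = tuple(a + d for a, d in zip(Acontrib, (0.0, 0.0, 0.0)))

# Under direct light the white diffuse value is large, so the same
# ambient term becomes a small fraction of the total:
in_light = tuple(a + d for a, d in zip(Acontrib, (0.7, 0.7, 0.7)))
```

In shadow the orange glow dominates; in direct light it is only a small addition on top of the 0.7 diffuse term, which is why ambient effects seem to fade as Diffuse Strength goes up.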

Why Link Them?

This is where Poser and DAZ Studio diverge in how they calculate this. The example I’m using is the Pixar Renderman reference and is what the standard surface shader in DAZ Studio is based on. Poser does not combine ambient and diffuse in this way.

The question often arises: why does DAZ Studio link ambient and diffuse together when Poser doesn’t? Remember that in Part 1, I talked about how the ambient channel is an attempt to represent the fact that, in the real world, light bounces around much more than we can practically simulate in a rendering engine. So in this shader code, the programmer is saying that if no other light touches this part of the surface, the ambient setting should stand in for that indirect light. However, if a light source does hit this part of the surface, that light should be stronger than the indirect lighting.

The ambient channel in Poser can also be used this way; however, it is up to the 3D artist to find the correct balance between ambient and diffuse lighting strengths. For this reason, content creators for Poser often use the ambient channel for special effects (like glowing patterns) rather than for the indirect lighting factor it was designed to represent.

Again, neither implementation is “correct” or “wrong”, just different. And this difference is why you’ll see a change in how a surface looks in each program even with the same values in the channel settings. Back to the plastic shader breakdown…

Color =
(
Dcolor * DAcontrib
) +
Specular(N, -I, Scolor, Sstrength, Sroughness)

Dcontrib = Dcolor * DAcontrib – multiply the resulting Diffuse color by the washed out ambient contribution.

Color =
Dcontrib +
Specular(N, -I, Scolor, Sstrength, Sroughness)

Scontrib = Specular(N, -I, Scolor, Sstrength, Sroughness) – call the built-in function to calculate the Specular contribution based on the color, strength, and roughness settings.

Color = Dcontrib + Scontrib

Finally, add the diffuse contribution to the specular contribution to get the final color for the surface at this location.
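Putting made-up numbers through the whole chain (again with stand-in values for the `Diffuse()` and `Specular()` results) ties the steps together:

```python
# Made-up settings run through the plastic chain step by step.
Dcolor = (0.6, 0.3, 0.2)            # warm diffuse color
Acolor, Astrength = (1.0, 1.0, 1.0), 0.1
Dwhite = (0.5, 0.5, 0.5)            # stand-in for Diffuse(N, (1,1,1), Dstrength)
Scontrib = (0.2, 0.2, 0.2)          # stand-in for the Specular() result

Acontrib = tuple(Astrength * c for c in Acolor)               # (0.1, 0.1, 0.1)
DAcontrib = tuple(a + d for a, d in zip(Acontrib, Dwhite))    # (0.6, 0.6, 0.6)
Dcontrib = tuple(c * da for c, da in zip(Dcolor, DAcontrib))  # (0.36, 0.18, 0.12)
Color = tuple(d + s for d, s in zip(Dcontrib, Scontrib))      # (0.56, 0.38, 0.32)
```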

Material Differences

As I mentioned at the beginning, this is an example of how the Renderman Plastic reference shader from Pixar works. Other surface shaders may use completely different math. For instance, consider the following code for the reference “metal” shader.

Color =
Scolor *
(
Astrength * ambient() +
Sstrength * specular(N,V,Sroughness)
)

This code only uses ambient and specular, ignoring the diffuse settings completely. This would strengthen the specular effect, but only the ambient channel would add any other color.
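A Python sketch of that metal formula, with `ambient_term` and `specular_term` standing in for the built-in `ambient()` and `specular()` calls:

```python
def metal(Scolor, Astrength, ambient_term, Sstrength, specular_term):
    # Scale the ambient and specular light by their strengths, add
    # them, then tint the whole result with the specular color.
    inner = tuple(Astrength * a + Sstrength * s
                  for a, s in zip(ambient_term, specular_term))
    return tuple(c * i for c, i in zip(Scolor, inner))
```

With a gold-ish specular color (made-up values below), both the ambient light and the highlight come out tinted, which gives the metallic look; the diffuse settings never enter the calculation.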

Wrapped Up

I hope this made some sense to people, but if there are any questions, please feel free to ask them! In the final segment of this tutorial, I’ll talk in more detail about how texture maps and colors (both surface and light) combine to affect the look of your surface.


Project: Renderman Shaders

I’ve been working with the Shader Builder in DAZ Studio lately. To learn more about how it works and using it to write shader code, I’ve been converting some of the reference shaders from the Renderman Companion site and Fundza into Shader Builder Networks. Here are some of the results so far…

Gooch

MK Gooch shader from Renderman Companion.

Screen

Uses the Screen shader from Renderman Companion.

I’m struggling a bit with the Screen shader. You can see on Aiko that the bands of the screen aren’t aligning properly, so I tried using the transform function to change the coordinate space used for determining the gridlines. Here are the results using each transform…

S & T coordinates translated to Camera space.

S & T coordinates translated to Object space.

S & T coordinates translated to Shader space.

S & T coordinates translated to World space.

S & T coordinates translated to Screen space.

S & T coordinates translated to Raster space.

S & T coordinates translated to Normalized Device Coordinates (NDC) space.


It seems to me that Object space works best for removing the seams, but it still isn’t perfect, so I implemented the “Show ST” shader, which shows the S & T coordinates for the models.

ShowST

You can see from the render here that Aiko 4 has some seams in her UV maps that may be hard to get rid of.

Uses the Show ST shader from Renderman Companion.