Archive for 3D Surfaces

Additional points about SSS

Since I opened the discussion about subsurface scattering (SSS) with my Light and SSS and SSS – Why Should I Care? posts, I’ve received some good feedback / additional information. I wanted to capture those here.

Other Uses

First, the point has been made that although we think of SSS as adding realism to surfaces which don’t reflect 100% of the light that strikes them, the effect of SSS can be used for other purposes. It can add some depth to the surfaces for toon style rendering, and can even completely change the look of an object. For some examples, see the following product pages at the DAZ 3D store.

Note: I don’t get any commission if you choose to buy any of these products. 🙂 I’m actually referencing them because they have example images that show the effects.

DAZ Studio – SSS Shader

We’ve had a couple of good discussions about the Subsurface Shader Base that is available for free for DAZ Studio. These discussions have largely been about how the shader works. It was actually one of these discussions which spawned my initial blog posts. I wanted to capture a couple of important points here.

Pre or Post?

The first point asked for clarification about how the selection of either Pre or Post processing of the SSS effect changes the resulting calculations. Age of Armour (Will) was kind enough to provide us with some information in this thread on the DAZ 3D forums.

The choice of Pre or Post application of the SSS effect has to do with how the surface values are calculated. For the Pre option, the calculation is:

((Diffuse Map * Diffuse Color * Diffuse Strength) * Lighting)
+ (Subsurface Calculation * Lighting)

This basically means that the Diffuse surface color is calculated, then the SSS effect is added to the result.

When choosing the Post option for the SSS effect, the calculation looks significantly different.

((Subsurface Calculation * Lighting) * (Diffuse Map * Diffuse Color))
+ ((Diffuse Map * Diffuse Color * Diffuse Strength) * Lighting)

In this case, there are two calculations that use the Diffuse surface settings. In the first part, the SSS effect is multiplied by the diffuse color. Note that the diffuse strength is not factored in at this point; it simply creates a version of the diffuse color which is tinted by the subsurface effect. The second part of the equation is a standard diffuse surface calculation. The two diffuse colors are then added together to arrive at the final color for the surface.
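To make the difference between the two modes concrete, here is a small Python sketch of both calculations. The function and parameter names are my own, not the shader’s actual code; colors are (r, g, b) tuples in the 0.0–1.0 range.

```python
def mul(a, b):
    """Per-channel multiply of two colors."""
    return tuple(x * y for x, y in zip(a, b))

def scale(a, s):
    """Scale a color by a single strength value."""
    return tuple(x * s for x in a)

def add(a, b):
    """Per-channel add of two colors."""
    return tuple(x + y for x, y in zip(a, b))

def sss_pre(diffuse_map, diffuse_color, diffuse_strength, lighting, sss):
    # ((Diffuse Map * Diffuse Color * Diffuse Strength) * Lighting)
    #   + (Subsurface Calculation * Lighting)
    diffuse = scale(mul(diffuse_map, diffuse_color), diffuse_strength)
    return add(mul(diffuse, lighting), mul(sss, lighting))

def sss_post(diffuse_map, diffuse_color, diffuse_strength, lighting, sss):
    # ((Subsurface Calculation * Lighting) * (Diffuse Map * Diffuse Color))
    #   + ((Diffuse Map * Diffuse Color * Diffuse Strength) * Lighting)
    tinted = mul(mul(sss, lighting), mul(diffuse_map, diffuse_color))
    diffuse = scale(mul(diffuse_map, diffuse_color), diffuse_strength)
    return add(tinted, mul(diffuse, lighting))

white = (1.0, 1.0, 1.0)
red = (1.0, 0.0, 0.0)
warm_sss = (0.2, 0.1, 0.05)
print(sss_pre(white, red, 1.0, white, warm_sss))   # scatter added on top unchanged
print(sss_post(white, red, 1.0, white, warm_sss))  # scatter tinted by red first
```

Notice that in Post mode the scattered light is filtered by the diffuse color before being added, so with a pure red diffuse color the green and blue channels get no scattered light at all; in Pre mode the scatter is added on top unchanged.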

The Origins of SSS

The ideas and concepts around subsurface scattering for the purpose of computer graphics were first described in a paper titled “A Practical Model for Subsurface Light Transport”, presented at the ACM SIGGRAPH conference by Henrik Wann Jensen, Stephen R. Marschner, Marc Levoy, and Pat Hanrahan. A warning for those who seek to understand SSS at that level: this is NOT trivial mathematics by any stretch. I cannot be held responsible for any damage to your brain from trying to read the paper.

SSS – Why Should I Care?

Markus Ba (one of the members of our DAZ Studio Artists group on Facebook) raised a question following the posting of my SSS and Light tutorial. “This is interesting, but why should I care about this?” It’s a valid question, and one that I’ll try to address here. But first, several caveats!

You Might Not Care!

I can’t tell you for certain that you should care about subsurface scattering. Depending on the visual style you are shooting for, the content you are using, etc., adding SSS effects to your surface shaders may not help your final image at all.

However, for accurate representation of surfaces other than hard plastic or metal, subsurface scattering is an important part of how the material interacts with light. Standard surface shaders using only diffuse, specular and ambient surface values ignore an important part of how real world materials work.

As I mentioned in the above referenced article, the primary reason for using subsurface scattering is to acknowledge that some light which strikes a surface is transmitted through the surface and exits at some other point on the surface. This scattered transmission of light is most closely associated with human skin, however many other surfaces do this as well. Examples include cloth, soft plastics / rubber, milk, clay, etc.

Cue the Lights

Before I talk about how SSS affects your surfaces (and therefore your final images), I want to mention that much of SSS is highly dependent on the lighting in your scene. Your lights do not necessarily have to be complicated, but very simple lights (e.g. a single distant light) may not provide enough light at the proper angles to get the most out of your SSS enabled shaders.

Texture Dependencies

One of the struggles in figuring out whether your image will benefit from SSS is how dependent the results can be on the texture maps you have to work with. For the most realistic skin rendering using SSS, you should have the following texture maps.

  • Diffuse Map – showing what we think of as the visible skin surface (see note below)
  • Specular Map – skin is not universally shiny, a good specular map which acknowledges the differences makes a big difference
  • Subsurface Map – your skin does not have a constant color for its subsurface; ideally the creator of the skin you’re using understands this and has prepared a map. VERY complicated skin shaders go to the level of mapping the veins and arteries in your skin.
  • Subsurface Strength – Even if the color is constant, the shader should understand that the strength of the scattering is also not constant across your entire body.

How Diffuse Is It?

One problem that I’ve seen with many skins that we use in Poser and DAZ Studio is that they are based on photos of actual skin. “Why is that a problem?” you ask. Because the camera is recording the final result of the light interacting with the model’s skin. This includes the effect of light scattering in the subsurface.

So, if you add SSS to a skin which has already captured the SSS effect in the real world, you’re going to end up with skin that looks too orange/red. This is why you often see shader presets for skins multiply the texture by a light blue color. This (roughly) removes the SSS from the captured texture, with the expectation that the remaining calculations will add it back in correctly for your purposes.
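As a rough illustration of that blue-multiply trick (the tint values below are made up for the example, not a recommended setting):

```python
def detint(pixel, tint=(0.75, 0.9, 1.0)):
    """Multiply a captured skin color (0-255 RGB) by a pale blue to
    suppress the red that real-world scattering already baked in.
    The tint values here are illustrative, not a recommendation."""
    return tuple(round(c * t) for c, t in zip(pixel, tint))

# The red channel is reduced the most, blue is left alone:
print(detint((220, 170, 150)))  # (165, 153, 150)
```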

The best diffuse map would be one where the original texture was captured with a very flat light. It should also use just enough light to get the image without adding a lot of strong subsurface scattering to the image that the camera recorded.

Given that you don’t really have a choice about how the original texture was captured, the second-best option is to modify the texture in an image editing tool (e.g. Photoshop, GIMP, etc.) and remove some of the red at that level. I can’t really recommend specific filters since so much is going to depend on the image you’re starting with, the tools available in your editor, etc.

You Haven’t Answered Me!

Ok, now that I’m a page and a half into this description, it is probably time to address the original question of why you should care.

Usually the first place you will see the SSS effect is in the translucence of certain parts of the body. The most common areas are around the ears or the fingers; however, it can be seen anywhere that light shines at an angle where it would transmit through the surface toward the camera.

The effect that it has is typically a soft translucent glow on the surface. Below I show a couple of simple images showing how SSS adds to the surface of Victoria’s head.

{Images to be inserted}

While SSS is most often associated with skin, it also appears on many other soft surfaces where light is partially absorbed and partially scattered (transported) through the surface. Surfaces like cloth, clay, rubber, etc. also have an SSS quality. Whether an SSS enabled shader improves the image for objects with materials like these will end up being a matter of taste.

And, even then, there may be some cases where you decide that the additional level of realism for the surface is not worth the added rendering time that it takes.

Oh, you say I forgot to mention that part? Well, when you consider the extra calculations required to determine light absorption, scattering, translucence, fresnel effects, etc. the rendering time for an image where SSS is used extensively can be significantly higher than without.

Shader Tuning

One thing that I can’t really address here is how tuning the values of your SSS enabled shader will affect your final results. As I mentioned at the beginning, the results of an SSS enabled shader depend heavily on lighting and textures; even the distance from the camera to the subject has a big effect on the end result.

For DS users, there are several tutorial resources about how to get the best out of shaders like UberSurface, the Subsurface Base Shader, etc. Take a look at the links on my Other Tutorials page for information on where to find these sorts of tutorials.

Light and SSS Surfaces

This question came up on the DAZ 3D forums ( link ). Since there is considerable text to write, I figured I would post it here as well. Note that this discussion is about how light interacts with a surface that has subsurface scattering (SSS), not about how to get the best effects from an SSS enabled surface shader.

SSSay What?

First, briefly what subsurface scattering is all about.

One thing that is sometimes difficult to remember is that a surface in 3D graphics has no actual depth. It is a set of polygons which have length and width, but the depth is effectively zero. So our surface shaders that define the characteristics of the surface often have to fake the fact that in the real world, not everything that happens with light and surfaces happens on the very top layer of the material. This is especially true for surfaces like your skin.

When light strikes the surface of your skin, it does one of three things.

  1. It reflects – Most of the light just bounces off the outer layer of your skin and reflects into the rest of the world. This is exactly like every other surface.
  2. It is absorbed – Some of the light passes through that outer layer of skin and is absorbed into the layers beneath never to be seen again.
  3. It scatters and comes back out – Some of the light bounces around in the layers of your skin and eventually exits the skin again. This light can be seen. The easiest way to see this is when you press a small flashlight or laser pointer on your skin surface and the surrounding area “glows” with a reddish light.

Technically, unless they are a perfect mirror, all surfaces reflect and absorb light. That is the simple effect that we simulate by having the diffuse layer in our shader. Those settings are basically saying “When white light hits this part of the surface, this is the part of the spectrum which is reflected back into the rest of the environment.” The rest (by extrapolation) must have been absorbed by the surface.
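That reflect-or-absorb split is simple to sketch in Python, with colors normalized to the 0.0–1.0 range (the naming is my own, not any particular shader’s code):

```python
def reflected(light, diffuse):
    """The diffuse color gives the fraction of each band reflected back."""
    return tuple(l * d for l, d in zip(light, diffuse))

def absorbed(light, diffuse):
    """Whatever was not reflected must have been absorbed."""
    return tuple(l - r for l, r in zip(light, reflected(light, diffuse)))

white = (1.0, 1.0, 1.0)
skin = (0.9, 0.7, 0.6)
print(reflected(white, skin))  # (0.9, 0.7, 0.6)
print(absorbed(white, skin))   # roughly (0.1, 0.3, 0.4) — the rest
```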


So then what the SSS enabled shader needs to account for that isn’t already in the calculation is the scattering of the light within the surface and (eventually) the re-transmission of that light back into the rest of the world. While it could be possible to actually simulate the bouncing of the light within your skin, calculating the point where that light exits the skin again, and casting new rays of light, most shaders take a more simplistic view.

The biggest assumption they make is that the point where they are calculating the surface values is very similar to the points close by. So, rather than calculating the effect of the light bouncing, one can assume that the light hitting the sampled point is the same as the light hitting its neighbors; therefore, some light from somewhere nearby will have been scattered and will be exiting the surface at our sampling point.

The perfectionists in us might cringe at this broad assumption, but when you consider the very tiny distances that are usually involved in this calculation, it isn’t as bad as you might think. We can also help out sometimes by fine tuning parameters in the rendering engine like pixel sampling or shader sampling levels.

SSShow Me?

Some of you are probably visual learners; so I’ve created a couple of simple diagrams to show what I mean.

SSStandard Surfaces

First, a diagram of light reflecting from a normal 3D surface. Note that in this case I’m assuming a white light source with a white diffuse surface setting; so all light that hits the surface is reflected back from it.

Standard 3D surface reflecting light

SSS Surfaces

When we add subsurface scattering, we need to account for at least the scattering aspect, and if we’re doing it well, the absorption factor is figured in too.

Light Interacting with a 3D Surface with Subsurface Scattering

Notice that I included the second light ray that is assumed to exist that is adding the scattered light to the reflected light, giving us a result that is somewhat “warmer” than the pure white light that was provided.

SSSerious Skin

Some SSS enabled shaders can be further tweaked with additional settings. For instance, there is typically a setting for the strength of the scattering effect. Ideally this setting should allow you to provide a grayscale map which adjusts the strength of the scattering at various locations on the surface. Others will allow you to control which parts of the spectrum are absorbed and/or scattered by providing color controls for those settings.

Note: I have seen articles in both artistic and scientific oriented 3D journals which go so far as to simulate multiple portions of both the epidermis and dermis layers of the skin. That is hardcore!

SSScatter Pre or Post?

One challenge that can sometimes arise for SSS enabled shaders is how to combine the effect with the diffuse color values which define the color of the top layer of the surface. The decision typically comes down to this: should the light that enters the surface to be scattered be filtered by the diffuse color of the surface, or should it be considered white, with the scattering controls of the shader determining how the light exiting the skin looks?

In the subsurface shader included in DAZ Studio, you can choose whether to apply the diffuse layer to the surface prior (Pre) to the subsurface scattering or after (Post) the scattering process. Will (aka Age of Armour), the author of that shader, has an excellent video tutorial ( Subsurface Shader Basics ) available which describes in much greater detail how to get better results from using that shader.

SSSigning Off

I hope this helped a little with understanding what the subsurface scattering effect is all about and what the shaders that support it are trying to simulate for you. And I hope you don’t hate me for starting all my sections sounding like a sssilly sssnake. 🙂

Anisotropic vs Isotropic Surfaces

Note: This post may become part of a larger discussion at some point in regards to more advanced 3D surfaces. At this time, I just wanted to get some thoughts recorded.

Sounds Fancy!

In some cases, I’m convinced that people throw out the word “anisotropy” (or “Anisotropy Specularity”) because it sounds big and complicated. While the shader code to accomplish it is somewhat more complex than the standard 3D surface, the explanation of what it means is actually pretty simple.

Anisotropic surfaces are surfaces which look different based on the angle you are viewing them from. A couple of real-world examples would be brushed metal and suede leather. If you look at a piece of suede in a room where there is a distinct light source (window, lamp, etc.) and spin it slowly around, the sheen of the material changes. You can most easily see this if you first brush half of the patch of suede in one direction and the other half in the opposite direction.

In the interest of completeness, isotropic surfaces look the same no matter what angle you view them at. In that same room, if you have a smooth plastic plate, turning it around won’t change the look of the surface or how light reflects from it.

Anisotropy and You

In 3D graphics, anisotropy is most commonly used with specular reflections ( if that term is unfamiliar to you, see my discussion of Diffuse, Specular, and Ambient surface settings ). Shaders (aka materials) which have an anisotropic specular model allow you to set different values based on the relationship between the camera, the surface, and the light sources. So you might have a surface which has a Glossiness value of 30% in one direction, but 90% if the light is reflecting in a different direction.

You could also have a shader which allows for variations in the diffuse surface values. For instance the special car paints that you see on show cars (or sometimes on the street) where the car “changes color” as it passes by.

It Isn’t Broken

One thing to be aware of, though. These settings may not work on all objects. The reason is that most shaders rely on the UV Mapping that was done for the object. In a simple case, the shader determines if the light’s reflection is closer to the orientation of the U axis or the V axis, and makes choices about which settings to use based on that result.

If you’re wondering why that matters, consider a sword blade. The blade is modeled using many polygons which define the length, width, and thickness of the blade. When the model creator makes the object, they apply a UV Mapping to it. During that mapping, they decide whether the U axis refers to the width of the blade or the length of the blade.* This all happens long before you’re ever setting up your scene, and (without re-mapping the blade yourself) there isn’t anything you can do about it. Let’s say they chose to have the U axis extend across the blade and the V axis extend along its length.

You apply a shader which is written to use the “Specular 1” values when light reflects along the U axis, but chooses the “Specular 2” values when the light is reflecting closer to the V axis. You set the settings such that Specular 1 will create stronger highlights, but more spread out along the surface, while Specular 2 creates smaller, more constrained highlights that aren’t as strong. Rather than getting interesting long highlights when the blade is viewed along its length, you’ll get the stronger highlights when the blade is viewed across its width.
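A toy version of that U-or-V decision might look like the following. This is entirely my own sketch; real anisotropic shaders blend between the two settings continuously rather than picking one.

```python
def pick_specular(align_u, align_v, spec_u, spec_v):
    """align_u / align_v: how strongly the reflection direction lines up
    with the surface's U and V tangents (e.g. absolute dot products).
    Returns whichever (strength, glossiness) pair the shader should use."""
    return spec_u if abs(align_u) >= abs(align_v) else spec_v

# Specular 1: strong but spread out; Specular 2: tight but weaker.
spec1 = (1.0, 0.3)   # (strength, glossiness)
spec2 = (0.5, 0.8)
print(pick_specular(0.9, 0.2, spec1, spec2))  # mostly U-aligned -> spec1
print(pick_specular(0.1, 0.7, spec1, spec2))  # mostly V-aligned -> spec2
```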

In keeping with proper Internet protocol, it is now time to go to the site for the vendor who created the item or the tool that you’re using to render and rant about how their implementation of Anisotropy is obviously broken! For good measure, be sure to link to the Renderman reference shaders or (even better) link to Gregory Ward’s “Measuring and Modeling Anisotropic Reflection”!

What’s that? You’re not into creating Internet drama? “Big deal, just switch the settings,” you reply.

That’s fine, that will work in this case. But the decision about how the U and V axes of the surface map apply to the model doesn’t have to conform to anything about the model’s shape. The original creator of the model may have wanted to paint a dragon spiraling around the blade’s length. To make it easier for themselves, they twisted the UV map 30 degrees around the object. Now there is no correlation between the length of the blade and either the U or V axis.

Heading to the Tropics

If this makes your head hurt, don’t worry. In most cases you don’t need to be that concerned about whether a surface should be Anisotropic or Isotropic. And when the difference might matter, the creator of the object may have considered that fact when they made it. However, I thought it might help to understand what the term means and why it can (sometimes) be hard to achieve the effect you were hoping for using it.

* Technically they could choose the thickness of the blade for the U or V axis as well, but that would be silly; so let’s not go there.

3D Surfaces and Light (Examples)

Finishing my series on 3D Surfaces:

I’ve claimed to be “almost done” with this post for a while. It is probably high time to be “actually done” with it. 🙂

I realized that the discussion in words, while worthwhile, may not be as helpful to some people as actually seeing some images and the effects in action. So, I created a simple scene and did some test renders. In the scene, I have a plane for the floor and another for the back wall, three cubes on the left, and three other primitives on the right. A single distant light using raytraced shadows provides the lighting. For each of the images, if you want to see the details of the surface settings, click on the image to see the “media page”, which has a full list of all the relevant channels in the description.

Diffuse Only

I start with only the Diffuse channel providing any surface values. Specular and Ambient strengths are set to zero.

Diffuse surface color only

Not very interesting, right? No highlights, fairly flat colors.

Adding Ambience

Next, I added some ambient values. Now, in this first set, I did something “odd” on purpose. I set the ambient setting to be opposite of the diffuse setting. For instance, on the Green (0,255,0) cube, I set the ambient color to Magenta (255,0,255). Look what happens, even with Ambient at 100%

100% Colored Ambient Setting

Nothing, right? Can’t see a difference between that and the first one? That’s because the Ambient is being multiplied by the diffuse color on a per-channel (Red, Green, Blue) basis. So, since 255 × 0 = 0, you get no effect. This is an extreme case of why you have to think about how your ambient and diffuse colors are going to blend, or you may not get the effect you were hoping for! Let’s try again, but this time with a white color for ambient (on the cubes only)…

Cube ambient changed to white @ 100%

Well, at least you can see the effect now. 🙂 But obviously 100% isn’t a good setting. It totally removes all shadow details, etc. Remember back to Part 1 where I said that the Ambient channel was intended to simulate indirect light on the surface? This is basically saying to DAZ Studio / 3Delight “You have a pure white, full-strength floodlight shining in all directions!” Not the goal we had in mind, eh? Let’s back that ambient channel down to a more normal fill light level, say 30%…

30% White Ambient Surface

A little better. It gives some light effect where the direct light from my distant light isn’t shining, and it doesn’t try to change anything about the colors of my cubes.
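If you want to verify the per-channel math that made the magenta ambient disappear, here’s a quick Python sketch using 0–255 values as in the text (the function name is mine, not anything from DAZ Studio):

```python
def ambient_result(ambient, diffuse):
    """Per-channel multiply of two 0-255 colors, scaled back to 0-255."""
    return tuple(round(a * d / 255) for a, d in zip(ambient, diffuse))

# Magenta ambient on the green cube: each channel has a zero somewhere,
# so the contribution is pure black, exactly as in the render above.
print(ambient_result((255, 0, 255), (0, 255, 0)))    # (0, 0, 0)
# A white ambient passes the diffuse color through unchanged.
print(ambient_result((255, 255, 255), (0, 255, 0)))  # (0, 255, 0)
```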

You Look Spectacular!

There are two values in the specular channel that really work together to control highlights. The strength channel controls how intense the highlight is, while the glossiness (roughness in some other rendering engines) controls how small or spread out the highlight is across the surface. I started by cranking strength and glossiness to 100%…

100% Specular, 100% Glossiness

What’s that? You don’t see anything? Well, that’s because we told the rendering engine that there is ZERO margin for error on how close the camera has to be to the perfect angle between the light and surface in order to see the highlight. Basically, we made the highlight so small that it’s invisible. Some people will see this effect and think that glossiness is “broken”. It isn’t broken. You just made the surface so smooth that the highlight disappeared. Let’s back it down to 90%…

90% Glossiness

Well, now we can see something (at least on the curved objects on the right)… but not much. Even 90% is a pretty small highlight. Let’s see what happens at 60%…

60% Glossiness

Ah. Much better! We can really see that highlight on the objects on the right now. But wait, Karl … you forgot to change the cubes, didn’t you?

Nope. I didn’t. The cubes have the same specular settings as the curved objects. You don’t see any highlights because those wide flat surfaces are very consistent about their reflection of light. Since a distant light throws its light rays in parallel across the scene, there is no angle where you can see the highlight on the cubes. This illustrates part of the reason why there is no single “right” answer in regards to specular surface settings. If you want to see the cubes “shine”, we need to go even lower on the Glossiness; let’s try 30%…

30% Glossiness

Yay! The cubes have highlights! Well … if you can call them that. Basically they just look like something went wrong with the surface. And the curved surfaces on the right have a highlight that is so spread out, it is overwhelming the diffuse color. Probably not a setting that is very helpful, hmm?

Glossy Strength

So, I mentioned that both Specular Strength and Glossiness combine to control how the surface highlights look. In the next series of images, I keep the glossiness setting at 30%, but I vary the strength. I won’t talk about each image, but the captions show the setting that was used…

Glossiness 30%, Specular Strength: 75%

Glossiness 30%, Specular Strength: 50%

Glossiness 30%, Specular Strength: 25%

So, you can see that the spread of the highlight stays the same, but the intensity of the effect goes down (fades). For a final test with the white light, I set Diffuse to 100%, Specular to 25%, Glossiness to 30%, and Ambient to 10%…

Glossiness 30%, Specular Strength: 25%, Ambient 10%

If you compare that to the image at the top, I think you’ll agree that it has much more of an interesting surface look without changing anything at all with the lights.
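For those who like seeing the math, the interplay between Specular Strength and Glossiness above can be sketched as a toy falloff function. To be clear: the glossiness-to-exponent mapping here is something I invented for illustration; 3Delight’s actual specular model is different.

```python
def highlight(cos_angle, strength, glossiness):
    """cos_angle: how close the viewing angle is to the perfect
    reflection angle (1.0 = dead on). Higher glossiness gives a
    much tighter highlight; strength scales its intensity."""
    exponent = 2 ** (glossiness * 12)  # invented mapping, illustration only
    return strength * cos_angle ** exponent

# Slightly off the perfect angle, 100% glossiness makes the highlight
# vanish entirely, while 30% leaves it clearly visible:
print(highlight(0.95, 1.0, 1.0))   # effectively zero
print(highlight(0.95, 1.0, 0.3))   # roughly 0.5
print(highlight(0.95, 0.25, 0.3))  # same spread, a quarter the intensity
```

This matches what the renders show: lowering glossiness widens the highlight, while lowering strength fades it without changing its spread.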

Light Hearted

As I mentioned in previous parts of this series, the settings in your surfaces interact with the setting in your lights. All of the above used a distant light that was White (255,255,255). So the surfaces had a full spectrum of color to reflect. But what happens if I change the light through the secondary colors? In the following series, I change the light color to Magenta (255,0,255), Yellow (255,255,0), and Cyan (0,255,255)…

Magenta Lighting

Yellow Light

Cyan Light

Notice that as the color of the light removes the Green, Blue, and Red channels, the corresponding cubes turn black, and the curved primitives change to reflect only the part of the spectrum that is included in their surface. Now, you might be wondering “What if I really wanted a cyan light for this image?” Well, you still can, but you need to give the surfaces a little bit of red light to reflect. In the final image, I used a light Cyan (64,255,255) color for the light…

Light Cyan Light

That gives the surface a little bit of Red to reflect to the image, but overall the light still has the cyan quality you might have been looking for.

That’s a Wrap

I think this will do it for my basic surface series. Future tutorials I have in mind include…

  • Newbie Mistakes – I’ll show common mistakes that new 3D artists make so they can learn by my bad examples.
  • Reflection, Refraction, Transmission, and Transparency – How does light bounce off of and through objects in 3D?
  • Point, Spot and Distant Lights – Just the basics on what those lights are and what they can do

3D Surfaces and Light (Part 3)

Continuing my series on 3D surfaces:

In this installment, I’m going to talk about how the color settings and texture maps on the surface as well as colors in the lights affect the results.

Mapping Things Out

First, let’s talk a bit about how texture maps work. If you’ve been in the 3D world for long, you’ve probably heard the term “UV Mapping”. FYI, the “UV” isn’t about sunscreen. 🙂 It doesn’t stand for “ultraviolet”; it refers to the coordinates “u” and “v”, which are used to look up information from a texture map.

u and v are used as coordinates that map (or translate) from the 3 dimensional object space to a 2 dimensional space on the texture graphic. Let me see if I can make this make some sense visually. In the following diagram, the point on the front face of the cube is mapped to the (u,v) coordinates of (100,75). Those coordinates represent the portion of the texture map indicated by the arrows at (100,75).

Mapping from a point on the cube to a location on the texture.

Now, you might wonder why we need special coordinates for this. As 3D artists, we mostly work in the standard (x,y,z) coordinate space. Well, consider if the cube rotates as in the diagram below. In this case, the (x,y,z) coordinates of that point on the cube have changed. However, the (u,v) coordinates remain constant.

Showing that rotating the cube doesn't change the (u,v) coordinates.

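In code terms, a texture lookup is just indexing into the image with those fixed (u,v) coordinates, no matter where the object has moved in (x,y,z). Here’s a minimal nearest-pixel sketch of my own; real renderers filter between neighboring pixels rather than snapping to one.

```python
def sample(texture, u, v):
    """texture: rows of (r, g, b) pixels; u, v: normalized 0.0-1.0 coords.
    Nearest-pixel lookup, clamped so u or v of exactly 1.0 stays in range."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

texture = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]
print(sample(texture, 0.9, 0.1))  # top-right pixel: (0, 255, 0)
```

However the cube rotates, the same (u,v) pair keeps returning the same pixel.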
If you get deeply into 3D graphics (especially if you start trying to write your own shaders) you will discover that there is a plethora of coordinate spaces, each of which serves a purpose.

X Marks the Spot?

So, now that we have a very basic understanding of what a map is and how it is referenced, how are texture maps used in shaders? Well … using my favorite answer for such questions … it depends! 🙂 Remember that the real answer is that how things like texture maps are used depends entirely on how the shader that is applied to the surface uses them. However, that would be a cop out on my part; so let me talk about the most common ways that you’ll see texture maps used.

It’s a Gray Kind of Day

In many of the cases I’m about to talk about, I’ll discuss whether the setting uses a color or grayscale image. Rather than repeating how a grayscale image looks, I thought I’d do it once to begin with.

Grayscale images are images where the color has been removed and only the value remains. The simplest way to think of a grayscale image is that the red, green and blue values of the color are averaged to arrive at a gray value. If a particular setting in a shader is expecting a grayscale image and you provide it with a colored one instead, it will average those values for you. This can create some interesting results. Consider the following diagram…

Different colors can have the same gray scale value

Although the three colors are quite different, the average RGB value is 127. So in a grayscale image, they would all look the same. For this reason, I often suggest that if a 3D artist is going to add a grayscale map to their surface settings, they should take the time to use the image editor of their choice to remove the color and make sure it is providing the values that they really want to see.
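A quick check of that averaging behavior (note that many editors use a weighted luminance formula rather than a plain average, which is one more reason to do the conversion yourself and inspect the result):

```python
def to_gray(rgb):
    """Plain average of the three channels, as described above."""
    return round(sum(rgb) / 3)

# Quite different colors collapse to the same gray value:
for color in [(255, 126, 0), (0, 126, 255), (127, 127, 127)]:
    print(to_gray(color))  # 127 each time
```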

Adding Details

In my Displaced Bumps and Is This Normal? tutorials, I talked about various methods that we can add fine 3D details to our surfaces using bump, displacement, and normal mapping. In those cases, texture maps are used to indicate the magnitude of the changes to the surface. For bump and displacement, the map is treated as grayscale. Normal maps are a special case where they use colored maps, but the colors mean something. Dropping a texture graphic intended for the diffuse channel into a normal map will NOT give you the results you might have been hoping for. See my discussion of DAZ Studio Displacement in Carrara 8 for more information about normal maps.

Hiding Surfaces

In many cases, we can also apply a grayscale texture map to the opacity channel. Rather than saying that an object is 100 percent visible, or 50 percent visible, etc., we can use a texture map to change the opacity for specific parts of a surface. This is sometimes called transparency mapping. We see this most often in 3D hair content, but it can also be used in clothing materials to hide portions of a surface, for example to create a lace-like effect on a dress.

Strength Control

Most shaders will allow us to use a grayscale map in the strength channel. This gives us much finer control over the level of effect that a particular channel has on our surface. Basically, rather than telling the shader "Add 90% of the specular light to the surface", adding a map says "Look up the (u,v) location in the texture map and scale the level of effect by what you find there".

It is important to note that when we’re using a texture map in the strength channel, the percentage value does still have a role to play. If we’re using a texture map for specular strength, and the strength is set to 100%, then the grayscale image will change the effective level from 0% for black (0,0,0) to 100% for white (255,255,255). If we change the percentage to 75%, then the maximum value for a white portion of our grayscale map becomes 75%.
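As a sketch of that interaction (the function name and the 0..255 map range are my own illustration, not any particular shader's API):

```python
def effective_strength(strength_pct, map_value):
    """Scale the channel's strength dial by the grayscale map value.
    strength_pct is the dial setting (0..100); map_value is a gray level (0..255)."""
    return strength_pct * (map_value / 255.0)

print(effective_strength(100, 255))  # white pixel at 100% strength -> 100.0
print(effective_strength(75, 255))   # white pixel at 75% strength  -> 75.0
print(effective_strength(75, 0))     # black pixel kills the effect -> 0.0
```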

Rose Tinted Glasses

So, those are the “other” purposes for texture maps, but this section of the tutorial is about colors, right? By far the most common purpose for texture maps is to use them in the diffuse channel to add color details to the surfaces. Without a texture map, we would be limited in what we could do for the color of the surface by what RGB values we could assign to it. Texture maps are what allow us to have human skin that isn’t uniformly a single color.

In this case, the texture map is telling the shader, “When deciding what color to make this surface, look up the (u,v) point in the texture map and choose the color from there.”

However, just like the strength channel discussion, the color setting you use in your shader values also comes into play. To understand how, we need to look at how rendering engines actually think about color.

For artists, we tend to think of colors in terms of Red, Green and Blue values. We're probably used to expressing those values in terms of 0 to 255 for each color. However, the rendering engine doesn't see them that way. To the engine, colors are RGB values where each value ranges from 0 to 1. So, while we might define Cyan as (0,255,255), the rendering engine sees that color as (0,1,1).

So what does that mean for how the color setting and texture map interact? Well, basically, the rendering engine multiplies the two together on a channel by channel basis. So, if you have the color value set to white (255,255,255), and the texture map lookup returns cyan (0,255,255), the multiplication is pretty simple…

Red = (255/255) * (0/255) = 1 * 0 = 0
Green = (255/255) * (255/255) = 1 * 1 = 1
Blue = (255/255) * (255/255) = 1 * 1 = 1

So, you’ll get a cyan color for that part of the surface.

However, consider if you’ve set the color value to magenta (255,0,255). At the same point, the texture map lookup returns cyan (0,255,255), but the math is going to look very different…

Red = (255/255) * (0/255) = 1 * 0 = 0
Green = (0/255) * (255/255) = 0 * 1 = 0
Blue = (255/255) * (255/255) = 1 * 1 = 1

So now your surface at that location is going to look pure blue!
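That channel-by-channel multiplication is easy to sketch in Python using the engine's 0-to-1 color range:

```python
def multiply_colors(a, b):
    """Multiply two colors channel by channel (engine-style 0..1 values)."""
    return tuple(x * y for x, y in zip(a, b))

white   = (1.0, 1.0, 1.0)
magenta = (1.0, 0.0, 1.0)
cyan    = (0.0, 1.0, 1.0)

print(multiply_colors(white, cyan))    # (0.0, 1.0, 1.0), still cyan
print(multiply_colors(magenta, cyan))  # (0.0, 0.0, 1.0), pure blue
```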

By the Pale Moon Light

Let’s extend that color multiplication discussion in to lights. Do you know why a car looks blue? It is because the surface of that car is absorbing all of the light except for blue. Blue light is reflecting from the car and so that is what you see. The red and green portions of the spectrum are absorbed by the surface.

In most cases, we use lights that are white (or a close variation thereof) and so the effect that the light has on the surface isn’t that much of a factor. However, when we try to get fancier with our lights or we try to use them to create some sort of special effect by coloring them, we can end up with unintended consequences!

Let’s say that the surface calculations tell the rendering engine that the color of the surface is cyan (0,1,1). What that really means is that the surface will reflect 100% of the green and blue light that hits the surface. If our only light source is set to red (1,0,0), what do we get?

Red = 1 * 0 = 0
Green = 0 * 1 = 0
Blue = 0 * 1 = 0

We get black (0,0,0). Granted, most of the time our light and surface colors aren’t that neat and simple, but it does show why when you get too far outside the normal range of “white” lights, you can have unintended consequences.

1000 Points of Light

What if there are multiple lights in the scene? Well, the inputs from the lights are mixed together to get a final color for the surface. So, if we keep with our cyan (0,1,1) surface, and we have one yellow (1,1,0) and one magenta (1,0,1) light which happen, due to planning or circumstance, to be lighting the surface exactly equally, then we'll get a surface color like this…

Light 1 (1,1,0)

Red = 1 * 0 = 0
Green = 1 * 1 = 1
Blue = 0 * 1 = 0

Color 1 = (0,1,0)

Light 2 (1,0,1)

Red = 1 * 0 = 0
Green = 0 * 1 = 0
Blue = 1 * 1 = 1

Color 2 = (0,0,1)

Final Color

Red = 0 + 0 = 0
Green = 1 + 0 = 1
Blue = 0 + 1 = 1

Final Color = (0,1,1)

So, we’ll end up with the cyan color of the surface.

Wrapping Up

For strength type channels (bump, displacement, opacity, color strengths, etc.) applying a grayscale image to the strength channel allows us to vary the effect of that part of the shader across the surface. Applying texture maps to color settings will force the rendering engine to lookup a value from the texture map when determining the color of the surface at that point. And if we use anything other than white in our color settings on the surface and lights, we have to keep in mind that we’re multiplying colors together, which can mean that we’ll end up with changes to the visual effect of the surface. This can work to our advantage, however, if we plan for it.

In my final installment in this series, I'm going to fire up DAZ Studio with a pretty simple scene so that we can see how varying one value while keeping the rest constant changes the look of the objects. Maybe some visual examples will help where reading the text of this series wasn't clear enough.

3D Surfaces and Light (Part 2)

Continuing my series on 3D Surfaces:

In 3D Surfaces and Light (Part 1), I talked about the physics of what diffuse, specular, and ambient settings in a surface shader are trying to simulate. In this portion of my discussion on 3D Surfaces and Light, I’ll talk about how those settings / dials you see are actually used in the shader / rendering code.

It Depends!

Ok, first I have to issue a caveat. Since much of how a surface is defined in a 3D program is dependent on the way that the shader code is written, the actual math of how things work can vary widely. You don’t have to look any further than the difference in how Poser and DAZ Studio handle the ambient channel to see an example of this.

In the default surface shaders, Poser treats the calculation of the ambient contribution to the color of the surface independent of the diffuse and specular settings. In DAZ Studio, the default shader blends ambient and diffuse together. This means that although both programs can use the same definition of the surface settings, the results that each program creates can be significantly different.

I’ve seen some folks call this a problem between Poser and DAZ Studio. This is inaccurate. The difference is in the shader code for the default surfaces. It isn’t in the rendering engine itself. And, neither is “correct” or “wrong” in how they do it, they are simply different.

Shady shaders?

A brief primer on what a ‘shader’ is. In order to make general 3D rendering engines as flexible as possible, very little about how 3D objects and surfaces are handled is hard coded into the engine. Some rendering engines (especially real time engines such as for games) may break this rule in the interest of speed, but most general purpose rendering engines use shaders to define how an object looks in the final result.

Shaders are bits of code which tell the rendering engine things like “When light strikes this surface, this is how you should calculate the effect that it has on the image.” Most things in a 3D engine are actually defined as shaders. This includes surfaces and lights, even cameras.

Affecting the Effect

Ok, enough caveats and general thoughts, let's get to the meat of things. For this discussion, I had to pick a basic shader to use as my framework. I've chosen RenderMan's "plastic" shader. This is a standard reference which is often used as the basis for other, more advanced surfaces. For example, DAZ Studio's standard surface shader is an advanced version of this shader.

Warning: I’m about to get into some math and programming discussions, but I’ll do my best to make it easy to follow!

The code for Pixar’s reference “plastic” shader would look something like this…

Color =
Dcolor * (
(Astrength * Acolor) +
Diffuse(N, (1,1,1), Dstrength)
) +
Specular(N, -I, Scolor, Sstrength, Sroughness)

Where the following are true…

  • Dcolor = the color setting for the diffuse channel
  • Dstrength = the strength setting for the diffuse channel
  • Acolor = the color setting for the ambient channel
  • Astrength = the strength setting for the ambient channel
  • Scolor = the color setting for the specular channel
  • Sstrength = the strength setting for the specular channel
  • Sroughness = the roughness setting for the specular channel (if the shader uses the term “glossiness”, then roughness is usually 1 – glossiness)

To work it from the inside parts out…

Acontrib = (Astrength * Acolor) – multiply the Ambient strength by the Ambient color to the get the contribution that the Ambient channel is providing.

Color =
Dcolor * (
Acontrib +
Diffuse(N, (1,1,1), Dstrength)
) +
Specular(N, -I, Scolor, Sstrength, Sroughness)

Dwhite = Diffuse(N, (1,1,1), Dstrength) – Call the built-in Diffuse function to calculate a diffuse value at the current location based on a pure white surface and the provided Diffuse Strength value. This gets used to “wash out” the ambient setting (see the next step). If there is no ambient setting, this will also provide the shader with what is needed to calculate the strength of the Diffuse component in the surface later in the function.

Color =
Dcolor * (
Acontrib + Dwhite
) +
Specular(N, -I, Scolor, Sstrength, Sroughness)

DAcontrib = Acontrib + Dwhite – This basically “washes out” the ambient contribution by adding the ambient contribution to the white diffuse setting calculated above. This is why when you have anything above a zero in the Diffuse Strength setting, the ambient component seems to be lessened. If ambient strength had been set to zero, this factor would end up equaling the diffuse strength value.

Why Link Them?

This is where Poser and DAZ Studio diverge in how they calculate this. The example I’m using is the Pixar Renderman reference and is what the standard surface shader in DAZ Studio is based on. Poser does not combine ambient and diffuse in this way.

The question often arises, why does DAZ Studio link ambient and diffuse together and Poser doesn’t? Remember that in Part 1, I talked about how the ambient channel was an attempt to represent that in the real world, light bounces around much more than we can actually simulate in a rendering engine. So in this shader code, the programmer was trying to say that if no other light touches this part of the surface, the ambient setting should be used to represent this indirect light. However, if there is a light source on this part of the surface, that light should be stronger than the indirect lighting.

The ambient channel in Poser can also be used this way, however it ends up being up to the 3D artist to find the correct balance between ambient and diffuse lighting strengths. For this reason, in many cases the content creators for Poser use the ambient channel for special effects (like glowing patterns) rather than for the indirect lighting factor that it was designed for.

Again, neither implementation is “correct” or “wrong”, just different. And this difference is why you’ll see a change in how a surface looks in each program even with the same values in the channel settings. Back to the plastic shader breakdown…

Color =
Dcolor * DAcontrib +
Specular(N, -I, Scolor, Sstrength, Sroughness)

Dcontrib = Dcolor * DAcontrib – multiply the resulting Diffuse color by the washed out ambient contribution.

Color =
Dcontrib +
Specular(N, -I, Scolor, Sstrength, Sroughness)

Scontrib = Specular(N, -I, Scolor, Sstrength, Sroughness) – call the built-in function to calculate the Specular contribution based on the color, strength, and roughness settings.

Color = Dcontrib + Scontrib

Finally, add the diffuse contribution to the specular contribution to get the final color for the surface at this location.
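Putting the walkthrough together, here's a toy Python sketch of the plastic combination. This is not real shader code: Acontrib, Dwhite, and Scontrib are assumed to have already been produced by the renderer's built-in ambient, Diffuse, and Specular functions, and the sketch just combines them per the steps above.

```python
def plastic(d_color, a_contrib, d_white, s_contrib):
    """Color = Dcolor * (Acontrib + Dwhite) + Scontrib, per the steps above."""
    da_contrib = tuple(a + w for a, w in zip(a_contrib, d_white))
    d_contrib = tuple(dc * da for dc, da in zip(d_color, da_contrib))
    return tuple(d + s for d, s in zip(d_contrib, s_contrib))

# Hypothetical values: a red diffuse color, dim ambient, partial diffuse
# lighting, and no specular highlight at this point on the surface.
print(plastic((1.0, 0.0, 0.0),      # Dcolor
              (0.25, 0.25, 0.25),   # Acontrib = Astrength * Acolor
              (0.5, 0.5, 0.5),      # Dwhite from the Diffuse() call
              (0.0, 0.0, 0.0)))     # Scontrib -> (0.75, 0.0, 0.0)
```

Note how a nonzero Dwhite raises the whole DAcontrib term, which is exactly the "washing out" of the ambient contribution described above.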

Material Differences

As I mentioned at the beginning, this is an example of how the Renderman Plastic reference shader from Pixar works. Other surface shaders may use completely different math. For instance, consider the following code for the metal shader.

Color =
Scolor * (
Astrength * ambient() +
Sstrength * specular(N, V, Sroughness)
)

This code only uses ambient and specular, ignoring the diffuse settings completely. This would strengthen the specular effect, but only the ambient channel would add any other color.

Wrapped Up

I hope this made some sense to people, but if there are any questions, please feel free to ask them! In the final segment of this tutorial, I’ll talk in more detail about how texture maps and colors (both surface and light) combine to affect the look of your surface.

3D Surfaces and Light (Part 1)

One thing that I have seen confuse many new 3D artists is how Diffuse, Specular, and Ambient settings in their surfaces affect the outcome of their renders. Since this can be sort of a big topic, I’m planning to split this discussion into 4 parts.

What Are We Trying To Do?

One thing a 3D artist needs to keep in mind is that everything we’re doing in the rendering engine is a simulation of reality. As rendering engines have become more sophisticated, that simulation is getting better, but we’re still talking about an approximation of reality. So, for this reason, I wanted to start by describing the reality that we’re trying to simulate.

Light Bounces

I’m sure you’re aware that the reason that we can see anything is because light strikes the object and bounces (reflects) off of it. In a perfect mathematical / physics world, light interacting with a perfect reflector would look something like this…

A perfect light reflection around the normal of the surface.


Naturally, most of the surfaces in the real world are not perfect mirrors. In the real world, materials can do three things with light. They can absorb it, reflect it, or transmit it. Cloth, plastic, metal, glass: they all interact with light in different ways. For the purposes of this discussion, we're talking about reflection only.

Since we’re trying to simulate reality here, the rendering engine designers needed a way to model the various ways that materials interact with light. Since nobody (especially movie directors) likes to wait, they had to figure out very efficient ways to simulate the interaction of light with a surface.

Don’t Cut the Red Wire!

Sorry. Whenever I say “diffuse” out loud, I think of “defuse” and trying to disarm a bomb.

The first type of surface to light interaction to look at is the diffuse reflections. A dictionary definition of the word “diffuse” that fits our usage is: “Widely spread or scattered; not concentrated.” Or from Wikipedia “Diffuse reflection is the reflection of light from a surface such that an incident ray is reflected at many angles rather than at just one angle…”

In a picture, diffuse reflections would look something like this…

Light scattered in all directions simulating diffuse reflection.


Now, you might be thinking “Wait a second! Light doesn’t really reflect all willy nilly like that!” And (silly word choices aside) you’d be correct. Light doesn’t reflect that way. What it does is to reflect within the tiny imperfections of a surface in such a way that it appears to reflect like this. Technically, we could try to model those imperfections, define them mathematically, and wait a few days for even simple images to render. Or we can accept that we’re trying to approximate reality, not create a light wave physics simulator.

What this means for the purposes of rendering is that it (almost) doesn't matter what angle your camera is at relative to the surface: if there is a diffuse factor in the surface shader, it will affect the resulting image. All three of the cameras in the illustration above would be able to "see" the diffuse reflection of the light source.

The one exception would be if the surface is between the light source and the camera. In that case, the diffuse reflection from that light source could not be seen by the camera, and it won't have an effect on the final image.

Spectacular Speculator

Diffuse lighting isn’t very interesting all by itself. The other real life factor that surface shaders need to account for is highlights. While most of the light from a real life surface is similar to diffuse lighting, without the highlights of specular lighting, the model is going to look flat and unrealistic.

If we think of diffuse reflection as showing the part of a real world material which doesn’t match the perfect reflections in theoretical physics, then specular reflections are the part of the material which gets closer to the ideal world that the eggheads live in.

In 3D surface terms, a very important factor in the specular lighting model is the roughness (sometimes called glossiness) setting of the material. Basically, this setting controls how close to the perfect reflection angle the camera has to be in order for the specular highlight to be evident. So in the following illustration, Camera 1 can see the specular reflection, but Camera 2 cannot.

Specular reflection showing the viewing angle based on the roughness setting.


How Rough Is It?

You’re probably wondering how roughness affects the angles. Basically, it there are two effects. First, the more rough a material is, the wider the range of angles at which the camera can see the highlight. However, increasing the roughness also causes the specular highlighting to be spread out over the surface. So while the highlight will be more visible, it will also be less pronounced. A lower roughness will keep the viewing angle more narrow, but will result in a more significant highlight effect.

By the way, if your surface shader / rendering engine uses the term "glossiness", it is probably inverting the meaning. So, a higher setting is going to result in a smaller, more intense highlight effect, while a lower setting will spread the highlight across the surface.

Note that there is one "special case" to this guideline. At VERY low roughness (0-2%) or high glossiness (98-100%), the visible angle range can actually get so small that the highlight appears to disappear. This can lead one to think that there is "something wrong" with how your rendering engine does specular light. There isn't; it's just that extreme cases like that don't reflect the real world.
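One common way to model this is a Phong-style falloff, raising the cosine of the viewing offset to a power derived from roughness. The real formula varies from shader to shader, so treat this as an illustration rather than any particular engine's actual math:

```python
import math

def highlight(cos_angle, roughness):
    """Phong-style falloff: intensity = cos(angle) ** (1 / roughness).
    cos_angle is the cosine of the angle between the perfect reflection
    direction and the view direction; roughness is in (0, 1]."""
    return max(0.0, cos_angle) ** (1.0 / roughness)

# The same 25-degree viewing offset at two roughness values:
c = math.cos(math.radians(25))
print(highlight(c, 0.5))    # rough surface: the highlight is still visible
print(highlight(c, 0.05))   # glossy surface: the highlight has nearly vanished
```

With glossiness-style settings you would first convert with roughness = 1 - glossiness, which is why a very high glossiness can shrink the highlight until it seems to vanish entirely.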

It has a certain … ambience

In 3D rendering, both diffuse and specular reflections are based on the lights that are in the scene. However, in the real world, light keeps on bouncing until the last of it is absorbed somewhere. If it didn't, then when you were outside with the sun shining, your shadow would be completely black. You wouldn't be able to see anything that the sun wasn't directly shining on.

In this modern era, 3D artists may (or may not) have a feature called “Global Illumination” (GI for short) to call on to represent this bouncy behavior of light. But GI can be an expensive computation and not all rendering engines support it. Before we had such nifty features, the rendering engine wizards came up with ambient lighting.

Basically, ambient lighting says "not all light in the scene is defined by the lights". Think of it as a global factor which can be applied to surfaces so that a light doesn't specifically have to be shining on them. In most surface shaders, ambient lighting will be washed out or overridden if either diffuse or specular light is detected on that part of the surface. But not always. So be careful about using high values in the ambient strength channel, or you might get some unwanted results.

Wrapping Up

In the second part of this discussion, I’ll talk in more detail about how the values you set for the Diffuse, Specular, and Ambient channels are actually used in the surface shaders. For now, I hope that at least explaining why there are three different channels might help clear up some confusion for you.

Is this Normal?

As an addendum to my Displaced Bumps post, I realized I didn’t mention Normal Maps.

Normal Maps

There is one primary difference (at render time) between normal maps and the bump or displacement maps I mentioned in the previous post. While those two technologies ask the rendering engine to re-calculate the angle of reflection based on whether a particular point is raised or lowered from the surface, normal maps actually tell the rendering engine what changes to make. They provide an offset vector (in two dimensions) that the rendering engine applies when calculating the light interacting with the surface.

So, while a bump/displacement map might tell the engine that the point on the mesh is raised or lowered by 0.5 centimeters, a normal map tells the engine to deflect the normal at that point by 3 degrees in one dimension and -2 degrees in the other (for example).
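For the curious, the usual encoding convention (the one behind the characteristic light blue of tangent-space normal maps) packs that offset vector into the RGB channels of the image, remapped from the -1..1 range to 0..255. A sketch of decoding a single texel:

```python
def decode_normal(rgb):
    """Remap each 0..255 color channel back to the -1..1 vector range."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)

# The typical "flat" normal-map pixel decodes to (roughly) no deflection.
print(decode_normal((128, 128, 255)))  # approximately (0.0, 0.0, 1.0)
```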

Is it better?

I hear all the time about how normal maps are "better" than bump or displacement maps. And they certainly do render more quickly. After all, they tell the rendering engine directly what changes to make to the normal angle rather than asking the engine to calculate them itself based on whether the point was raised or lowered. This is why normal maps are all the rage in realtime rendering engines (like games).

However, normal maps don’t change the surface of the object. So you can’t get shadows from a normal map. You also wouldn’t see any profile changes in a closeup picture.

Also, the “proper” way to create a normal map is to use two versions of a 3D object. One is the low-res version that the map will be applied to. The other is a high-res version of the model with all of the extra details added. You can create similar maps by converting displacement or bump maps into normal maps (see my Using DS Displacement in Carrara post for more info), but the results from this sort of converted map are almost certainly not going to be superior to the bump or displacement map you started with, unless the rendering engine you're using happens to handle normal maps better than the alternatives.

Displaced Bumps

Resurrecting an old tutorial I wrote for DAZ Studio about the difference between a bump map and a displacement map in 3D graphics rendering, with specific information about how DAZ Studio treats them.


This tutorial is intended to provide a description of two types of map files (bump and displacement) that are typically used to modify surfaces on your 3D objects. It describes each of the types and uses a simple scene created in DAZ|Studio to illustrate the effects.

Map Files and Settings

Both maps are created using 256-level grayscale images. Colors are ignored in these maps; it's the value (grayscale), not the hue (color), that makes the difference.

Poser and DAZ|Studio treat bump and displacement maps differently. In DAZ|Studio, medium gray (128, 128, 128) is considered neutral. Darker shades simulate negative changes, lighter shades represent positive changes. However in Poser, black (0,0,0) is considered neutral and all changes to the surface of the object are in the positive direction only.

This difference is important to note when trying to re-create settings in DAZ|Studio which mimic settings in Poser. Also, when adapting material settings which are designed for Poser, the negative displacement setting in DAZ|Studio should be set to 0 (zero).
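If you wanted to convert a Poser-style map image itself rather than adjust the limits, the remapping is a simple sketch like this (a hypothetical helper of my own; in practice, setting the negative limit to 0 as described above is usually easier):

```python
def poser_to_ds(value):
    """Compress a Poser gray level (0..255, black neutral, positive changes
    only) into the DAZ|Studio convention (128 neutral, lighter = positive)."""
    return 128 + value // 2

print(poser_to_ds(0))    # Poser neutral black -> 128, the DS neutral gray
print(poser_to_ds(255))  # full positive stays at the top of the range -> 255
```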


The basic scene without any maps assigned to the sphere is shown above. I’m using a plain sphere colored a dark cyan (64,128,128) with three walls each colored a dark shade of a primary color. There are three lights … a key light above and to the right of the camera, a fill light below and to the left and a rim light behind the sphere at a very high angle.

Bumping and Displacing

To understand what bump and displacement maps do, a bit of physics may be necessary. When light strikes a reflective object, the light bounces away from the object at the same angle that it struck it. So if you shoot a light at a flat surface at a 45 degree angle, the light reflecting from that surface will also be at a 45 degree angle.

( We’ll talk in another tutorial about the difference between diffuse, glossy and specular reflections, for now hang with me on this. )

When a bump map is applied to a surface, it tricks the rendering engine into thinking that the surface at that point is tilted. Darker settings indicate the surface angles down, lighter settings make it appear to angle up. It’s important to note that the surface isn’t actually changed, the bump map just fools the rendering engine into thinking it has been.

On the other hand, a displacement map does change the surface. It physically alters the geometry of the surface during rendering by raising vertices on the surface and creating a slightly different mesh.

The picture below illustrates what happens when light strikes a surface with or without one of these maps applied to it.


Bump Maps

Bump maps change the way that light interacts with the surface of an object. A bump map simulates bumps in the surface by creating highlights and shadows. However, a bump map doesn't actually change the surface of the object; it just simulates the effect. Applying the texture as a bump map (by placing it in the Bump Strength channel on the sphere) creates the effect shown below.


For comparison's sake, I also created two series of pictures showing just one quarter of the sphere.

The first series shows the same bump map applied at 50%, 100%, 150% and 200% strength (keeping the negative and positive settings at the default of -0.1 and 0.1 centimeters).


The second series keeps the bump map at 100%, but sets the bump values at 0.1, 0.2, 0.5, and 1.0 centimeters.


Displacement Maps

Similar to bump maps, displacement maps change how light affects the surface of an object; however, displacement maps actually move (or displace) the surface of the object. The effect is usually stronger than a bump map with the same settings and has the added effect of changing the silhouette of the object (note the changes in the edges of the sphere and the shadow in the example image).

Displacement maps have the added value of being able to cast shadows. A bump map lightens and darkens a surface by changing the angle, but a displacement map moves it in such a way that it casts a shadow across itself and other objects in the scene.

The scene below illustrates the same texture applied as a displacement map using the default 100% +/-0.1 cm settings.


As with the bump map, I created two series of images. The first shows the effects of keeping the displacement values the same and adjusting the strength of the map.


The second series shows the effect of keeping the strength consistent (100%) and varying the positive and negative limits.


Choosing a Surface

So if the displacement has advantages like realistic shadows, why would you use bump maps?

There are a couple of reasons you might choose to bump a surface instead of displace it. First is that your render engine might not support displacements. Using a displacement map requires that the rendering engine is able to change the geometry of a surface during the rendering process. Poser only added support for displacement maps in version 5. Bryce still isn’t able to support displacement maps for this reason.

The second reason is that displacement is a bit harder on the processing time than bumping. Since the engine has to create new geometry as it processes your scene, displacement maps can create performance issues (albeit minor ones for most systems). However even a minor performance hit is significant in some areas (like games) so many systems prefer bump mapping over displacements.

Sometimes, both maps (using different images) may be the best way to go. For instance, if I were modeling a basketball, I might use a bump map to simulate the pebbly texture of the leather, but use displacement to create the indentations for the lines and logos.

Below I’ve included a side-by-side comparison of a close-up of the sphere. The one on the left uses bump mapping, the one on the right uses displacements. Both apply the map at 100% strength with -1.0 and +1.0 as their limit values.



Applying these types of texturing maps to your models can allow you to easily create an entirely new look for an object without having to resort to modeling a new object yourself. A simple object (like a basic t-shirt or dress) can be transformed into many different looks using these techniques. This extends the usefulness of the original item and allows you, the artist, to create exactly the look and feel you want to have for the objects in your scenes.