Additional points about SSS

Since I opened the discussion about subsurface scattering (SSS) with my "Light and SSS" and "SSS – Why Should I Care?" posts, I’ve received some good feedback and additional information. I wanted to capture those points here.

Other Uses

First, the point has been made that although we think of SSS as adding realism to surfaces which don’t reflect 100% of the light that strikes them, the effect of SSS can be used for other purposes. It can add some depth to the surfaces for toon style rendering, and can even completely change the look of an object. For some examples, see the following product pages at the DAZ 3D store.

Note: I don’t get any commission if you choose to buy any of these products. 🙂 I’m actually referencing them because they have example images that show the effects.

DAZ Studio – SSS Shader

We’ve had a couple of good discussions about the Subsurface Shader Base that is available for free for DAZ Studio. These discussions have largely been about how the shader works. It was actually one of these discussions which spawned my initial blog posts. I wanted to capture a couple of important points here.

Pre or Post?

The first point asked for clarification about how the selection of either Pre or Post processing of the SSS effect changes the resulting calculations. Age of Armour (Will) was kind enough to provide us with some information in this thread on the DAZ 3D forums.

The choice of Pre or Post application of the SSS effect has to do with how the surface values are calculated. For the Pre option, the calculation is:

((Diffuse Map * Diffuse Color * Diffuse Strength) * Lighting)
+ (Subsurface Calculation * Lighting)

This basically means that the Diffuse surface color is calculated, then the SSS effect is added to the result.

When choosing the Post option for the SSS effect, the calculation looks significantly different.

((Subsurface Calculation * Lighting) * (Diffuse Map * Diffuse Color))
+ ((Diffuse Map * Diffuse Color * Diffuse Strength) * Lighting)

In this case, there are two calculations that use the Diffuse surface settings. In the first part, the SSS effect is multiplied by the diffuse color. Note that the diffuse strength is not factored in at this point; it simply creates a version of the diffuse color which is tinted by the subsurface effect. The second part of the equation is a standard diffuse surface calculation. The two results are then added together to arrive at the final color for the surface.
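To make the order of operations concrete, here is a minimal sketch of the two blending modes in Python. The function names and the RGB-tuple representation are my own illustration, not the shader's actual code, and `sss` simply stands in for the result of the subsurface calculation.

```python
# Sketch of the Pre vs. Post combination order described above.
# Colors are normalized RGB tuples; "sss" stands in for the
# subsurface calculation result. Illustrative only.

def mul(a, b):
    """Per-channel multiply of two RGB colors."""
    return tuple(x * y for x, y in zip(a, b))

def add(a, b):
    """Per-channel add of two RGB colors."""
    return tuple(x + y for x, y in zip(a, b))

def scale(c, s):
    """Scale an RGB color by a scalar."""
    return tuple(x * s for x in c)

def sss_pre(diffuse_map, diffuse_color, strength, lighting, sss):
    # ((Diffuse Map * Diffuse Color * Diffuse Strength) * Lighting)
    # + (Subsurface Calculation * Lighting)
    base = scale(mul(diffuse_map, diffuse_color), strength)
    return add(mul(base, lighting), mul(sss, lighting))

def sss_post(diffuse_map, diffuse_color, strength, lighting, sss):
    # ((Subsurface Calculation * Lighting) * (Diffuse Map * Diffuse Color))
    # + ((Diffuse Map * Diffuse Color * Diffuse Strength) * Lighting)
    tinted = mul(mul(sss, lighting), mul(diffuse_map, diffuse_color))
    base = scale(mul(diffuse_map, diffuse_color), strength)
    return add(tinted, mul(base, lighting))
```

Note how Post multiplies the scattering term by the diffuse color (but not the diffuse strength) before adding the standard diffuse term, exactly as described above.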

The Origins of SSS

The ideas and concepts around subsurface scattering for the purpose of computer graphics were first described in a paper titled “A Practical Model for Subsurface Light Transport” presented at the ACM SIGGRAPH conference by Henrik Wann Jensen, Stephen R. Marschner, Marc Levoy, and Pat Hanrahan. A warning for those who seek to understand SSS at that level: this is NOT trivial mathematics by any stretch. I cannot be held responsible for any damage to your brain from trying to read the paper.


SSS – Why Should I Care?

Markus Ba (one of the members of our DAZ Studio Artists group on Facebook) raised a question following the posting of my SSS and Light tutorial. “This is interesting, but why should I care about this?” It’s a valid question, and one that I’ll try to address here. But first, several caveats!

You Might Not Care!

I can’t tell you for certain that you should care about subsurface scattering. Depending on the visual style you are shooting for, the content you are using, and so on, adding SSS effects to your surface shaders may not help your final image at all.

However, for accurate representation of surfaces other than hard plastic or metal, subsurface scattering is an important part of how the material interacts with light. Standard surface shaders using only diffuse, specular and ambient surface values ignore an important part of how real world materials work.

As I mentioned in the above referenced article, the primary reason for using subsurface scattering is to acknowledge that some light which strikes a surface is transmitted through the surface and exits at some other point on the surface. This scattered transmission of light is most closely associated with human skin, however many other surfaces do this as well. Examples include cloth, soft plastics / rubber, milk, clay, etc.

Cue the Lights

Before I talk about how SSS affects your surfaces (and therefore your final images), I want to mention that much of SSS is highly dependent on the lighting in your scene. Your lights do not necessarily have to be complicated, but very simple lights (e.g. a single distant light) may not provide enough light at the proper angles to get the most out of your SSS enabled shaders.

Texture Dependencies

One of the struggles with figuring out whether your image will benefit from SSS is how dependent the results can be on the texture maps that you have to work with. For the most realistic skin rendering using SSS, you should have the following texture maps.

  • Diffuse Map – showing what we think of as the visible skin surface (see note below)
  • Specular Map – skin is not universally shiny, a good specular map which acknowledges the differences makes a big difference
  • Subsurface Map – your skin does not have a constant color for its subsurface; ideally the creator of the skin you’re using understands this and has prepared a map. VERY complicated skin shaders go to the level of mapping the veins and arteries in your skin.
  • Subsurface Strength – Even if the color is constant, the shader should understand that the strength of the scattering is also not constant across your entire body.

How Diffuse Is It?

One problem that I’ve seen with many skins that we use in Poser and DAZ Studio is that they are based on photos of actual skin. “Why is that a problem?” you ask. Because the camera is recording the final result of the light interacting with the model’s skin. This includes the effect of light scattering in the subsurface.

So, if you add SSS to a skin which has already captured the SSS effect in the real world, you’re going to end up with skin that looks too orange/red. This is why you often see shader presets for skins multiply the texture by a light blue color. This (roughly) removes the SSS from the captured texture, with the expectation that the remaining calculations will add it back in correctly for your purposes.
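The "multiply by light blue" trick can be sketched as a per-channel multiply. The exact tint value below is an assumption for illustration; real shader presets vary in how much red they pull out.

```python
# Rough sketch of tinting a captured texture toward blue to remove
# some of the red that the camera already recorded from real-world
# scattering. The tint color here is an assumed example value.

def remove_baked_sss(texel, tint=(0.85, 0.95, 1.0)):
    """Per-channel multiply of a texel (normalized RGB) by a light blue tint."""
    return tuple(c * t for c, t in zip(texel, tint))

# A warm skin tone loses proportionally more red than blue:
skin = (0.9, 0.7, 0.6)
cooled = remove_baked_sss(skin)  # red channel is reduced the most
```

The SSS calculation in the shader is then expected to add that warmth back in, under your scene's lighting rather than the photographer's.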

The best diffuse map would be one where the original texture was captured under very flat light, using just enough light to get the image without adding a lot of strong subsurface scattering to what the camera recorded.

Given that the artist doesn’t really have a choice of how the original texture is captured, the second best option is to modify the texture in an image editing tool (e.g. Photoshop, GIMP, etc.) and remove some of the red at that level. I can’t really recommend specific filters, since so much is going to depend on the image you’re starting with, the tools available in your editor, etc.

You Haven’t Answered Me!

Ok, now that I’m a page and a half into this description, it is probably time to address the original question of why you should care.

Usually the first example of where you will see the SSS effect is in the translucence that you see in certain parts of the body. The most common areas are around the ears or the fingers; however, it can be seen anywhere that light is shining at an angle where it would transmit through the surface toward the camera.

The effect that it has is typically a soft translucent glow on the surface. Below I show a couple of simple images showing how SSS adds to the surface of Victoria’s head.

{Images to be inserted}

While SSS is most often associated with skin, it is also an effect seen on many other soft surfaces where light is partially absorbed and partially scattered (transported) through the surface. Surfaces like cloth, clay, rubber, etc. also have an SSS quality. Whether using an SSS enabled shader for objects with these materials will improve the image will end up being a matter of taste.

And, even then, there may be some cases where you decide that the additional level of realism for the surface is not worth the added rendering time that it takes.

Oh, you say I forgot to mention that part? Well, when you consider the extra calculations required to determine light absorption, scattering, translucence, Fresnel effects, etc., the rendering time for an image where SSS is used extensively can be significantly higher than without.

Shader Tuning

One thing that I can’t really address here is how tuning the values of your SSS enabled shader will affect your final results. As I mentioned at the beginning, the results of an SSS enabled shader depend heavily on lighting and textures; even the distance between the camera and the subject has a big effect on the end result.

For DS users, there are several tutorial resources about how to get the best out of shaders like UberSurface, the Subsurface Base Shader, etc. Take a look at the links on my Other Tutorials page for information on where to find these sorts of tutorials.

Light and SSS Surfaces

This question came up on the DAZ 3D forums ( link ). Since there is considerable text to write, I figured I would post it here as well. Note that this discussion is about how light interacts with a surface that has subsurface scattering (SSS), not about how to get the best effects from an SSS enabled surface shader.

SSSay What?

First, briefly what subsurface scattering is all about.

One thing that is sometimes difficult to remember is that a surface in 3D graphics has no actual depth. It is a set of polygons which have length and width, but the depth is effectively zero. So our surface shaders that define the characteristics of the surface often have to fake the fact that in the real world, not everything that happens with light and surfaces happens on the very top layer of the material. This is especially true for surfaces like your skin.

When light strikes the surface of your skin, it does one of three things.

  1. It reflects – Most of the light just bounces off the outer layer of your skin and reflects into the rest of the world. This is exactly like every other surface.
  2. It is absorbed – Some of the light passes through that outer layer of skin and is absorbed into the layers beneath never to be seen again.
  3. It scatters and comes back out – Some of the light bounces around in the layers of your skin and eventually exits the skin again. This light can be seen. The easiest way to see this is when you press a small flashlight or laser pointer on your skin surface and the surrounding area “glows” with a reddish light.

Technically, unless they are a perfect mirror, all surfaces reflect and absorb light. That is the simple effect that we simulate by having the diffuse layer in our shader. Those settings are basically saying “When white light hits this part of the surface, this is the part of the spectrum which is reflected back into the rest of the environment.” The rest (by extrapolation) must have been absorbed by the surface.


So then what the SSS enabled shader needs to account for that isn’t already in the calculation is the scattering of the light within the surface and (eventually) the re-transmission of that light back into the rest of the world. While it could be possible to actually simulate the bouncing of the light within your skin, calculating the point where that light exits the skin again, and casting new rays of light, most shaders take a more simplistic view.

The biggest assumption that they make is that the point where they are calculating the surface values is very similar to the points close by. So, rather than calculating the effect of the light bouncing, one can assume that the light hitting the sample point is the same as the light hitting nearby points; therefore, some light from somewhere else can be assumed to have scattered and to be exiting the surface at our sampling point.

The perfectionists in us might cringe at this broad assumption, but when you consider the very tiny distances that are usually involved in this calculation, it isn’t as bad as you might think. We can also help out sometimes by fine tuning parameters in the rendering engine like pixel sampling or shader sampling levels.

SSShow Me?

Some of you are probably visual learners; so I’ve created a couple of simple diagrams to show what I mean.

SSStandard Surfaces

First, a diagram of light reflecting from a normal 3D surface. Note that in this case I’m assuming a white light source with a white diffuse surface setting; so all light that hits the surface is reflected back from it.

Standard 3D surface reflecting light

SSS Surfaces

When we add subsurface scattering, we need to account for at least the scattering aspect, and if we’re doing it well, the absorption factor is figured in too.

Light Interacting with a 3D Surface with Subsurface Scattering

Notice that I included the second light ray that is assumed to exist that is adding the scattered light to the reflected light, giving us a result that is somewhat “warmer” than the pure white light that was provided.

SSSerious Skin

Some SSS enabled shaders can be further tweaked with additional settings. For instance, there is typically a setting for the strength of the scattering effect. Ideally this setting should allow you to provide a grayscale map which adjusts the strength of the scattering at various locations on the surface. Others will allow you to control which parts of the spectrum are absorbed and/or scattered by providing color controls for those settings.

Note: I have seen articles in both artistic and scientific oriented 3D journals which go so far as to simulate multiple portions of both the epidermis and dermis layers of the skin. That is hardcore!

SSScatter Pre or Post?

One challenge that can sometimes arise for SSS enabled shaders is how to combine the effect with the diffuse color values which define the color of the top layer of the surface. The decision typically comes down to this: should the light that enters the surface to be scattered be filtered by the diffuse color of the surface, or should that light be considered white, with the scattering controls of the shader determining how the light exiting the skin should look?

In the subsurface shader included in DAZ Studio, you can choose whether to apply the diffuse layer to the surface prior (Pre) to the subsurface scattering or after (Post) the scattering process. Will (aka Age of Armour), the author of that shader, has an excellent video tutorial ( Subsurface Shader Basics ) available which describes in much greater detail how to get better results from using that shader.

SSSigning Off

I hope this helped a little with understanding what the subsurface scattering effect is all about and what the shaders that support it are trying to simulate for you. And I hope you don’t hate me for starting all my sections sounding like a sssilly sssnake. 🙂

Tutorial Links

Just a VERY quick post to note that I’ve added a page to the blog where I will track other 3D tutorials from around the Internet which I have found to be helpful.

Additional Tutorials

3D Lights – Shapes

I’m going to do a series on the technical bits of lights in 3D software. I plan to cover the following topics…

  • Light Shapes
  • Casting Shadows
  • Light Control and Colors
  • Advanced Topics

This series will be interspersed with other stuff; so no guarantees on the timeframe.

Point Lights

A point light casts light in all directions (like a sphere) from a single point in space. A bare light bulb or the light from a candle is a good real world example. Depending on the software you’re using, point lights may have limited use. For instance in DAZ Studio, the default point light casts a fairly weak light. It doesn’t reach very far at all. And because of the way that shadows work in 3Delight, it doesn’t do well with Deep Shadow Maps either.
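The "doesn't reach very far" behavior is what you would expect from a physically based point light, whose intensity falls off with the square of the distance. The sketch below uses a generic inverse-square model as an assumption; DAZ Studio's actual falloff settings may differ.

```python
# Generic inverse-square falloff for a point light. This is the
# standard physical model, not DAZ Studio's exact implementation.

def point_light_intensity(light_pos, surface_pos, power=1.0):
    """Light received at surface_pos drops with the square of distance."""
    d2 = sum((l - s) ** 2 for l, s in zip(light_pos, surface_pos))
    return power / d2 if d2 > 0 else float("inf")

# Doubling the distance quarters the received light:
near = point_light_intensity((0, 0, 0), (1, 0, 0))
far = point_light_intensity((0, 0, 0), (2, 0, 0))
```

This is why a point light that looks fine close up can seem surprisingly weak just a short distance away.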


Spotlights

Spotlights are kind of like flashlights or theater lights. Like point lights, they emit their light from a single point in 3D space; however, unlike point lights, that light has a specific direction. The light spreads from the point of origin along the direction that the light is pointing, in a cone shape.

In the basic spotlight for DAZ Studio, we can adjust the spread of that cone, allowing us to control how much of the 3D scene the light affects. In more advanced lights, you may also be able to control things like falloff (how far does the light reach) or apply gels and gobos to the light for special effects.

Spotlights are much more controllable and flexible than point lights. In most scene lighting (especially indoors), Spotlights are going to be your primary source of light.

Distant Lights

Distant lights simulate light cast from a very far source. The Sun and Moon are two such distant light sources which are typically simulated using distant lights. Unlike point and spot lights, distant lights do not have a point of origin in 3D space. The object that you see in the 3D viewport is to help you visualize the angle that the light is pointing. It does not represent where the light “starts”. The control for the distant light could be located underground, but as long as the angle of the light says that it shines on the objects in your scene, it will still light the scene.
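The "only the angle matters" behavior can be sketched with a simple Lambert-style term: a distant light's contribution depends on its direction and the surface normal, with no position anywhere in the math. This is an illustrative model, not DAZ Studio's exact shading code.

```python
# Sketch of why a distant light's position doesn't matter: only the
# light direction and the surface normal enter the calculation.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def distant_light(normal, light_dir):
    """Lambertian term for a distant light.
    light_dir points from the light toward the scene."""
    to_light = tuple(-c for c in light_dir)
    return max(0.0, dot(normal, to_light))

# An upward-facing surface lit by a light shining straight down
# receives full intensity, wherever the light's gizmo sits in the scene:
up = (0.0, 1.0, 0.0)
straight_down = (0.0, -1.0, 0.0)
full = distant_light(up, straight_down)
```

Move the light's control object anywhere you like; as long as `light_dir` is unchanged, the result is identical.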

In DAZ Studio, distant lights are typically used to simulate the sun, sky, or moon. They are also sometimes used by new artists because they are easier to manage: you only have to worry about the angle at which the light is shining. However, because they light everything in the scene that they can shine on, they are not nearly as flexible as spotlights. Also, because of a limitation in the implementation of Deep Shadow Maps, you may have some issues getting shadows to look exactly right using them.

Advanced “Shapes”

There are a couple of other lights that are worth mentioning here before we move on from shapes. These lights are different from the ones I talked about above in technical details of how they interact with the rendering engine, but since they do emit light into your scene, I wanted to mention them.

Area Lights

Technically, an “area light” (aka “mesh light”) isn’t a light in the same way that the others are. Instead an area light is a surface shader which emits light. Using such a shader allows you to make any object in your scene emit light from its surface. These sorts of lights can cast very pleasant consistent light across objects in your scene. In the real world, this is similar to the “umbrellas” and “light boxes” that photographers use. The drawback in 3D rendering is that typically they take longer to render as light rays are emitted from several locations on the object surface.

Ambient / Environment Lights

Ambient (or Environment) lights give the artist greater control over the ambient light in a scene. Most rendering engines have a built in ambient() function which returns a global value for ambient light in the scene. These sorts of lights (for example uberEnvironment which is provided in DAZ Studio) give the artist the ability to control how that light is calculated. They are very useful for simulating the indirect light that bounces around the real world.

Things to Come

My next topic in this series will be on casting shadows as this is typically an area that many artists struggle with getting to look just right.

Anisotropic vs Isotropic Surfaces

Note: This post may become part of a larger discussion at some point in regards to more advanced 3D surfaces. At this time, I just wanted to get some thoughts recorded.

Sounds Fancy!

In some cases, I’m convinced that people throw out the word “anisotropy” (or “anisotropic specularity”) because it sounds big and complicated. While the shader code to accomplish it is somewhat more complex than the standard 3D surface, the explanation of what it means is actually pretty simple.

Anisotropic surfaces are surfaces which look different based on the angle you are viewing them from. A couple of real-world examples would be brushed metal and suede leather. If you look at a piece of suede in a room where there is a distinct light source (window, lamp, etc.) and spin it slowly around, the sheen of the material changes. You can most easily see this if you first brush half of the patch of suede in one direction and the other half in the opposite direction.

In the interest of completeness, isotropic surfaces look the same no matter what angle you view them at. In that same room, if you have a smooth plastic plate, turning it around won’t change the look of the surface or how light reflects from it.

Anisotropy and You

In 3D graphics, anisotropy is most commonly used with specular reflections ( if that term is unfamiliar to you, see my discussion of Diffuse, Specular, and Ambient surface settings ). Shaders (aka materials) which have an anisotropic specular model allow you to set different values based on the relationship between the camera, the surface, and the light sources. So you might have a surface which has a Glossiness value of 30% in one direction, but 90% if the light is reflecting in a different direction.

You could also have a shader which allows for variations in the diffuse surface values. For instance the special car paints that you see on show cars (or sometimes on the street) where the car “changes color” as it passes by.

It Isn’t Broken

One thing to be aware of, though: these settings may not work on all objects. The reason is that most shaders rely on the UV mapping that was done for the object. In a simple case, the shader determines whether the light’s reflection is closer to the orientation of the U axis or the V axis, and chooses which settings to use based on that result.

If you’re wondering why that matters, consider a sword blade. The blade is modeled using many polygons which define the length, width, and thickness of the blade. When the model creator makes the object, they apply a UV mapping to it. During that mapping, they decide whether to have the U axis refer to the width of the blade or the length of the blade.* This all happens long before you’re ever setting up your scene, and (without re-mapping the blade yourself) there isn’t anything you can do about it. Let’s say they chose to extend the U axis across the blade’s width and the V axis along its length.

You apply a shader which is written to use the “Specular 1” values when light reflects along the U axis, but chooses the “Specular 2” values when the light is reflecting closer to the V axis. You set the values such that Specular 1 creates stronger highlights that are more spread out along the surface, while Specular 2 creates smaller, more constrained highlights that aren’t as strong. Rather than getting interesting long highlights when the blade is viewed along its length, you’ll get the stronger highlights when the blade is viewed across its width.
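The behavior described above can be caricatured in a few lines: the shader picks one set of specular settings or the other based on whether the reflection aligns more with the U axis or the V axis. The settings dicts and the simple axis comparison are illustrative assumptions, not a real anisotropic shader.

```python
# Toy version of a U/V-dependent specular choice. Real anisotropic
# shaders (e.g. Ward-style models) are far more involved; this only
# illustrates why the UV mapping decides which "direction" wins.

def pick_specular(reflection_uv, spec1, spec2):
    """reflection_uv: reflection direction projected into (U, V) space.
    Returns spec1 if the reflection aligns more with U, else spec2."""
    u, v = abs(reflection_uv[0]), abs(reflection_uv[1])
    return spec1 if u >= v else spec2

spec1 = {"strength": 0.9, "glossiness": 0.3}   # strong, spread out
spec2 = {"strength": 0.5, "glossiness": 0.8}   # weaker, tighter

# If the creator ran U across the blade instead of along it, the
# reflection along the blade's length aligns with V, so you get
# spec2 where you expected spec1:
chosen = pick_specular((0.1, 0.9), spec1, spec2)
```

Nothing in the shader is "broken" here; the axes simply don't mean what you assumed they meant.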

In keeping with proper Internet protocol, it is now time to go to the site for the vendor who created the item or the tool that you’re using to render and rant about how their implementation of Anisotropy is obviously broken! For good measure, be sure to link to the Renderman reference shaders or (even better) link to Gregory Ward’s “Measuring and Modeling Anisotropic Reflection“!

What’s that? You’re not into creating Internet drama? “Big deal, just switch the settings,” you reply.

That’s fine, that will work in this case. But the decision about how the U and V axes of the surface map apply to the model doesn’t have to conform to anything about the model. The original creator of the model may have wanted to paint a dragon spiraling around the blade’s length. To make it easier for themselves, they twisted the UV map 30 degrees around the object. Now there is no correlation between the length of the blade and either the U or V axis.

Heading to the Tropics

If this makes your head hurt, don’t worry. In most cases you don’t need to be that concerned about whether a surface should be Anisotropic or Isotropic. And when the difference might matter, the creator of the object may have considered that fact when they made it. However, I thought it might help to understand what the term means and why it can (sometimes) be hard to achieve the effect you were hoping for using it.

* Technically they could choose the thickness of the blade for the U or V axis as well, but that would be silly; so let’s not go there.

3D Surfaces and Light (Examples)

Finishing my series on 3D Surfaces:

I’ve claimed to be “almost done” with this post for a while. It is probably high time to be “actually done” with it. 🙂

I realized that the discussion in words, while worthwhile, may not be as helpful to some people as actually seeing some images and the effects in action. So, I created a simple scene and did some test renders. In the scene, I have a plane for the floor and another for the back wall, three cubes on the left, and three other primitives on the right. A single distant light using raytraced shadows provides the lighting. In each of the images, if you want to see the details of the surface settings, click on the image to see the “media page”, as it has a full list of all the relevant channels in the description for each image.

Diffuse Only

I start with only the Diffuse channel providing any surface values. Specular and Ambient strengths are set to zero.

Diffuse surface color only

Not very interesting, right? No highlights and fairly flat colors.

Adding Ambience

Next, I added some ambient values. Now, in this first set, I did something “odd” on purpose: I set the ambient color to be the complement of the diffuse color. For instance, on the Green (0,255,0) cube, I set the ambient color to Magenta (255,0,255). Look what happens, even with Ambient at 100%…

100% Colored Ambient Setting

Nothing, right? Can’t see a difference between this and the first one? That’s because the Ambient color is being multiplied by the diffuse color channel by channel (Red, Green, Blue). Since 255 x 0 = 0 in every channel, you get no effect. This is an extreme case of why you have to think about how your ambient and diffuse colors are going to blend, or you may not get the effect you were hoping for! Let’s try again, but this time with a white color for ambient (on the cubes only)…

Cube ambient changed to white @ 100%

Well, at least you can see the effect now. 🙂 But obviously 100% isn’t a good setting; it totally removes all the shadow detail. Remember back in Part 1 where I said that the Ambient channel was intended to simulate indirect light on the surface? This setting is basically saying to DAZ Studio / 3Delight, “You have a pure white, full strength floodlight shining in all directions!” Not the goal we had in mind, eh? Let’s back that ambient channel down to a more normal fill light level, say 30%…

30% White Ambient Surface

A little better. It gives some light effect where the direct light from my distant light isn’t shining, and it doesn’t try to change anything about the colors or anything of my cubes.
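The channel-by-channel multiplication behind these results can be sketched in a few lines. This is a simplified model of the behavior observed in the renders, not DAZ Studio's actual shader code.

```python
# Ambient is multiplied by the diffuse color channel by channel, so
# complementary colors cancel to black, while white ambient adds a
# uniform fill. Simplified illustration, normalized RGB.

def ambient_contribution(diffuse, ambient, strength):
    """Per-channel ambient term; strength is 0.0-1.0."""
    return tuple(d * a * strength for d, a in zip(diffuse, ambient))

green = (0.0, 1.0, 0.0)
magenta = (1.0, 0.0, 1.0)
white = (1.0, 1.0, 1.0)

no_effect = ambient_contribution(green, magenta, 1.0)   # every channel has a zero on one side
gentle_fill = ambient_contribution(green, white, 0.3)   # keeps the cube green, adds fill
```

Green times magenta is zero in every channel, which is exactly why the 100% magenta ambient render looked identical to the diffuse-only one.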

You Look Spectacular!

There are two values in the specular channel that really work together to control highlights. The strength channel controls how intense the highlight is, while the glossiness (roughness in some other rendering engines) controls how small or spread out the highlight is across the surface. I started by cranking strength and glossiness to 100%…

100% Specular, 100% Glossiness

What’s that? You don’t see anything? Well, that’s because we told the rendering engine that there is ZERO margin for error on how close the camera has to be to the perfect angle between the light and surface in order to see the highlight. Basically, we made the highlight so small that it’s invisible. Some people will see this effect and think that glossiness is “broken”. It isn’t broken. You just made the surface so smooth that the highlight disappeared. Let’s back it down to 90%…

90% Glossiness

Well, now we can see something (at least on the curved objects on the right)… but not much. Even 90% is a pretty small highlight. Let’s see what happens at 60%…

60% Glossiness

Ah. Much better! We can really see that highlight on the objects on the right now. But wait, Karl … you forgot to change the cubes didn’t you?

Nope, I didn’t. The cubes have the same specular settings as the curved objects. You don’t see any highlights because those wide, flat surfaces are very consistent about their reflection of light. Since a distant light throws its light rays in parallel across the scene, there is no angle where you can see the highlight on the cubes. This illustrates part of the reason why there is no single “right” answer in regards to specular surface settings. If you want to see the cubes “shine”, we need to go even lower on the Glossiness. Let’s try 30%…

30% Glossiness

Yay! The cubes have highlights! Well … if you can call them that. Basically they just look like something went wrong with the surface. And the curved surfaces on the right have a highlight that is so spread out, it is overwhelming the diffuse color. Probably not a setting that is very helpful, hmm?
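One common way to picture what glossiness does is as the exponent in a Phong/Blinn-style specular term: higher glossiness means a larger exponent, and therefore a tighter highlight that vanishes the moment you're off the perfect reflection angle. The mapping from glossiness to exponent below is an illustrative assumption, not 3Delight's exact formula.

```python
# Illustrative Phong-style highlight: glossiness controls the spread,
# strength scales the intensity. The glossiness-to-exponent mapping
# here is an assumed example, not the renderer's actual math.

def specular_highlight(cos_angle, glossiness, strength):
    """cos_angle: alignment between view and ideal reflection
    (1.0 = perfect alignment). Returns highlight intensity."""
    exponent = 1.0 + glossiness * 100.0   # assumed mapping
    return strength * (cos_angle ** exponent)

# Slightly off the perfect angle (cos = 0.95):
tight = specular_highlight(0.95, 1.0, 1.0)    # near zero: looks "missing"
spread = specular_highlight(0.95, 0.3, 1.0)   # clearly visible
```

At 100% glossiness, even a tiny misalignment drives the term toward zero, which is why the 100%/100% render appeared to have no highlights at all.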

Glossy Strength

So, I mentioned that both Specular Strength and Glossiness combine to control how the surface highlights look. In the next series of images, I keep the glossiness setting at 30%, but I vary the strength. I won’t talk about each image, but the captions show the setting that was used…

Glossiness 30%, Specular Strength: 75%

Glossiness 30%, Specular Strength: 50%

Glossiness 30%, Specular Strength: 25%

So, you can see that the spread of the highlight stays the same, but the intensity of the effect goes down (fades). For a final test with the white light, I set Diffuse to 100%, Specular to 25%, Glossiness to 30%, and Ambient to 10%…

Glossiness 30%, Specular Strength: 25%, Ambient 10%

If you compare that to the image at the top, I think you’ll agree that it has much more of an interesting surface look without changing anything at all with the lights.

Light Hearted

As I mentioned in previous parts of this series, the settings in your surfaces interact with the setting in your lights. All of the above used a distant light that was White (255,255,255). So the surfaces had a full spectrum of color to reflect. But what happens if I change the light through the secondary colors? In the following series, I change the light color to Magenta (255,0,255), Yellow (255,255,0), and Cyan (0,255,255)…

Magenta Lighting

Yellow Light

Cyan Light

Notice that as the color of the light removes the Green, Blue, and Red channels, the corresponding cubes turn black, and the curved primitives change to reflect only the part of the spectrum that is included in their surface. Now, you might be wondering, “What if I really wanted a cyan light for this image?” Well, you still can, but you need to give the light a little bit of red for the red surfaces to reflect. In the final image, I used a light Cyan (64,255,255) color for the light…

Light Cyan Light

That gives the surface a little bit of Red to reflect to the image, but overall the light still has the cyan quality you might have been looking for.
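The same per-channel multiply explains the whole series: the surface can only reflect the parts of the spectrum that the light actually supplies. A simplified diffuse model, not the renderer's exact code:

```python
# Why the Red cube goes black under a pure cyan light: the light has
# no red to offer, so a surface that reflects only red reflects nothing.

def reflected(light, surface):
    """Per-channel multiply of normalized RGB light and surface colors."""
    return tuple(l * s for l, s in zip(light, surface))

red_cube = (1.0, 0.0, 0.0)
pure_cyan = (0.0, 1.0, 1.0)
light_cyan = (0.25, 1.0, 1.0)   # 64/255 of red added back

black = reflected(pure_cyan, red_cube)     # nothing to reflect
dim_red = reflected(light_cyan, red_cube)  # the cube reappears, dimly
```

Adding that small red component to the light is enough to keep the red cube visible while preserving the overall cyan look.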

That’s a Wrap

I think this will do it for my basic surface series. Future tutorials I have in mind include…

  • Newbie Mistakes – I’ll show common mistakes that new 3D artists make so they can learn by my bad examples.
  • Reflection, Refraction, Transmission, and Transparency – How does light bounce off of and through objects in 3D?
  • Point, Spot and Distant Lights – Just the basics on what those lights are and what they can do

Acquiring 3D Content

A new artist over at the DAZ Forums asked about strategies for acquiring content. Since I took the time to write up a lengthy reply about my approach, I thought I’d re-post it here too.

I’m not “new” at this by any stretch; so my approach is different now than when I was first building my library. With over 2600 packages (so, many more individual items than that) in the library, I can afford to be choosy. However, since you asked, this is how I approach purchasing content…

How Useful Is It?

  1. I only buy stuff I’m pretty sure I’m going to use. I don’t do a lot of renders with male figures, or in present day settings. So fantasy/sci-fi and female clothing / characters / etc. are the way to go for me. Your mileage may vary on that one.
  2. Think about the utility of the item. For clothing, I look to see whether I can easily mix and match pieces to get an outfit I want, or whether I'm forced to use the entire set. For props and scenery, I look for things where I could re-use bits and pieces so that you don't look at something I rendered and think “Oh, he used THAT building…” The more versatile something is, the more it is worth to me.

Price Tag Watching

  1. Ignore the % off!! It is an arbitrary number. Is something marked down to $14 from a $20 retail price really more valuable than if it had been $14 to begin with? If it isn't worth paying full price for, it probably isn't worth paying a sale price for either. There are a few exceptions: when something gets down to the $2 range, I might buy it “just in case I need it.” But mostly, if I wouldn't pay the full retail price for it, I'm not going to buy it just because it was arbitrarily marked down 30%.
  2. Keep in mind, there will ALWAYS be another sale. Don’t know if you’re familiar, but around here we have a department store called “Kohls”. If you ever pay full price for something at Kohls, you’re ripping yourself off. Practice patience and it will be on sale. If you really can’t afford it right now, take a breath and remind yourself that it will go on sale again some time.
  3. Prepare for the big seasonal sales. March is traditionally a big month for sales @ DAZ. So are the last few months of the year. That’s when I will pick up the things that I kind of want, but not badly enough to buy at full price. Other sites have similar cyclical sales. If you learn them, you can pace your spending so that you can splurge when things are cheaper.

Saving the Bank

  1. Platinum Club – Especially for a new artist, there frankly is no better value than joining. If you can afford the whole year, I’d just go ahead and do that so you don’t need to think about it again for a while. 🙂 The PC goes on sale too sometimes; so if you’re not planning any big purchases for a while, hold off and see what comes up in the near future.
  2. The wishlist is your friend. Rather than going to the sale categories to see what is there, I use my Wishlist to see if anything that I have previously indicated I actually WANT is on sale. That helps with making sure I’m spending on the right stuff.
  3. If you have a tight budget for these things and you find yourself compulsively overspending, consider what I've done … I use a prepaid credit card for purchases. I load it up at the beginning of each month, and when the money is gone for that month, it is gone. Or at least I have to make a conscious decision to reload some money onto it rather than just thinking “Oh, that's cool, and it's only $8!”

Managing Virtual Memory

Lately there have been a lot of threads on the DAZ Studio Forums about crashes of the program (mostly when attempting to render something). Because so many factors can be involved when any program crashes, troubleshooting these issues becomes a guessing game as to which one applies to a particular situation. However, one item that is important for 3D rendering of any sort is how memory is used in your computer. In this post, I'll offer some ideas for tweaking the settings that control how Windows manages your RAM.

Notice: This information is provided based on my experience managing my own computer systems over the past 20 or so years. No guarantee or warranty is either expressed or implied in the tips provided here. If you choose to follow any suggestion offered in this information, you are doing so at your own risk!

32 versus 64-bit computing

I won’t go into all the implications of what the difference is between a 32-bit and 64-bit version of Microsoft Windows. For this conversation, it is sufficient to say that in 32-bit versions of the Windows operating system, an application is (by default) limited to no more than 2 Gigabytes of memory at a time. And the system overall cannot address more than a total of 4 Gigabytes of RAM.

In a 64-bit version of Windows, a 32-bit application is allowed to access up to 4 Gigabytes of memory (provided it was compiled as large-address aware). On the other hand, a 64-bit application running on a 64-bit version of Windows would (theoretically) be permitted to access up to 8 Terabytes (that's 8,192 Gigabytes) of memory!

So, simply running a 64-bit version of Windows will provide more available memory for your 32-bit applications, and it will allow you to run 64-bit applications as well.
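As a quick sanity check on those numbers, the ceilings all fall out of powers of two (the 8 TB figure is the Windows 7-era user-mode limit for 64-bit processes, not a hardware limit):

```python
# Address-space ceilings discussed above, computed from powers of two.
GB = 2 ** 30

limits_gb = {
    "32-bit app on 32-bit Windows (default)": (2 ** 31) // GB,            # 2 GB
    "32-bit large-address-aware app on 64-bit Windows": (2 ** 32) // GB,  # 4 GB
    "64-bit app on 64-bit Windows (Windows 7-era limit)": (8 * 2 ** 40) // GB,  # 8 TB
}
for scenario, gb in limits_gb.items():
    print(f"{scenario}: {gb} GB")
```

Note that 8 Terabytes works out to 8,192 Gigabytes (8 × 1024), since each step up the scale is a factor of 1024.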

What about /3GB?

Ok, you’ve probably seen all these great tips about how in your 32-bit Windows, if you change the startup sequence to include the /3GB switch that your applications will “magically” be allowed to use 3 Gigabytes instead of 2, right? Well … not exactly. See, that switch does ALLOW applications to use up to 3 Gigabytes of RAM. However there are some significant caveats to that. First and foremost, the application has to have been created (compiled) with the ability to take advantage of that OS feature. Most 32-bit applications are NOT compiled that way.

Second, what you are doing by using /3GB is telling Windows that it should use less memory for itself and give more memory to the applications. If you have a lot of services and other things running in your operating system space, you could actually see significantly worse performance of your system by enabling the /3GB switch.

As for DAZ Studio, the executable itself indicates that it does support the large address space option. And evidence from end-user testing suggests that the 32-bit version of DS 4.0 (and later) does benefit in some way from the /3GB switch in Windows. So, it may be worth a try if nothing else seems to help.

Memory Usage

Microsoft provides several tools for monitoring the memory usage of your applications. Task Manager has the Processes tab, where you can sort by memory usage to see what is consuming that resource. On the Performance tab, you can also see how much of your physical and virtual memory is in use. Also from that tab, starting with Windows Vista, Microsoft provides a tool called Resource Monitor which can show in greater detail exactly what resources your applications are using.

Physical Memory

Ok, let's be honest here: there isn't much you can do to influence the use of your physical RAM other than adding more. So get more if you can. 🙂 You can also use tools like the built-in memory tester in Windows 7 ( link ) to test your RAM chips for potential problems.

Virtual Memory

In discussing memory in Windows / DAZ Studio, I think one part that is often overlooked is the virtual memory (aka page file) portion of memory management. First, a quick overview of what virtual memory is all about.


Let's say you have Firefox, DAZ Studio, and Skype all running under a 32-bit version of Windows 7. Each of those applications has been told by Windows that it has up to 2 Gigabytes of memory available. However, your system only has 4 Gigabytes of RAM in it, and Windows itself needs quite a bit of RAM just for basic operations. What happens when you fill up your physical RAM?

That's where the page file comes into play. When physical memory starts to become scarce, Windows looks for pages of memory which haven't been accessed in a while. Those unused pages are temporarily swapped out of RAM and into a file on your hard drive (pagefile.sys). If the data in a swapped-out page is requested by an application, the virtual memory manager loads the page back from the page file into RAM, and the data is available again.

Managing Virtual Memory

By default, Windows sets up a pagefile.sys on your system drive (c:) and configures it to be “system managed”. This means that Windows looks at your memory usage over time and adjusts the size of that page file to handle your likely needs. For general computing use, it’s a good way to go. The system balances the need for virtual memory with trying not to take up too much of your system drive with the page file and things are good.

The problem with 3D graphics, though, is that our use of memory doesn't match the models that Microsoft used when they created the virtual memory manager. We can quickly require large amounts of memory, and the more complicated our scenes get (the larger the textures we're working with, etc.), the more memory we need. We don't follow a nice even curve of memory usage; our memory needs spike and plummet often.

This causes the virtual memory manager some issues when it tries to figure out what to do with your page file. In my experience, Windows doesn’t grow the page file quickly enough to keep up with our needs because it is looking at average usage over time, whereas we need memory NOW. So, one thing I do whenever I build (or rebuild) my graphics machines is to take manual control over the page file.

How Much Page File?

Before I get into the mechanics of setting your page file, let's talk a bit about how much page file you may need. Well … it depends, mostly on two factors: how many hard drives you have in your computer, and how much space they have available.

How Many Drives?

You have at least one hard drive (C:\). But in many desktops (and some laptops) you may have more than one hard drive installed. Note that I am talking here about physical hard drives. While you can separate a hard drive into multiple partitions, the advice I'm about to give doesn't help in that case; it only helps if you have separate physical storage devices.

Windows (and applications) perform better when the page file is on a separate physical drive from your system files. Your C:\ drive is already going to be busy with Windows asking for DLLs, program executables, etc. Keeping the pagefile.sys there as well is going to create contention for that resource. You are better off moving it to another physical drive if you can.

There is, however, limited benefit to splitting the page file across multiple drives. The overhead of querying extra devices and re-combining pages tends to cancel out the benefit of the extra device I/O speed.

A quick note about Solid State Drives (SSDs) … I'm torn about using those for the page file. Since the page file is essentially an extension of your RAM, the faster the storage device the better, so in that way SSDs make sense. However, SSD cells have a limited number of write cycles, and the constant reading and writing a page file sees is exactly the kind of wear you might want to avoid. So I guess it is up to you.

What is the proper sizing?

This is going to depend on a few factors. The biggest of which is how much space is available on the drives you’re going to use for paging. Now don’t go all crazy and allocate 100 Gigabytes or something just because you have it. You want to keep things reasonable. My rule of thumb is that normally I want somewhere between 1.5 to 2.0 times the physical RAM I have in my PC. So, if I have 8 Gigabytes of RAM, I want a page file between 12 and 16 Gigabytes in size.

If I have 16 Gigabytes available and I’m not overly crunching space on my drive to get it, I’ll use that, otherwise I’ll go smaller. I would hate to go smaller than 1.0 times my physical RAM though. Below that, I’m likely to starve my system for memory.

You'll see in the mechanics section below that when setting the page file size you can tell Windows “Start at this size and grow to this…” Don't bother. I always set both the min and max to the same value. Expanding a page file is “expensive” in computing terms. It also opens the door to the page file becoming fragmented on the hard drive (thus making it slower), and if you only have the C drive available, the expanding page file can end up filling your system drive (which, trust me, you NEVER want to do!).

There is a drawback to not allowing Windows to ever expand the page file, though. If you were to happen to fill up all 16 Gigabytes of page file and Windows needed more, the next memory allocation call would fail. So, if you think that’s a possibility (and your hard drive space can stand it), you might consider something like a range of 16-18 to give yourself a little breathing room.

As with any “rule” like this, there are going to be exceptions. The 1.5x sizing works fine up to a point. Once your PC has more than 8 Gigabytes of RAM, it starts to be excessive from a pure need basis. You still need something (see the Warning section at the bottom), but if you have a significant amount of physical RAM, a smaller page file could easily work for you. I would just keep an eye on your memory usage, especially when using memory-intensive programs, and make sure you're not getting close to maxing out the page file usage.
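The sizing rule above can be boiled down to a tiny helper. This is just my arithmetic from the preceding paragraphs wrapped in a hypothetical function (it doesn't touch Windows at all); it reports sizes in Megabytes because that is the unit the Virtual Memory dialog expects (Gigabytes x 1024):

```python
def pagefile_mb(ram_gb, factor=1.5, headroom_gb=2):
    """Hypothetical helper for the rule of thumb above: initial size is
    'factor' times physical RAM, maximum is the initial size plus a
    little headroom so Windows never hits a hard wall. Both values are
    in Megabytes (Gigabytes x 1024)."""
    initial_mb = int(ram_gb * factor * 1024)
    maximum_mb = initial_mb + headroom_gb * 1024
    return initial_mb, maximum_mb

# 8 GB of RAM at the 1.5x rule -> (12288, 14336): 12 GB initial, 14 GB max
print(pagefile_mb(8))
```

If you prefer the fixed-size approach (min equal to max, as recommended above), just pass `headroom_gb=0`.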

How Do I set it?

Note that the following screenshots are from Windows 7. If you are using Windows XP or Windows 8, your specific dialog boxes, etc. may vary; however, the concepts are the same.

  1. Open Control Panel – I always change Control Panel to the Large Icon view since I find the Categories view annoying
  2. Click on the System Icon

    Control Panel

  3. On the left panel, click on Advanced System Settings

    System Control Panel

  4. Click on the Advanced tab

    System Properties

  5. In the Performance section, click on the Settings button

    Advanced System Properties

  6. Click on the Advanced tab

    Performance Settings

  7. In the Virtual Memory section, click on the Change button

    Advanced Performance Settings

  8. Uncheck the checkbox at the top that says “Automatically manage paging file size for all drives”

    Virtual Memory Dialog

  9. Select the hard drive where you want to have a page file and change the “System managed size” option to “Custom Size”
    Setting Page File Size

    1. In the Initial Size, type the size of the page file in Megabytes (Gigabytes x 1024)
    2. In the Maximum Size, either type the same number, or some number 1024-2048 Megabytes larger
    3. Click the Set button
  10. If you have more than one hard drive, and you are moving the page file from your system drive to another hard drive, select the C drive, then choose “No paging file” and click the Set button.
    Note: Windows will warn you that without a page file on the system drive you won’t be able to create a dump file in the event of a system crash (BSOD). While this is true, in 20 years of experience as a Microsoft technology consultant, I’ve only actually NEEDED a dump file once.
  11. Click OK to save the settings
  12. Windows will warn you that the new settings will only apply after the next reboot, you can reboot right away if you’re ready to test

If something really goes badly for you with these manual changes, you will still be able to boot Windows, even without a page file anywhere. At that point, you can re-check the system managed option to get back to how things were before you messed with things.

Corrupted Page Files

I have on very rare occasions seen a corrupted page file. When that occurs, it can look like the computer has issues with its physical memory even though the problem is with the data stored in the page file. If you suspect this could be the case for you:

  1. Follow the instructions shown above
  2. Set all hard drives to “No page file”
  3. Reboot your computer
  4. Using Windows Explorer, ensure that the pagefile.sys file is gone from the root of all your hard drives (you may need to turn on the ability to see hidden and system files in order to see it)
  5. Reset the virtual memory page file settings to what you want them to be, forcing Windows to create a new file
  6. Reboot the computer again to start using your new page file


Warning

Do NOT run your computer for any length of time without any page file at all, even if you believe you have enough physical RAM for any memory needs your applications might require. Many applications (and even Windows itself) expect to be able to store certain information in the page file when they know in advance that it will seldom be needed. Running without any page file on your computer for an extended period is likely to cause significant stability problems!

Guidance not Rules

The above information is intended as guidance for exploring whether configuring your page file might help performance and stability issues on your PC. It should not be taken as a hard and fast rule for how everyone MUST configure their computer. Naturally, I can't guarantee that changing your page file configuration will fix problems you might have with rendering in DAZ Studio. However, taking greater control over how Windows manages your memory allocation may help some and isn't likely to hurt.

Coming Soon: New Artist Mistakes

I’m almost done with Part 4 of my 3 part series on 3D surfaces. I’ve decided to follow that up with a post or two on common mistakes that new 3D artists make with their first images. I may need to break them down into a couple of categories. I’m thinking right now of the following…

  • Lighting Mistakes
  • Posing Mistakes
  • Set / Framing Mistakes

I’m going to create some images that purposefully make these mistakes and then point out why they don’t work. I’ll also create some quick images that are similar that correct the mistakes to show alternatives.

Those of us who want to help others grow as artists want to be able to offer criticism. But sometimes the artist's ego is a fragile one. It can be hard for us to see the same “obvious” mistake for the 100th time and think of a way to say it without hurting the artist's feelings. My hope is that by ripping apart my own images I'll be able to provide some of that valuable feedback to others without damaging a budding new artist's interest in this hobby.

For those who may be following this, if you can think of any ideas for mistakes you’d like to see addressed, please feel free to leave a comment here.
