Archive for 3D Tutorials

Creating a Holey Cube

In Daz 3D’s Hexagon forums, one of our newer modelers asked how to create a cube with intersecting holes through two of its sides. In his Penetrating a Rhomboid post, I suggested using the bridge function, which got him halfway there, but after he tried it I realized I could have been more complete in my description. So, this is how I did it.

Step 1 – Create the cube

I created a cube primitive with 8 tessellations to give me a nice center set of faces to work with on each side.

Holey Cube 01

The cube we’re going to pierce.

Step 2 – Make holes

I removed the 9 middle faces on each side, leaving the top and bottom solid.

Holey Cube 002

The center faces removed

Step 3 – Bridge two holes

I selected the edges around two opposing faces (click on one edge of each hole and use the Loop selection to select the hole). Then in Vertex Modeling, I chose Bridge and accepted the results.

Holey Cube 03

The first two holes bridged

Step 4 – Bridge the other holes

Then I repeated the process to bridge the other two holes. This creates the structure, but as you can see, the holes don’t go all the way through. Each bridge is blocking the view through the other.

Holey Cube 04

Both holes bridged

Step 5 – Tessellate the intersection

I hid the top of the cube (I created a material zone with the top faces and hid that zone) so you can see inside the cube. I used Tessellate by Slice to slice each bridge as close to the other bridge as I could.

Holey Cube 05

Bridge overlaps tesselated

Step 6 – Remove the intersecting faces

Back inside the holes, I selected the new faces that were blocking my view through each hole and removed them.

Holey Cube 06

Intersections removed

Now you can see through, but if you look closely inside the hole, there is a slight gap between the edges of each hole.

Step 7 – Weld the edges together

I admit to forgetting at first about the tools Hexagon offers. I started by manually welding the vertices of each edge together, one pair at a time. That was painful. Then I remembered the Average Weld function. It’s perfect for this, as Hexagon is smart enough to figure out that those vertices are close enough to be welded. That went a LOT faster! Like a single click and it was done. 🙂

Holey Cube 07

Intersection edges welded
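
If you’re curious what a tool like Average Weld is doing conceptually, here’s a rough sketch in Python. This is not Hexagon’s actual code; the tolerance value and the pairwise averaging are just my illustration of the idea of merging any vertices that sit within a small distance of each other.

    # Conceptual sketch of a distance-based "average weld" (not Hexagon's code).
    # Vertices within `tolerance` of an already-kept vertex are merged into it,
    # and the kept vertex is moved to the average of the pair.
    def average_weld(vertices, tolerance=0.001):
        kept = []    # representative positions found so far
        remap = []   # for each input vertex, the index of its representative
        for v in vertices:
            for i, k in enumerate(kept):
                if sum((a - b) ** 2 for a, b in zip(v, k)) <= tolerance ** 2:
                    kept[i] = tuple((a + b) / 2.0 for a, b in zip(v, k))
                    remap.append(i)
                    break
            else:
                kept.append(tuple(v))
                remap.append(len(kept) - 1)
        return kept, remap

    # Example: two edge loops whose vertices nearly coincide collapse to one loop.
    loop_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
    loop_b = [(0.0005, 0.0, 0.0), (1.0004, 0.0, 0.0)]
    print(average_weld(loop_a + loop_b))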

Step 8 – Test smoothing

Just to show I wasn’t quite done yet, I set the smoothing level to 2. See that mess in the middle? That’s because there are still some faces from each hole overlapping there, messing up the smoothing algorithm.

Holey Cube 08

Bad smoothing due to overlapping holes

Step 9 – Remove the overlapping faces

I removed the faces from one of the two holes, leaving the faces from the other one in place.

Holey Cube 09

Overlapping faces removed

Step 10 – Tesselate and weld

Once again I took the remaining faces and used Tessellate by Slice to create corresponding edges, making a grid in the center. Then I used Average Weld again to weld it all together.

Holey Cube 10

Intersection tesselated and welded

Final Product

Finally, with smoothing set to 1, you can see all my gaps and such are gone. I could adjust the edges of the holes a bit to make them rounder instead of square, or add some edges around the outline of the cube to keep it from smoothing too much, but that’s just tweaking for the effect you’re going for.

Holey Cube 11

Smoothed

Linking to Converted Clothing

This is a tip for people who may be using RiverSoftArt’s wonderful Clothing Converter from Genesis 3 Female to Genesis 8 Female for Daz Studio.

If you’re like me and…

  1. Don’t use Smart Content, but rather browse the Content Library
  2. Followed River’s suggestion and placed your converted clothing somewhere other than your main content library
  3. Are running Windows 7 or later

…I might have a tip for you to make your converted content easier to find.

    1. Find the full path to your converted clothing. (e.g. c:\users\jonnyray\documents\DAZ 3D\Studio 4\My Library\People\Genesis 8 Female\Clothing)
    2. Open a command prompt as an administrator
      1. Click Start and type cmd.exe in the search box
      2. Right click the cmd.exe entry and choose Run as Administrator
      3. Click OK on any security warnings
    3. Change your command prompt location to the location of your main clothing folder for Genesis 8 Female.
      cd "c:\users\public\documents\DAZ 3D\Studio\My DAZ Library\People\Genesis 8 Female\Clothing"
    4. Create a symbolic link to the converted clothing path you found in step 1…
      mklink /D “Converted from G3F” “c:\users\jonnyray\documents\DAZ 3D\Studio 4\My Library\People\Genesis 8 Female\Clothing”

What this will do is create a “folder” in your Genesis 8 Female\Clothing folder called “Converted from G3F” that will point to the location where the converter is putting your clothes. It won’t show the metadata tags like “Wardrobe” and such, but everything will load just like it loads from the actual location and you don’t have to browse two different content library structures to find your converted clothing.
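
One more note, in case it isn’t obvious: the link behaves like a regular folder, but removing it doesn’t touch your converted files. If you ever want to undo this, just delete the link itself (for example with rmdir "Converted from G3F" from that same Clothing folder); the target folder and its contents stay where they are.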

3D Modeling Observations

As I’ve gotten back into modeling some of my own 3D content, I’ve realized how much freedom of expression it gives me. A lot of my renders lately are of the roleplaying characters my girlfriend and I play, snapshots of stories we’re co-writing. We tend to have very specific ideas about the look of our characters and the things they might own; so being able to create simple things myself has allowed me to reach my goals for my images without being limited by the content that others have created.

For example, our characters recently got married in-game. So we wanted the images I created to have wedding bands. But she’s particular about wanting to have silver/platinum and simple, but not entirely plain. While there are a lot of ring collections available from marketplace sites like Daz 3D or Renderosity and even freebies from places like ShareCG, nothing was quite what we needed and I didn’t feel like spending $10-12 for a collection of rings that were “close” when I could create some myself.

It took me a full evening to create the rings we wanted, but most of that was actually about getting them to work properly as props attached to the character’s hand rather than the modeling itself.

Moncreiffe Wedding Rings

Wedding rings worn by Conall and Simi

Another example was a simple picture frame that I needed. The story is that they are fans of Tim Burton’s “Nightmare Before Christmas” and so we needed a picture to hang on the wall with a frame that was appropriately just a little unusual. It took me less than 20 minutes to come up with this as opposed to buying a collection of frames or spending an hour searching for a free one.

Framed Nightmare Before Christmas

Picture from Nightmare before Christmas framed and hanging on the wall.

As a final example, the crib below is based on a design that Jenny really wanted to use for the baby. This project took longer because of a couple of false starts on my part. I could go into my mistakes and rework at length, but in the end it was mostly about knowing when to do the UV mapping of the parts of the crib. It also represents the first time I created something specifically to use Daz Studio’s dForce cloth simulation (the canopy is modeled from a basic cone shape, and the dForce simulation makes it drape properly).

Siofra's Crib

Baby crib with a lace canopy.

My point in all of this is that none of these objects existed exactly as I needed them in any 3D market or freebie sharing site anywhere. Learning how to model them myself allowed me to create exactly the items I needed for my image instead of browsing through my collection of thousands of pre-made items to find something “close enough”. Not having to compromise (and yes, being a bit proud of rendering with something I created myself) is a good feeling as an artist.

I encourage anyone who wants to take their artistry beyond composing with objects created by other people, into a realm where your images conform exactly to your vision, to learn at least the basics of modeling. You may not ever want to get to the point of creating your own clothing or modeling an entire forest. But the freedom you gain from knowing you can create your own lamps, picture frames, dishes, even furniture is a wonderful new experience!

Daz Studio 4.10 Iray Viewports

Note: A lot of this information is taken right out of Daz 3D’s Getting Started in Iray tutorial video on YouTube. If you learn better from videos, you might find that helpful.

The Problem

I’ve seen repeated questions about improving performance of the Iray drawing style in Daz Studio 4.10 viewports. Imagine my surprise when I was watching the Getting Started in Iray tutorial video and found a wealth of information already available on the topic!

Photorealistic rendering in the Daz Studio viewports can slow down even some of the fastest computers out there, because Studio is trying to interactively create a “final” image and has to recalculate light paths, material interactions, shadows, and such each time you move your view or relocate content. This can make the program feel really sluggish and/or cause everything else on the computer to grind to a halt every few seconds.

Interactive to the rescue!

Technically, NVIDIA Iray has two modes it can render in. The default, and the one we’re most familiar with, is Photoreal. With only a few exceptions, this will be the mode you want to use for final image rendering.

There’s another mode called “Interactive” which has many of the same features as Photoreal. However, because it lacks support for computationally expensive features like subsurface scattering and caustics, it will generally render much faster.

Rendering Devices

On the Advanced tab of your Render Settings, you have the option to select which devices (CPU or graphics cards) can be used to perform Iray renders. There are separate selections for Photoreal versus Interactive. Personally, I don’t mind if Studio has to fail over to my CPU for a large final render, but for the Interactive mode we’re using in viewports, it’s probably better to uncheck the CPU. This also stops the Iray engine from grabbing the CPU for rendering purposes and slowing everything else down on your computer.

Render Device Settings

NVIDIA Iray Rendering Devices in Daz Studio 4.10

Render Settings

While we’re on the Render Settings, go back to the Editor tab, find Render Mode, and change it to Interactive. You might think this would ONLY affect final renders. However, the Daz tutorial indicates that Rendering Style and Draw Style are linked in some way; so it’s best to set this to Interactive as well.

Iray Rendering Mode

Choosing between Photoreal and Interactive modes

Just don’t forget to switch it BACK before you do your final rendering!

Draw Settings

Next is setting the drawing style for your viewports. There’s a good chance you don’t have the Draw Settings tab open in Studio. Go to your Window menu -> Tabs and select Draw Settings. You can dock it wherever feels most natural to you.

Also, you will need to repeat the steps below for each viewport in which you’re using Iray. Most of the time, I will set my Auxiliary (Aux) Viewport to Iray so I always have a rendered-looking image to refer to even if I’m using Texture Shaded in my main viewport. The Draw Settings tab applies to whichever viewport is active; so make these changes in every viewport using Iray.

Draw Mode

Go to your Draw Settings tab, Drawing section and change the Draw Mode from Photoreal to Interactive.

Draw Mode Settings

Setting the drawing style on the current viewport

Response Threshold

When you’re using Interactive Draw Mode, Studio will pixelate your image when you start moving your view around and then will resolve it back to a rendered image when you stop. Response Threshold tells Studio how sensitive to be to view changes before it changes to the pixelated view. The lower the number, the more quickly it decides to pixelate the image. Higher numbers make the Interactive rendering engine work a little harder, but if your graphics card can handle it, it’s probably less annoying for you. You may need to play with that value to find a number that works for you.

Manipulation Mode (optional)

If the pixelated image still feels too sluggish when you’re navigating around your scene, you can change how content is displayed when the Response Threshold is exceeded. As I mentioned, by default Studio pixelates the image, but in the Draw Settings tab, General, Manipulation section, you can choose to use either wireframe or solid bounding boxes instead. This is another good option if you’re on a slower computer or have a very large scene with a lot of detailed content.

Manipulation Drawing Style

Choosing how Daz Studio draws content while moving the view

Conclusion

I hope this information is helpful to someone. The Tutorial video covers a few other pointers on using Iray as part of the scene setup process. I highly recommend it to anyone getting started in using this tool.

DS Content Management (Characters)

I’ve seen some posts lately in the Daz 3D – New Users forum about people wondering about how to organize their content. While things like Smart Content and Content Categories have made this a lot easier than it used to be, I still find myself typically browsing the content folders.

So, to make things easier on myself, there are some standard things that I do to make content easier to find. In this entry, I’ll talk about how I move / copy / rename folders for my main characters.

Caveats

  1. The method I’m about to talk about does have a drawback. When there is an update to something that was moved, you will need to go and repeat the move / copy of files and folders. I’ll cover that more at the end, but if you don’t want to have to remember to update your folders and files after a product is updated, this may not be the method for you.
  2. This represents how MY mind thinks about content and what’s important. While it might work for you too, I’m sure there are other ways to accomplish similar goals.

My Problem

I have three issues when I’m looking for a character to use. First, the names of the folders aren’t always sufficient. Six months after I bought it, remembering that Giada is a young, teen-looking girl based on Aiko 8 is almost impossible for me.

Second, even if I do remember the character’s name I’m looking for, the number of clicks to get there is annoying. I’d rather have my base characters available at a higher level in the structure.

Finally, I find it annoying to have all those folders with the default folder icon. Wouldn’t it be better to have the character’s headshot instead of a folder?

DAZ Default Character Folders

By default, DAZ wants to organize your character folders like this:

DAZ Studio Character Organization

The default organization of how folders and files are saved for character content in DAZ Studio

So, to load Aiko 8, I’d have to click on People, Genesis 8 Female, Characters, Aiko 8, and then find the Actor file to load her. Also, unless I magically remember that Giada and Yuka are two Aiko 8 variants, I might have to click on each one and see its icon before my memory is jogged.

My Character Folders

I will describe the actions I take on my character folders below, but here’s a diagram of the changes I make.

My Character Organization

How I copy and rename character folders and files

My Approach

There are several key steps in what I’m doing with this organization.

  1. All of the main characters start with an exclamation point (!). Since Daz Studio sorts things alphabetically for you in the content folder view, this will force all of the main characters to the top of the list.
  2. I copy all of the actor files (and their thumbnail PNG files) to the Characters folder. This does two things for me.
    1. First, it means I can see all of my Genesis 8 Female characters in a single folder
    2. Second, since the Actor file and the folder name are the same, it changes the icon on the content browser from the default folder to the character’s Actor thumbnail.
  3. If the character is another layer down (for example with Giada, the folder path might have been People > G8F > Characters > FWSA > Giada), then I move the whole folder up one level. While I appreciate the effort that Fred Winkler and Sabby put into their character, the “FWSA” folder is just an unnecessary click between me and what I really want.
  4. For characters that are derived from one of the base character shapes, I add a prefix to the folder, actor, and thumbnail filenames. For example, since Giada’s product page says that she requires Aiko 8, I add “A8” to the folder and file names. This helps me group my characters into basic families of similar body shapes. Also, in order for the thumbnail trick to work, the actor file (and its thumbnail) and the folder have to have exactly the same name. (If you’d rather script the copy-and-rename step, see the sketch after this list.) Other prefixes I’ve used include:
    • Genesis 8 Female = G8F
    • Victoria 8 = V8
    • Olympia 8 = O8
    • Charlotte 8 = C8
    • The Girl 8 = TG8
    • … you get the idea.
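
For those comfortable with a little scripting, here’s a hypothetical Python sketch of that copy-and-rename step. The paths, the “A8” prefix, and the assumption that each actor’s thumbnail is named <actor>.duf.png are all illustrative; check your own library layout before running anything like this.

    # Hypothetical helper: copy a character's actor files (and thumbnails)
    # up into the Characters folder, adding a body-shape prefix.
    import shutil
    from pathlib import Path

    characters = Path(r"C:\Users\Public\Documents\DAZ 3D\Studio\My DAZ Library"
                      r"\People\Genesis 8 Female\Characters")
    source = characters / "FWSA" / "Giada"   # where the character installed
    prefix = "A8 "                           # Giada requires Aiko 8

    for actor in source.glob("*.duf"):
        # copy the actor file with the prefix added
        shutil.copy2(actor, characters / (prefix + actor.name))
        # copy the matching thumbnail, assumed to be named <actor>.duf.png
        thumb = source / (actor.name + ".png")
        if thumb.exists():
            shutil.copy2(thumb, characters / (prefix + thumb.name))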

More to Come

As I said, this is a method that works for me. If it at least gives you some ideas on how to help you get your hands around the 3D Content that you own, then I’m glad I helped. Feel free to post questions here or, if you’re on the Daz 3D forums, drop me a PM at JonnyRay.

I will keep adding other categories of content to this series. Check my Content Management category for other similar posts.

Hair Raising Project

So way back in 2014, I wrote about the research I was doing in my Current State of Rendering Hair post. My goal at the time was to see if I could apply the concepts of that research (which was designed for fiber-based hair) to the more common transparency-mapped hair of the hobbyist market. I was targeting the 3Delight rendering engine in DAZ Studio.

Fast forward through a number of life changes in 4 years, and I’m starting to look at this again. However, in that time the rendering engine of choice for DAZ Studio has changed from the Renderman compliant 3Delight to the physically based Iray engine from NVIDIA.

This has some advantages for me. Since the core of the Material Definition Language (MDL) already has a lot of the concepts of 3D surfaces built into it, I mostly need to write custom code for the scattering and transmission components. I also still have to work out how to make the rendering engine treat a piece of geometry that resembles a long flat ribbon as a collection of hair strands, but I have some ideas on that one. I’ll provide updates once I get somewhere with this.

Hair Rendering – Current State

I’ve been doing quite a bit of research lately about how to render hair in a Renderman compliant rendering engine. As I’ve gathered data from several sources, I thought I’d create a post that summarized what I’ve found so far. Hopefully anyone else who decides to research this crazy topic can benefit by not needing to find everything on their own.

Stephen Marschner

While there are earlier approaches to rendering hair in computer graphics (most notably Kajiya and Kay in 1989), most serious work begins with the paper by Stephen Marschner et al. titled “Light scattering from human hair fibers”, published in 2003.

Marschner noted that light interacting with hair follows a very complicated model. When light strikes the surface of a strand of hair, it does three different things. Part of it is reflected back into the environment (the R component), part of it is refracted and transmitted to objects behind the hair (TT), and part of it reflects within the hair strand, re-exiting the hair at another point further down the strand (TRT).

Marschner’s model showing the interaction between light and hair
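
If it helps to see the structure of such a model, here’s a minimal Python sketch: the hair’s specular response as a sum of three lobes, one per light path. The Gaussian lobe shape and all of the shift / width / weight numbers are placeholders of my own, not values from Marschner’s paper.

    # Sketch only: hair specular as a sum of R, TT, and TRT lobes centered
    # at slightly different angles. Numbers are illustrative placeholders.
    import math

    def lobe(theta_h, shift, width):
        # a simple Gaussian falloff around a shifted highlight angle
        return math.exp(-((theta_h - shift) ** 2) / (2.0 * width ** 2))

    def hair_specular(theta_h):
        r   = 1.0 * lobe(theta_h, -0.05, 0.10)  # R: surface reflection
        tt  = 0.7 * lobe(theta_h,  0.03, 0.15)  # TT: transmitted through the strand
        trt = 0.4 * lobe(theta_h,  0.08, 0.20)  # TRT: internal bounce, exits further down
        return r + tt + trt

    print(hair_specular(0.0))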

Side Note: It may be interesting to notice that Marschner is also one of the authors of the Subsurface Scattering paper that I linked over in my Additional SSS Information article.

Intermediate Works

Following Marschner’s article, several other researchers worked on refining his model. Which, in academic terms, really means trying to show what’s wrong with that model. Improvements were made to reduce some of the computational complexity as well as to fix issues common to shading models such as energy conservation.

Side Note: Energy conservation in computer graphics terms means that an object should not reflect / transmit more light energy than strikes it. Some shaders can be very bad about this and it can result in unintended effects.
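
As a trivial illustration of the idea (my own example, not any particular shader’s code):

    # Energy conservation in one line: the fractions of incoming light that a
    # surface reflects, transmits, and scatters should not sum to more than 1.
    def is_energy_conserving(reflect, transmit, scatter):
        return reflect + transmit + scatter <= 1.0

    print(is_energy_conserving(0.6, 0.3, 0.05))  # True: the remaining 5% is absorbed
    print(is_energy_conserving(0.9, 0.3, 0.05))  # False: sends out more than arrived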

One of the best papers in this category (in my opinion) is “Dual Scattering Approximation for Fast Multiple Scattering in Hair” by Arno Zinke et al. in 2008. In this paper, they note that Marschner’s model, fully implemented, requires calculating all of the light striking the hair. It also does not account for an effect in curly hair where the angle of the light striking the curl has a significant influence on how the light is transmitted or refracted. Instead, they use a sampling model for the scattering that lets you consider only the effect at the shading point. The eccentricity of the hair fiber (i.e. the tightness of the curl) is also taken into account.

Side Note: The researchers at the University of Bonn have done a lot of very interesting work in the area of computer graphics and the modeling / rendering of hair.

Artist Friendly

In 2010, 3D artists from Disney Studios (Iman Sadeghi and Heather Pritchett) and a couple of professors from the University of California at San Diego (Henrik Wann Jensen and Rasmus Tamstorf) brought forward the idea that while these mathematical models are quite interesting, they aren’t very friendly for artists to work with. Most of them require a deep understanding of the math involved in order to provide inputs that produce predictable results.

Disney in particular was finding that artists often spent more time testing lights and such than actually working on the scenes they were creating. Therefore, in 2010 they proposed what they termed “An Artist Friendly Hair Shading System”.

In this system, the parameters provided to the artist are more familiar terms such as the curliness and coarseness of the hair being rendered rather than details such as eccentricity, cuticle angles and cross-section measurements.

Also, since the goal for Disney Studios is not to create the most physically accurate model of human hair, they take some liberties with the math so that the result is artistically more pleasing, if not quite as mathematically perfect.

This Is My State

So, this is where I stand on the research. I’m working on a shader for hair in DAZ Studio which uses this artist friendly approach to create a shader model that will produce better results. My goal is to make it friendly for transparency-mapped hair. Most of the models referenced above expect the objects they are rendering to be cylinders, so I am working on a modification to the model which works with planes but simulates many small cylinders for the hair.

I’ve finally gotten to the point where I understand enough of the math to begin working on the implementation. Further updates as situations warrant.

Acknowledgement

I would be remiss if I didn’t mention a thesis from a graduate student at Bournemouth University. Sarah Invernizzi wrote “On Physically Based Hair Rendering” for her Master of Science degree in Computer Animation and Visual Effects. Her paper did a lot for me in terms of laying out the history of this topic, and it does a good job of making things a little bit clearer for those of us who aren’t as versed in the mathematics.

Additional points about SSS

Since I opened the discussion about subsurface scattering (SSS) with my Light and SSS and SSS – Why Should I Care? posts, I’ve received some good feedback and additional information. I wanted to capture it here.

Other Uses

First, the point has been made that although we think of SSS as adding realism to surfaces which don’t reflect 100% of the light that strikes them, the effect of SSS can be used for other purposes. It can add some depth to the surfaces for toon style rendering, and can even completely change the look of an object. For some examples, see the following product pages at the DAZ 3D store.

Note: I don’t get any commission if you choose to buy any of these products. 🙂 I’m actually referencing them because they have example images that show the effects.

DAZ Studio – SSS Shader

We’ve had a couple of good discussions about the Subsurface Shader Base that is available for free for DAZ Studio. These discussions have largely been about how the shader works. It was actually one of these discussions which spawned my initial blog posts. I wanted to capture a couple of important points here.

Pre or Post?

The first point was asking for some clarification about how the selection of either Pre or Post processing of the SSS effect changes the resulting calculations. Age of Armour (Will) was kind enough to provide us with some information in this thread on the DAZ 3D forums.

The choice of Pre or Post application of the SSS effect has to do with how the surface values are calculated. For the Pre option, the calculation is:

(
(Diffuse map * Diffuse Color * Diffuse strength)
* Lighting
) +
(Subsurface Calculation * Lighting)

This basically means that the Diffuse surface color is calculated, then the SSS effect is added to the result.

When choosing the Post option for the SSS effect, the calculation looks significantly different.

(
(Subsurface Calculation * Lighting)
* (Diffuse map * Diffuse Color)
) +
(
(Diffuse Map * Diffuse Color * Diffuse strength)
* Lighting
)

In this case, there are two calculations that use the Diffuse surface settings. In the first part, the SSS effect is multiplied by the diffuse color. Note that the diffuse strength is not factored in at this point; this simply creates a version of the diffuse color which is tinted by the subsurface effect. The second part of the equation is a standard diffuse surface calculation. The two parts are then added together to arrive at the final color for the surface.
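
If it helps to see those two formulas side by side as code, here’s a direct transcription in Python, treating each color as a single number for simplicity (a real shader would do these calculations per RGB channel):

    # The Pre calculation: diffuse and SSS are lit separately, then added.
    def shade_pre(diffuse_map, diffuse_color, diffuse_strength, lighting, sss):
        diffuse = diffuse_map * diffuse_color * diffuse_strength
        return (diffuse * lighting) + (sss * lighting)

    # The Post calculation: the lit SSS term is first tinted by the diffuse
    # color (note: no diffuse strength there), then added to a standard
    # diffuse calculation.
    def shade_post(diffuse_map, diffuse_color, diffuse_strength, lighting, sss):
        tinted = (sss * lighting) * (diffuse_map * diffuse_color)
        diffuse = (diffuse_map * diffuse_color * diffuse_strength) * lighting
        return tinted + diffuse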

The Origins of SSS

The ideas and concepts around subsurface scattering for the purposes of computer graphics were first described in a paper titled “A Practical Model for Subsurface Light Transport”, presented at the ACM SIGGRAPH conference by Henrik Wann Jensen, Stephen R. Marschner, Marc Levoy, and Pat Hanrahan. A warning for those who seek to understand SSS at that level: this is NOT trivial mathematics by any stretch. I cannot be held responsible for any damage to your brain from trying to read the paper.

SSS – Why Should I Care?

Markus Ba (one of the members of our DAZ Studio Artists group on Facebook) raised a question following the posting of my SSS and Light tutorial: “This is interesting, but why should I care about this?” It’s a valid question, and one that I’ll try to address here. But first, several caveats!

You Might Not Care!

I can’t tell you for certain that you should care about subsurface scattering. Depending on the visual style you’re shooting for, the content you’re using, etc., adding SSS effects to your surface shaders may not help your final image at all.

However, for accurate representation of surfaces other than hard plastic or metal, subsurface scattering is an important part of how the material interacts with light. Standard surface shaders using only diffuse, specular and ambient surface values ignore an important part of how real world materials work.

As I mentioned in the above referenced article, the primary reason for using subsurface scattering is to acknowledge that some light which strikes a surface is transmitted through the surface and exits at some other point on the surface. This scattered transmission of light is most closely associated with human skin, however many other surfaces do this as well. Examples include cloth, soft plastics / rubber, milk, clay, etc.

Cue the Lights

Before I talk about how SSS affects your surfaces (and therefore your final images), I want to mention that much of SSS is highly dependent on the lighting in your scene. Your lights do not necessarily have to be complicated, but very simple lights (e.g. a single distant light) may not provide enough light at the proper angles to get the most out of your SSS enabled shaders.

Texture Dependencies

One of the struggles in figuring out whether your image will benefit from SSS is how dependent the results can be on the texture maps you have to work with. For the most realistic skin rendering using SSS, you should have the following texture maps.

  • Diffuse Map – showing what we think of as the visible skin surface (see note below)
  • Specular Map – skin is not uniformly shiny; a good specular map which acknowledges the differences makes a big difference
  • Subsurface Map – your skin does not have a constant color for its subsurface; ideally the creator of the skin you’re using understands this and has prepared a map. VERY complicated skin shaders go to the level of mapping the veins and arteries in your skin.
  • Subsurface Strength – even if the color is constant, the shader should understand that the strength of the scattering is also not constant across your entire body.

How Diffuse Is It?

One problem that I’ve seen with many skins that we use in Poser and DAZ Studio is that they are based on photos of actual skin. “Why is that a problem?” you ask. Because the camera records the final result of the light interacting with the model’s skin. This includes the effect of light scattering in the subsurface.

So, if you add SSS to a skin which has already captured the SSS effect in the real world, you’re going to end up with skin that looks too orange/red. This is why you often see shader presets for skins multiply the texture by a light blue color. This (roughly) removes the SSS from the captured texture, with the expectation that the remaining calculations will add it back in correctly for your purposes.

The best diffuse map would be one where the original texture was captured under very flat light, using just enough light to get the image without adding a lot of strong subsurface scattering to what the camera recorded.

Given that you don’t really have a choice in how the original texture was captured, the second-best option is to modify the texture in an image editing tool (e.g. Photoshop, GIMP, etc.) and remove some of the red at that level. I can’t really recommend specific filters, since so much depends on the image you’re starting with, the tools available in your editor, etc.
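
That said, as one possible starting point, here’s a small Python sketch using the Pillow imaging library that applies the “multiply by a light blue” idea from above. The file names and the exact tint values are placeholders; how much red to pull out depends entirely on your texture.

    # Multiply a skin texture by a light blue to tone down baked-in SSS redness.
    # Tint values are guesses; adjust to taste for your particular texture.
    from PIL import Image

    img = Image.open("skin_diffuse.jpg").convert("RGB")
    r, g, b = img.split()
    r = r.point(lambda v: int(v * 0.85))  # reduce red the most
    g = g.point(lambda v: int(v * 0.95))  # reduce green slightly
    cooled = Image.merge("RGB", (r, g, b))  # blue left untouched
    cooled.save("skin_diffuse_cooled.jpg")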

You Haven’t Answered Me!

Ok, now that I’m a page and a half into this description, it is probably time to address the original question of why you should care.

Usually the first place you will see the SSS effect is in the translucence of certain parts of the body. The most common areas are around the ears or the fingers; however, it can be seen anywhere light shines at an angle where it would transmit through the surface toward the camera.

The effect that it has is typically a soft translucent glow on the surface. Below I show a couple of simple images showing how SSS adds to the surface of Victoria’s head.

{Images to be inserted}

While SSS is most often associated with skin, it also shows up on many other soft surfaces where light is partially absorbed and partially scattered (transported) through the surface, such as cloth, clay, and rubber. Whether an SSS enabled shader will improve the image for objects with materials like these ends up being a matter of taste.

And, even then, there may be some cases where you decide that the additional level of realism for the surface is not worth the added rendering time that it takes.

Oh, you say I forgot to mention that part? Well, when you consider the extra calculations required to determine light absorption, scattering, translucence, Fresnel effects, etc., the rendering time for an image where SSS is used extensively can be significantly higher than without.

Shader Tuning

One thing that I can’t really address here is how tuning the values of your SSS enabled shader will affect your final results. As I mentioned at the beginning, the results of an SSS enabled shader depend heavily on lighting and textures; even the distance from the camera to the subject has a big effect on the end result.

For DS users, there are several tutorial resources on how to get the best out of shaders like UberSurface, the Subsurface Shader Base, etc. Take a look at the links on my Other Tutorials page for information on where to find these sorts of tutorials.

Light and SSS Surfaces

This question came up on the DAZ 3D forums (link). Since there is considerable text to write, I figured I would post it here as well. Note that this discussion is about how light interacts with a surface that has subsurface scattering (SSS), not about how to get the best effects from an SSS enabled surface shader.

SSSay What?

First, briefly what subsurface scattering is all about.

One thing that is sometimes difficult to remember is that a surface in 3D graphics has no actual depth. It is a set of polygons which have length and width, but the depth is effectively zero. So our surface shaders that define the characteristics of the surface often have to fake the fact that in the real world, not everything that happens with light and surfaces happens on the very top layer of the material. This is especially true for surfaces like your skin.

When light strikes the surface of your skin, it does one of three things.

  1. It reflects – Most of the light just bounces off the outer layer of your skin and reflects into the rest of the world. This is exactly like every other surface.
  2. It is absorbed – Some of the light passes through that outer layer of skin and is absorbed into the layers beneath never to be seen again.
  3. It scatters and comes back out – Some of the light bounces around in the layers of your skin and eventually exits the skin again. This light can be seen. The easiest way to see this is to press a small flashlight or laser pointer against your skin; the surrounding area “glows” with a reddish light.

Technically, unless they are a perfect mirror, all surfaces reflect and absorb light. That is the simple effect that we simulate by having the diffuse layer in our shader. Those settings are basically saying “When white light hits this part of the surface, this is the part of the spectrum which is reflected back into the rest of the environment.” The rest (by extrapolation) must have been absorbed by the surface.

SSScattering

So what the SSS enabled shader needs to account for, beyond what’s already in the calculation, is the scattering of light within the surface and (eventually) the re-transmission of that light back into the rest of the world. While it would be possible to actually simulate the light bouncing within your skin, calculate the point where it exits the skin again, and cast new rays of light, most shaders take a simpler view.

The biggest assumption they make is that the point where the surface values are being calculated is very similar to the points close by. So, rather than tracing the bouncing light, one can assume that the light hitting the sampling point is the same as the light hitting its neighbors, and therefore that some light from somewhere nearby will have been scattered and will exit the surface at our sampling point.

The perfectionists in us might cringe at this broad assumption, but when you consider the very tiny distances usually involved in this calculation, it isn’t as bad as you might think. We can also help out sometimes by fine-tuning parameters in the rendering engine like pixel sampling or shader sampling levels.
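
Here’s a tiny Python sketch of what that shortcut looks like. It’s a cartoon of the idea rather than any particular shader’s code; sss_color and sss_strength stand in for whatever scattering controls the shader exposes.

    # Cartoon version of the local-sampling shortcut: instead of tracing
    # light through the skin, assume an equal amount of scattered light
    # re-exits right at the point being shaded.
    def shade_point(light, diffuse_color, sss_color, sss_strength):
        # per channel: normal diffuse reflection plus the assumed scattered light
        return tuple(
            l * d + l * s * sss_strength
            for l, d, s in zip(light, diffuse_color, sss_color)
        )

    white_light = (1.0, 1.0, 1.0)
    white_diffuse = (1.0, 1.0, 1.0)
    reddish_scatter = (1.0, 0.5, 0.4)   # skin-like scatter tint (illustrative)
    print(shade_point(white_light, white_diffuse, reddish_scatter, 0.25))
    # -> a slightly "warm" result, brighter in red than in blue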

SSShow Me?

Some of you are probably visual learners; so I’ve created a couple of simple diagrams to show what I mean.

SSStandard Surfaces

First, a diagram of light reflecting from a normal 3D surface. Note that in this case I’m assuming a white light source with a white diffuse surface setting; so all light that hits the surface is reflected back from it.

Standard 3D surface reflecting light

SSS Surfaces

When we add subsurface scattering, we need to account for at least the scattering aspect, and if we’re doing it well, the absorption factor is figured in too.

Light Interacting with a 3D Surface with Subsurface Scattering

Notice that I included the second light ray, which is assumed to exist and which adds scattered light to the reflected light, giving us a result that is somewhat “warmer” than the pure white light that was provided.

SSSerious Skin

Some SSS enabled shaders can be further tweaked with additional settings. For instance, there is typically a setting for the strength of the scattering effect. Ideally this setting should allow you to provide a grayscale map which adjusts the strength of the scattering at various locations on the surface. Others will allow you to control which parts of the spectrum are absorbed and/or scattered by providing color controls for those settings.

Note: I have seen articles in both artistic and scientific oriented 3D journals which go so far as to simulate multiple portions of both the epidermis and dermis layers of the skin. That is hardcore!

SSScatter Pre or Post?

One challenge that can arise for SSS enabled shaders is how to combine the effect with the diffuse color values which define the color of the top layer of the surface. The decision typically comes down to whether the light that enters the surface to be scattered should be filtered by the diffuse color of the surface, or whether that light should be treated as white, with the controls on the scattering part of the shader determining how the light exiting the skin should look.

In the subsurface shader included in DAZ Studio, you can choose whether to apply the diffuse layer to the surface prior (Pre) to the subsurface scattering or after (Post) the scattering process. Will (aka Age of Armour), the author of that shader, has an excellent video tutorial ( Subsurface Shader Basics ) available which describes in much greater detail how to get better results from using that shader.

SSSigning Off

I hope this helped a little with understanding what the subsurface scattering effect is all about and what the shaders that support it are trying to simulate for you. And I hope you don’t hate me for starting all my sections sounding like a sssilly sssnake. 🙂
