Archive for January, 2014

Acquiring 3D Content

A new artist over at the DAZ Forums asked about strategies for acquiring content. Since I took the time to write up a lengthy reply about my approach, I thought I’d re-post it here too.

I’m not “new” at this by any stretch; so my approach is different now than when I was first building my library. With over 2600 packages (so, many more individual items than that) in the library, I can afford to be choosy. However, since you asked, this is how I approach purchasing content…

How Useful Is It?

  1. I only buy stuff I’m pretty sure I’m going to use. I don’t do a lot of renders with male figures, or in present day settings. So fantasy/sci-fi and female clothing / characters / etc. are the way to go for me. Your mileage may vary on that one.
  2. Think about the utility of the item. For clothing, I look to see whether I could easily mix and match pieces to get an outfit I want, or whether I’m forced to use the entire set. For props and scenery, I look for things where I could re-use bits and pieces so that you don’t look at something I rendered and think “Oh, he used THAT building…” The more versatile something is, the more it is worth to me.

Price Tag Watching

  1. Ignore the % off!! It is an arbitrary number. Is something that is marked down to $14 from a $20 retail price really more valuable than if it had been $14 to begin with? If it isn’t worth paying full price for, it probably isn’t worth paying a sale price for either. There are a few exceptions. When something gets down to the $2 range, I might buy it “just in case I need it.” But mostly, if I wouldn’t pay the full retail price for it, I’m not going to buy it just because it was arbitrarily marked down 30%.
  2. Keep in mind, there will ALWAYS be another sale. Don’t know if you’re familiar, but around here we have a department store called “Kohl’s”. If you ever pay full price for something at Kohl’s, you’re ripping yourself off. Practice patience and it will be on sale. If you really can’t afford it right now, take a breath and remind yourself that it will go on sale again some time.
  3. Prepare for the big seasonal sales. March is traditionally a big month for sales @ DAZ. So are the last few months of the year. That’s when I will pick up the things that I kind of want, but not badly enough to buy at full price. Other sites have similar cyclical sales. If you learn them, you can pace your spending so that you can splurge when things are cheaper.

Saving the Bank

  1. Platinum Club – Especially for a new artist, there frankly is no better value than joining. If you can afford the whole year, I’d just go ahead and do that so you don’t need to think about it again for a while. 🙂 The PC goes on sale too sometimes; so if you’re not planning any big purchases for a while, hold off and see what comes up in the near future.
  2. The wishlist is your friend. Rather than going to the sale categories to see what is there, I use my Wishlist to see if anything that I have previously indicated I actually WANT is on sale. That helps with making sure I’m spending on the right stuff.
  3. If you have a tight budget for these things and you find yourself compulsively overspending, consider what I’ve done … I use a prepaid credit card for purchases. So I load it up at the beginning of each month. When the money is gone for that month, it is gone. Or at least I have to make a conscious decision to reload some money on to it rather than just thinking “Oh, that’s cool, and it’s only $8!”

Managing Virtual Memory

Lately there have been a lot of threads on the DAZ Studio Forums about crashes of the program (mostly when attempting to render something). As there can be so many factors involved when it comes to what causes any program to crash, troubleshooting these issues becomes a guessing game as to which one applies to any particular situation. However, one item that is important for 3D rendering of any sort is how memory is used in your computer. In this post, I’ll offer some ideas on tweaking the settings in Windows that control how the operating system manages your RAM.

Notice: This information is provided based on my experience managing my own computer systems over the past 20 or so years. No guarantee or warranty is either expressed or implied in the tips provided here. If you choose to follow any suggestion offered in this information, you are doing so at your own risk!

32 versus 64-bit Computing

I won’t go into all the implications of what the difference is between a 32-bit and 64-bit version of Microsoft Windows. For this conversation, it is sufficient to say that in 32-bit versions of the Windows operating system, an application is (by default) limited to no more than 2 Gigabytes of memory at a time. And the system overall cannot address more than a total of 4 Gigabytes of RAM.

In a 64-bit version of Windows, a 32-bit application is allowed to access up to 4 Gigabytes of memory (provided it was compiled with the large-address-aware option discussed below; otherwise it is still limited to 2). On the other hand, a 64-bit application running on a 64-bit version of Windows would (theoretically) be permitted to access up to 8 Terabytes (that’s 8192 Gigabytes) of memory!

So, simply running a 64-bit version of Windows will provide more available memory for your 32-bit applications, and it will allow you to run 64-bit applications as well.
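If you want a quick sanity check of what you’re running, here is a minimal Python sketch. Keep in mind it reports the bitness of the Python process itself, not of DAZ Studio or any other application.

import platform, struct

print("Machine architecture:", platform.machine())          # e.g. 'AMD64' on 64-bit Windows
print("This process is", struct.calcsize("P") * 8, "bit")   # pointer size: 32 or 64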

What about /3GB?

Ok, you’ve probably seen all those great tips claiming that if you change the startup sequence of your 32-bit Windows to include the /3GB switch, your applications will “magically” be allowed to use 3 Gigabytes instead of 2, right? Well … not exactly. That switch does ALLOW applications to use up to 3 Gigabytes of RAM. However, there are some significant caveats. First and foremost, the application has to have been created (compiled) with the ability to take advantage of that OS feature. Most 32-bit applications are NOT compiled that way.

Second, what you are doing by using /3GB is telling Windows to reserve less address space for itself (the kernel) and give more to applications. If you have a lot of services and other things running in your operating system space, you could actually see significantly worse performance from your system by enabling the /3GB switch.

As for DAZ Studio, the executable itself indicates that it does support the large address space option. And evidence from end-user testing suggests that the 32-bit version of DS 4.0 (and later) does benefit in some way from the /3GB switch in Windows. So, it may be worth a try if nothing else seems to help.
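If you’re curious whether a particular 32-bit executable was compiled as large-address-aware, the flag lives in the PE header of the file. Here’s a small Python sketch that reads it; the DAZ Studio path shown is just an example and may not match your installation.

import struct

def is_large_address_aware(path):
    # The DOS header stores the offset of the "PE\0\0" signature at byte 0x3C
    with open(path, "rb") as f:
        f.seek(0x3C)
        pe_offset = struct.unpack("<I", f.read(4))[0]
        f.seek(pe_offset)
        if f.read(4) != b"PE\x00\x00":
            raise ValueError("Not a PE executable")
        # Characteristics is the last 2 bytes of the 20-byte COFF header that follows the signature
        f.seek(pe_offset + 22)
        characteristics = struct.unpack("<H", f.read(2))[0]
    return bool(characteristics & 0x0020)  # IMAGE_FILE_LARGE_ADDRESS_AWARE

print(is_large_address_aware(r"C:\Program Files\DAZ 3D\DAZStudio4\DAZStudio.exe"))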

Memory Usage

Microsoft provides several tools for monitoring the memory usage of your applications. Task Manager has the Processes tab, where you can sort by memory usage to see what is consuming that resource. On the Performance tab, you can also see how much of your physical and virtual memory is in use. Starting with Windows Vista, that tab also links to a tool called Resource Monitor, which can show in greater detail exactly what resources your applications are using.
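If you prefer something scriptable to the built-in tools, here’s a minimal sketch using the third-party psutil package (pip install psutil). It just prints the same kinds of totals the Performance tab shows.

import psutil  # third-party: pip install psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()  # on Windows this roughly corresponds to the page file
print(f"Physical RAM: {vm.total / 2**30:.1f} GB total, {vm.percent}% in use")
print(f"Page file:    {sw.total / 2**30:.1f} GB total, {sw.percent}% in use")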

Physical Memory

Ok, let’s be honest here, there isn’t much you can do to influence the use of your physical RAM other than adding more. So get more if you can. 🙂 You can also use tools like the built-in memory tester in Windows 7 ( link ) to test your RAM chips for potential problems.

Virtual Memory

In discussing memory in Windows / DAZ Studio, I think one part that is often overlooked is the virtual memory (aka page file) portion of memory management. First, a quick look at what virtual memory is all about.

Pagefile.sys

Let’s say you have Firefox, DAZ Studio, and Skype all running under a 32-bit version of Windows 7. Each of those applications has been told by Windows that it has up to 2 Gigabytes of memory available. However, your system only has 4 Gigabytes of RAM in it. And Windows itself needs quite a bit of RAM just for basic operations. What happens when you fill up your physical RAM?

That’s where the pagefile comes into play. When physical memory starts to become scarce, Windows looks for pages of memory which haven’t been accessed in a while. Those unused pages are temporarily swapped out of RAM and into a file on your hard drive (pagefile.sys). If the data in that page is requested by an application, the virtual memory manager loads the page back from the page file into RAM and the data is available again.

Managing Virtual Memory

By default, Windows sets up a pagefile.sys on your system drive (c:) and configures it to be “system managed”. This means that Windows looks at your memory usage over time and adjusts the size of that page file to handle your likely needs. For general computing use, it’s a good way to go. The system balances the need for virtual memory with trying not to take up too much of your system drive with the page file and things are good.

The problem with 3D graphics, though, is that our use of memory doesn’t match the models that Microsoft used when they created the virtual memory manager. We can quickly require large amounts of memory, and the more complicated we get with what we’re doing, the larger the textures we’re working with, etc., the more memory we need. We don’t follow a nice even curve of memory usage; our memory needs spike and plummet often.

This causes the virtual memory manager some issues when it tries to figure out what to do with your page file. In my experience, Windows doesn’t grow the page file quickly enough to keep up with our needs because it is looking at average usage over time, whereas we need memory NOW. So, one thing I do whenever I build (or rebuild) my graphics machines is to take manual control over the page file.

How Much Page File?

Before I get into the mechanics of setting your page file, let’s talk a bit about how much of a page file you may need. Well … it depends. Mostly on a couple of factors: the first is how many hard drives you have in your computer; the second is how much space they have available.

How Many Drives?

You have at least one hard drive (C:\). But in many desktops (and some laptops) you may have more than one hard drive installed. Note that I am talking here about physical hard drives. While you can separate a hard drive into multiple partitions, the advice I’m about to give doesn’t help in that case; it only helps if you have separate physical storage devices.

Windows (and applications) perform better when the page file is on a separate physical drive from your system files. Your C:\ drive is already going to be busy with Windows asking for DLLs, program executables, etc. Keeping the pagefile.sys there as well is going to create contention for that resource. You are better off moving it to another physical drive if you can.

There is, however, limited benefit to splitting the page file across multiple drives. The added overhead of querying extra devices and re-combining pages tends to cancel out the benefit of the extra device I/O speed.

A quick note about Solid State Drives (SSDs) … I’m torn about using those for the page file. On one hand, since the page file is essentially an extension of your RAM, the faster the storage device the better; so in that way SSDs make sense. On the other hand, SSDs wear out under heavy write activity, and constant reading and writing is exactly what happens to the page file. So I guess it is up to you.

What is the proper sizing?

This is going to depend on a few factors. The biggest is how much space is available on the drives you’re going to use for paging. Now don’t go all crazy and allocate 100 Gigabytes or something just because you have it. You want to keep things reasonable. My rule of thumb is that I normally want somewhere between 1.5 and 2.0 times the physical RAM I have in my PC. So, if I have 8 Gigabytes of RAM, I want a page file between 12 and 16 Gigabytes in size.

If I have 16 Gigabytes available and I’m not overly crunching space on my drive to get it, I’ll use that, otherwise I’ll go smaller. I would hate to go smaller than 1.0 times my physical RAM though. Below that, I’m likely to starve my system for memory.

You’ll see in the mechanics section below that when setting the page file size you can tell Windows “Start at this size and grow to this…” Don’t bother. I always set both the min and max to the same value. Expanding a page file is “expensive” in computing terms. It also opens the door to the page file becoming fragmented on the hard drive (thus making it slower), and if you only have the C drive available, you can end up filling your system drive when the page file expands (which, trust me, you NEVER want to do!).

There is a drawback to not allowing Windows to ever expand the page file, though. If you were to happen to fill up all 16 Gigabytes of page file and Windows needed more, the next memory allocation call would fail. So, if you think that’s a possibility (and your hard drive space can stand it), you might consider something like a range of 16-18 to give yourself a little breathing room.

As with any “rule” like this, there are going to be exceptions. The 1.5x sizing works fine up to a point. Once your PC has more than 8 Gigabytes of RAM, it starts to be excessive on a pure-need basis. You need something (see the Warning section at the bottom), but if you have a significant amount of physical RAM, a smaller page file could easily work for you. I would just keep an eye on your memory usage, especially when using memory-intensive programs, and make sure you’re not getting close to maxing out the page file usage.
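To put numbers on that rule of thumb (the dialog described in the next section wants sizes in Megabytes), the arithmetic looks like this. The 8 Gigabytes of RAM is just an example.

ram_gb = 8                           # physical RAM in this example
min_mb = int(ram_gb * 1.5 * 1024)    # 1.5x rule of thumb -> 12288 MB
max_mb = int(ram_gb * 2.0 * 1024)    # 2.0x upper end     -> 16384 MB
print(f"Page file size to enter in the dialog: {min_mb} to {max_mb} MB")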

How Do I set it?

Note that the following screenshots are from Windows 7. If you are using Windows XP or Windows 8, your specific dialog boxes, etc. may vary, however the concepts are the same.

  1. Open Control Panel – I always change Control Panel to the Large Icons view since I find the Category view annoying
  2. Click on the System Icon

    Control Panel

  3. On the left panel, click on Advanced System Settings

    System Control Panel

  4. Click on the Advanced tab

    System Properties

  5. In the Performance section, click on the Settings button

    Advanced System Properties

  6. Click on the Advanced tab

    Performance Settings

  7. In the Virtual Memory section, click on the Change button

    Advanced Performance Settings

  8. Uncheck the checkbox at the top that says “Automatically manage paging file size for all drives”

    Virtual Memory Dialog

  9. Select the hard drive where you want to have a page file and change the “System managed size” option to “Custom Size”
    Setting Page File Size

    1. In the Initial Size, type the size of the page file in Megabytes (Gigabytes x 1024)
    2. In the Maximum Size, either type the same number, or some number 1024-2048 Megabytes larger
    3. Click the Set button
  10. If you have more than one hard drive, and you are moving the page file from your system drive to another hard drive, select the C drive, then choose “No paging file” and click the Set button.
    Note: Windows will warn you that without a page file on the system drive you won’t be able to create a dump file in the event of a system crash (BSOD). While this is true, in 20 years of experience as a Microsoft technology consultant, I’ve only actually NEEDED a dump file once.
  11. Click OK to save the settings
  12. Windows will warn you that the new settings will only apply after the next reboot; you can reboot right away if you’re ready to test

If something really goes badly for you with these manual changes, you will still be able to boot Windows, even without a page file anywhere. At that point, you can re-check the system managed option to get back to how things were before you messed with things.

Corrupted Page Files

I have on very rare occasions seen a corrupted page file. When that occurs, it can look like the computer has issues with its physical memory even though the problem is with the data stored in the page file. If you suspect this could be the case for you:

  1. Follow the instructions shown above
  2. Set all hard drives to “No paging file”
  3. Reboot your computer
  4. Using Windows Explorer, ensure that the pagefile.sys file is gone from the root of all your hard drives (you may need to turn on the ability to see hidden and system files in order to see it)
  5. Reset the virtual memory page file settings to what you want them to be, forcing Windows to create a new file
  6. Reboot the computer again to start using your new page file

WARNING

Do NOT run your computer for any length of time without any page file at all, even if you believe you have enough physical RAM for any memory needs your applications might require. Many applications (and even Windows itself) expect to be able to store certain information in the pagefile when they know in advance that it will seldom be needed. Running without any page file on your computer for an extended period is likely to cause significant stability problems!

Guidance not Rules

The above information is intended to be guidance for you to work with in exploring if configuring your page file might help performance and stability issues for your PC. The information here should not be taken as a hard and fast rule as to how everyone MUST configure their computer.  Naturally, I can’t guarantee you that changing your page file configuration is going to fix problems you might have with rendering in DAZ Studio. However, taking greater control over how Windows manages your memory allocation may help some and isn’t likely to hurt.

Coming Soon: New Artist Mistakes

I’m almost done with Part 4 of my 3 part series on 3D surfaces. I’ve decided to follow that up with a post or two on common mistakes that new 3D artists make with their first images. I may need to break them down into a couple of categories. I’m thinking right now of the following…

  • Lighting Mistakes
  • Posing Mistakes
  • Set / Framing Mistakes

I’m going to create some images that purposefully make these mistakes and then point out why they don’t work. I’ll also create some quick images that are similar that correct the mistakes to show alternatives.

Those of us who want to help others grow as artists want to be able to offer criticism. But sometimes the artist’s ego is a fragile one. It can be hard for us to see the same “obvious” mistake for the 100th time and think of a way to say it without hurting the artist’s feelings. My hope is that by ripping apart my own images I’ll be able to provide some of that valuable feedback to others without damaging a budding new artist’s interest in this hobby.

For those who may be following this, if you can think of any ideas for mistakes you’d like to see addressed, please feel free to leave a comment here.

Learning From Elena

If you’re on Facebook (or likely any other social networking site), you’ve probably already seen links to the excellent photography work by Elena Shumilova (gallery on 500px). What I would encourage any aspiring artist (no matter what medium, but especially 3D images) to do is review her work to understand why she has gained such worldwide recognition in a short amount of time.

The Critical Eye

When I was into creative writing in a big way, I talked with friends and family about how studying writing made me read differently. Part of me would be reading the story, but part of my brain was also analyzing the structure of the story. If I got confused by something, I would stop and try to figure out what the author did wrong. If something really touched me, I’d think about what they were doing right. I had to force my brain to turn off that analysis sometimes so I could just enjoy the story.

I’ve reviewed Elena’s photographs a few times now. The first time, I was probably like most of her visitors: “Aww” … “Beautiful!” … “Amazing!” … etc. But if I want to get any better at my current art form, I can’t let it stop there. So I have gone back and looked again and again. I look at how the photo is framed. How does she use light and darkness and color to enhance the feeling of the image? How does the posing of the principal actors in the photo contribute to the image?

Note: I’m not suggesting she set up these photos and they are “fake” in some way. I’ve also studied photography. When shooting photos of children and animals, one of the hardest parts of the shoot is figuring out how to anticipate that perfect moment and capture it in time. One thing that makes her images so ‘magical’ is that she managed to do that. Especially with the photos of her boys and the animals.

Learning to look at images like this with a critical eye and understanding WHY you like something will help you become better at getting the vision that you have in your mind into the image that you’re creating.

But .. but .. I’m a rebel!

Some of you won’t like her photos. Some of you may feel that the sentimentality or the “cuteness” isn’t something that you’d ever want to have in one of your images. That’s ok. You don’t have to try to create THAT mood. As I heard a film professor telling potential students one day, “Don’t aspire to be the ‘next Steven Spielberg’! We already HAVE one of those. Aspire to be the best Roger Harrison or Jenny Wolf or Jim Blackwell that you can be!”

That doesn’t mean that you can’t learn something by studying the photos she’s taken. Basics like light and darkness, framing the image in a way that keeps the focus on the important things, etc. will apply no matter what type of emotion or message you’re trying to share.

The Architect – Skyline v0

I spent some time tonight on the city that my architect is working on. The cityscape is a combination of Stonemason’s Greeble City Blocks and an old set called the Dystopia City Blocks. The platform in the foreground is from The Core, another Stonemason creation. I spent a fair amount of time working on the city so that it didn’t look like every other future city that uses Stefan’s greeble blocks. But it still needs some work.

First take on the skyline

The skyline isn’t finished yet, and I don’t care for the lighting as it stands, but it is a start.

Time Spent

Approximately an hour actually working on things. Another 30 minutes or so was spent looking for things. 🙂 Neither the Dystopia nor the Greeble City Blocks had been installed yet; so I had to find them first.

Accumulated time: 5.5 hours

3D Surfaces and Light (Part 3)

Continuing my series on 3D surfaces:

In this installment, I’m going to talk about how the color settings and texture maps on the surface as well as colors in the lights affect the results.

Mapping Things Out

First, let’s talk a bit about how texture maps work. If you’ve been in the 3D world for long, you’ve probably heard the term “UV Mapping”. FYI, the “UV” isn’t about sunscreen. 🙂 It doesn’t stand for “ultraviolet”; it refers to the coordinates “u” and “v”, which are used to look up information from a texture map.

u and v are coordinates that map (or translate) from the 3-dimensional object space to a 2-dimensional space on the texture graphic. Let me see if I can make this clearer visually. In the following diagram, the point on the front face of the cube is mapped to the (u,v) coordinates of (100,75). Those coordinates represent the portion of the texture map indicated by the arrows at (100,75).

Mapping from a point on the cube to a location on the texture.

Now, you might wonder why we need special coordinates for this. As 3D artists, we mostly work in the standard (x,y,z) coordinate space. Well, consider if the cube rotates as in the diagram below. In this case, the (x,y,z) coordinates of that point on the cube have changed. However, the (u,v) coordinates remain constant.

Showing that rotating the cube doesn’t change the (u,v) coordinates.

If you get deeply into 3D graphics (especially if you start trying to write your own shaders) you will discover that there is a plethora of coordinate spaces, each of which serves a purpose.
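To make the lookup idea concrete, here is a minimal Python sketch of a nearest-neighbour texture lookup. It assumes (u,v) have been normalized to the 0–1 range (the (100,75) in the diagram is already expressed in pixels), and notice that nothing about the object’s rotation appears anywhere in the function.

def sample_texture(texture, u, v):
    # texture is a list of rows, each row a list of (r, g, b) tuples
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)    # u -> column
    y = min(int(v * height), height - 1)  # v -> row
    return texture[y][x]

# A 2x2 "texture": red, green on the top row; blue, white on the bottom
tex = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
print(sample_texture(tex, 0.75, 0.25))  # -> (0, 255, 0)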

X Marks the Spot?

So, now that we have a very basic understanding of what a map is and how it is referenced, how are texture maps used in shaders? Well … using my favorite answer for such questions … it depends! 🙂 Remember that the real answer is that how things like texture maps are used depends entirely on how the shader that is applied to the surface uses them. However, that would be a cop out on my part; so let me talk about the most common ways that you’ll see texture maps used.

It’s a Gray Kind of Day

In many of the cases I’m about to talk about, I’ll mention whether the setting uses a color or grayscale image. Rather than repeating what a grayscale image is each time, I thought I’d cover it once to begin with.

Grayscale images are images where the color has been removed and only the value remains. The simplest way to think of a grayscale image is that the red, green and blue values of the color are averaged to arrive at a gray value. If a particular setting in a shader is expecting a grayscale image and you provide it with a colored one instead, it will average those values for you. This can create some interesting results. Consider the following diagram…

Different colors can have the same gray scale value

Although the three colors are quite different, the average RGB value is 127. So in a grayscale image, they would all look the same. For this reason, I often suggest that if a 3D artist is going to add a grayscale map to their surface settings, they should take the time to use the image editor of their choice to remove the color and make sure it is providing the values that they really want to see.
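Here’s a quick Python sketch of that averaging. The three colors are made up for illustration (I don’t have the exact values from the diagram), but they show how very different colors can collapse to the same gray value.

def to_gray(r, g, b):
    # The simple average described above; real converters often weight the channels instead
    return round((r + g + b) / 3)

print(to_gray(127, 127, 127))  # 127 - a medium gray
print(to_gray(255, 126, 0))    # 127 - a bright orange, same gray value
print(to_gray(0, 126, 255))    # 127 - a strong blue, same gray value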

Adding Details

In my Displaced Bumps and Is This Normal? tutorials, I talked about various methods we can use to add fine 3D details to our surfaces: bump, displacement, and normal mapping. In those cases, texture maps are used to indicate the magnitude of the changes to the surface. For bump and displacement, the map is treated as grayscale. Normal maps are a special case: they use colored maps, but the colors carry specific meaning. Dropping a texture graphic intended for the diffuse channel into a normal map will NOT give you the results you might have been hoping for. See my discussion of DAZ Studio Displacement in Carrara 8 for more information about normal maps.

Hiding Surfaces

In many cases, we can also apply a grayscale texture map to the opacity channel. Rather than saying that an object is 100 percent visible, or 50 percent visible, etc., we can use a texture map to vary the opacity across specific parts of a surface. This is sometimes called transparency mapping. We see this most often in 3D hair content, but it can also be used in clothing materials to hide portions of a surface, for example to create a lacelike effect on a dress.

Strength Control

Most shaders will allow us to use a grayscale map in the strength channel. This gives us much finer control over the effect that a particular channel has on our surface. Basically, rather than telling the shader “Add 90% of the specular light to the surface”, adding a map says “Look up the (u,v) location in the texture map and scale the level of effect by what you find there”.

It is important to note that when we’re using a texture map in the strength channel, the percentage value does still have a role to play. If we’re using a texture map for specular strength, and the strength is set to 100%, then the grayscale image will change the effective level from 0% for black (0,0,0) to 100% for white (255,255,255). If we change the percentage to 75%, then the maximum value for a white portion of our grayscale map becomes 75%.
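As a minimal sketch of that interaction, here’s the scaling in Python; the first two lines mirror the 100% and 75% examples above, and the third adds a hypothetical mid-gray map value.

def effective_strength(channel_percent, map_value):
    # channel_percent is the dial setting (0-100), map_value the grayscale pixel (0-255)
    return (channel_percent / 100.0) * (map_value / 255.0)

print(effective_strength(100, 255))  # 1.0   -> white areas get the full effect
print(effective_strength(75, 255))   # 0.75  -> white areas are capped at 75%
print(effective_strength(75, 128))   # ~0.38 -> mid-gray areas scale down further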

Rose Tinted Glasses

So, those are the “other” purposes for texture maps, but this section of the tutorial is about colors, right? By far the most common purpose for texture maps is to use them in the diffuse channel to add color details to the surfaces. Without a texture map, we would be limited in what we could do for the color of the surface by what RGB values we could assign to it. Texture maps are what allow us to have human skin that isn’t uniformly a single color.

In this case, the texture map is telling the shader, “When deciding what color to make this surface, look up the (u,v) point in the texture map and choose the color from there.”

However, just like the strength channel discussion, the color setting you use in your shader values also comes into play. To understand how, we need to look at how rendering engines actually think about color.

For artists, we tend to think of colors in terms of Red, Green and Blue values. We’re probably used to expressing those values in terms of 0 to 255 for each color. However, the rendering engine doesn’t see them that way. To the engine, colors are RGB values where each value ranges from 0 to 1. So, while we might define Cyan as (0,255,255), the rendering engine sees that color as (0,1,1).

So what does that mean for how the color setting and texture map interact? Well, basically, the rendering engine multiplies the two together on a channel by channel basis. So, if you have the color value set to white (255,255,255), and the texture map lookup returns cyan (0,255,255), the multiplication is pretty simple…

Red = 255 * 0 (i.e. 1 * 0) = 0
Green = 255 * 255 (i.e. 1 * 1) = 1
Blue = 255 * 255 (i.e. 1 * 1) = 1

So, you’ll get a cyan color for that part of the surface.

However, consider if you’ve set the color value to magenta (255,0,255). At the same point, the texture map lookup returns cyan (0,255,255), but the math is going to look very different…

Red = 255 * 0 (i.e. 1 * 0) = 0
Green = 0 * 255 (i.e. 0 * 1) = 0
Blue = 255 * 255 (i.e. 1 * 1) = 1

So now your surface at that location is going to look pure blue!
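Here is the same per-channel multiplication written out as a small Python sketch, using the white/cyan and magenta/cyan examples from above.

def multiply_colors(color_setting, map_color):
    # Both colors given as 0-255 values; the engine works in 0-1, so normalize first
    return tuple((a / 255.0) * (b / 255.0) for a, b in zip(color_setting, map_color))

print(multiply_colors((255, 255, 255), (0, 255, 255)))  # (0.0, 1.0, 1.0) -> cyan
print(multiply_colors((255, 0, 255), (0, 255, 255)))    # (0.0, 0.0, 1.0) -> pure blue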

By the Pale Moon Light

Let’s extend that color multiplication discussion in to lights. Do you know why a car looks blue? It is because the surface of that car is absorbing all of the light except for blue. Blue light is reflecting from the car and so that is what you see. The red and green portions of the spectrum are absorbed by the surface.

In most cases, we use lights that are white (or a close variation thereof) and so the effect that the light has on the surface isn’t that much of a factor. However, when we try to get fancier with our lights or we try to use them to create some sort of special effect by coloring them, we can end up with unintended consequences!

Let’s say that the surface calculations tell the rendering engine that the color of the surface is cyan (0,1,1). What that really means is that the surface will reflect 100% of the green and blue light that hits the surface. If our only light source is set to red (1,0,0), what do we get?

Red = 1 * 0 = 0
Green = 0 * 1 = 0
Blue = 0 * 1 = 0

We get black (0,0,0). Granted, most of the time our light and surface colors aren’t that neat and simple, but it does show why when you get too far outside the normal range of “white” lights, you can have unintended consequences.

1000 Points of Light

What if there are multiple lights in the scene? Well, the contributions from the lights are added together to get a final color for the surface. So, if we keep with our cyan (0,1,1) surface, and we have one yellow (1,1,0) and one magenta (1,0,1) light which happen, due to planning or circumstance, to be lighting the surface exactly equally, then we’ll get a surface color like this…

Light 1 (1,1,0)

Red = 1 * 0 = 0
Green = 1 * 1 = 1
Blue = 0 * 1 = 0

Color 1 = (0,1,0)

Light 2 (1,0,1)

Red = 1 * 0 = 0
Green = 0 * 1 = 0
Blue = 1 * 1 = 1

Color 2 = (0,0,1)

Final Color

Red = 0 + 0 = 0
Green = 1 + 0 = 1
Blue = 0 + 1 = 1

Final Color = (0,1,1)

So, we’ll end up with the cyan color of the surface.
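A minimal sketch of that mixing: multiply the surface color by each light, then add the results. Real engines also factor in distance, angle, and falloff, which I’m ignoring here to match the “lighting the surface exactly equally” assumption.

def shade(surface, lights):
    # surface and lights are (r, g, b) tuples in the engine's 0-1 range
    total = [0.0, 0.0, 0.0]
    for light in lights:
        for i in range(3):
            total[i] += surface[i] * light[i]
    return tuple(min(channel, 1.0) for channel in total)  # clamp to 1

cyan, yellow, magenta = (0, 1, 1), (1, 1, 0), (1, 0, 1)
print(shade(cyan, [yellow, magenta]))  # (0.0, 1.0, 1.0) - back to cyan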

Wrapping Up

For strength-type channels (bump, displacement, opacity, color strengths, etc.), applying a grayscale image to the strength channel allows us to vary the effect of that part of the shader across the surface. Applying texture maps to color settings forces the rendering engine to look up a value from the texture map when determining the color of the surface at that point. And if we use anything other than white in our color settings on the surface and lights, we have to keep in mind that we’re multiplying colors together, which can mean that we’ll end up with changes to the visual effect of the surface. This can work to our advantage, however, if we plan for it.

In my final installment in this series, I’m going to fire up DAZ Studio with a pretty simple scene so that we can see how varying one value while keeping the rest constant changes the look of the objects. Maybe some visual examples will help where reading the text of this series wasn’t clear enough.

The Architect – Character work

I worked a bit more on her skin tonight, then on getting her outfit to work. I’m using an outfit called Bioflow by Aeon Soul (formerly known as Aery Soul). The outfit isn’t currently available anywhere, which presents a challenge since Alice’s body shape has changed; I had to manually adjust some of the pieces to get them to fit her.

Alice as my architect.

Time Spent

Another 2.5 hours of work tonight. Could have been faster if I’d used an older version of Alice’s body shape, or a newer sci-fi set. But I’m kind of stubborn that way.

Time so far: 4.5 hours.

The Architect (prelude)

While this is a new project, the concept for it has been bouncing around in my head for years. It will be a sci-fi themed image with a young woman as an architect designing a building to fit into a city skyline.

I’ve long had the sense that new 3D artists may not be aware of all the work that goes into some of the final images they see. They see some amazing artwork in a gallery, read that it was done with DAZ Studio or Poser, and think “Oh, I could do that then!” So I’m going to try to chronicle the process that I go through from beginning to end with this, and provide some information about the amount of time it takes me to get to my final result.

I’m SURE that there are people who can do things faster than I do. And I’m a bit rusty after some considerable time away from the hobby. But I hope people find this journey interesting and informative anyway.

Alice

I’m using Alfaseed’s Alice character. So far, I’ve just been messing with getting the skin shaders to work how I’d like them. Lighting is pretty poor. Just two spotlights to allow me to see how changing the SSS parameters was affecting the image. I used Age of Armour’s excellent Subsurface Shader Basics Tutorial as a guide to the SSS shader in DS 4.6. I actually did the work without any hair on her, but I decided to put at least something there before I posted a shot. Probably won’t be the hair I use in the final image I have in mind.

Headshot of my architect

Time Spent

I spent a little over 2 hours on this last night. To be honest, much of that was just getting Alice to work properly. Alfaseed changed the way she works in this version from previous versions of Alice. The end result is easier to use, but the installation is more difficult. If I had read their installation tutorial FIRST, I would have saved myself a lot of time. 🙂

3D Surfaces and Light (Part 2)

Continuing my series on 3D Surfaces:

In 3D Surfaces and Light (Part 1), I talked about the physics of what diffuse, specular, and ambient settings in a surface shader are trying to simulate. In this portion of my discussion on 3D Surfaces and Light, I’ll talk about how those settings / dials you see are actually used in the shader / rendering code.

It Depends!

Ok, first I have to issue a caveat. Since much of how a surface is defined in a 3D program is dependent on the way that the shader code is written, the actual math of how things work can vary widely. You don’t have to look any further than the difference in how Poser and DAZ Studio handle the ambient channel to see an example of this.

In the default surface shaders, Poser treats the calculation of the ambient contribution to the color of the surface independent of the diffuse and specular settings. In DAZ Studio, the default shader blends ambient and diffuse together. This means that although both programs can use the same definition of the surface settings, the results that each program creates can be significantly different.

I’ve seen some folks call this a problem between Poser and DAZ Studio. This is inaccurate. The difference is in the shader code for the default surfaces. It isn’t in the rendering engine itself. And neither is “correct” or “wrong” in how they do it; they are simply different.

Shady shaders?

A brief primer on what a ‘shader’ is. In order to make general 3D rendering engines as flexible as possible, very little about how 3D objects and surfaces are handled is hard coded into the engine. Some rendering engines (especially real time engines such as for games) may break this rule in the interest of speed, but most general purpose rendering engines use shaders to define how an object looks in the final result.

Shaders are bits of code which tell the rendering engine things like “When light strikes this surface, this is how you should calculate the effect that it has on the image.” Most things in a 3D engine are actually defined as shaders. This includes surfaces and lights, even cameras.

Affecting the Effect

Ok, enough caveats and general thoughts, let’s get to the meat of things. For this discussion, I had to pick a basic shader to use as my framework. I’ve chosen Renderman’s “plastic” shader. This is a standard reference which is often used as the basis for other, more advanced surfaces. For example, DAZ Studio’s standard surface shader is an advanced version of this shader.

Warning: I’m about to get into some math and programming discussions, but I’ll do my best to make it easy to follow!

The code for Pixar’s reference “plastic” shader would look something like this…

Color =
(
Dcolor *
(
(
Astrength * Acolor
) +
Diffuse(N, (1,1,1), Dstrength)
)
) +
Specular(N, -I, Scolor, Sstrength, Sroughness)

Where the following are true…

  • Dcolor = the color setting for the diffuse channel
  • Dstrength = the strength setting for the diffuse channel
  • Acolor = the color setting for the ambient channel
  • Astrength = the strength setting for the ambient channel
  • Scolor = the color setting for the specular channel
  • Sstrength = the strength setting for the specular channel
  • Sroughness = the roughness setting for the specular channel (if the shader uses the term “glossiness”, then roughness is usually 1 – glossiness)

To work it from the inside parts out…

Acontrib = (Astrength * Acolor) – multiply the Ambient strength by the Ambient color to get the contribution that the Ambient channel is providing.

Color =
(
Dcolor *
(
Acontrib +
Diffuse(N, (1,1,1), Dstrength)
)
) +
Specular(N, -I, Scolor, Sstrength, Sroughness)

Dwhite = Diffuse(N, (1,1,1), Dstrength) – Call the built-in Diffuse function to calculate a diffuse value at the current location based on a pure white surface and the provided Diffuse Strength value. This gets used to “wash out” the ambient setting (see the next step). If there is no ambient setting, this will also provide the shader with what is needed to calculate the strength of the Diffuse component in the surface later in the function.

Color =
(
Dcolor *
(
Acontrib + Dwhite
)
) +
Specular(N, -I, Scolor, Sstrength, Sroughness)

DAcontrib = Acontrib + Dwhite – This basically “washes out” the ambient contribution by adding the ambient contribution to the white diffuse setting calculated above. This is why when you have anything above a zero in the Diffuse Strength setting, the ambient component seems to be lessened. If ambient strength had been set to zero, this factor would end up equaling the diffuse strength value.

Why Link Them?

This is where Poser and DAZ Studio diverge in how they calculate this. The example I’m using is the Pixar Renderman reference and is what the standard surface shader in DAZ Studio is based on. Poser does not combine ambient and diffuse in this way.

The question often arises, why does DAZ Studio link ambient and diffuse together and Poser doesn’t? Remember that in Part 1, I talked about how the ambient channel was an attempt to represent that in the real world, light bounces around much more than we can actually simulate in a rendering engine. So in this shader code, the programmer was trying to say that if no other light touches this part of the surface, the ambient setting should be used to represent this indirect light. However, if there is a light source on this part of the surface, that light should be stronger than the indirect lighting.

The ambient channel in Poser can also be used this way, however it ends up being up to the 3D artist to find the correct balance between ambient and diffuse lighting strengths. For this reason, in many cases the content creators for Poser use the ambient channel for special effects (like glowing patterns) rather than for the indirect lighting factor that it was designed for.

Again, neither implementation is “correct” or “wrong”, just different. And this difference is why you’ll see a change in how a surface looks in each program even with the same values in the channel settings. Back to the plastic shader breakdown…

Color =
(
Dcolor * DAcontrib
) +
Specular(N, -I, Scolor, Sstrength, Sroughness)

Dcontrib = Dcolor * DAcontrib – multiply the resulting Diffuse color by the washed out ambient contribution.

Color =
Dcontrib +
Specular(N, -I, Scolor, Sstrength, Sroughness)

Scontrib = Specular(N, -I, Scolor, Sstrength, Sroughness) – call the built-in function to calculate the Specular contribution based on the color, strength, and roughness settings.

Color = Dcontrib + Scontrib

Finally, add the diffuse contribution to the specular contribution to get the final color for the surface at this location.
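To tie the walkthrough together, here is a minimal Python sketch of just that combination step. The lighting-dependent Diffuse() and Specular() results are passed in as plain RGB tuples, since calculating them properly requires the lights and surface normal.

def plastic_color(Dcolor, Acolor, Astrength, Dwhite, Scontrib):
    # Acontrib: ambient strength times ambient color
    Acontrib = tuple(Astrength * a for a in Acolor)
    # DAcontrib: the ambient contribution "washed out" by the white diffuse term
    DAcontrib = tuple(ac + dw for ac, dw in zip(Acontrib, Dwhite))
    # Dcontrib: diffuse color times the combined ambient/diffuse factor
    Dcontrib = tuple(dc * da for dc, da in zip(Dcolor, DAcontrib))
    # Final color: diffuse contribution plus specular contribution
    return tuple(d + s for d, s in zip(Dcontrib, Scontrib))

# A red surface with a 20% white ambient setting and no direct light at all:
print(plastic_color((1, 0, 0), (1, 1, 1), 0.2, (0, 0, 0), (0, 0, 0)))        # (0.2, 0.0, 0.0)
# The same surface under strong direct light - the ambient term is far less noticeable:
print(plastic_color((1, 0, 0), (1, 1, 1), 0.2, (0.8, 0.8, 0.8), (0, 0, 0)))  # (1.0, 0.0, 0.0)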

Material Differences

As I mentioned at the beginning, this is an example of how the Renderman Plastic reference shader from Pixar works. Other surface shaders may use completely different math. For instance, consider the following code for the metal shader.

 Color =
Scolor *
(
Astrength * ambient() +
Sstrength * specular(N,V,Sroughness)
)

This code only uses ambient and specular, ignoring the diffuse settings completely. This would strengthen the specular effect, but only the ambient channel would add any other color.

Wrapped Up

I hope this made some sense to people, but if there are any questions, please feel free to ask them! In the final segment of this tutorial, I’ll talk in more detail about how texture maps and colors (both surface and light) combine to affect the look of your surface.

Project: Renderman Shaders

I’ve been working with the Shader Builder in DAZ Studio lately. To learn more about how it works and using it to write shader code, I’ve been converting some of the reference shaders from the Renderman Companion site and Fundza into Shader Builder Networks. Here are some of the results so far…

Gooch

MK Gooch shader from Renderman Companion.

Screen

Uses the Screen shader from Renderman Companion.

I’m struggling some with the screen. You can see on Aiko that the bands of the screen aren’t aligning properly. So I tried using the transform function to change the coordinate space that is used for determining the gridlines. Here are the results using each transform…

S & T coordinates translated to Camera space.

S & T coordinates translated to Object space.

S & T coordinates translated to Shader space.

S & T coordinates translated to World space.

S & T coordinates translated to Screen space.

S & T coordinates translated to Raster space.

S & T coordinates translated to Normalized Device Coordinates (NDC) space.

It seems to me that the Object space works best to remove the seams, but still isn’t perfect; so I implemented the “Show ST” shader which shows the S & T coordinates for the models.

ShowST

You can see from the render here that Aiko 4 has some seams in her UV maps that may be hard to get rid of.

Uses the Show ST shader from Renderman Companion.
