Author Topic: Art With Botanic (Dial-up warning LOTS of images)  (Read 7489 times)

botanic

Art With Botanic (Dial-up warning LOTS of images)
« on: August 12, 2009, 10:50:06 pm »
I will post a number of tutorials here that will explain any number of things related to art.

Normalmapping in General  http://www.hydlaaplaza.com/smf/index.php?topic=35743.msg408973#msg408973

Advanced Texturing-Quick Guide  http://www.hydlaaplaza.com/smf/index.php?topic=35743.msg408974#msg408974

Normalmaps Explained  http://www.hydlaaplaza.com/smf/index.php?topic=35743.msg409215#msg409215

Character Creation P1  http://www.hydlaaplaza.com/smf/index.php?topic=35743.msg409230#msg409230
Character Creation P2  http://www.hydlaaplaza.com/smf/index.php?topic=35743.msg409230#msg409231

Disclaimer: Not all these tutorials have been written by me.

Feel free to re-post these anywhere, however please do not change them.
« Last Edit: December 06, 2010, 12:46:49 am by botanic »

botanic

Re: Art With Botanic
« Reply #1 on: August 12, 2009, 10:59:54 pm »
Smoothed and Unsmoothed Tangents

Normalmaps are baked differently depending on the object's smoothing.
The renderer's goal is to compensate for the smoothing, so that the lowpoly model with the applied result will be as close as possible to the highpoly version.





Smoothed Tangents - Smoothing Errors Problem and Workaround

In current realtime rendering technology, smoothing on faces with hard angles (such as cubes and sharp slopes) leads to bad shading - commonly referred to as smoothing errors.
 


The workaround for this problem is to add more geometry at the edges, which makes them smoother and leads to more accurate smoothing results from the realtime renderer.

Although this workaround is useful for objects with a few problematic areas, on objects with a lot of sharp edges the polycount overhead would be tremendous, which makes this method pretty useless for most technical models.
This is also one of the reasons many games reserve normalmapping for characters (which smooth well) and use it very little on technical models.




Unsmoothed Tangents - Edge Errors Problem and Workaround

Unsmoothed tangents don't use smoothing, and therefore don't suffer from the smoothing errors above.
However, let's render the simple box with unsmoothed tangents and see the problem that occurs:

 
The problem occurs because each face's normal map points in a different direction, as illustrated below.
Even if the UV edge is exactly between the pixels, the way the realtime renderer filters those pixels will mix them up, resulting in a sharp normal transition that stands out from the rest.

 
So where there are seams, this problem won't occur - therefore using unique UVs (a seam on every edge) would completely eliminate the problem.
 

As you can see, this UV layout both sacrifices precious texture real estate and is impractical to texture by hand. However, it can still be used with projection tools (available in most high-end packages) - for example, the texture can be painted on a proper UV layout first and then projected onto this model with the unique UVs.




3ds Max's smoothing groups

This theory can be replicated through the use of 3ds Max smoothing groups; although it's not as efficient, the results are satisfactory.

   1. Wherever there's extreme sharpness, separate into smoothing groups.
   2. Make sure UV islands don't contain multiple smoothing groups, or you'll have a black line where they separate.
   3. This is mainly needed for baking; after baking you can use the next method.


 



 NORMAL MAPS: PART II - page 2: Tips for Creating Models

As I've discussed, the whole goal of normal mapping is to make a low poly model look like a high poly model. This is usually achieved by creating both a low poly model and a high poly model and then using the detail of the high poly model to create a normal map for the low poly model. So we've got two models involved. This page contains tips for creating both the high and low res models that will help you to achieve better final results. First, we'll discuss the order of creation for the models.

Which Model Do I Create First?

So if you have to create a low poly model and a high poly model, which one do you create first? The honest answer is, whichever one you want! Here are a couple of options:

   * Create the High Poly Model First
      If you are used to creating high poly models for non-real-time rendering, you might want to create your high poly model first. Once you've created a beautiful model with lots of detail, make a copy of the model. This copy will become your low poly model - you just need to reduce the poly count. Most 3D programs come with a feature to simplify a model. In 3DS Max, the MultiRes modifier works pretty well. Just crank the poly count down to a number that fits in your budget. If the model is a character that needs to deform, be sure that you leave enough detail in the joint areas for realistic deformation.
    * Create the Low Poly Model First
      A lot of artists in the game industry are more comfortable creating low poly models. If that's you, you'll probably want to create the low poly model first. This gives you very fine control over the low poly mesh. (Sometimes reducing a high poly model, like in the previous method, gives you a very messy mesh.) Once you've got a low poly model you're happy with, make a copy of it. This copy will become your high poly version. Just subdivide it several times. The Mesh Smooth modifier in 3DS Max works well for this. Now go in and add all the detail that you've always wanted to add but couldn't because of your polygon budget.

There are other options, but this is probably enough to get you going. Once you get started you'll probably settle on a method that you're most comfortable with.

Tips For Creating the Low Poly Model

The low poly model is the version that will actually get used in the game. It needs to have a poly count that fits in your engine's budget. Creating a low poly model that uses a normal map is a little different from creating a regular model. Here are some tips for getting good results from your normal map:

    * Only One Smoothing Group, No Hard Edges
      Up until now, smoothing groups (or hard edges) have been a good way to accentuate features of a low poly model and make the details more clear and readable. It used to be important to use smoothing groups carefully to create a good model. Throw all of that out the window. Smoothing groups are an enemy to normal maps. Your low poly model should have one smoothing group (no hard edges). Here's why:
 
      Hard edges cause gaps in data collected in the normal map

      This image illustrates the process of generating a normal map. For every pixel in the normal map, a ray is cast from the surface of the low poly model outward along the normal where the ray started. The high poly model surface normal is recorded at each ray intersection. (A more detailed explanation is available in my first normal mapping tutorial here.) If your model has a hard edge, all of the rays on the polygons that share that edge will be uniform (go in the same direction). This will leave a gap in the rays between the two polygons. No data from the high res model will be recorded in this gap and an ugly seam on your final model will result.
 
      Use soft edges to avoid gaps in the data

      This image illustrates the same case but with a soft edge. Here we see that with a soft edge, the low poly normals are interpolated (smoothed) from one polygon to the other. There is no gap in the rays that are cast from the low poly model to the high poly model. No high poly model data is left out.

      There is a similar principle with regards to the high poly model. Be sure to check it out below in the high poly model tips section.

   * Avoid Extremely Sharp Angles
      Because you're using one smoothing group, as mentioned above, if your model has sharp angles (greater than 90 degrees or so) you will get bad artifacts in your normal map. These are caused by the large difference between the angle of the faces and the vert normals. Round off your sharp angles with extra faces for better results, or if you need sharp angles, go ahead and use smoothing groups/hard edges to create them.

    * Hide UV Seams Well
      The low poly model needs to have UV coordinates applied. You should already be familiar with this process. There are a few things to keep in mind when laying out the UVs. The first is to hide your UV seams as best you can. Put them on the insides of arms, in the back of the head, where the neck meets the shirt, around the waist, etc. Try to find spots on the model where a seam makes sense. UV seams are much more pronounced in a normal map than they are on a regular diffuse texture. There is one method that you can use to erase them. I discuss that on page 4.

    * Flipped UVs Issue
      In order to save space on the texture map and to achieve a high texel density, most character modelers will unwrap half of the model and then mirror that half for the other side. You can also do this during the unwrap process by selecting a group of UVs and flipping them horizontally or vertically and then laying them down on top of the corresponding set of UVs from the other side of the model. This is a great technique, but it causes some problems when creating normal maps. Here's the issue:

      The lighting on a normal mapped model is dependent on the surface direction (normal) of the polygons, but also on the normal direction of the UVs. It's as if the UVs have a normal also. When you flip the UV coordinates horizontally or vertically, the effect on the lighting is the same as if you flipped the polygon from front facing to back facing. Suddenly, the lighting looks like it's coming from the opposite direction.

      If you work for a game company, you can ask your engineers to add some additional code to your model exporter, so that when your models are exported from Max or Maya, etc, the exporter will recognize cases where the normal of the polygon and the normal of the UV coords don't match. In those cases, the exporter can flip the UV normal back the right way for you without affecting the unwrap at all. If they need help figuring out how to do that, this is a good place to start (there's also a small sketch of that check after this list).

      If you don't work at a game company or have access to programmers that can just whip up stuff like that, your best bet is just to avoid using mirrored texture coordinates altogether. That's probably not what you wanted to hear since you'll have to use a lot more texture space to get the results you want, but that's the best you can do.

    * Overlapping UVs
      Often when applying UVs to a model there are several parts of the model that share the same part of the texture, so the UV coordinates of those pieces are all on top of each other. This will work, but requires some special handling. Generally, programs that create normal maps, like NormalMapper, get all confused if you have overlapping UVs - so the trick is to make a special copy of your low res model that's used only for generating the normal map. On that copy, delete all of the polygons that have overlapping UVs except for one set. That way, when you generate the normal map using the copy, there are no overlapping UVs, but you can then apply the normal map to the original model that has overlapping and everything will work fine.

    * Splitting Up Your Model
      Many models require more than one normal map. In these cases make a copy of the original low res model, break the copy into multiple pieces, one for each normal map. Put all of the polygons that will be using the first normal map in the first piece, all of the polygons that will be using the second normal map in the second piece, and so on. Then generate the normal maps using the separate pieces as if each piece was its own model. Once the normal maps are created, you can apply them to your original model.

      Sometimes it is also necessary to split up a complex model into several different pieces even when it's just using one normal map. If you try to generate the normal map for your whole model and it comes out very messy looking with high res details in the wrong places and lots of errors, you can often fix the problem by making a copy of your low res model and breaking it up into pieces along the UV seams. Generate a normal map for each piece separately and then put all of the parts of the normal map together in a paint program to create a single normal map for your original model.
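Following up on the mirrored-UV tip above: the check an exporter would make boils down to looking at the winding of each triangle in UV space. Here is a minimal Python sketch of that idea (my own illustration, not code from any exporter; the function name and sample coordinates are made up):

import numpy as np

def uv_triangle_is_mirrored(uv0, uv1, uv2):
    """Return True if this triangle's UVs are wound the opposite way from
    its geometry, i.e. the texture coordinates were mirrored/flipped."""
    e1 = np.asarray(uv1, dtype=float) - np.asarray(uv0, dtype=float)
    e2 = np.asarray(uv2, dtype=float) - np.asarray(uv0, dtype=float)
    # Signed area of the 2D triangle in UV space; a negative value means the
    # winding (and so the tangent-space handedness) is flipped.
    signed_area = e1[0] * e2[1] - e1[1] * e2[0]
    return signed_area < 0.0

# The second triangle below uses horizontally mirrored coordinates.
print(uv_triangle_is_mirrored((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))  # False
print(uv_triangle_is_mirrored((1.0, 0.0), (0.0, 0.0), (1.0, 1.0)))  # True

An exporter that finds a mirrored triangle can flip the UV normal back the right way for it, as described above.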

Tips For Creating the High Poly Model

The high poly model will only be used to create your normal map. Since it won't be rendered in real-time, you can use as many polygons as you want - even millions! You can model every rivet and every nail head, every skin wrinkle and every pore. You can sub-divide the model until the wireframe is so dense it looks like one solid color. (This kind of modeling is very gratifying for those of us who've been stuck in low poly land for many years.) The only practical limitation is the amount of time it takes to generate the normal map. Here are some tips to keep in mind when creating your high poly model:

    * You Don't Need UV Coordinates
      That's right! The high res model does not need UV coordinates. It's just a high detail mesh. Only the low res model needs UV coordinates. Optionally, you can apply UV coordinates to the high res mesh and then apply a bump map to it. Then when the normal map gets created, all of the detail from the high res mesh AND the bump map will go into the normal map (I'll get into more detail about how this is done on page 3 of the tutorial), but this is not required.

    * Be Careful With Straight Extrusions
      Details that extend straight out from the surface of the mesh don't translate to the normal map very well. If you select some faces on the high res mesh and extrude them straight out from the surface, the detail won't show up in the normal map or it will not look very good because the normal map contains surface direction, not surface height. Consider extruding the faces and then scaling them down a little at the top so that the sides are slightly sloped. Rounded details always translate into the normal map better than sharp grooves and ridges.

    * Remember Your Normal Map Resolution
      Keep in mind that all of the detail in the high res model is being created for the purpose of creating a normal map. The normal map will be created at a limited resolution (256x256, or 512x512, etc). If you put details on the surface of your high res mesh that are smaller than the pixel size of your normal map, you're wasting your time because they won't show up clearly. Remember that as you add small details to your high res mesh.

I hope that these modeling tips help you get better results when creating your normal maps. This is certainly not a complete list of tips for creating the high and low res models.


Tips for Editing Normal Maps

So you've created your normal map, you apply it to your model and it looks great! -- Except for those annoying seams and those few spots that didn't turn out quite right. Well that's easy, you can just open the normal map in Photoshop and fix those with the clone tool, right? WRONG!

A normal map is very different from a regular texture, because the colors represent the X, Y, and Z values of a directional vector instead of just a color. Not only that, but the three values together must yield a directional vector whose unit length is one (a normalized vector). This means that only a very specific set of colors can be used. So you can't just go painting colors and expect them to give you good lighting when used as a normal map.

"Can't I just paint with the colors that are already there?" - Yes, but if your brush has a soft edge, the blending will give you colors that no longer result in a vector length of one. Just about any editing that you do in your paint program that doesn't preserve the exact colors that were there when you started will give you incorrect results.

"So I can't edit my normal map at all in a paint program?" - You can. But there are a few things that you need to watch out for and a few tips that will help you get better results. Read on!

Re-normalization

The main thing that you need to do to avoid the problems that I mentioned above is to "re-normalize" your normal map. You can paint on it, and use the clone tool and edit all you like as long as you make sure all of the colors result in a vector with unit length one (normalized) when you're done.

Luckily, Nvidia has created a very cool Photoshop plug-in that will do this for you. When you're done with your editing, you just run the plug-in and it looks at every pixel in the normal map and adjusts the colors so they give you normalized vectors! You can grab the plug-in here: http://developer.nvidia.com/object/photoshop_dds_plugins.html. Download the "Adobe Photoshop Normal Map Plugin" and install it. Once you're done with the installation, follow the steps below to "re-normalize" your normal map.

   1. If you added layers while you were editing, flatten your image so it's all on the base layer. Get rid of any alpha channels if you have them.
   2. Choose "Filter -> nvTools -> NormalMapFilter . . ." This will bring up the Normal Map filter options window. There are options here for converting a bump map into a normal map, but since we already have a normal map, we're just going to use the normalization feature.
   3. Under "Alternate Conversions" choose "Normalize only." Click "OK."

That's it! Your normal map is now normalized again, just like it was before you started editing. Now you can save it and apply it to your model.
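If you are curious what "Normalize only" actually does to the pixels, here is a rough Python sketch of the same operation on an 8-bit tangent-space normal map. This is only an illustration, not the plug-in's code, and the file names are placeholders.

import numpy as np
from PIL import Image

img = np.asarray(Image.open("normalmap.png").convert("RGB"), dtype=np.float32)

# Map the 0..255 colors back to -1..1 vector components.
vectors = img / 127.5 - 1.0

# Divide every pixel's vector by its own length so it is unit length again.
lengths = np.linalg.norm(vectors, axis=-1, keepdims=True)
vectors = vectors / np.maximum(lengths, 1e-8)

# Map back to 0..255 colors and save the re-normalized map.
out = np.clip((vectors + 1.0) * 127.5, 0.0, 255.0).astype(np.uint8)
Image.fromarray(out).save("normalmap_renormalized.png")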

Seams

One of the main problems that occurs when creating normal maps is that there are seams at the edges of UV regions. This problem can be solved in several ways. The best way is to have the program that creates the normal map expand the colors of the map beyond the edges of the UV regions. In some normal map creation software this is called "edge padding" and in others it's called "expand border texels." There may be other names for this feature, but that's the first thing you should try if you're having trouble with seams.

If that doesn't work, another method is to paint a small buffer strip of light blue (127, 127, 255) along both of the edges of the seam. This light blue color represents the vector that points straight out of the surface. If both edges of the seam are using this color, they'll match each other better. You might lose a little of your high res detail, but at least you won't have an ugly seam. I recommend only using this method if you really need it.
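For reference, "edge padding" / "expand border texels" amounts to spreading the colors of the covered texels outward into the empty space around each UV island. Here is a very rough Python sketch of that idea, assuming you already have the baked map and a coverage mask as arrays (both hypothetical inputs); real bakers do this internally and more carefully, so treat this purely as an illustration.

import numpy as np

def pad_edges(normal_map, mask, iterations=8):
    """normal_map: (H, W, 3) float array; mask: (H, W) bool array that is
    True where a UV face actually covers the texel."""
    padded = normal_map.copy()
    covered = mask.copy()
    for _ in range(iterations):
        # Push the covered region out by one pixel in each direction.
        # (np.roll wraps around at the image borders; ignored here for brevity.)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            moved_mask = np.roll(covered, shift, axis=axis)
            moved_img = np.roll(padded, shift, axis=axis)
            fill = moved_mask & ~covered
            padded[fill] = moved_img[fill]
            covered |= fill
    return padded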

Mip-Maps

Real-time textures use mip-maps to reduce texture flickering and sizzling when the texture is smaller on the screen than it is in resolution. Mip-maps are smaller copies of the texture. The smaller the texture appears on screen, the smaller the copy of the texture that gets used. Usually the mip-maps are automatically generated when the texture is used and the artist doesn't have to worry about them at all. With normal maps, it's a different story.

Mip-maps are generated by copying the texture and scaling it down. For regular textures, this works fine, but for normal maps, the scaling also de-normalizes all of the normals. That means that as your model moves away from the camera, all of the normals will get de-normalized and the lighting on your model won't look that good.

The solution to this problem is to make sure that when mip-maps are created for normal maps, the mip-maps get renormalized. This is probably a step that a graphics engine programmer could do pretty easily. If you don't have a graphics engine programmer handy, you can do it yourself by using DDS as your image file format. 3DS Max will read DDS image format.
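To make the idea concrete, here is a small Python sketch (not any engine's actual code) that builds a mip chain and re-normalizes every level after downsampling; it assumes the input is an (H, W, 3) float array of unit vectors with power-of-two dimensions.

import numpy as np

def renormalize(v):
    return v / np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), 1e-8)

def build_mip_chain(normals):
    mips = [renormalize(normals)]
    level = mips[0]
    while level.shape[0] > 1 and level.shape[1] > 1:
        h, w, _ = level.shape
        # Average each 2x2 block; this shrinks the map but also shortens
        # (de-normalizes) the averaged vectors...
        level = level.reshape(h // 2, 2, w // 2, 2, 3).mean(axis=(1, 3))
        # ...so renormalize the level to keep the lighting correct at a distance.
        level = renormalize(level)
        mips.append(level)
    return mips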

DDS stands for Direct Draw Surface. It's an image file format that is used natively by DirectX. It contains extra information that DirectX can use - like mip-maps. Nvidia makes a plug-in for Photoshop that will read and write DDS format. The plug-in allows you to do a ton of stuff with the image when you save it in DDS format. Most importantly it allows you to normalize all of the mip-maps. If you installed the plug-in that I talked about above in the "Re-normalization" section you already have it. If not, you can get it here. Once you're done with the installation, follow the steps below to "re-normalize" your mip-maps.

   1. In Photoshop open your normal map and choose "Save As . . ." Pick DDS format and click "Save."
 
      Save your image in DDS file format

   2. When you click the Save button, this options dialog will appear that allows you to specify all of the parameters for saving your image.
 
      The DDS format options dialog box
      I was really impressed with how much control all of these options gave me over exactly what I wanted to do with my image. You can do a lot more with DDS format than we're going to cover in this tutorial.

   3. Under MIP Map Generation, choose "Generate MIP maps." The plug-in will automatically create mip-maps for your normal map when you're ready to save. The mip-maps will be stored as a part of the DDS format image.

   4. Click on the "Normal Map Settings ..." button to access the normal map conversion options. The following window will open:
 
      The normal map conversion dialog box

   5. Make sure that the "Convert to Tangent Space Normal Map" box is checked. Under "Alternate Conversions" choose "Normalize only." Click "OK." These settings will ensure that when your mip-maps are generated, they will be re-normalized. Now click the "Save" button.

Now you've got a DDS image saved that has correctly normalized mip-maps. The lighting detail will remain correct regardless of the model's distance from the camera.

Texture Compression

To allow for as many textures as possible, most video games use some type of texture compression. Most use S3TC, DXTC, or some type of palettization. These types of texture compression work well for diffuse textures, but they don't work so well for normal maps.

All of these compression techniques change the colors of the image so that it can be smaller. This causes the normals to become denormalized. The compression also introduces other artifacts. These artifacts aren't as obvious on a diffuse texture, but on a normal map they really stick out.
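As a tiny numeric illustration of that de-normalization (my own example, separate from the chart below): quantizing the channels of a unit-length normal to the 5-6-5 precision used by 16-bit color measurably changes its length, while full 8-bit storage barely affects it.

import numpy as np

n = np.array([0.42, 0.31, 0.85])
n = n / np.linalg.norm(n)  # a unit-length normal

def quantize(value, bits):
    levels = (1 << bits) - 1
    stored = np.round((value * 0.5 + 0.5) * levels) / levels  # store as a 0..1 color
    return stored * 2.0 - 1.0                                  # back to -1..1

for name, bits in (("8-8-8", (8, 8, 8)), ("5-6-5", (5, 6, 5))):
    q = np.array([quantize(c, b) for c, b in zip(n, bits)])
    print(name, "length =", round(float(np.linalg.norm(q)), 4))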

The following table illustrates the effects of texture compression on a normal map. The left column shows the normal map itself with several different types of texture compression. The right column shows the lit model with the normal map applied.

 

While the file size savings is significant, the quality loss is also significant with all types of compression. It's pretty obvious from the chart that 8 bit compression is not usable at all. You might be able to use 16 bit, but it's only 2:1 compression. There is very little difference between DXT5 and DXT1 except that DXT5 also has an alpha channel (not shown here) which can contain extra information - like a height map.

Obviously, the best thing to do is to not use any texture compression at all on normal maps. If you absolutely have to use compression, DXT1 seems to be the best option. It has the smallest file size and is better looking than 8 bit.

Creating textures that will be made into Normal Maps

These are the rules for making a texture that will be made into a Normalmap

1) Check the UV

Is the UV good? Is there enough space between all elements so that the detail will show correctly? Have all your seams been hidden well? Does the UV map overlap? Are there any seams that you could remove?

2) Are you using 2 layers?

Make sure that all submitted art has 2 final layers. One layer will not have any shadow information; the second layer will have only shadow information. This is ONLY important for the final submitted image and not the source PSD (or whatever format it is in). If your image program does not support layers then send one image with shadows and one without.

3) Is the texture seamless?

If you can see a seam anywhere then it will only get worse when you apply a normal map. Fix that before you submit it please.

4) Have you created a normal map and tested for seams?

You should always make a quick normal map and apply it just to check for seams, as normal maps make seams in the texture much more noticeable.



This is NOT everything to watch for however these rules MUST be followed.
« Last Edit: August 12, 2009, 11:24:59 pm by botanic »

botanic

Re: Art With Botanic
« Reply #2 on: August 12, 2009, 11:12:22 pm »
This tutorial describes how to create a usable basis for texturing by splitting a scan or photo into different layers.
When you create textures for a game engine, you need to work with different layers and divide them very accurately.
This allows you to easily create normalmaps from your textures. The goal is to have one "flat" layer with only the colors
(no shadow or specularity information, in the following abbreviated as "spec") and a second layer with only the shadows, but no spec.

If you have enough time to paint everything by hand, then you can easily keep the layers split during production,
so you have one layer with only colors and at least a second one with the shadow information. In reality artists work
with many more layers, e.g. for different parts of the texture, and often a specularity layer (for testing only, because in the game engine the spec comes from the dynamic lighting).
But often (mainly for time reasons, but also if it suits your working style) artists mix hand-painted parts with filtered content from photos or scans.

This tutorial describes how to split such material into different layers, so you can easily use it for creating normalmaps and
use the resulting material as a basis for a game engine.

1. The original photo. Rename the layer to "orig". Note the white area at the top. The problem is how to separate the light parts that belong to the base texture from the parts that are light because they have some kind of specularity. So how can we split the specularity and shadows from the photo? Software cannot tell the difference between a pixel that is white in the base texture (e.g. white mortar) and one that is white because of specularity. The following method solves this problem, which at first seems unsolvable :-)



2. Copy the "orig" layer. Apply a 30px gaussian blur and invert the colors. Then make it grayscale. Change the mode of this layer to "overlay" and duplicate the layer. The result is a flat texture without light or shadows stretching over large parts of the texture. Notice that very local dark and very local light parts remain untouched by the gaussian blur. We have essentially created a highpass filter, which doesn't exist as a built-in function in GIMP, for example. Feel free to use the existing function if you have Photoshop.
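If you prefer to see step 2 as plain image arithmetic, here is an approximate Python sketch of the same highpass result. The blur-invert-overlay layer trick is not an exact linear subtraction, so treat this only as a rough equivalent; the file names are placeholders.

import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

orig = np.asarray(Image.open("brick_photo.png").convert("RGB"), dtype=np.float32) / 255.0

# Grayscale copy blurred with a large radius (the tutorial uses about 30 px).
gray = orig.mean(axis=-1)
blurred = gaussian_filter(gray, sigma=30)

# Subtracting the large-scale brightness around middle gray flattens the
# lighting while leaving the very local dark/light details intact.
color_specless = np.clip(orig - blurred[..., None] + 0.5, 0.0, 1.0)

Image.fromarray((color_specless * 255).astype(np.uint8)).save("color_specless.png")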




3. Copy the visible result to a layer called "color_specless". To be exact, this layer lacks specularity and shadows.




4. Copy the orig layer and make it monochrome. Place it above the orig layer.




5. Then change the layer mode to "grain extract". The result is only the colors, without any structure, shadow, or specularity.




6. Copy the result to a layer called "colors_flat". Also copy the layer color_specless, make it monochrome and name it "structures".



7. Move the monochrome layer (subtract mode) above the structures layer (normal mode). The result is the difference between the two layers (the upper layer is subtracted from the lower one). The visible result can be copied to a layer called "shadow".



8. Now change the layer positions, so that the structures layer (subtract mode) is above the monochrome layer (normal mode). The visible result can be copied to a layer called "specs".



9. For the final setup we change all the layer modes in the following way: first, change the mode of the colors_flat layer to "normal".




10. The structures mode will be "grain merge". The result is a colored and structured texture. We could call it "diffuse".



11. The mode of the shadows layer will be set to "subtract". Note the smooth shadows visible in the texture.



12. The mode of the spec layer will be "addition". Be sure to make all other layers invisible. The image shows that we have rebuilt the exact look of the original photo. Note: if it doesn't,
depending on the texture you may have to swap the spec and shadow layers in steps 11 and 12.
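For reference, the layer arithmetic of steps 4-12 can also be written out numerically. The Python sketch below uses GIMP's layer-mode formulas (grain extract = base - layer + 0.5, grain merge = base + layer - 0.5, plus subtract and addition), rescaled to a 0..1 range; it only illustrates the math described above and is not a replacement for working in layers. The inputs are assumed to be the orig and color_specless images as float arrays.

import numpy as np

def gray(img):
    return img.mean(axis=-1, keepdims=True)

def decompose(orig, color_specless):
    colors_flat = np.clip(orig - gray(orig) + 0.5, 0.0, 1.0)   # step 5: grain extract
    structures = gray(color_specless)                          # step 6: monochrome structures
    shadow = np.clip(structures - gray(orig), 0.0, 1.0)        # step 7: subtract mode
    specs = np.clip(gray(orig) - structures, 0.0, 1.0)         # step 8: subtract mode
    return colors_flat, structures, shadow, specs

def recompose(colors_flat, structures, shadow, specs):
    diffuse = np.clip(colors_flat + structures - 0.5, 0.0, 1.0)  # step 10: grain merge
    return np.clip(diffuse - shadow + specs, 0.0, 1.0)           # steps 11-12: subtract, addition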



13. By switching the specs off, we have a good basis for creating the normal map: a texture without specularities, but with enough structure information to have a crisp basis to use with a normal map.




14. You can also switch off specs AND shadows. The structure channel gives enough information about the form of the object for the GIMP normalmap plugin from nifelheim (a tutorial will follow).
I copied the visible result to a new layer called diffuse. I use this one together with the spec map in the normalmap plugin to create and test the normalmap.
So finally we have a diffuse map, a normal map, and a specularity map:




The final result with a strong specularity level for demonstration purposes:
Note: I applied a "Difference of Gaussians" filter as an edge-detect filter and also a 10px gaussian blur on the structure layer, and used this
as a basis for the normal map creation with the nifelheim plugin. But what you do with the structures layer depends heavily on the kind of material you use.
The method above only provides a way to get a good basis by removing the shadows and specs from the original material.

The rendering without a normalmap looks somewhat flat, while the rendering with a normalmap shows interesting shading variation depending on the local vectors of the normalmap:



The different angle of the viewer to the objects shows the variation that is possible with normalmaps: notice how the area top left of the spotlight circle has changed by moving the camera.
So a player who walks along this wall will notice a variation in lighting as he changes position.



botanic

Re: Art With Botanic
« Reply #3 on: August 18, 2009, 01:25:30 pm »
Elevation Maps

Typically, bump maps are generated by creating 8-bit gray scale images, often referred to as elevation maps. In these gray scale images, white represents the most elevated features, and black the most recessed ones. An 8-bit gray scale can represent 256 levels. Here a Beethoven bust is represented with an elevation map.


When this map is applied to a quad as a bump map, it can simulate a bas-relief.


Elevation maps are limited in the kind of surfaces they can represent. Often, surfaces that are highly edge-on do not render very accurately, which subdues the level of apparent elevation. The limitations of 256 levels of height can also introduce a terraced or grainy appearance to surfaces, and smooth surfaces and sudden angle changes do not reproduce well.

Normal Maps

To produce lighting effects, elevation maps are converted into normal maps. Rather than representing elevations, normal maps represent the normal of the surface at each pixel in the map. Just as normals are used to calculate lighting and reflections on actual geometry, normal maps can be used to determine at what angle light strikes a surface or what angle should be used in calculating reflections. To represent these normals, a normal map uses a 24-bit image in which different colors represent different directions.
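As an aside, the per-pixel conversion mentioned here (the kind of converter whose artifacts are discussed below) is usually a simple gradient operation on the elevation values. Here is a hedged Python sketch of that conventional approach; the file names and the strength value are placeholders, and sign conventions differ between tools and engines.

import numpy as np
from PIL import Image

height = np.asarray(Image.open("elevation.png").convert("L"), dtype=np.float32) / 255.0

strength = 2.0                               # how pronounced the bumps appear
dx = np.gradient(height, axis=1) * strength  # slope from left to right
dy = np.gradient(height, axis=0) * strength  # slope from top to bottom

# Per-pixel normal (-dx, -dy, 1), normalized and packed into 0..255 RGB.
normals = np.dstack((-dx, -dy, np.ones_like(height)))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
rgb = ((normals * 0.5 + 0.5) * 255.0).astype(np.uint8)
Image.fromarray(rgb).save("normal_from_elevation.png")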

Here is a normal map for the Beethoven bust.



Applying this normal map to a quad allows for a much higher quality bump effect than is achievable with an elevation map.
Here is a quad with the Beethoven normal map applied to it.

Notice that the overall appearance of the object is smoother and more dimensional than that created with an elevation map. This is due partially to the 8-bit resolution of an elevation map versus the 24-bit precision of a normal map; however, it is also due to inevitable artifacts introduced by the per-pixel conversion approach used by existing elevation to normal map converters and the ability of normal maps to handle sudden changes in surface orientation.

In this normal map for a sphere, red ranges from 255 at the far right to 0 at the left. From top to bottom, green ranges from 255 to 0. From the center of the hemisphere to the outside edge, blue ranges from 255 to 128.
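That color layout is easy to reproduce programmatically. The short Python sketch below generates such a sphere normal map straight from the hemisphere's x, y, z values (my own illustration; the background is filled with the flat "straight out" light blue mentioned in the seams tip earlier in the thread).

import numpy as np
from PIL import Image

size = 256
ys, xs = np.mgrid[0:size, 0:size]
x = (xs + 0.5) / size * 2.0 - 1.0      # -1 at the left edge, +1 at the right
y = 1.0 - (ys + 0.5) / size * 2.0      # +1 at the top edge, -1 at the bottom
inside = x * x + y * y <= 1.0
z = np.sqrt(np.clip(1.0 - x * x - y * y, 0.0, 1.0))  # hemisphere height

rgb = np.zeros((size, size, 3), dtype=np.uint8)
rgb[..., 0] = ((x * 0.5 + 0.5) * 255.0).astype(np.uint8)  # red: 0..255 left to right
rgb[..., 1] = ((y * 0.5 + 0.5) * 255.0).astype(np.uint8)  # green: 255 at top, 0 at bottom
rgb[..., 2] = ((z * 0.5 + 0.5) * 255.0).astype(np.uint8)  # blue: 128 at the rim, 255 at the center
rgb[~inside] = (128, 128, 255)         # flat "pointing straight out" background
Image.fromarray(rgb).save("sphere_normal_map.png")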


RGB

This can be seen clearly if the separate red green and blue channels of the map are examined individually:


RED


GREEN


BLUE


While the individual channels are very similar to a conventionally lit sphere, there are important differences.

Fortunately, a simple technique generates a very accurate normal map from a 3-D model. If you want to start with an elevation map rather than a 3-D model, you can apply the elevation map to a mesh as a deformation map or even map it onto a quad as a bump map and use this technique, with some limitations.

Generating a Normal Map

Through the use of appropriate ambient levels, material settings, and the use of negative intensity lights, normal maps can be created for any 3-D object with rendering software such as 3D MAX, Maya, Lightwave, and Avid SoftImage. This technique was used to create the sphere and Beethoven normal maps seen above.

To render an image that captures 3-D information for a normal map, set the ambient level to 50 percent using material or lighting settings, depending on the package used. Place a pure red light to the right of the object and a pure green light above it. Place a negative intensity pure red light to the left of the object and a negative intensity pure green light below it. Place a pure blue light directly ahead of the object. No negative blue light is required because we will be creating the image along this axis and we can't see the back of the object.

The following diagram shows the lighting setup:


NOTE: THE LIGHTING SETUP IS NOT A TECHNIQUE TO MAKE NORMALMAPS; IT IS JUST TO EXPLAIN HOW THE COMPUTER GENERATES THEM, OR TO BE USED AS AN EXERCISE FOR BETTER UNDERSTANDING OF THE CONCEPT

The object material should be set to white and all specular attributes should be set to zero so that only diffuse light is considered in the rendering. Due to some differences in light modeling, it is necessary to use different light, ambient, and surface characteristics to get the correct results in each package. I used Blinn lighting in MAX, and Lambert lighting in Maya, though other lighting models available in these packages may also work.

The object is rendered from the front isometric window or with a high zoom setting on a distant camera placed in front of the object when using software that does not allow isometric viewport rendering.

The background should be set to 50 percent gray. This is the value used to represent a zero normal in normal maps. You may also want to render an alpha map at the same time you create the normal map.

In addition to producing very high quality normal maps, this technique allows a number of effects that are not readily achievable using other methods for normal map creation. For example, smoothing groups can be used to create smoothed or sharp edges, something not possible with elevation maps. Additionally, you can apply procedural and drawn bump maps to surfaces, and the results will be identical to their appearance in your rendering package.


« Last Edit: September 28, 2009, 10:52:46 am by botanic »

botanic

Re: Art With Botanic
« Reply #4 on: August 18, 2009, 05:59:23 pm »
Freakin Long Ars Tut
Introduction: This tutorial will cover my personal workflow for 3d modeling, 3d sculpting,
retopo / optimizing, UV’s, normal map generation, light map generation, and texturing.
The Creature Summary: Paragalis is literally a walking parasite that's approximately the
size of a large dog from tentacle tip to tail. The creature is an ambush predator who
catches its prey with a tongue that can extend to twice the size of its body. Paragalis's
tentacle-like mouth contains rows of teeth for gripping its prey, with a barb at the end of its
tongue that injects toxic venom! Once the prey has been caught it's then quickly digested,
with its remains excreted from the creature's back hump in a gassy form! "I'd suggest that if
you were to run into this thing, run the other way, quickly!"
The Specs: Lowpoly model / 7,000 triangles, Textures / 2048x2048 / diffuse, spec, normal
Tools: 3ds max 8 / 9, Photoshop, Mudbox, Zbrush 3.1, Crazybump, Polyboost
Step One: From concept to model
Once I have the design of the creature illustrated I then proceeded to the modeling phase. I
find it helpful when making a symmetrical creature to at least have some sort of concept or
rough outline to work from. This makes things easier on the modeling end and cuts out the
guesswork when trying to nail the profile. I used a simple box modeling technique to
quickly rough out the base mesh, just remember that I wasn’t going for any high-level detail
and only focused on the broad strokes. This is important to remember since the model will
change to some degree in the sculpting stage. When modeling the base mesh I try to keep in
mind how much detail will be required in certain parts of the model, for instance I know
that the tongue, tail and tentacles will need a good amount of detail. I also use this level of
thinking when I work on characters as well, hence extra edge loops in the head, hands, etc.
Anyways as you can see from the images below I simply extruded a bunch of edges and
faces in 3dsmax “editable poly mode”. I switched over to the perspective viewport once I
was satisfied with the profile and began roughing out the form and proportions making
sure the design held up in 3d. Since I didn’t draw a front sketch of the creature I had to use
a bit of imagination on how wide it would look. I usually judge width by heads but I had to
use its tail and a pig’s body instead due to the creature’s unusual design. As an artist it’s
always good to reference actual creatures, people, places and things when making
judgment calls on forms and proportions; having this extra tool works wonders. Anyway the
total modeling time was around an hour and a half and it’s completely made up of quads.


Step Two: Separate Elements
When it comes to creating a base mesh it’s always good to think two steps ahead so you
don’t run into any issues later on down the road! This is why I created the tongue
separately; doing so will free me up in the sculpting stage to not only hide that part of the
mesh easily but also subdivide the hell out of it for more detail. I can get away with this
method since the tongue goes pretty far back in its mouth and doesn’t contain any obvious
skin interaction.

Step Three: Subdivide smooth Test
Once I’ve completed the base mesh I then run a few tests, mainly 3ds max “turbo smooth
modifier”. As a rule of thumb I always make sure that my quad based models can withstand
a turbo smooth. This is important since this’ll give me a heads up on how it’s going to
smooth in either Mudbox or Zbrush, not only that but I can also look out for any weird
pinching in obvious areas.

Step Four: Preparing for Export
After I finished checking the base mesh for any smoothing oddities it’s time to get it ready
for export. I’m pretty paranoid about errors like double faces, multiple edges, holes, etc.
Due to this I make sure to run 3dsmax “STL check modifier”, this will instantly highlight any
errors my base mesh contains. Once the modifier has been run it’s time to “Reset Xform”
which basically resets all transforms made to my base mesh. Last but not least is setting
up my pivots, I tend to center my pivots to the object and then adjust it to a comfortable
position. The reason for this is so that both the pivots in my modeling app and sculpting
app match up. I find this to be helpful when working with multiple elements, for example a
space marine with lots of separate gear.

Step Five: Model Export
This step is pretty easy as you can see the settings that I used from the screen shot below.
Step Six: Model Import
This is pretty self-explanatory much like the export step but I did want to highlight the fact
that I name my separate elements! I think it’s a good habit to get into when dealing with
multiple objects. Once both objects are in my sculpting app I then subd the mesh to a
workable density level.


Step Seven: Creating the Reference sheet and Paint Over
I find that it’s important to gather your reference materials even at this early stage and
create what’s called a “Material callout sheet”. Generally this sheet is created before the
texturing stage but comes in handy when you have to sculpt as well. I can’t stress this
enough but it’s always good to be prepared and have some sort of reference handy! I see
way too many artists pulling out detail from thin air without any real appreciation for the
surface they're trying to simulate. With that said, the sheet should have a healthy mix of
photos and illustrations. While the reference sheet isn't written in stone it'll serve as an
initial guide for you and or client, art director, etc. The last thing I wanted to address was
the teeth; I kept them separate since they would just get in the way of my sculpting.
Imagine trying to work up to a nice level of detail in your sculpt while avoiding rows of little
teeth along the way, it’s pretty annoying and it’s always best to leave small details like that
off until the sculpt is near completion.

Step Eight: Creating the Digital Sculpt
Part One: Establishing the Detail Flow
The first thing you need to do when creating a sculpture is to identify your detail flow. This
means you should visually breakdown where certain details are going to be placed on the
object and how intense they need to be. This is an important step to master since it’s easy
to lose your way when sculpting! Personally I believe this approach and level of
observation will result in a more focused effort and a better sculpt when all is said and done.
You can see from image below that I broke down what areas required what details and how
they will be distributed throughout the model.
Minimal level of detail: This area should contain a subtle amount of surface detail
corresponding to the surface and or material type. Usually this area is made up of broad
sections on the sculpture and doesn’t contain much visual interest. In regards to Paragalis I
made this area consist of stretched skin which at first glance may appear to be quite
intense but will be pushed to the background once the model has been baked and textured.
Moderate level of detail: This area will make up most of the detail on the sculpture and
will usually contain a broad sense of visual interest. As you can see from the image below,
Paragalis is mostly made up of moderate detail, which contains general muscle, skin, and
fat information.
High level of detail: These are the areas that should contain extreme levels of surface
information. Usually these areas are kept to a minimum to increase their visual impact and
presence. When it came to Paragalis, I focused the high level detail on the tentacle-like
mouth, tongue tip, back hump, feet and parts of the top skin.
In closing, all objects, whether they are people, places, or things, require different levels of
detail, which can vary quite a bit since you can easily go from the modest to the extreme.
The most important thing at the end of the day is to recognize what you’re making and how
it’s going to be perceived by the viewer. Now what that means in terms of game art is to
always have your creation readable at various distances without losing its purpose.

Part Two / Building on the Curve
So now that I have a good idea on how to approach the model, it’s time for me to start
sculpting out the forms. You can easily see that I move in steps from the images below.
Base Mesh: This is my imported mesh that I’ll look over and make slight changes to,
usually I’ll only adjust the proportions and positioning on certain elements here.
Rough Sculpt: In this step I’ll quickly start applying the rough muscle mass making sure to
only make broad strokes and staying away from any high level details. I encourage you
when in this step to shy away from going into too high of a sub division level! Make sure to
pick a moderately detailed subdivision level and work on it until you can only see the rough
forms taking shape. The reason you want to do this is so that you can avoid obvious
lumpiness, misshaped details, pinching, etc. When it came to Paragalis I went from the base
mesh “body only” which was 2064 polygons to the third subdivision level, which was
33,024 polygons. It was at the third sub division level where I spent a good deal of time
roughing out the sculpture.
Refined Sculpt: I subdivided the mesh two more times and proceeded to the next step
once I was satisfied with the rough sculpture’s forms and level of detail. This is the step
where I focused on pulling together the details such as muscle mass, skin, nails, fat, etc. It’s
important to keep your energy and attention focused on working with the detail you have
and making sure that it’s what you want before moving onto the next step.
Polished Sculpt: Now it’s time for me to add the finishing touches to the sculpture. This is
where I add the teeth and focus on the little things like the skin, back hump, tongue, or
anything else that will make the sculpt feel more alive.

Step Nine: Retopology / Optimizing the model
All right, now that I'm happy with my high poly mesh it's time to convert it into a low poly
model, so where do I start? Well I have a few options available to me in this task and I’ll
make sure to give a brief overview of each of them. Up until this point I’ve tried to stay
software neutral but it looks as though I have to get a bit specific in terms of the functions
and workflow. From here on I’ll be going over a few features in both Zbrush 3.1 and 3ds
max with Polyboost. So let’s start with option number one.
ZBrush 3.1 retopology workflow:
ZBrush as we know has a cool set of tools and one of them happens to be its ability to build
new topology over an imported mesh! Below, the quick start guide pretty much shows how
I get up and running when it comes to building new topology in ZBrush. Before I begin there are
a few things you need to know when you're just starting out:
- When creating your low poly mesh you can deselect by clicking on the canvas, and when
you want to select a vertex just click on it.
- To delete a vertex hit “alt” and click on the vertices you want to get rid of.
- To create a new edge just click on it, preferably in its center.
- Make sure to hit "A" regularly to preview what your retopologized model will look like.
- If you would like to have open holes in your retopologized mesh then set the "Max Strip
Length" to 3 or 4.
- If you would like to move the vertices you created around, then go into move mode. The
“Move” button is right next to the “Draw and Edit” buttons.
Once you’re done editing those vertices you can switch back to draw mode.
1. The first thing I have to do is to import my Mesh into ZBrush as a tool.
2. Once my mesh is loaded I select a zsphere and draw it on the canvas, after that I press the
edit button.

3. Then in the rigging tab I select the model that I want to retopologize. When that happens
the tool momentarily shares the same space as the ZSphere.

4. After that I click on the topology tab and Hit "Edit Topology". The mesh is the only thing
on the canvas at this point and it turns brick red.

5. Then go into the Transform palette and turn on symmetry

6. When that’s done I click on the model to start adding new topology lines, creating and
connecting vertices along the way.

At this point I start creating the vertices that connect the topology lines. You can see from
the image below how this works. I started with the head and made my way down the back
by simply roughing out the shape. I try not to worry about edge flow and tight detail in the
early going; I simply concentrate on all of the big forms. Once I have the new topology
conformed to the mesh, I then begin to tweak the vertices and edges for better edge flow.
You should always keep in mind that the new topology will be used for a low poly model,
so try not to concern yourself with a perfect shrink-wrap! All you need is enough new
geometry that you can modify in your 3d application. Another cool thing about Zbrush is
Adaptive Skin, you can pretty much cycle through higher and lower subdivision levels by
adjusting the slider. What this means is that you can comfortably build your new mesh at a
higher subdivision level and reduce it on the fly when you’re ready. When the mesh is
completed I simply hit the “A” button to look over my mesh, then I go to the tool palette and
export the new geometry.

Well that’s it from the Zbrush aspect of things; I could have continued building out the new
mesh by repeating most of the steps mentioned above. Instead I’m going to move into the
3ds max / Polyboost workflow and demonstrate a few of the tools features.
3ds Max / Polyboost retopology Workflow
Before going over this method I wanted to talk a bit about Polyboost. Polyboost is a max
script that features a number of handy tools for modeling, texturing, UV mapping,
transforming, selecting, etc. Unfortunately it’s not a free max script but I will tell you that I
find it extremely valuable and have incorporated it into my daily workflow. That being said
I highly recommend this to artists who want to get a little more mileage out of 3ds max! By
the way you can view some of the additional Polyboost features on www.polyboost.com
This is a brief overview on how I get started rebuilding topology in 3dsmax, which in turn
will feel a bit like the Zbrush 3.1 workflow. Only now I’ll be working with a much lower
triangle count source mesh. Once I was happy with the high poly mesh I proceeded to go
down a few sub division levels and export it at a level 3ds max can handle, the midlevel
mesh that was exported came in at 66,048 triangles. The cool thing is that the midlevel
source mesh carries all of the silhouette detail needed for me to start building around. That
being said you can see from the image below how the selected tools in Polyboost work. The
main tool that I’ll be using is “PolyDraw”. Under PolyDraw are a number of sub tools that
I'll be using as well, mainly the "PolyTopo" brush and the "Build" tool. I selected the tail
and hid the rest of the mesh so that you can see how the tools work. Here are the steps
involved:
1. I make sure to set my "Drawn on" to surface mode. The "Surface" mode allows me to
select an object with the "Pick" button so that I can draw on the surface of that object.

2. Next, I click on "PolyTopo". Polytopo is a topology brush that draws surface lines across
your selected mesh. The tool has a free-form feel to it, unlike ZBrush's, which tends to create
a set of rigid interconnecting vertices.

3. At this point I start drawing on my mesh. I like to make sure to only draw on the surface
area I can view comfortably; much like ZBrush, Polyboost has a hard time figuring out
certain angles.

4. When I want to view the mesh I just "Right Click", this deactivates the Polyboost tool set.

5. Then I continue retopologizing my new mesh around the tail by selecting the retopo
mesh and going into vertex mode.

6. Then under "Edit" I select "Build" and start adding additional vertices. The vertices will
automatically conform to mesh surface.

7. Now it's time for me to connect the vertices to make new faces. I do so by Holding "Shift
+ Drag", this will automatically create faces in between the vertices filling in the gaps.

And that's it, I simply rinse and repeat the process until the entire mesh is retopologized.
Modeling Application / Retopology workflow
So let’s say you don’t have access to Zbrush 3.1 or cool tools like Polyboost, what do you
do? Well in this case you’ll have to shift your workflow to sheer brute force optimizations.
This was a workflow that I often used before I got a hold of Zbrush and Polyboost, and to be
honest I still use it every now and then. You simply have to export a subdivision level that
conforms to the high poly silhouette and begin stripping away edges and faces. I know a
number of artists who actually prefer this method and have become very proficient at it.
This method takes quite a bit of patience and a mastery of your modeling tools!
There are a number of different tools, like Topogun, Blender, Nex, etc., that also do a good
job of building new topology over your source mesh. I guess you’ll have to find out what
works best for you and run with it!
Paragalis / Retopology / Optimizing the model:
Defining edge loops:
So now that my mesh is almost done I want to highlight areas within the model that require
most of the edge loops for posing and animation purposes. Now “I am not an animator” but
I’ve worked with enough of them to know what they like and what they don’t. Looking at
the image below you can see the areas that required the most edge loops for smoother
movement. The tongue, lower tentacles, shoulders, knees, and ankles get an additional
loop, or in the tongue's case a bunch of them! Since this is a personal project I'm pretty
much guessing on what areas require what based on past creature work experience. Had
this model actually been created for a production I would have consulted with the animator
to see what I could “get away” with in the modeling stage. With that said I’m pretty happy
with the mesh as it is and will now move on to matching up the mid level poly and low poly
meshes.

Paragalis / Retopology / Optimizing the model:
The Final Results:
At this point I go over the mesh one more time, tweaking it here and there making any and
all necessary adjustments. You can see from the images below how the low poly mesh
matches up with the mid level poly mesh. Once I’m done with my tweaks I then weld all of
the vertices and move onto the next step, which would be the UV’s.

Step Ten: Creating the UV's
Now that my model's completed, I can move on to creating the UV's. Of course before I
start cutting the mesh up, it’s important for me to identify what’s going to be mirrored and
what’s going to be unique. This is an important step since the final texture is going to be
1024x1024 in size and every pixel counts! You can see from the image below how I broke
down the parts of the mesh that have unique space and the parts that don’t. For the most
part this creature is mostly symmetrical and I treated the UV’s as such with the exception of
the top part of its head, hump back, inside tongue and poisonous barb. The reason I left
those areas unique is so that I can add some interesting details within the texture work to
break up the symmetry. When it comes to symmetrical creatures I try to have the areas you
can’t view at the same time mirrored, and the areas you can view at the same time unique!
Now the last thing I want to touch on was the way in which I handled the UV chunks and
mesh itself. When applying UV’s like this I usually delete the parts of the mesh that will be
mirrored, and then I go ahead and work on the UV’s until it’s finished. Once the UV’s are
done I select the parts of the mesh that are mirrored and clone that side of the mesh over to
the other side. I then weld my vertices back into place when this is completed. I find this
workflow to be much easier than doing UV's for the whole mesh; it’s much more efficient
and saves on the production time! With UV’s completed it’s time for me to arrange the UV
chunks into the 0 to 1 UV space. I like to think of this step as one big jigsaw puzzle. With
that said it’s always good to try and make use of every pixel and leave as little negative
space as possible! You can see the final UV’s from the image below.

This is a “crucial” step in the creation process and should be handled with the utmost care.
Most problems with texture clarity and normal map bakes can be traced directly back to
the UV phase, which is usually the culprit for many weird texture situations! Below is a short
list of things to maintain when working with UV’s.
1. Always go for at least 95% distortion free textures. Having lots of distortion will result in
blurry textures and bad normal map bakes.
2. Try and keep your UV chunks as vertical and horizontal as possible. This makes texturing
/ painting them easier, not to mention that it keeps your pixels aligned.
3. Make sure to maximize your UV space, keep the negative “black” areas to a minimum.
4. Keep the seams in places not easily seen by the viewer. Places like inner thighs, under
arms, back of the neck, underbelly, etc. are good places to have seams.
5. Always try and add creative mirroring when possible to maximize your pixel resolution.
Areas like hands, feet, neck, tail, wings, etc. are perfect candidates for this.
6. Always try and use large UV chunks and refrain from breaking your mesh up into lots of
small pieces. This just creates more seams, which means more headaches!
7. Always apply a generous pixel ratio to the places most seen by the viewer! For
example the bottom of the creature’s feet didn’t receive the same pixel density as its head
or any other place of interest (see the texel-density sketch below).
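As a purely illustrative aside (not part of the original tutorial), here is a small Python sketch of what “pixel ratio”, or texel density, actually measures: how many texture pixels a UV shell gives a triangle per unit of world-space size. The triangle coordinates and the 1024 map size are made-up numbers.

import math

def tri_area(a, b, c):
    """Area of a triangle given three 2D (UV) or 3D (world) points."""
    ab = [b[i] - a[i] for i in range(len(a))]
    ac = [c[i] - a[i] for i in range(len(a))]
    if len(a) == 2:                            # UV space
        return abs(ab[0] * ac[1] - ab[1] * ac[0]) / 2.0
    cx = ab[1] * ac[2] - ab[2] * ac[1]         # 3D cross product
    cy = ab[2] * ac[0] - ab[0] * ac[2]
    cz = ab[0] * ac[1] - ab[1] * ac[0]
    return math.sqrt(cx * cx + cy * cy + cz * cz) / 2.0

def texel_density(world_tri, uv_tri, map_size=1024):
    """Approximate texture pixels per world unit for one triangle."""
    pixel_area = tri_area(*uv_tri) * map_size * map_size
    return math.sqrt(pixel_area / tri_area(*world_tri))

# Two hypothetical triangles of identical world size, mapped differently:
head = texel_density([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                     [(0.10, 0.10), (0.30, 0.10), (0.10, 0.30)])
foot = texel_density([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                     [(0.10, 0.10), (0.15, 0.10), (0.10, 0.15)])
print(round(head, 1), round(foot, 1))   # ~204.8 vs ~51.2 texels per unit

Run on these two hypothetical shells, the “head” triangle ends up with four times the texel density of the “foot” triangle, which is exactly the kind of deliberate imbalance point 7 describes.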
Step Eleven: Baking out Normal and Light maps
With the UV’s in place it’s time to get to one of the most important steps of the creation
process, and that’s generating the normal and light maps! Now before I proceed with the
workflow I want to highlight the fact that I’m running a PC with Windows XP64, 8GB of RAM,
a GeForce 8800 GT 512MB, and a dual core 2.40GHz processor. The only reason that I’m
highlighting this is due to the mesh sizes that I’m going to import into 3ds max. Currently I
can comfortably import a model that’s roughly 5 million triangles in 3ds max 8 and around
6 million in 3ds max 9 without max crashing. With that said I would encourage you to do a
few benchmark tests to see what you can work with on your own system! Okay so it’s time
for the actual workflow.
Normal Map Generation
1. The first thing that I did was take my model’s UV’s and offset all of the mirrored UV chunks. I used
“Chugnuts UV Tools” for this action and I highly recommend it to anyone using 3dsmax.
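To make the offset concrete, here is a minimal sketch of the idea (my own illustration, not the tool’s code): the common practice is to slide every mirrored shell exactly one UV unit sideways so it sits outside the 0-1 square; the baker then ignores it, while at render time the wrapped coordinates still read the same texels.

def offset_mirrored_shells(mirrored_uvs, u_offset=1.0):
    """Shift mirrored UV shells one full tile to the right so they
    no longer overlap the originals inside the 0-1 bake region."""
    return [(u + u_offset, v) for (u, v) in mirrored_uvs]

# Hypothetical mirrored shell:
shell = [(0.12, 0.40), (0.25, 0.40), (0.25, 0.62)]
print(offset_mirrored_shells(shell))
# [(1.12, 0.4), (1.25, 0.4), (1.25, 0.62)]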

2. The next thing I did was hit “O” on the keyboard; this ensures that when I rotate the
models all I can see is a bounding box! This is key, since moving a dense mesh around
your viewport can really slow down your system.

3. After that I proceeded to import my high poly mesh; you can see the settings that I used in
the image below.

4. My next step was to make sure both the low poly and high poly meshes overlap properly.

5. The next thing I did was hit “0” (zero) on the keyboard to bring up the “Render to Texture”
dialog box.

6. Once the dialog box came up I went ahead and adjusted some of the settings. For
instance I made sure that “Projection Mapping” was enabled, that it was using channel 1 for
both the object and sub objects, and that “Global Super Sampler” was enabled. By the way, I
use “Max 2.5 Star”. The last thing I did was make sure 3ds max was saving the file to my desired
location and not the default (C:\Program files\Autodesk\3dsMax#\images)

7. With the initial settings in place, I moved on to adding the elements necessary for the
baking. I hit the “add” button underneath the “Output” rollout and selected “NormalsMap”
from the available elements list. Once I did that I picked “Diffuse Color” from the target map
slot. The next and last thing I did was select the texture map size that I wanted 3dsmax to
bake; in this case I picked 2048x2048.

8. Now that all of my settings were in place I went ahead and hit the “Pick” button. The “Add
Targets” dialog box appeared and I selected the high poly target mesh; this added a
Projection modifier to my stack. As you can see from the image below the cage for the
projection is unadjusted and all over the place.

9. My next step was to go into the “Cage” rollout and hit the reset button to get things
back in order. Now that my cage was reset, I went ahead and pushed the amount to around
0.75.
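For anyone wondering what that cage amount actually does geometrically, it simply inflates a copy of the low poly along its vertex normals so the cage fully encloses the high poly before the projection rays are cast. A rough sketch of that idea (my own illustration, not 3ds max’s actual code; the vertex and normal values are made up):

def push_cage(verts, normals, amount=0.75):
    """Offset each cage vertex along its unit vertex normal.
    A cage that still intersects the geometry is what shows up
    later as red (missed) pixels in the bake."""
    return [(x + nx * amount, y + ny * amount, z + nz * amount)
            for (x, y, z), (nx, ny, nz) in zip(verts, normals)]

# Hypothetical vertex on a surface facing straight up (+Z):
print(push_cage([(1.0, 2.0, 0.0)], [(0.0, 0.0, 1.0)]))
# [(1.0, 2.0, 0.75)]  -> moved 0.75 units outward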

10. Now that everything was in order, I hit the “Render” button on the Render to Texture dialog
box.

At this point I’m going to go over my normal bake and investigate what needs
adjusting. The reason is that no matter how perfect your mesh and cage are, you will most likely
have to make some small adjustments to the cage, and in some cases the model, after the
first bake! So you can see from the image below what needed fixing to get rid of the obvious
red spots. Now when I say “obvious red spots” I mean parts of the mesh where the cage is
intersecting with the low poly mesh geometry. Keep in mind that there are other instances
where you’ll have red spots no matter what you do, this is caused by overlapping or
intersecting geo, missing faces, missing UV chunks, etc. In Paragalis’s case most of the red
spots were due to the cage not being out far enough in certain areas. The only exceptions
were the armpit and inner thigh seam. That being the case, I went in and adjusted those
areas; you can see an example of one of the trouble spots in the image below.

“Special note”: you can hide the high poly mesh when making adjustments to the low poly
mesh. The changes you make will still apply once you’re finished and you’ve unhidden the
high poly mesh! Now that I’ve fixed my trouble spots I did another render. You can see
from the image that I was able to get a cleaner normal map from the second bake, yay! Now
that the bake has turned out successful I went ahead and repeated the process for the
tongue in a separate 3dsmax file. That being the case you should always try “if you can help
it” to create your models in separate elements. This way you can bake out the elements
separately from higher resolution source meshes!
In closing, you can see that by following these easy steps I was able to get a solid
normal map! The overall keys to achieving these results come from having a clean low
poly model, a clean high poly model, and solid “non-overlapped” UV’s; all in all there’s no
magic bullet to this and it takes a bit of practice and patience. With that said you can see the
normal map results below when compared to the high poly version.

Light Map Generation
All right, now that I have the normals baked out I can move on to baking out the light map!
This step actually benefits from all of the hard work set up during the normal map bake.
With my normal map file still open, I went ahead and renamed the file “Paragalis
light map bake”. Once that was done I simply followed these steps:
1. I opened the material editor and created a solid white material. The next thing I did was
apply the material to both the high poly and low poly mesh.

2. After that I created a white floor “box” underneath the grid and moved it a few units below
the model. Having the floor in the scene allows for shadows to be baked into the underbelly
of the mesh. The closer the floor is to the mesh the darker the shadows and vice versa for
lighter shadows. “Special Note”: ordinarily it’s good to have some nice shadows underneath
the mesh to create depth but it’s not appropriate to have the shadows appear completely
black either. The same could be said for any and all intersecting geometry.

3. Then I went ahead and added a skylight to my scene and raised it above my mesh.

4. Then I hit the “F10” key to bring up the “Render Scene” dialog box. I then selected “Light
tracer” under the “Advanced Lighting” tab. As a side note there are a number of cool things
you can do under “General Settings” and “Adaptive Undersampling” for some nice bake
variants! With that said I highly recommend investigating what each setting does.

5. The next thing I did was hit “0” on the keyboard to bring up the “Render to Texture”
dialog box. From here I pretty much followed the same steps as in normal map workflow
with the exception of picking “Lighting Map” from the “Add texture Elements” dialog box
instead of Normals Map.

6. With my cage still intact from the normal map session, I went ahead and did the actual
light map render. You can see the results of the bake from the image below.

“Special Note”: You can also create a light map based off of the low poly geometry instead
of the high poly source, simply by leaving the “Projection Mapping” / Enabled box
unchecked. The reason you may want to do this is to create another light map connecting
the shadows between multiple elements. Case in point would be the creature’s tongue and body.
If you look closely at the image above you can see that the tongue feels disconnected
where the rear of the tongue meets the body. In this case I generated a second light
map, which I later blended with the baked light map. You’ll be able to see the results of this
in the next step.
Well that’s it from the baking side of things; this is pretty much my workflow for
generating both the normal and light maps within 3ds max.
Step Twelve: Creating the Textures
Before I start I want to highlight the fact that creating textures is my favorite part of the art
pipeline, with sculpting coming in a close second! There’s something about applying a
surface to an object that I find really appealing, whether it’s an organic creature, weapon,
person, you name it. The technique that I’m going to show for this particular model is
pretty popular and is commonly used not only in the game industry (“Resident Evil,
Assassin’s Creed, Gears of War, Half Life”, just to name a few) but in the film world as well,
albeit at much higher resolutions. That being said the art direction I want to go in with
Paragalis is going to lean towards a photo real look, think “Walking with Dinosaurs”!
Creating the Diffuse Map
Before I start working on the skin I need to set up my texture sheet and model for better
viewing. This is an important step since this will be the foundation for the texture process!
First I opened my previously created AO/light map in Photoshop and created a few groups
representing each phase of the texture process. The next thing I did was duplicate the
background layer and moved it to the top of the layer stack. Once that was done I set the
top layer to “multiply” and reduced the opacity to 80%. This setup creates a nice overlay
effect giving me all of my shading and highlights. Now that my texture set up is completed
it’s time for me to adjust the material in 3dsmax.
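For the curious, the math behind that layer setup is just a multiply blend mixed back in at 80% opacity. Here is a minimal numpy/PIL sketch of the same composite (the file names are placeholders, not the actual project files):

import numpy as np
from PIL import Image

# Placeholder: the baked AO/light map acts as both the base and the overlay.
base = np.asarray(Image.open("ao_lightmap.png").convert("RGB"), np.float32) / 255.0
overlay = base.copy()

opacity = 0.8
multiplied = base * overlay                              # multiply blend darkens
result = base * (1.0 - opacity) + multiplied * opacity   # mixed back at 80%

Image.fromarray((result * 255).astype(np.uint8)).save("texture_foundation.png")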

The first thing I did was apply the diffuse texture to the “diffuse color” slot, and then I
raised the “Self-Illumination” to 100%. The reason I did this is so that I can see what the
actual texture is going to look like every time I save iterations in Photoshop. The only thing
that I’m focusing on at this point is the surface texture and color, that’s why I left the
normal map and specular slots empty. Having my normal map applied would be futile and
almost certainly hamper the diffuse map creation process. This is the case since I would
constantly be fighting the normal map due to the way it receives light and shadow! That
said I normally don’t apply my normal map until the diffuse and specular maps are
completed.

So the next thing I did was fill the texture with the appropriate colors indicating the various
surface properties. This is essentially a quick way for me to get a good feel for the creature’s
color scheme; not only that, but the separated colors also serve as a nice mask. I tend to
apply all of my base colors into one layer set, which falls under the “base colors” grouping.

With the color mask applied, I fill the body area with a base skin. The base skin serves as a
rough foundation, which allows me to pre-visualize what the overall surface texture looks
like on the model. For this particular model I used a combination of tree bark, elephant
skin, and old paper set at different layer opacities within the Photoshop file. You can see the
final base skin composite below.


botanic

  • Hydlaa Resident
  • *
  • Posts: 61
    • View Profile
Re: Art With Botanic
« Reply #5 on: August 18, 2009, 06:01:49 pm »

Now that I have a rough idea of what the skin looks like, it’s time for me to step back and
determine what areas are going to receive what color, material and surface values. I do this
by making a very quick and simple paint over. This is especially helpful since I’m not
working with any fleshed out color concepts or illustrations.

With my color guide in place it’s time for me to assemble my references and materials. My
vision for the creature is something that would have fit right at home in prehistoric times.
The skin would be a cross between that of an elephant, rhino, pig, and perhaps a dinosaur.
That being the case I went ahead and picked a few surface materials from my reference library;
as you can see, the surface materials pretty much correspond to the creature skin in
question. “I also wanted to note that I have an extensive library of material references and
fabricated skins that I use to help aid with this process!”

With everything in place it’s time for me to start combining and layering the various
materials. Phase one is where I’m blending the materials A, B, and D. At this point I want to
give the surface area a good once-over, adding in some nice sporadic splotches of skin. I
try to keep the skin interesting by varying the opacity, contrast, and color. Speaking of
color, I tend to match the color values by adjusting the “variations” as I go along. The
Image/Adjust/Variations works much better than Hue/Saturation because it shifts all of
the color values, unlike Hue/Saturation which moves the image towards monochromatic
values. It’s really important to keep this in mind since the source materials carry a nice
sense of natural color embedded in the photos. This works wonders when it comes to
selling the photographic nature of a skin!

Continuing work on the base skin, I move into phase two where I’m refining the various
skin elements. I started to add in materials E and F, which provide even more interesting
results. This is especially true of material F, which is used primarily around the back of the
thigh and tail; having both big and small detail keeps things interesting. Speaking of which, it’s
always good to vary the scale of each skin element as well, since an animal’s skin is rarely if
ever uniform in size throughout!

Continuing work on the base skin, I move into phase four, which has me combining all of the
material elements to give it a cohesive feel. I tend to spend a good deal of time at this stage
making sure things feel natural, asking myself, “does this look right, does it feel out of
place”. What I like to do at this stage is to reference actual animals, I like to see where dirt
accumulates, bruises, color shifts, scars, bumps, you name it, my main priority is making
sure the whole skin looks “believable” and natural.
On a last note about the skin: I try to remove as much noise and intrusive light as possible;
leaving those elements in place can make the texture overly grainy and disjointed. I
tend to remove the lighting and noise in the original material itself by first lowering the
contrast and also by setting my brush mode to normal and painting them out.

Continuing work on the base skin, I decided to add some color in the mouth area. I used
some references that I had of human gums, and fish gills “pretty weird huh”. Along with
adding those details I kept refining the color by adding elements that worked with the
creature’s physical body and nature. For instance I made sure to add some exposed skin
color around the mouth, back of the head, around the fingers, bottom of the tail, and the
base of the spikes across its back.

Now that I’m happy with the base skin it’s time for me to work on the tongue. The tongue in
particular needs to be interesting enough to stand out when launched but it shouldn’t take
away from the body as a whole! I used two main material elements reflecting the desired
surface type for the tongue base. Both elements were taken from photos of skinless salmon;
I thought it would work as a nice base due to the surface texture. Since I don’t want the
tongue to be as red as the source material, I shifted the color by using
“Image/Adjust/Variations”. I also want to point out that I varied the scale from the back of
the tongue to the front; this gives it a sense of overall scale and dimension.

Now that my tongue has a base material on it, it’s time for me to see how it integrates with
the body. As you can see from the image below it’s way too hot and monochromatic! With
that said it’s time to move on to the next step, which involves painting and color blending.

Painting and color blending using this technique in particular is a tricky affair and it
requires a subtle touch. A number of years ago, before normal maps and advanced game
engines, artists were essentially painting all of the values into their textures. Everything
from shading, mid tones, highlights, etc. were built right into the diffuse map, with specular
and bump maps playing a supportive role. Well in this day and age it’s important to have a
better sense of balance as texture artists have to think ahead in terms of how AO/light
maps, normal maps, specular maps, alpha maps, SSS, post processing etc. all work with
each other. In short the responsibilities for a texture artist have been expanded to some
degree.
As you can see from the image below I started to add my color blends along with the
shading and highlights. I do this by adding a few layers within the assigned group and setting
those layers to “normal”, “color”, and “overlay”. I generally go over the whole body adding
color shifts where appropriate. Areas like the top head, tentacles, legs, back hump, etc. all
receive this treatment. The tongue in particular has been given a number of strong
highlights to really sell the clammy nature of it. I also blended a bit of color to offset the hot
pink that was evident throughout. I decided to use some sky blue for the highlights, blue
violet and orange for the mid tones, green for the shadows and a touch of purple for the
veins.

Continuing work on the color, shading and highlights, I kept refining the existing strokes
having them blend better with the skin. At this point I want to make sure that I don’t go too
far with my highlights and shading, at the same time I want to keep things fairly dynamic
and interesting. It’s really just a matter of making good choices and keeping the overall
contrast, and color temperature in check!

At this point I’m pretty happy with the overall tone of the skin and will now move onto
adding additional details. I made a few new layers within my layer group and started
adding a number of little touches to help sell the believability factor of the creature. I really
love this stage since it’s the little things that count, elements like scars, skin distortion, dirt,
blotches, slime, etc. I created a number of these elements from scratch beforehand and
placed them where appropriate. You can see some of the results in the close up below.
By the way: I believe that a little goes a long way and I tried to make the extra details
noticeable but not overpowering!

In Closing: The method used to create the diffuse thus far is only one way to go and there
are a number of other avenues that I could have taken. Instead I leaned towards the method
that best serves the desired visual target.
Creating the Specular Map
With the diffuse map completed, it’s time to focus on the spec maps; I’m going to use two
different maps to sell the specularity of the model. The first map that I’m going to work on
is the color specular; a color spec is basically a map that alters the color of the specular
highlights. When it came to the creature, I used a variety of colors to help enhance and
offset the diffuse map colors. First I’m going to go over how I made the map in question.
The first thing I did was merge all of the layers in my diffuse map; once I did that I created
four new layers. The first layer is a de-saturated copy of the base layer; then I created a
second layer called color1, the third layer color2, and then color3. From here I set the
blending mode of the color layers to “color”, and started filling them with the desired colors.
First up is “color1”, which is green, I chose green because I felt that it would play off of the
semi warm mid tones throughout the creature’s skin. Next up is color2; I chose blue for
this because it offers up the highest level of shininess, which is exactly what I wanted to
help sell the moist and clammy nature of the tongue, back hump, and soft tentacle under
skin. Last but not least is color3, I used dark orange in some of the shadow areas to
basically warm up and play off the dark values in the diffuse map.

Now that the specular color map is completed it’s time to move on to the specular level, otherwise
known as specular power. “Special note: different engines and modeling applications have
various names for this.” The specular level map basically alters the intensity and location
of the highlights based on its black/white values: 100% white receives the brightest
intensity of light and 100% black doesn’t receive any highlight at all! It’s good to strike a middle
ground and incorporate a mix of white, grey and black. That being the case you can see
below that I took my color spec and removed the colored layers. Once I did that I darkened
the base layer by overlaying the light map on top of it, after that I merged the two. Then I
added a number of layers reflecting the highlights, pin lights, and darken areas. The
highlights layer was used to create broad lights across large surface areas and pin lights
were used for the opposite. The darken layer was used to tone down super bright spots
such as the tip of the barb and teeth, this is especially important since some game engines
will blow out anything approaching 100% white. On a last note I wanted to point out that I
refrained from using any kind of grain overlay often seen in human skin. I chose not to do
so due to the fact that the creature’s skin already provides a nice scattered effect. Adding
more grime and noise on top of the established skin would create a muddy effect when
shown in the light, this looks even worse when the resolution is reduced!
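As a toy illustration of why those black/white values matter, the specular level map is essentially a per-pixel multiplier on the highlight. The snippet below is a generic Blinn-Phong-style calculation of my own, not any particular engine’s shading code; the 32.0 power and the sample values are arbitrary.

def specular_highlight(spec_level, n_dot_h, spec_power=32.0):
    """Greyscale spec map value (0-1) scales the highlight intensity."""
    return spec_level * max(n_dot_h, 0.0) ** spec_power

# At the very center of a highlight (n_dot_h = 1.0) the map value passes
# straight through: black kills the highlight, mid grey keeps it in check,
# and near-white is already close to clipping before any engine-side bloom.
for level in (0.0, 0.5, 0.95):
    print(level, specular_highlight(level, 1.0))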

Creating the Bump Map / Normal Map
Well now that the spec maps are created it’s time to move on to creating a greyscale bump
map. A bump map pretty much uses the black areas to indicate depth, white areas
indicating height and grey staying neutral to the surface. Once the bump map / height map
is created it’ll be converted to a normal map overlay. This step is super important since the
bump map will serve as a base to accentuate subtle details within the skin surface, meaning
a bad bump can and will destroy the hard work done in the previous steps! That said I’m
going to explain how I created my bump map. The first thing I did was merge all of the
layers in my diffuse map, with the exception of the mouth skin, scars, teeth, underbelly
skin, and tongue distortion. I then created another layer and filled that layer with 50%
grey; once I did that I moved all of the un-merged layers on top of the grey fill layer. I then
took those separate layers and desaturated them, along with adjusting the contrast.
Now at this point I have to ask myself, do I want to use the skin that I made for the diffuse
map and bump that? Or do I want to use another skin source that’s a little more uniform?
Well I chose the latter, and I did so due to the erratic nature of the diffuse skin. If you think
back to when I was applying all of the skin materials in the diffuse map you’ll see that it was
made up of numerous “clone/stamp” elements. While those elements were fine for selling
the diffuse, they simply wouldn’t hold up in the bump map. Bumping out the diffuse skin
would have resulted in a noisy output, which would break the illusion and believability I’m
going for. That said I used two different tileable materials as an alternative. The cool thing
about these material elements is that they play off of the original skin quite well! So I
created two new layers and filled one with the big skin pattern and one with the small skin
pattern. Then I took my eraser brush and gradually blended the two patterns, the reason
for this was simple: most creatures’ skin in real life is almost never uniform and contains
slight variations throughout.
With my bump map completed, I moved on to generating the normal map; for this I used
“Crazy Bump”. Crazy Bump is an awesome program that does exactly what it says: it bumps
stuff really well! All kidding aside, Crazy Bump can produce other maps as well, such as
occlusion maps, displacement maps, specular maps, etc. That said the normal map could
also be generated using the Nvidia normal mapping plugin. You can see the settings that I
used from the image below.
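Under the hood, tools like these all do roughly the same thing: measure how quickly the greyscale height changes in X and Y and pack those slopes into the red and green channels, with blue carrying the “up” component. Here is a minimal numpy sketch of that conversion; the strength knob and file names are my own placeholders (and different tools flip the green channel differently), so treat it as an approximation rather than CrazyBump’s or Nvidia’s exact algorithm.

import numpy as np
from PIL import Image

def height_to_normal(height, strength=2.0):
    """Convert a 0-1 greyscale height map into a tangent-space normal map."""
    dy, dx = np.gradient(height)                 # slope along Y and X
    nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(height)
    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)   # pack -1..1 into 0..255

# Placeholder file names, for illustration only.
bump = np.asarray(Image.open("paragalis_bump.png").convert("L"), np.float32) / 255.0
Image.fromarray(height_to_normal(bump)).save("paragalis_detail_normal.png")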
Now that I generated my normal map, it’s time to overlay it onto my baked normal map. Of
course before doing so I needed to make a few tweaks, first being the adjustment of the
blue channel. This is really important since having the blue channel unadjusted would
create a few odd results, most notably it wouldn’t behave well with my baked map. So in
order to get things straight, I had to adjust the levels in the channel itself. The first thing that I
did was go into my channels and dial the number down from 255 to 127. By the way,
there are a few cool Photoshop actions that handle this.
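In plain terms, that Levels tweak just halves the blue channel so the detail map’s blue hovers around mid grey and no longer fights the baked map’s blue when the two are combined. A tiny sketch of the same operation outside Photoshop (placeholder file names again):

import numpy as np
from PIL import Image

detail = np.asarray(Image.open("paragalis_detail_normal.png").convert("RGB")).copy()
# Levels: blue output white point 255 -> 127, i.e. blue *= 127/255
detail[..., 2] = (detail[..., 2].astype(np.float32) * (127.0 / 255.0)).astype(np.uint8)
Image.fromarray(detail).save("paragalis_detail_flatblue.png")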
Continuing work on the normal overlay, I took the mask that I created for the texture and
copied and pasted the new overlay onto the baked normal map. I then reduced the opacity
of the overlay layer to 45%; this was done so that the skin maintains a subtle feel to it. I
personally like bumped surfaces that don’t beat you over the head with how exaggerated
they are; as I said before, a little goes a long way! Now that the overlay has been applied to
the baked surface, it’s time for me to re-normalize the map itself. I do this by running the
Nvidia Normal Map filter with the settings shown below; this will readjust the vector
values. I like to think of this step as the “Reset Xform” of Photoshop when it
comes to normal maps!
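For reference, here is a rough numpy sketch of what that combine-then-renormalize step amounts to mathematically. The 45% opacity comes from the text above; the Overlay blend mode is my assumption (the text only says the detail layer was pasted over the bake), and the renormalization is a generic stand-in for the Nvidia filter’s behaviour, not its actual code. File names are placeholders.

import numpy as np
from PIL import Image

def overlay_blend(base, blend):
    """Photoshop-style Overlay blend on 0-1 values."""
    return np.where(base < 0.5, 2 * base * blend, 1 - 2 * (1 - base) * (1 - blend))

def renormalize(rgb):
    """Unpack 0-1 colors to -1..1 vectors, force unit length, repack."""
    n = rgb * 2.0 - 1.0
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n * 0.5 + 0.5

baked = np.asarray(Image.open("paragalis_baked_normal.png").convert("RGB"), np.float32) / 255.0
detail = np.asarray(Image.open("paragalis_detail_flatblue.png").convert("RGB"), np.float32) / 255.0

opacity = 0.45
mixed = baked * (1 - opacity) + overlay_blend(baked, detail) * opacity
final = renormalize(mixed)
Image.fromarray((final * 255).astype(np.uint8)).save("paragalis_final_normal.png")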
Step Thirteen: Applying the normal map
Okay, so it’s now time for me to put everything together and view it in 3dsmax! Here’s a
brief quick start guide on applying the normal map within 3ds max:

And that’s it; you should be able to view the mesh with the maps applied within the
viewport or you can render it. “Special note”: the model may not show up properly at first
depending on your video card. One thing that works for me is to hit “Zoom Extents All” and
then go back to my perspective viewport!
Viewing the applied Textures
Now comes the fun part; this is where I get to see the fruits of my labor. I begin by setting
up a few lights within my scene: a key light, fill light, back light and skylight. Once I do that I
make sure to add a light tracer and then render. “Special note” You should be able to view
the lighting set up within the 3dsmax 8/9 files.

Along with the render you can see how the various maps were used to achieve the final
results.


For good measure I also wanted to see what the creature would look like in a game engine,
so I imported the model into the Unreal 3 editor.

Final Thoughts:
I guess I’ll close by saying that there are a number of different ways to approach the
modeling, sculpting and texturing process, and this is by no means the be-all and end-all! As
a matter of fact I tend to work on different projects that require me to switch styles and
workflows often. As an artist I pride myself on being able to work within different styles
and genres, whether it’s gritty realism or stylized fantasy.
All in all I have to say that this was a fun exercise and I’m glad that I was able to share it
with the CG community. Hopefully I was able to introduce you guys to a few new
techniques that will encourage you to come up with some creation methods of your own.
With that said happy creating and good luck!


Cherppow

  • Hydlaa Citizen
  • *
  • Posts: 493
    • View Profile
Re: Art With Botanic (Dial-up warning LOTS of images)
« Reply #6 on: January 06, 2010, 04:15:09 am »
These are very intriguing and informative! Thanks for taking the time to share all this. :)

- Cherppow

Mekora

  • Hydlaa Citizen
  • *
  • Posts: 255
    • View Profile
Re: Art With Botanic (Dial-up warning LOTS of images)
« Reply #7 on: February 17, 2010, 10:00:42 pm »
 \\o// Truely Spectacular. I never knew it took so much time and effort to create just that one model. Great job

Nivm

  • Hydlaa Citizen
  • *
  • Posts: 271
    • View Profile
Re: Art With Botanic (Dial-up warning LOTS of images)
« Reply #8 on: June 30, 2010, 10:08:32 pm »
 This is a rather good tutorial, although I was hoping at the beginning it would say something about what the Plane Shift engine was capable of.

bloodedIrishman

  • Guest
Re: Art With Botanic (Dial-up warning LOTS of images)
« Reply #9 on: June 30, 2010, 10:18:52 pm »
So where's the tutorial in a 5-min condensed form?

Zweitholou

  • Hydlaa Citizen
  • *
  • Posts: 205
  • Art Department Leader
    • View Profile
Re: Art With Botanic (Dial-up warning LOTS of images)
« Reply #10 on: July 01, 2010, 03:53:33 pm »
Nivm:  I believe the PlaneShift engine can support anything created following this tutorial.  Specifically, PS supports low-poly models with diffuse maps, normal maps, specularity maps, luminosity maps, and displacement maps.  Blender supports all these too, though I'm not sure how they export.

Akkaido Kivikar

  • Hydlaa Notable
  • *
  • Posts: 726
    • View Profile
Re: Art With Botanic (Dial-up warning LOTS of images)
« Reply #11 on: July 02, 2010, 12:49:57 am »
With B2CS sort of not so good, using 3DS Max to import the blend file, then export to PS is the best option.

potare

  • Hydlaa Citizen
  • *
  • Posts: 333
  • Potare the great
    • View Profile
Re: Art With Botanic (Dial-up warning LOTS of images)
« Reply #12 on: March 07, 2011, 04:25:47 am »
Nice pic of creature Botanic :)

Caym

  • Traveller
  • *
  • Posts: 42
    • View Profile
Re: Art With Botanic (Dial-up warning LOTS of images)
« Reply #13 on: June 25, 2011, 02:14:16 pm »
A couple of questions:

- Why couldn't you just link to those tutorials instead of copy/pasting them? You do know that the internet is vast, vast place, and that it features lots of useful tutorials, and that it was created around this neat little idea called hypertext that frees you from the obligation of copy/pasting any information you find interesting.

Here, I'll do it for you:
Normalmapping in General http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html#1.1
Normalmaps Explained http://www.pinwire.com/art-design/generating-high-fidelity-normal-maps-with-3-d-software
Character Creation http://www.marcusdublin.com/ParagalisTutorialPage1.html

- I can't find any mention on those websites that one should "Feel free to re-post these anywhere"

What I could find though was things like "all rights reserved", "All artwork, content, and any other material on this website is © Marcus Dublin 2009,2010,2011" and such.
"Proclaiming I am thine trollop, 'tis not even a jest, 'tis but the truth." - Jekkar

BoevenF

  • Hydlaa Notable
  • *
  • Posts: 543
  • Amdeneir citizen, mostly travelling
    • View Profile
    • The Doømed Ones SVG
Re: Art With Botanic (Dial-up warning LOTS of images)
« Reply #14 on: June 26, 2011, 03:19:39 am »
Well, to be fair the tutorial for normal maps http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html#1.1 says:
"This documented is free to be distributed as long as it remains intact and unedited." I'm puzzled by the term "documented" in this form, but it seems unedited.