I previously went over the overall shading and rendering techniques that we used to create the cartoony look of Wing It!
But for the sake of not derailing that article too much, I only glossed over the character shading aspects. That is something I want to remedy here, going over the important aspects of the character shading in some more technical detail.
Even though everything needs to be rendered together and should blend seamlessly, we treated the environment and the characters quite differently in terms of shading. One reason was that the characters had different requirements for visual fidelity and lighting control; another was that the shapes and resolution of the meshes were very different between the two.
A slight difference in the way environment and characters are rendered was something we were fine with, and even encouraged, because it can help emulate the 2D charm of traditional cel animation, where static environments are drawn differently from moving characters.
Finding a good balance for a hand-drawn 2D look in 3D is also important because of the style of the animation. To be able to sell these very wacky facial expressions and the over-the-top acting, the art style needs to support this as much as possible. And in 3D animation, you start out at a certain disadvantage in this regard when going the conventional route.
Most important to understand is that, as is common for NPR, we are not doing the surfacing with different texture maps that describe the parameters of the surface and then letting the renderer figure out the rest based on some BSDF. Instead, most of the light information that makes up the look is already part of the surface color as part of the shader. It is faked.
This inherently brings the approach closer to how actual 2D animation works, where you don't have a perfect reference of how the character's surface looks and behaves to light in three-dimensional space. So an overall approach was to throw away some of that extra information that we get in 3D and rebuild the information we need based on a more simplified 2D version. But more on what that means later.
Of course, this also means that the lighting process looks a lot different than in a more conventional production, as now, instead of just light sources, you also need to take into account the different lighting effects that are already happening in the color of the shader itself.
First of all, before we dive into the technicalities, let me break down the different layers that make up the character shading.
As the base color we have a very simple hand-painted image texture that consists mainly of solid colors with some additional painted drop shadows and subtle brush stroke detail to indicate fur. This is the only layer with real manual control.
On top of the base colors we had different patterns to break up bigger surfaces. This also includes gradients to add some fake shading information. The clothing of the characters specifically had a patchy pattern procedurally overlaid to slightly shift the chroma of the solid base color, adding variation and communicating the material. This is just like what we did on most environment assets.
Especially shiny materials had fake reflections as part of the surface color as well. These were done differently based on the specific case: either with simple cel shading (e.g. the dog's nose), mapped as a drawn reflection texture, or with cel shading based on the faceted normals.
To work against the characters becoming too flat, which can look slightly cheap, and also to help with certain lighting scenarios that are important to communicate the mood, like backlit scenes, we decided to add a layer of toon shading that would allow us to brighten/darken parts of the characters based on an angle relative to the camera.
To really make the characters stand out and to create interesting, appealing lighting, we had a layer of rim lights that we could place at a certain angle around the character, emphasizing the silhouette.
On top of the surface color and the fake rim lights we had outlines that were based on the surface color and drawn entirely inside the silhouette of the objects. These outlines became a crucial part of the style and were instrumental in making the characters read well.
Both the fake rims and these outlines were distorted with a noise texture to make the edge where they bleed into the character surface just a bit fuzzy and less clean. To keep this noise texture from swimming across the moving characters, we animated the noise pattern on 2s along with the characters, creating a slight flicker between frames.
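As an illustration of what animating a texture on 2s can look like in practice, here is a minimal sketch using a driver, assuming a 4D noise texture and a hypothetical material name (the production setup may differ):

```python
import bpy

# Hedged sketch, not the production setup: step the noise texture's W
# input once every 2 frames via a driver, so the distortion holds for
# 2 frames at a time instead of swimming continuously.
mat = bpy.data.materials["character_material"]  # hypothetical name
noise_node = mat.node_tree.nodes["Noise Texture"]

# Drive the W input; 'frame' and 'floor' are available in the driver namespace.
fcurve = noise_node.inputs["W"].driver_add("default_value")
fcurve.driver.expression = "floor(frame / 2)"  # hold each value on 2s
```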
To tie everything together there is a subtle paper texture overlay on everything, just like there is on the shading of the environment.
After all of those layers come together in the surface color, it is rendered with a subsurface scattering shader. This helps a lot to get rid of many of the 3D features of the surface in the lighting and blends everything together. To increase this effect even further, we used normals that had been blurred using Geometry Nodes.
Okay, this might all be great, but you're probably wondering how these different things like outlines are actually done, and where Geometry Nodes come into play.
The way these different effects are created is heavily powered by Geometry Nodes. But instead of changing the actual shape of the geometry in some way, or generating new geometry like we did for the outlines of the environment objects, this is purely based on generating surface attribute data that is used by the shader. Generating data for the shader with all of the operations that are available for complex geometry processing is a very powerful capability of Geometry Nodes.
Of course you are still bound to the resolution of the mesh for that data, but with further processing in the shader this allows for a very powerful combination of tools.
Let's take a closer look at the 3 main elements: Outlines, Rims and Toon Shading.
These are all based on the same core setup and build on each other. The idea: identify the edges that are part of the silhouette, based on the camera angle, and generate a distance field from them that can be used in the shader.
The distance field approach allows us to define the thickness of lines in the shader dynamically. It's essentially a measure of how far away each point on the surface is from the silhouette. Rendering points differently depending on that distance is what creates these kinds of outline effects.
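As a rough idea of what the shader side amounts to, here is a minimal sketch (in plain Python rather than nodes, with illustrative names) of remapping such a distance field into an outline mask:

```python
# Minimal sketch, not the production node setup: threshold the
# screen-space distance field against a thickness to get a hard,
# 2D-looking line, with a noise value pushing the edge around so the
# line bleeds into the surface a little.
def outline_mask(distance: float, thickness: float, noise: float) -> float:
    distorted = distance + (noise - 0.5) * 0.02  # fuzzy up the edge
    return 1.0 if distorted < thickness else 0.0
```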
The keyword here is signed distance fields (SDFs), if you want to learn more about the concept. Though here we only used them in a simple capacity.
It's also important to note that this means all these effects only work for one specific camera angle. Any change in the angle the character is viewed from means that all the data needs to be recalculated. We only did this for the camera of the shot, so navigating the 3D viewport breaks the illusion.
The first fundamental step is to identify the edges of the mesh that make up the outline from the camera view.
The way I solved this is to flatten the entire mesh in the view direction of the camera. Then any edge with an angle greater than 0 between the two faces it connects, which means it is not flat, can be identified as an outline.
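Conceptually, this comes down to the classic silhouette-edge condition. A minimal sketch, assuming per-face normals already transformed into camera space (camera looking down -Z):

```python
import numpy as np

# After flattening the mesh along the view axis, the only edges with a
# face angle greater than 0 are those whose two faces point to opposite
# sides of the view plane -- i.e. one front-facing and one back-facing
# face, which is exactly the silhouette-edge condition.
def is_outline_edge(normal_a: np.ndarray, normal_b: np.ndarray) -> bool:
    return (normal_a[2] > 0.0) != (normal_b[2] > 0.0)
```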
Probably the trickiest part in all of this is removing edges that are occluded by other parts of the mesh. The most accurate way to figure out the distance field in screen space is to flatten everything, but that means edges that are behind a surface can produce outlines on top of it, since everything is happening at the same depth.
Something that helps with this issue is the fact that this setup works per object. While that does mean that different objects cannot share a silhouette, it also means that the setup has an easier time with occlusions since the objects don't influence each other.
My main solution to this was to cast rays towards the camera to see if the edge should be occluded.
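A minimal sketch of what such an occlusion test can look like, assuming a BVHTree built from the evaluated mesh of the same object (the production cleanup is more involved):

```python
from mathutils import Vector
from mathutils.bvhtree import BVHTree

# Hedged sketch of the occlusion test: from a point on a candidate
# outline edge, cast a ray toward the camera; if it hits the mesh
# before reaching the camera, the edge is hidden by other geometry and
# should not contribute to the distance field.
def edge_is_occluded(bvh: BVHTree, point: Vector, camera_location: Vector) -> bool:
    to_camera = camera_location - point
    direction = to_camera.normalized()
    origin = point + direction * 1e-4  # nudge off the surface to avoid self-hits
    location, normal, index, distance = bvh.ray_cast(origin, direction, to_camera.length)
    return location is not None
```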
Besides the occlusion, there are a bunch of other cleanup steps to catch edge cases and create cleaner lines. This is especially important for temporal stability, as every single pose and camera angle, so potentially every frame, might have its own challenges. Any impurity can end up as relentless flickering in the result.
I won't go into detail about what these steps are individually, but they definitely caused some headaches throughout the production.
The fake rims should only appear on the outer shape, unlike the outlines, which also appear where geometry overlaps. So to generate the rim in the shader, we just need the distance to the outer silhouette and can base the thickness on the direction where we want the rim light to appear.
For the silhouette we can build upon the outlines that we already identified. Starting from all the edges that represent an outline in the mesh, we can further isolate the edges that are part of the outer silhouette by casting a ray from the camera and checking whether it hits any point on the mesh behind the outline.
The method of generating a distance field in screen space is then the same as for the outlines in general.
On top of simply the distance to the silhouette, we can also make use of the normal direction of the silhouette. By extracting that information from the flattened mesh and using it in the shader for the direction of the rim, we can stop relying on the actual mesh normal. This helps a lot with the 2D look, as we are getting rid of more information that could expose the 3D nature of the mesh.
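Put together, the rim logic roughly amounts to the following sketch (names are illustrative, not the production setup): widen the rim where the 2D silhouette normal faces the chosen light direction, shrink it to nothing on the far side, then threshold the distance field just like for the outlines.

```python
import numpy as np

# Rough sketch of the rim mask, assuming per-point access to the
# distance to the outer silhouette and the 2D silhouette normal (both
# generated by the Geometry Nodes setup). 'rim_direction_2d' is the
# screen-space direction the fake rim light comes from.
def rim_mask(dist_to_silhouette: float, silhouette_normal_2d,
             rim_direction_2d, base_thickness: float) -> float:
    facing = np.dot(silhouette_normal_2d, rim_direction_2d)  # -1..1
    thickness = base_thickness * max(facing, 0.0)
    return 1.0 if dist_to_silhouette < thickness else 0.0
```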
Now, on top of the information we gained from the rims for the silhouette, we can go on to use this for the toon shading. The idea here is, instead of doing cel shading on the actual 3D geometry, which can very easily look quite cheap and expose the 3D nature, to generate normals in Geometry Nodes that represent a simplified version of the mesh, using only the shape of the silhouette to create something like a blob shape.
Since we already have the silhouette edges and their direction, the only missing element to do this (in the way that I opted for) is to identify the center of the shape. That way we can interpolate from the normals of the silhouette on the outside of the shape towards the normal pointing at the camera in the shape's center.
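As a conceptual sketch of that interpolation (assuming we know each point's distance to the silhouette and roughly how far away the shape center is, both illustrative inputs):

```python
import numpy as np

# Blend from the silhouette's outward 2D normal at the shape border to
# a camera-facing normal at the shape center, producing a blob-like
# normal that ignores the actual 3D surface.
def blob_normal(silhouette_normal_2d, dist_to_silhouette: float,
                dist_at_center: float) -> np.ndarray:
    t = np.clip(dist_to_silhouette / dist_at_center, 0.0, 1.0)
    nx, ny = np.asarray(silhouette_normal_2d) * (1.0 - t)
    n = np.array([nx, ny, t])  # t -> 1 means fully camera-facing
    return n / np.linalg.norm(n)
```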
The tricky thing here is that we cannot guarantee that edges are available that nicely represent the center. For the outlines that wasn't an issue, since the mesh will inherently always have edges around the outline, but in the center that is not necessarily the case.
I mentioned that this approach to shading drastically changes how you have to approach lighting as well, since a lot of the effects that you would conventionally use actual light sources for in 3D are now faked as part of the surface color. That means the control over these effects needs to be conveniently available in the lighting workflow.
To make this possible we used a method that we aptly described as 'viral' Python scripts. Simply put, we have a Python script attached to an empty object that has a bunch of custom properties defined. The script itself is attached as one of those properties and set to run on file open. This means that any scene that somehow references that empty object will cause the Python script to run automatically, and the behavior spreads that way. It works like magic... or a virus.
What the script does in this case is collect all of the custom properties on the object that describe the 'lighting settings' and create a version of each of them on all view layers, while also hooking these new properties up to the empty object's version with a driver. That means any custom property created on one of these lighting rigs (empties) will automatically become available as a view layer property in any shot that contains the rig, and can be controlled via the rig.
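To make this more concrete, here is a hedged sketch of what such a script could look like; the actual production script ships with the example file linked below, and the rig name here is hypothetical:

```python
import bpy

# Hedged reconstruction, not the production script: mirror the custom
# properties of a lighting rig empty onto every view layer and drive
# them from the rig.
def sync_lighting_properties(rig_name="lighting_rig"):  # hypothetical name
    rig = bpy.data.objects.get(rig_name)
    if rig is None:
        return
    for scene in bpy.data.scenes:
        for view_layer in scene.view_layers:
            for prop_name in rig.keys():
                if prop_name.startswith("_"):
                    continue  # skip internal properties (e.g. the script itself)
                # Mirror the property onto the view layer...
                view_layer[prop_name] = rig[prop_name]
                # ...and drive it from the rig, so lighting artists only
                # ever touch the empty.
                path = f'view_layers["{view_layer.name}"]["{prop_name}"]'
                fcurve = scene.driver_add(path)
                driver = fcurve.driver
                driver.type = 'AVERAGE'  # just copy the single variable
                var = driver.variables.new()
                var.type = 'SINGLE_PROP'
                var.targets[0].id = rig
                var.targets[0].data_path = f'["{prop_name}"]'
```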
View layer properties are easily available to the shader using the Attribute node. So that way it is very easy to set up new shader properties that can be controlled with a rig.
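For illustration, this is how such a property can be read in a material via Python, assuming a hypothetical property and material name:

```python
import bpy

# Sketch: read a view layer property called 'rim_intensity'
# (hypothetical) in a material via the Attribute node.
mat = bpy.data.materials["character_material"]  # hypothetical name
attr = mat.node_tree.nodes.new("ShaderNodeAttribute")
attr.attribute_type = 'VIEW_LAYER'
attr.attribute_name = "rim_intensity"
# The node's Fac output now follows the view layer property and can be
# plugged into e.g. the rim light strength.
```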
In the past we did this with object properties, but this method is a lot cleaner: you don't need to worry about duplicating the property on all objects that might need it, and the rig can be detached from the actual character itself as a separate asset with its own library override.
You can find the setup with the mentioned script here.
I prepared an example setup of just the head of the dog character that you can use to play around with, or try to adapt for your own setup using the information from this article. The most important element is the GN-distance_to_silhouette Geometry Nodes modifier, which generates the attribute data used in shading. It also needs a reference to the camera object. We just created a dummy object that would automatically jump to the active scene camera using drivers.
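A minimal sketch of how such a dummy can be wired up with drivers (the object name is hypothetical, and a complete setup would also need to copy rotation the same way):

```python
import bpy

# Drive the location of an empty from the active scene camera, so the
# modifier always has a valid object reference.
dummy = bpy.data.objects["CAM-dummy"]  # hypothetical name
for i in range(3):  # x, y, z
    fcurve = dummy.driver_add("location", i)
    driver = fcurve.driver
    driver.type = 'SCRIPTED'
    var = driver.variables.new()
    var.name = "cam_loc"
    var.type = 'SINGLE_PROP'
    var.targets[0].id_type = 'SCENE'
    var.targets[0].id = bpy.context.scene
    # Read the camera's world-space translation from its matrix.
    var.targets[0].data_path = f"camera.matrix_world[3][{i}]"
    driver.expression = "cam_loc"
```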
And then, of course, the shader. This is all hooked up to the lighting rig parameters, but the individual node groups for the rims and the outlines also work by themselves, based on the generated data.
While I'm overall quite happy with how far these techniques got us towards the goal we set ourselves for the look, in terms of style and quality, I wouldn't feel comfortable blindly recommending that you copy and paste this approach whenever you need outlines. So let me go over some of the limitations and issues with this setup.
First and foremost, it is far from infallible. There are several cases where lines will either not connect properly or the distance field considers lines that it shouldn't. And the big problem with these issues is that things might pop in and out of existence on a frame-by-frame basis, since they are quite fragile. So for Wing It! we needed to fix some things for individual frames here and there.
Also, just because of the way they are generated, the lines cannot have unlimited thickness. All lines are created from the same distance field, so they cannot overlap. You can see this issue all over the place in our case if you pay attention to it.
And additionally this technique requires a relatively high mesh resolution. The higher the resolution the more accurate the distance field. As an example, the simple default cube would not be able to support the distance field that would be needed to render its outlines this way. This technique is generally more suited for meshes with organic shapes and a naturally high subdivision level.
All in all, there are certainly things that could be improved about this method. In the near future, when Grease Pencil becomes compatible with Geometry Nodes, this kind of thing could potentially be improved immensely by making use of the Line Art modifier, which does a lot of things that are fancier than what I recreated here. And further down the line there will be more modular solutions aimed at this exact issue natively in Geometry Nodes with Grease Pencil, which will open up all sorts of new possibilities.
But regardless, I'm still quite happy with the look we were able to achieve using this method. We managed to push the style towards the concept art a lot more than I personally had originally anticipated.
Love it! I just wish you'd go over the node setups a bit more in depth, as I'm learning the concepts, but with my minimal knowledge of GN it's hard to imagine how to go about it. Regardless, this is a really great article, and what I'm asking for might be a bit out of scope.
@Luciano A. Muñoz Sessarego Yes, I was thinking about it and I wish I could have done that, but it would have really blown up the scope even more, since not all of these things are entirely trivial to untangle. So for this it was better to focus more on the big picture.
But the core nodes are really just (heavily simplified):
- Set Position for space transform and flattening
- Delete Geometry and Edge Angle for identifying the outlines
- Raycast for the different cleanup steps
- Proximity and Store Named Attribute to generate the distance field

And for shader nodes, it's just distorting and remapping the distance field with some math nodes.
@Simon Thommes Thanks for that extra clarification! <3
Great article Simon. Thanks for putting all this together, it's a really awesome resource!
Thanks for sharing! I really appreciate the studio working out these workflows and sharing them with the community. Many of us don't have the expertise of a whole studio available to ask, so you're filling an important role.