With our latest open movie 'Charge', we were aiming for high fidelity realism using Eevee with a strong visual impact. One big challenge to giving emotional impact to our main character’s expressions is making the deformation of his facial skin feel fleshy and realistic. Having stiff facial expressions can ruin the immersion in an emotional moment of the film and push you into the uncanny valley very quickly.
Something we wanted to have since early in the production were wrinkle maps, ideally triggered automatically by tension maps on the deformed mesh, and ideally even fully procedural and omni-directional. This would grant us full freedom in the wrinkles generated from the deformation of the mesh, with minimal manual iteration. It's important to note that these wrinkles would not replace our entire system for realistic face deformation. They would only complement what we achieve with a more directed manual approach and add another layer of detail and interaction.
In this blog post I want to share our results, talk about the journey it took to get there and how it works in more technical detail.
In the end we opted to use this technique only for the micro detail, as we already had the lower detail levels covered with manually sculpted displacement/bump maps that are triggered by the rig. So in our final result the effect is quite subtle and only shows in close-up shots: we used it solely for the highest level of detail, on a skin-cell level, to create fine directional bump under compression. It does help a lot with the fleshiness and with making the skin feel like it is actually being compressed, rather than just moving around.
The same technique can also be used for medium level detail. We decided against this for our character, as we already had a working, more directable solution by the time I got around to the procedural system, and manually revised details are usually more accurate than a fully procedural approach. For the micro level we needed the procedural solution though.
One indication of a successful wrinkle map is how many of the procedurally generated wrinkles follow the same flow as the sculpted, broader wrinkles. This is a testament to how accurately the skin deformations have been considered in the rig, but it also shows how, used right, this method can give great results with little effort once put in place. In other areas the flow does not quite match (e.g. around the mouth). This is where it would have been useful to have this technique as a tool already during the creation of the facial shapes, to monitor unwanted shifting of skin tissue. Seeing the wrinkles appear already in the sculpting process of the shapes gives very useful feedback that helps with consistency.
There are some issues with it that could be iterated on, but we ended up putting the emphasis on the micro detail. Manual iteration for something like this will always be better than a fully procedural solution, so the combination of both is probably the most efficient path to a nice result.
The most important thing to understand for a good result with this technique though, is that because the wrinkles are fully procedural, it will only display the actual deformation of the geometry. So if the shifting of the tissue is fundamentally off on a geometry level, the wrinkles will only emphasize this, rather than fix it. That said, this also makes this a useful tool, to investigate how the tissue is deforming on a detailed level, while working on the shapes.
Find a simple example file with the setup here.
Very early on in the production we did some R&D to see what techniques we could use to achieve a high-fidelity facial performance. I was investigating ways to retrieve tension information from the mesh deformation with Geometry Nodes. The first results looked relatively promising, but it quickly became clear that I'd need to spend a lot more time to figure out a system that is high-quality and flexible enough to fit our needs.
At this stage we decided to focus on manually sculpted expressions that we blend in the rig as shape keys, displacement maps and bump maps. We would need manual control over the broader shapes regardless of how the procedural wrinkle maps would turn out, and this would give us the most direct control over each shape.
That way procedural wrinkles based on a tension map would be a nice touch on top down the road but we wouldn't rely on it.
But how does this actually work now?
Well, there are multiple ideas combined to achieve this. First I'm comparing the final, deformed mesh with an undeformed reference mesh to calculate two individual tension maps using Geometry Nodes. I then pass the resulting maps as attributes into the shader, where I use them to generate the wrinkle map that is applied as a bump map. This way every difference between the reference mesh and the deformed mesh is reflected in the wrinkle map, including the blended displacement shapes that have been manually sculpted, as well as any animation of the base mesh performed by the armature.
The first of the tension maps is relatively simple. It's just a comparison of the face area before and after deformation, dividing A_deformed by A_base and mapping the result so that it ranges from 0 for full compression to 1 for infinite stretching, with 0.5 meaning an unchanged face area. The formula used for this is:

1 - 2^( -A_deformed / A_base )
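As a quick sanity check, this mapping can be sketched in a few lines of Python (the function name is just for illustration, the actual setup lives in Geometry Nodes):

```python
def area_tension(area_base: float, area_deformed: float) -> float:
    """Map the face-area ratio to [0, 1): 0 for full compression,
    0.5 for an unchanged area, approaching 1 for infinite stretching."""
    return 1.0 - 2.0 ** (-area_deformed / area_base)

print(area_tension(1.0, 0.0))  # full compression -> 0.0
print(area_tension(1.0, 1.0))  # unchanged area   -> 0.5
```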
The directional tension map to retrieve the stretch direction to generate the wrinkles is a bit more tricky.
The base concept is to compare the length of each edge on the deformed and the undeformed mesh, and then look up the direction of the edge in a given UV space. When an edge got shorter it has been compressed and the wrinkle should appear orthogonal to it; when it got longer it has been stretched and the wrinkles should run parallel. By weighting the corresponding direction by how much the length of the edge changed, and averaging the vectors of all edges connected to each point, we should get a resulting direction map. There is one crucial issue though.
The problem with these directional vectors is that the direction of an edge has two possibilities, and for the wrinkles we don't care about left/right, we just care about horizontal/vertical (and anything in between). But if we just add the vectors together as they are, opposing directions will cancel each other out. So when generating the stretch map we need to use a vector space where a direction and its opposite mean the same thing. The solution is to use the periodicity of polar coordinate space and simply multiply the angle component of the polar coordinates by 2. That way, two vectors that are exactly opposite in euclidean 2D space map to the same vector in this alternative vector space.
Now, when we average the vectors of the edges connecting to a point, vectors that were initially opposing are aligned and don't cancel each other out, but add up instead. This is exactly what we need in this case.
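The angle-doubling trick can be sketched like this in Python (function names are hypothetical; in Geometry Nodes this is a handful of vector math nodes):

```python
import math

def to_doubled_angle(v):
    """Map a 2D edge direction into the doubled-angle space, where a
    vector and its opposite coincide. The magnitude (used here as the
    tension weight) is preserved."""
    r = math.hypot(v[0], v[1])
    if r == 0.0:
        return (0.0, 0.0)
    theta = 2.0 * math.atan2(v[1], v[0])
    return (r * math.cos(theta), r * math.sin(theta))

def sum_edge_directions(edge_vectors):
    """Accumulate per-edge direction vectors around a vertex in
    doubled-angle space, so opposing directions reinforce."""
    x = sum(to_doubled_angle(v)[0] for v in edge_vectors)
    y = sum(to_doubled_angle(v)[1] for v in edge_vectors)
    return (x, y)

# Two opposing horizontal edges add up instead of cancelling:
# sum_edge_directions([(1, 0), (-1, 0)]) ≈ (2, 0)
```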
Another challenge is that restricting yourself to the edges of a mesh leaves some deformations unaccounted for and leads to inaccuracies. A square, for example, can be deformed into a rhombus without changing the length of its edges. So far we are not accounting for this type of deformation.
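The shear case is easy to verify numerically. In this little sketch (made-up coordinates), a unit square sheared into a rhombus keeps all four edge lengths, while the diagonals that a triangulation would add do change:

```python
import math

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
# shear the square into a rhombus with the same edge lengths
s = 0.5
h = math.sqrt(1.0 - s * s)
rhombus = [(0.0, 0.0), (1.0, 0.0), (1.0 + s, h), (s, h)]

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # the quad's own edges
diagonals = [(0, 2), (1, 3)]              # edges added by triangulation

def lengths(points, pairs):
    return [math.dist(points[a], points[b]) for a, b in pairs]

# all quad edges stay at length 1.0 -> edge-based tension sees nothing
print(lengths(square, edges), lengths(rhombus, edges))
# the diagonals do change -> triangulation makes the shear visible
print(lengths(square, diagonals), lengths(rhombus, diagonals))
```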
The way I solved this in our case was to triangulate the meshes in two ways, repeat the same operation on the triangulated meshes, and then average everything together on the vertices. Here is where the current implementation gets a bit limiting, as it only works with meshes that consist entirely of quads. It also comes at a noticeable performance cost for high-resolution meshes, so this is definitely something to be improved in the future.
Now all deformations should be accounted for and the resulting directional tension map can be output as an attribute to be used in the shader. It's important to note that this is not to be interpreted as a euclidean direction vector; it is still expressed in the adjusted vector space. That means it needs to be transformed back before it is used. This should be done in the shader rather than on the vertices, because otherwise face interpolation would mess things up.
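Transforming back just halves the angle again. A minimal sketch of what the shader does per sample, after the attribute has been interpolated across the face (name hypothetical):

```python
import math

def from_doubled_angle(v):
    """Convert an (interpolated) doubled-angle vector back into a
    euclidean direction. The remaining sign ambiguity doesn't matter,
    since a wrinkle direction and its opposite are equivalent."""
    r = math.hypot(v[0], v[1])
    theta = 0.5 * math.atan2(v[1], v[0])
    return (r * math.cos(theta), r * math.sin(theta))

# (-1, 0) in doubled-angle space is the vertical direction:
# from_doubled_angle((-1.0, 0.0)) ≈ (0, 1)
```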
I knew that generating the wrinkle map itself would be difficult to pull off in a way that both looks believable and holds up as the skin tissue animates over time, with the strength and direction of the compression changing. On top of that it shouldn't be too draining on the performance of the shader, although that was less of a priority in our case. While this could also be done by manually creating wrinkle maps for different directions and blending them together, I went with a fully procedural approach.
The way I ended up doing it was to divide the UV space into cells and, within each cell, squash a Perlin noise according to the direction that the tension vector gives us. This noise is then randomized and repeated 4 times to blend between different grid alignments and hide the seams that the cells create. The result is quite nice, though not perfect, as it cannot create longer wrinkles that continue through multiple cells. But I found it to be the method with the best behavior under animation.
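The core of the squashing is just an anisotropic scaling of the noise coordinates before the lookup. A rough sketch (the function name and squash factor are made up; a real setup would feed the result into Blender's Noise Texture node):

```python
def squash_uv(uv, direction, strength):
    """Scale UV coordinates down along `direction` (a unit 2D vector)
    before sampling an isotropic noise, so the noise varies slowly
    along the wrinkle direction and quickly across it -- producing
    stripes that run parallel to `direction`."""
    ortho = (-direction[1], direction[0])
    along = uv[0] * direction[0] + uv[1] * direction[1]
    across = uv[0] * ortho[0] + uv[1] * ortho[1]
    stretch = 1.0 + 4.0 * strength  # hypothetical squash factor
    return (along / stretch, across)
```

Per cell, this squashed lookup is driven by the cell's own tension direction; the four randomized, offset repetitions of the grid are then blended to hide the cell borders.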
One very important thing to keep in mind about this technique is how much the result depends on the actual deformation of the geometry, as all the information used comes directly from it. So if a system like this is already in place when the facial shapes are being sculpted, it can be of enormous help to always have a reference of how the tissue is deforming and where it is compressing. That is useful both for the accuracy of the generated wrinkles and for easier iteration on the shapes themselves, as there is a direct reference point without having to eyeball the shift of skin tissue.
Some notes about potential adjustments to this technique:
Overall I'm really quite happy that we were able to achieve what we set out to in terms of facial performance. Though the wrinkles only contributed to a small extent, I am also really happy about the tech that came out of this; hopefully it can be helpful for us in the future, as well as for others.