To achieve a higher level of realism in Project Heist, the Blender Studio team needed to find a workflow for layered sculpting in Blender. Here you'll find the process and key insights, as well as what this means for future Blender development. More detailed information on the workflow will be shared in the Realistic Human Research documentation.
For the creation of Einar, the main character of the new Open Movie Project, we set ourselves a high goal: Hyper Realism! This was a big challenge since we've never done a movie in this style before and our pipeline was not specifically built for it.
Beyond that, we also knew that the features available in Blender would need to evolve to fit this goal. While not every feature could be improved, a lot of attention was given to EEVEE, and massive improvements were brought to the hair system. Unfortunately, other areas could not get as much attention, so here's what the art team at the studio cooked up as a temporary workaround.
We started out by seeking advice from artists with experience achieving realism in Blender. Sculpting feedback and advice from Daniel Bystedt, resources on realism for static portraits from Kent Trammell, and a rigging & texturing workflow for high-detail results from Chris Jones were a big help.
Originally, we were aiming for a pure wrinkle map approach for facial details. But we realised we needed more control over the facial deformations than just shape keys and ABC displacement maps, especially because of the advanced age of our character. So we invested more into our own workflow.
It turns out that layered sculpting was the way forward. It allows us to sculpt the skin details separately from the broader shapes, which makes it easier to adjust the sculpt later on. We could also sculpt and blend detailed shapes based on the current expression or pose.
A good standard to base the facial sculpting on is the Facial Action Coding System, or "FACS" for short. It's an extensive list of "Action Units" that make up the range of movements the muscles in our face can perform. While we could've sculpted all individual Action Units and constructed a shape-key-based rig, we decided otherwise.
We still wanted to leverage the fully bone-based CloudRig setup that all of our characters are now built on. Every sculpted shape was instead used as an additional corrective shape key and a displacement map on the facial shapes that needed them. All eye and head movements were still achieved with bones. Bones also allow more non-linear movements, like sliding skin in arcs and along solid bone structures.
For the FACS shapes we did want to sculpt, we needed to optimise for EEVEE rendering. So we ended up grouping the necessary Action Units into 6 Facial Shape maps to save on texture memory. This didn't include all possible Action Units, just the ones we couldn't accomplish with bones alone. We also gave ourselves some freedom to mix the Action Units to fit the shapes we wanted to add to the rig.
The first experiments with sculpting in layers didn't work that well. We tried to do this with just the factory default features inside Blender, by sculpting on different Multires modifiers with the same base mesh, baking the details on each and applying them to each other via displacement maps. This gave something resembling sculpting layers, but it had some very big issues.
Still, with this rather limited workflow we could sculpt a couple of detailed wrinkle maps on top of the base mesh. However, we wanted to sculpt individual shapes based on the FACS system, so this workflow was not enough.
At first we also baked displacement maps directly from the Multires modifier, which is currently still inaccurate and unstable. This often did not give usable results. We had to find a different workflow.
How do we sculpt in layers, then, if it's not natively supported? Thanks to the very active Blender community, there is an add-on available online that makes this possible. The Sculpt Layers add-on became instrumental in testing the workflow and getting results before an official implementation exists in Blender.
The add-on is, in essence, a way of regularly saving sculpted deformations to attributes and applying those attributes back onto the multires grids. It's a hacky, slow method of achieving layers, but it works.
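Conceptually, that mechanism boils down to storing each layer as a per-vertex delta attribute and summing the deltas back on top of the base coordinates. Here is a minimal NumPy sketch of the idea; the class and method names are hypothetical illustrations, not the add-on's actual API:

```python
import numpy as np

class SculptLayers:
    """Toy model of attribute-based sculpt layers (hypothetical, not the add-on's API)."""

    def __init__(self, base_coords):
        self.base = np.asarray(base_coords, dtype=float)  # (N, 3) rest positions
        self.layers = {}   # layer name -> (N, 3) delta attribute
        self.factors = {}  # layer name -> blend factor

    def record_layer(self, name, sculpted_coords):
        # Save the sculpted deformation as a per-vertex delta attribute,
        # relative to the result of all layers recorded so far.
        current = self.evaluate()
        self.layers[name] = np.asarray(sculpted_coords, dtype=float) - current
        self.factors[name] = 1.0

    def evaluate(self):
        # Apply every stored delta attribute back onto the base mesh.
        coords = self.base.copy()
        for name, delta in self.layers.items():
            coords += self.factors[name] * delta
        return coords

# Example: one vertex sculpted 0.1 upwards, then the layer dialled down to 50%.
mesh = SculptLayers([[0.0, 0.0, 0.0]])
mesh.record_layer("wrinkles", [[0.0, 0.0, 0.1]])
mesh.factors["wrinkles"] = 0.5
print(mesh.evaluate())  # the recorded delta is re-applied at half strength
```

The real add-on has to round-trip these deltas through mesh attributes and the multires grids via Python, which is where the slowness comes from.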
There was still a lot of room for human error, though. The first big mistake was that we rushed the retopology on which all the sculpting layers were stored and baked. Updating the topology meant re-projecting every single sculpt layer onto a new multires object, which led to the labour-intensive and time-consuming process of cleaning up shrinkwrap projections and multires artefacts. The lesson was clear: the retopology must be locked before starting on the final layers.
Sculpting and eventually exporting/baking the sculpt layers was straightforward, but adjusting the sculpt layers afterwards was not. Any sculpt layer would be used as a base-level shape key for the rig and/or a high-detail height map for displacement & shading. There are detailed notes already available, but they will be reworked soon to be more fleshed out and readable. There were many situations where we afterwards made corrective shape keys or other shape changes to the head and expressions. These then needed to go back into the sculpt layers. This process was possible, but complex, with much room for mistakes. Sometimes this could explosively or incrementally corrupt the layers. The workflow was an exercise in constant caution.
To achieve more accurate displacement map baking, we had to work out a different method. The alternative to multires baking is baking from a source object to a target object. This is currently not natively supported for displacement, but we figured out a workaround: by using geometry nodes to compare the surface distance of two different meshes and storing that difference in an attribute, we were able to bake this attribute to a texture.
This worked well but had some drawbacks:
The baked result was always flat-shaded, so the only way to bake smoother maps was to subdivide the source and target objects further.
This put a huge memory requirement (90+ GB in our case) on the baking process, so it's not something anyone can just do on their personal computer.
Just in case you have a supercomputer, we shared the geometry nodes baking setup here:
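For a rough idea of what that node setup computes, here is a NumPy sketch under the simplifying assumption that source and target share topology (the real setup samples the nearest surface point instead): the per-vertex displacement is the offset between the two meshes projected onto the base normal, stored as a scalar attribute that can then be baked to a texture. The function name is hypothetical.

```python
import numpy as np

def displacement_attribute(base_coords, base_normals, sculpt_coords):
    """Signed per-vertex displacement of the sculpted mesh relative to the base,
    measured along the base normals (matching topology assumed)."""
    base = np.asarray(base_coords, dtype=float)
    normals = np.asarray(base_normals, dtype=float)
    sculpt = np.asarray(sculpt_coords, dtype=float)
    offsets = sculpt - base
    # Project each offset onto its corresponding normal -> scalar height value,
    # i.e. the value you would store in an attribute and bake to a texture.
    return np.einsum('ij,ij->i', offsets, normals)

# A flat base facing +Z; the sculpt pushes one vertex up and one down.
base = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
normals = [[0, 0, 1]] * 3
sculpt = [[0, 0, 0.2], [1, 0, -0.1], [0, 1, 0]]
print(displacement_attribute(base, normals, sculpt))  # [ 0.2 -0.1  0. ]
```

Because the attribute is stored per vertex and interpolated flat across faces, smoother maps require denser meshes, which is exactly where the memory requirement mentioned above comes from.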
Eventually we discovered that sculpt layers offered a unique opportunity, if used properly: Layered sculpting in tangent space! Here is why this is exciting.
Shape keys and the sculpt layers add-on by themselves work in object space, meaning each vertex is moved along 3 axes relative to the object's origin. Moving points around works great that way, but rotating entire areas is impossible. Multires subdivisions, however, are stored in tangent space, meaning each vertex is moved along 3 axes based on the direction of the base-level surface. So what if these features are combined?
Let's give an example of how this can be used. For context, we were sculpting clothing deformations with sculpt layers, which is faster and cheaper than detailed cloth simulation.
With this method we could, for example, sculpt all folds and details on the jacket in Multires. Then we add a shape key to rotate the arm (stored on the base mesh in object space). In Sculpt Mode we then add a sculpt layer and make the changes on the Multires modifier (stored in tangent space). When toggling the shape key, the base mesh vertices move in object space, but the sculpted subdivisions follow the direction of the base-level faces in tangent space.
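The difference can be shown numerically with a toy surface frame (this is an illustration of the principle, not Blender's actual multires storage): an object-space delta always points the same way, while a tangent-space delta is re-expressed through the current tangent/bitangent/normal frame and so follows the surface when it rotates.

```python
import numpy as np

# Tangent, bitangent and normal of one base-level face, as matrix columns.
# At rest this toy frame is axis-aligned.
tbn = np.eye(3)

# A sculpted detail stored in tangent space: 0.1 along the surface normal.
detail_tangent = np.array([0.0, 0.0, 0.1])

def apply_detail(base_point, frame, detail):
    # Convert the tangent-space delta to object space via the current frame.
    return base_point + frame @ detail

# Shape key off: the detail sticks out along +Z.
p0 = apply_detail(np.zeros(3), tbn, detail_tangent)

# Shape key on: rotate the base face 90 degrees about Y;
# the TBN frame rotates along with the base-level surface.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
rot_y = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
p1 = apply_detail(np.zeros(3), rot_y @ tbn, detail_tangent)

print(np.round(p0, 3))  # detail along +Z
print(np.round(p1, 3))  # same detail now along +X: it followed the surface
```

An object-space shape key would instead keep adding the fixed vector `(0, 0, 0.1)` regardless of the rotation, which is why rotating whole areas breaks down with shape keys alone.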
We ended up barely using this tangent-space workflow because of time limitations and technical issues. Unfortunately, because this functionality depends on Python scripting and hacked-together add-ons, even in the best-case scenario the workflow would perform incredibly slowly. Having shape keys and sculpt layers as separate but co-dependent deformations is also tedious to manage. This would be a big use case to address in the development of official layered sculpting in Blender.
While we went into the challenge of creating a realistic human character with a certain awareness of Blender's limitations, it's always surprising how many issues come up, but also how many unexpected solutions are available.
A very important outcome of this process is that a lot of reference material is now available to test and benchmark future tools and workflows, and even more will be shared once the project is wrapped up.
We will also go into way more detail on the workflow and the exact insights that were worked out in the Realistic Human Research Documentation. Since the workflow is very specific (including some hardware requirements), it's not yet time to turn it into an actual tutorial.
The focus will have to first be on sharing what we know with the development team and the community at large.
I wanted to add as well that I hope at some point you can finish the last step the workflow needs to make it a true competitor (and beyond) to other software: supporting vertex colours in the Multiresolution modifier. This information is sculptable now but is not captured in the modifier, only in the base sculpt, making it almost useless. If it were captured at the highest resolution and kept all the way throughout, it would allow a true UV-less workflow for quick prototyping, after which all the vertex paints could be baked into textures. Blender is uniquely positioned to take advantage of its vertex colour layer system: artists could paint all the layers (albedo, subsurface, roughness, etc.) in vertex paint and use them UV-less in the material editor to full effect, as this is already supported by the Vertex Color node in the shaders.
This workflow would close the loop in the production pipeline, solve the only issue I've always had with the multiresolution workflow, and make it exponentially more useful: people would not only not need UVs to take advantage of this system, they also wouldn't have to bake all the textures until the end (and could then improve them from the vertex colour info), which could make prototyping a breeze.
@Nacho de Andrés Thanks for the feedback! I agree that color attribute support is a big thing for multires. We need to see when this can be supported, because ideally the multires feature should be redesigned to support much more than just colors. Layers, VDB and texture painting for example come to mind :)
Thanks for the article and the detailed explanations of how things work. I love the new approach and that the Sculpt Layers add-on is used to tackle the challenges encountered. I hope there is a way to introduce similar functionality into Blender itself (minimising the damage to the add-on's creator). I am pleased to see how much progress has been made on this workflow.