The Facial Action Coding System (FACS) is often used within animation pipelines for visual effects and games production. This process involves capturing a range of facial expressions using photogrammetry techniques, which can then be used to reproduce realistic facial deformation for digital characters. The most fundamental part of this pipeline is the blend shape deformer. Blend shapes are a tool used in 3D software to interpolate between two shapes that share the same vertex count and order. Multiple blend shapes can be layered, so a single mesh can be deformed towards a number of stored target shapes at once. For this project I will be using blend shapes to deform facial expressions. When multiple expressions are combined you can begin to build a facial rig, which acts as a puppet for creating animation. In terms of realism, blend shapes work really well if the expressions are modelled using good anatomical reference. The alternative is to use FACS scans, which require a different approach, but can save a lot of manual work if the data is optimised correctly.
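The maths behind this is simple: each target shape stores per-vertex offsets from the neutral, and each offset is scaled by a weight and summed. As a rough illustration in plain Python (not any particular package's API; the mesh and shape names are toy values of my own):

```python
# Minimal sketch of delta-based blend shape evaluation.
# Every shape shares the same vertex count and order, so deltas
# can be applied per vertex: result = neutral + sum(w * (target - neutral)).

def blend_shapes(neutral, targets, weights):
    """Return deformed vertex positions for the given shape weights."""
    result = [list(v) for v in neutral]
    for target, w in zip(targets, weights):
        for i, (n, t) in enumerate(zip(neutral, target)):
            for axis in range(3):
                result[i][axis] += w * (t[axis] - n[axis])
    return result

# Two toy "expressions" on a two-vertex mesh:
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile   = [(0.0, 1.0, 0.0), (1.0, 0.0, 0.0)]   # moves vertex 0 up
frown   = [(0.0, -1.0, 0.0), (1.0, 0.0, 0.0)]  # moves vertex 0 down

posed = blend_shapes(neutral, [smile, frown], [0.5, 0.0])
print(posed)  # → [[0.0, 0.5, 0.0], [1.0, 0.0, 0.0]]
```

With the smile shape at half weight, vertex 0 sits halfway towards its smile position, which is exactly the in-between interpolation a blend shape deformer produces.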
Blend shape rigging can produce some great results, but if you want to push the realism even further you can also introduce blended deformation at the shader level. With texture blending, you can add variations of high-resolution wrinkle detail using either normal maps or displacement maps. This technique can enhance the appearance of skin stretching or compressing, and enables you to create surface deformations that would otherwise be impossible to achieve with blend shapes alone. The theory behind this technique is fairly straightforward, but once combined with a rig consisting of large sets of shapes it can become quite complex, and will require an organised workflow to link everything together. Throughout this post I will be summarising my workflow for using FACS data, and breaking down my research into the texture blending process for anyone who might be interested.
Creating the Neutral Expression
The first stage of blend shape rigging is modelling all the facial shapes. The first and most important shape is the ‘neutral’ expression, which will act as a baseline for all the other expressions and tie everything together into one facial rig.
I like to work up the neutral shape to the highest level I can, taking it as far through the look development stages as possible. The reason for this is that once the rigging process has begun, it is better to avoid making any major changes to the neutral shape. Once I am happy with how the model and textures look when rendered, I can then move onto the other expressions. For this project I used scan data to create all my facial shapes, but the principles covered in this post can be applied to hand-sculpted characters too.
Most of my freelance work over the past six years has been related to photogrammetry pipelines, whether capturing data or handling the post-processing and optimisation stages. When optimising facial scans, I have found there are various ways you can approach this. The workflow I prefer requires two pieces of software, Wrap3 and ZBrush.
See below a time-lapse of this process.
The steps I use for cleaning up a facial scan:
- Remove unwanted geometry or stray parts from the scan (prepare for wrapping)
- Wrap base topology in Wrap3
- Import into ZBrush and cleanup topology flow
- Subdivide base mesh and project scanned details; starting with a low subdivision level and gradually working up
- Clean any noise from projected scan and remove hair, skull cap, etc
- Apply surface details using a range of techniques; alphas, hand sculpting, high pass from diffuse map, etc
As mentioned above, there are a few options I use for tertiary detailing: hand sculpting, or applying a high-pass from the scanned diffuse texture. Another option is to use a multi-channel Texturing XYZ pack. For this project I tried a combination of all these techniques, but Texturing XYZ was the fastest approach to achieving realistic-looking skin.
Facial Blend Shapes
The second stage of the pipeline is to create a set of blend shapes that isolate individual muscle groups across the face. When these shapes are combined together they should provide a full range of motion in order to achieve realistic facial animation. The human face has a complex anatomy made up of 43 different muscles, which makes it extremely hard to recreate believably. Rather than representing every individual muscle on your facial rig, you can use blend shapes to represent them in groups which create different shape variations depending on whether the muscles are expanding or contracting.
The number of blend shapes required on a rig will be determined by the specified rigging pipeline for the character you are creating. For the rig I have built, the aim is for it to be integrated with Apple ARKit for real-time facial tracking. For this I needed at least 51 facial shapes. When using scan data, I have found that you can extract a number of different facial shapes from a single scanned expression, so the number of facial scans can be much lower.
For this project I used a set of FACS scans captured by Ten24, which can be found on the 3D Scan Store. The data was great to work with and contained all the detail needed for this project. There were a few shapes that I had to create myself, but as mentioned earlier, every data set is different and ideally you would plan the list of FACS before capturing them.
I used 14 scans in total from this set. The FACS marked green on the contact sheet (right) were to be processed into blend shapes, and blue for extracting wrinkle map details. Some wrinkle map expressions were combined together to minimise the number of maps used in the skin shader. The intention for this pipeline was to replicate a rig setup that could be used within a game engine, so optimisation was an important factor.
Using the cleaned neutral scan, I began wrapping my selection of FACS expressions. To do this you can use a piece of software called Wrap3 by R3DS. This software has a set of tools that are perfect for working with facial scans. It has features to help you align all your FACS, as well as with the wrapping process itself. With ‘Optical Flow’ and ‘Blend Wrapping’ you can quickly create a set of models to use as a starting point for your blend shapes. Optical Flow works by aligning texture values from one piece of geometry to another.
Here is an example of the brow furrow expression (image right). You can see that the texture on the original scan and wrapped base mesh are matching. There are some differences in shadow occlusion and blood flow variation, but otherwise the skin appears to sit in the same position. Some expressions can be so extreme that the optical flow is unable to find similarities between the two texture maps. In this case I would place control points to help the optical flow match them better.
To learn more about this workflow, visit the R3DS website where they have a set of tutorials that explain the process in more detail.
Once I have wrapped all the scans, I then begin isolating the areas needed to create the individual shapes that make up the facial rig. I find that it helps to test these shapes by adding them to the rig one-by-one as they are being created, to help reduce troubleshooting later on.
Creating Wrinkle Maps
I have found that the key to wrinkle blending is to split up the details into secondary and tertiary forms. I then composite everything back together again when building the skin shader. For the workflow I am covering on this character, the wrinkle maps will only be blended across the secondary forms, therefore no tertiary detailing was necessary on the expression shapes.
The first step was to bake a ‘base displacement’ for the neutral expression – this also only required secondary forms, so no surface detailing. You can use either normal or displacement maps for this, but I was working in Maya with offline renders, so using displacement maps worked better for me.
This process was applied to all the expression shapes that I wanted to use for extracting wrinkle maps.
See gif below showing extracted wrinkle maps (the exposure was increased on these maps for presentation purposes).
Once I had all my wrinkle maps baked from ZBrush, the next step was to composite them together in the Maya skin shader. Before I could do this I needed to generate a set of masks to control where these wrinkle maps would appear on the face. There are custom tools that can be used for generating these maps automatically, but for this project I painted them by hand in ZBrush.
Once all the modelling and texturing had been completed, the next stage was to compile everything within Maya into a single face rig. I found that it was best to start building the rig at an early stage as some shapes required a lot of testing before they started to work nicely in combination with each other.
For this project I spent a lot of time working in the Maya Node Editor, as this is where you can link together all the texture blending and corrective shapes. I have included an example of a basic node graph; linking three wrinkle maps together and compiling them into the displacement channel of a skin shader.
The final node graph I used was more complex than the example above, but I thought it would be better to show a simplified version for demonstration purposes. Once you understand how to link up one wrinkle map, the rest are fairly straightforward and you can repeat the same process each time.
The three main steps I use for linking up wrinkle maps:
- Expression Maps – import the expression map and subtract from it the base neutral displacement; this is to avoid creating a multiplied effect when all layers are composited together (in step 3)
- Masks – apply the masks by linking the blend shape weight to drive its visibility, so that the mask acts as a switch to turn on/off the isolated wrinkles
- Compile Wrinkle Maps – add the blended wrinkle maps together along with the base neutral displacement and tertiary details
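The three steps above can be sketched in code. This is only an illustrative approximation of the maths the node graph performs on scalar displacement values, not the Maya nodes themselves; the function, names, and toy texel values are my own:

```python
# Hedged sketch of the wrinkle map compositing steps, operating on flat
# lists of displacement values (a real shader evaluates this per texel).
# final = base + tertiary + sum(weight * mask * (expression - base))

def composite_displacement(base, tertiary, expressions, masks, weights):
    """Composite masked expression displacement over the neutral base."""
    out = []
    for i in range(len(base)):
        value = base[i] + tertiary[i]
        for expr, mask, w in zip(expressions, masks, weights):
            # Step 1: subtract the neutral base from the expression map
            # so the base forms are not doubled up when layers are added.
            # Step 2: the blend shape weight drives the mask, acting as a
            # switch that fades the isolated wrinkles in and out.
            value += w * mask[i] * (expr[i] - base[i])
        # Step 3: everything is summed back together with the base + tertiary.
        out.append(value)
    return out

base        = [0.5, 0.5, 0.5, 0.5]
tertiary    = [0.1, 0.0, 0.1, 0.0]
brow_furrow = [0.9, 0.5, 0.5, 0.5]   # wrinkle detail only in the first texel
brow_mask   = [1.0, 1.0, 0.0, 0.0]   # hand-painted mask isolating the brow

# With the furrow shape fully on, only the masked texel gains the wrinkle delta:
print(composite_displacement(base, tertiary, [brow_furrow], [brow_mask], [1.0]))
```

Setting the weight to 0.0 returns just the neutral base plus tertiary detail, which mirrors how the rig behaves when the corresponding blend shape is at rest.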
Colour Map Blending
Colour blending is another technique that can be used to improve the realism of a facial rig. It can help create the illusion of blood flow variation, and the appearance of skin stretching or compressing. To apply colour blending to the skin shader I used a similar approach to the wrinkle setup, using masks to isolate and drive the visibility of the colour maps. Some of the more subtle colour blending can be easily disguised by the blend shape deformation, so it is easier to see its effect in UV space. The video below gives a better demonstration of this.
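Per channel, this kind of colour blending reduces to a linear interpolation, with the blend shape weight scaled by the painted mask driving the blend factor. A small sketch, assuming illustrative names and RGB values of my own (in Maya this would be wired with shading nodes rather than Python):

```python
# Mask-driven colour map blending: lerp each channel from the neutral
# colour map towards the expression colour map.

def blend_colour(neutral_rgb, expression_rgb, mask_value, shape_weight):
    """Interpolate towards the expression colour; the blend shape weight
    scaled by the mask value controls how far to blend (0 = neutral)."""
    t = shape_weight * mask_value
    return tuple(n + t * (e - n) for n, e in zip(neutral_rgb, expression_rgb))

neutral_skin = (0.8, 0.6, 0.5)
flushed_skin = (0.9, 0.4, 0.4)  # blood flow variation for a compressed region

# A fully active shape inside the mask blends all the way to the expression map;
# outside the mask (mask_value=0.0) the neutral colour is untouched.
print(blend_colour(neutral_skin, flushed_skin, mask_value=1.0, shape_weight=1.0))
print(blend_colour(neutral_skin, flushed_skin, mask_value=0.0, shape_weight=1.0))
```

Because the blend factor is the product of weight and mask, subtle in-between expressions automatically produce proportionally subtle colour shifts.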
The aim of this project was to explore a number of different facial expressions and how they interact when deforming between each other. I am pleased with how this character has turned out and have really enjoyed the project. It has been a good opportunity to better understand the FACS process and to explore some new techniques such as texture blending.
See below the final rendered sequence for this project, displaying a full range of motion across all the blend shapes and texture blending used to create the facial rig. The rig is made up of 51 blend shapes, 9 corrective shapes, and 18 blended textures for both the colour and displacement channels in the shader setup. All the shapes were either made from FACS scans or hand sculpted in ZBrush.
It has been really fun to push my understanding of facial modelling and how to create realistic facial animation. The purpose of this project was primarily focused on building a FACS rig, therefore I didn’t spend too much time on the animation and look development stages. If I were to revisit this character, I would like to improve the animated sequence and also push the look dev further by adding some hair to the character.
This process might appear quite challenging at first, but manually building a facial rig involves a lot of repetition, and after enough of it you begin to see a pattern emerging within each step. Even if you only build one rig using this workflow, you can still gain a pretty solid understanding of the processes involved.
To push this character even further, I thought about adding a second wrinkle map system for blending the tertiary details. This would have been nice for some of the more extreme expressions that create a lot of skin stretching, but would only be necessary for improving macro renders of the skin.
The facial rig featured on this post is one I have built for a larger collaborative project with VFX supervisor Marty Waters, where we will be exploring real-time facial capture for virtual production. The next step for this character is to rebuild him inside Unreal Engine. Hopefully once I have learnt this process I can write another blog entry on this project.
I hope this post has been useful to anyone reading. For those interested in seeing more, I have added some work-in-progress shots to the FACS Pipeline highlight on my Instagram page. If you have any questions please feel free to leave a comment below. I would be happy to hear from you.