Created using NVIDIA Audio2Face
I used this rig for my entry to a monthly animation contest called the 11 Second Club, which has you create an animation to a short sound clip. I placed 19th out of 200-plus entries.
The toughest portion of the rig was the facial setup. I wanted a fast, responsive setup that didn't limit the shapes the eyes and mouth could make. After researching the Lego games and The Lego Movie, I decided to use real geometry and mask it with curves.
This appears to be how The Lego Movie handled faces: the resulting geometry was projected into a texture and layered onto the face, as opposed to the Lego games, which used geometry driven by blendshapes.
The major hurdles were detecting multiple objects and conforming them to a new shape based on the input. The top and bottom teeth had to be clipped by the outline of the mouth, and the black back of the mouth had to completely cover the inside of the mouth regardless of its shape. I came up with a simple, fast way to do collision detection at each vertex; the collision had to run in real time without slowing down the rig. I wrote a MEL script to automate the procedure.
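The original rig used a MEL script, which isn't shown here; the sketch below is a plain-Python illustration of the general idea of per-vertex clipping. It treats the mouth outline as a closed 2D polygon: each tooth vertex that falls inside the outline is kept, and any vertex outside is snapped to the nearest point on the boundary. All function names are my own for this example.

```python
def point_in_polygon(p, poly):
    """Ray-casting point-in-polygon test in 2D."""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does the edge cross the horizontal ray from p to the right?
        if (y1 > y) != (y2 > y):
            xint = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xint:
                inside = not inside
    return inside

def closest_point_on_segment(p, a, b):
    """Nearest point to p on segment a-b."""
    px, py = p
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        return a
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len2
    t = max(0.0, min(1.0, t))
    return (ax + t * dx, ay + t * dy)

def clip_vertex(p, outline):
    """Keep p if it lies inside the outline; otherwise snap it to the boundary."""
    if point_in_polygon(p, outline):
        return p
    best, best_d2 = None, float("inf")
    for i in range(len(outline)):
        q = closest_point_on_segment(p, outline[i], outline[(i + 1) % len(outline)])
        d2 = (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best
```

Running this per vertex is cheap enough to stay interactive for the low vertex counts of a Lego-style mouth, which matches the real-time requirement above.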
The rig that performs the collision detection lives off to the side of the main rig, and its data is transferred to the eye and mouth assets through a blendshape. All of the resulting pieces are placed on the face and bent to the correct angle to fit. This could be taken one step further by rendering the pieces flat, then importing the animated sequence and layering it into the face texture.
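The "bent to fit" step can be pictured as wrapping flat geometry around a cylinder, which is roughly the shape of a Lego head. The sketch below is an assumed, simplified version of that mapping (the actual rig does this with Maya deformers): flat X becomes arc length along the curved surface, Y stays vertical, and Z picks up the curvature.

```python
import math

def bend_to_cylinder(points, radius):
    """Wrap flat (x, y) points around a cylinder of the given radius.

    x is treated as arc length along the surface, y as height;
    the returned points are 3D (x, y, z) with z measuring how far
    the surface curves back from the flat plane.
    """
    bent = []
    for x, y in points:
        theta = x / radius  # arc length -> angle around the cylinder
        bent.append((
            radius * math.sin(theta),          # curved x
            y,                                 # height unchanged
            radius * (1.0 - math.cos(theta)),  # depth gained from curvature
        ))
    return bent
```

A point at x = 0 stays put, while points farther from the center sweep back around the head; the same mapping would carry added pieces like beards or mustaches along with the face.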
With this setup, I can quickly edit any piece of the facial rig to fit a new character. I can also add features like beards or mustaches that adapt to the facial movements without building a large number of blendshapes.