Channel: Computer Graphics Daily News

Relicts Shortfilm | Making of

[ #Relicts ]
"Relicts" is a dark and mysterious full-CG short film currently in development. Today Arkadiy Demchenko explains how the team works with facial scanning and rigging to bring a realistic CG character to the screen. Check it out:

We pay a lot of attention to the details and believability of our main character, the Girl, and in this story we want to tell you more about our efforts to bring her facial mechanics to the next level.

Our Girl character was completely hand-sculpted in ZBrush and meticulously re-modelled with animation-friendly topology in Maya. We then spent quite a lot of time building her facial rig from manually sculpted FACS shapes, but ran into a lot of issues in the process - uneven skin movements, unnatural-looking expressions, and a considerable amount of time needed to get even that far! Making a realistic facial rig from hand-sculpted shapes proved to be a very challenging and time-consuming task, so when the R3DS guys later approached us with a proposal to try the 3D scanner and software they develop for this very task, we embraced the opportunity right away.

The general idea is this: we scan a real actress in a default relaxed facial pose and, let's say, with a kiss, then use their software to track the skin movement between these two scans and, finally, transfer that skin movement to our Girl's topology. As a result, whatever the actress's face did when making a kiss, we get the same behavior on our Girl's face. On top of that, we also get a color map for the kiss pose with all the wrinkles and blood-flow changes - when the kiss shape is activated on the character, we can add realistic whitening and reddening of the skin.
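The core of the transfer idea above is simple per-vertex delta math: once both meshes share the same topology, the skin movement of a pose is just the offset of each vertex from the relaxed scan, and that offset can be replayed on the character. A minimal numpy sketch (the coordinates are made up for illustration; the real meshes have millions of vertices):

```python
import numpy as np

# Actress scans, re-topologized to a shared layout, so vertex i on one
# mesh corresponds to vertex i on the other.
actress_neutral = np.array([[0.0, 0.0, 0.0],
                            [1.0, 0.0, 0.0],
                            [0.0, 1.0, 0.0]])
actress_kiss    = np.array([[0.1, 0.0, 0.0],
                            [0.9, 0.1, 0.0],
                            [0.0, 1.0, 0.2]])

# The character has different proportions but identical topology.
girl_neutral = np.array([[0.0, 0.0, 0.0],
                         [1.2, 0.0, 0.0],
                         [0.0, 1.1, 0.0]])

# Skin movement of the pose, relative to the relaxed face.
delta = actress_kiss - actress_neutral

# Replaying the same movement on the character gives her kiss pose.
girl_kiss = girl_neutral + delta
print(girl_kiss)
```

This is the same math a blendshape target encodes, which is why the wrapped poses can plug straight into a facial rig later on.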

It sounded like magic, so, to really push the effort (and the R3DS guys' patience), we made a list of over 100 different facial poses and brought not one but two actresses to the shoot (both are called Anna, which still causes some confusion in our discussions).

The R3DS scanner is actually a cage with almost 50 DSLR cameras focused on the head of the person sitting inside. There is also a set of flash lights attached to the cage and a white sleeve/curtain around the whole thing. You push a button and all the lights flash at the same time into the white curtain, which illuminates the person with soft, even reflected lighting, reducing highlights and shadows... and all the cameras take a photo at the very same moment.

Repeat that for each facial pose and each actress and you've got yourself a long, flashy entertainment! Our shooting session took almost the whole day and resulted in thousands of 5K photos.


The next step is to reconstruct each pose from its set of photos as a 3D mesh - this process is called photogrammetry, and we used RealityCapture for it. First of all, you need to align the images correctly in space, the way they were taken by the cameras in the studio. This happens automatically for the most part, but we had to define some control points to connect several pieces into a single model. Since the cage didn't move, you can reuse the camera position data for the other facial poses and save a lot of time.


High-quality reconstruction of each mesh took around 30 minutes and produced a very dense result - 20 million triangles. It's a good idea to use RealityCapture's internal tools to remove the garbage polygons around the model and reduce the polycount while keeping the shape intact - we stayed within the 3-5 million range for the final mesh:

Now you can project the color data from the photos onto the mesh and bake it into automatically generated UVs - we used 8K resolution for the color map:

Scans might need some manual clean-up work in areas like eyelashes or sticking-out hair, which are too fine and messy to produce a nice mesh:

Now we have a manageable mesh and a color map for each facial pose, and the question is: how do we apply this data to our character? That's what R3DS Wrap is made for. It can do a lot of useful things, but here is the main workflow.

First of all, we project our character's topology onto the scan of the default relaxed pose, using control points to match the main facial landmarks as closely as possible - that's our base mesh, but now looking like the actress:

Then we use the OpticalFlowWrapping feature to deform this base mesh to match each scanned pose. The trick is that Wrap uses the color map to track the skin movement, so it doesn't just project the base mesh vertices onto the closest points on the scan, but finds their proper positions on the changed face. As a result, our vertices move as if glued to the skin, just like the markers we painted on the actress's face, and we get a topological mesh that can act as a blendshape target and produce natural deformation.
After that we can seamlessly transfer the color data from the fragmented UVs of the scan into the UV space of our topological mesh:
Wrap finds features to track automatically for subtle changes, but needs help with stronger ones, especially in uniform areas without enough variation, or in newly revealed areas (like lips rolling out). In practice, we ended up putting a control point on every marker and feature we could see and adding some more by educated guesswork - that's a lot of points and probably overkill, but it works right away most of the time without too much trial and error (which, actually, can take more time than placing all these points):
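To give a feel for why dense control points help in featureless areas, here is a toy numpy sketch of control-point-guided deformation: each vertex moves by a distance-weighted blend of the control-point offsets, so nearby points dominate and uniform regions between points get a smooth, predictable motion. This is only an illustrative stand-in, not Wrap's actual solver - the function name and the inverse-distance weighting scheme are our own assumptions for the example.

```python
import numpy as np

def spread_control_deltas(vertices, cp_src, cp_dst, power=2.0, eps=1e-8):
    """Inverse-distance interpolation of control-point displacements.

    vertices: (n, 3) mesh vertices to deform
    cp_src:   (k, 3) control points on the source mesh
    cp_dst:   (k, 3) where those control points should land
    """
    deltas = cp_dst - cp_src                       # (k, 3) offsets
    # (n, k) distances from every vertex to every control point
    d = np.linalg.norm(vertices[:, None, :] - cp_src[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)                   # closer -> heavier weight
    w /= w.sum(axis=1, keepdims=True)              # normalize per vertex
    return vertices + w @ deltas

# Three vertices on a line, guided by two control points moving apart
# vertically: the ends follow their markers, the midpoint stays put.
verts  = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [1.0, 0.0, 0.0]])
cp_src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
cp_dst = np.array([[0.0, 0.1, 0.0], [1.0, -0.1, 0.0]])
print(spread_control_deltas(verts, cp_src, cp_dst))
```

The more control points you place, the less the in-between regions depend on guesswork by the solver - which matches our "point on every marker" experience.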

Another useful trick we came up with is to wrap the base mesh in two steps. The first time, we wrap the base mesh to a scan using control points and high smoothness settings - that produces a very nice and even deformation, but not an exact one. Then we transfer the texture and wrap the result again without any control points and with lower smoothness - that makes a very close match. Transfer the texture again and the pose is ready for export. You can reuse the node network for other poses and calculate the whole graph in one go. Here is approximately how it looks:
Now we have a set of the actress's facial poses applied to the character's topology, and it's a matter of simple shape mixing in Maya (it could be done in Wrap too) to transfer these deformations back to our Girl.
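The "simple shape mixing" step is standard additive blendshape math: the final face is the neutral mesh plus a weighted sum of per-pose deltas, which is what Maya's blendShape node computes for additive targets. A minimal sketch with made-up coordinates:

```python
import numpy as np

# Neutral mesh and two wrapped pose targets sharing its topology.
neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
targets = {
    "kiss":  np.array([[0.1, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.2]]),
    "smile": np.array([[0.0, 0.1, 0.0], [1.1, 0.2, 0.0], [0.0, 1.0, 0.0]]),
}

def mix(neutral, targets, weights):
    """Blend pose deltas: neutral + sum_i w_i * (target_i - neutral)."""
    result = neutral.copy()
    for name, w in weights.items():
        result += w * (targets[name] - neutral)
    return result

# Half a kiss plus a faint smile.
print(mix(neutral, targets, {"kiss": 0.5, "smile": 0.2}))
```

A weight of 1.0 on a single target reproduces that scanned pose exactly; fractional weights and combinations give everything in between, which is what makes these shapes rig-ready.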
Of course, the stronger the difference between the character and the actress, the more unnatural the result might look. We were lucky to get a lot of nice poses from one of the actresses.

The next huge step is to make a working facial rig out of the shapes we've produced - extracting and splitting areas, mixing and blending, building controls and dependencies... But that's a whole different story for next time. Meanwhile, check out our short clip of this whole process:


If you're interested in our project, don't want to miss new tutorials and breakdowns, and want to support our endeavor, please visit our website and subscribe to / share our social channels:
Website: relicts.com
Facebook: facebook.com/relictsmovie
Vimeo: vimeo.com/relicts
VK: vk.com/relicts_movie
Instagram: @relicts_movie
Twitter: @relicts_movie


