
Animation Development: Lip Sync

Selecting a reference video:

I started off with a clip from Netflix's The Witcher: https://youtu.be/uy4XNZX6gJw

I liked the sense of balance between the characters, and as a fan of The Witcher I was already familiar with who they were and how they acted and talked. After starting the scene I needed to download the clip and convert the file format, as Maya didn't accept MP4.
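For anyone repeating this step, a minimal sketch of one way to do that conversion with ffmpeg (the file names are placeholders and ffmpeg is assumed to be installed, this isn't necessarily the exact tool or settings I used):

```python
import subprocess

# Placeholder paths; assumes ffmpeg is available on the system PATH.
source_clip = "witcher_reference.mp4"
converted_clip = "witcher_reference.avi"

# Re-encode the MP4 into an AVI that Maya's image plane/viewport reads more reliably.
# "-r 24" locks the frame rate so the reference lines up with the Maya timeline.
subprocess.run(["ffmpeg", "-i", source_clip, "-r", "24", converted_clip], check=True)
```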

 

The section I used was Geralt (Henry Cavill) responding to Jaskier (Joey Batey); the point of the scene is that Geralt cannot escape his destiny and needs to find it before it finds him. My hope of using this clip quickly dissipated, however, as the constant camera cuts were very distracting, and without animating both characters and setting up matching cameras in Maya, a lot of the lip movements would have ended up interpreted rather than directly referenced.

 

I decided to look at multiple other sources for a clip that would work better and wouldn't include any extreme movements, so the facial animation could be read properly. I searched through YouTube and came across these three:

Viva La Dirt League: "The Importance of Looting – Red Dead Redemption" – Good lines and actions, but the camera cuts too much

Good Will Hunting: "Your Move, Chief" scene – Good scene, but the dialogue goes over the time limit unless it is cut short

Ghost Rider: Jail Fight scene – The editing means much of this would have to be interpreted and highly exaggerated

I decided on the Good Will Hunting scene as it was visually the clearest, didn't have overly exaggerated reactions, and meant I could add some personal touches to the animation and align it with whatever character I wanted to go with. Alec shared a series of free rigs, including the characters below:

https://agora.community/content/aang

While I originally began animating with the Aang rig when using the Witcher scene, I decided to check out the Michiru and Link rigs, both of which had more lip controllers. I went with the Michiru rig because I liked the 2D style of its shaders. I began by posing my character and setting up the scene with rough blocks so I could focus on the movements within the bounds of the environment, such as making a quick bench and having Michiru sit on it. I immediately moved a section of the hair, as it covered part of the face and I felt it might stop my sequence reading clearly by hiding the eyebrows and upper face.
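A minimal sketch of that kind of blocking setup in Maya (the dimensions, bench name and hair control name are placeholders, not the real Michiru rig attributes or final scene values):

```python
import maya.cmds as cmds

# Rough proxy bench so the character has something to sit on while blocking.
# Dimensions and names are placeholders, not the final scene values.
bench = cmds.polyCube(name="bench_proxy", width=120, height=45, depth=40)[0]
cmds.move(0, 22.5, 0, bench)

# Hypothetical hair control -- the real Michiru rig uses its own naming.
# Rotating it back keeps the eyebrows and upper face readable from the camera.
hair_ctrl = "Michiru_hair_front_ctrl"
if cmds.objExists(hair_ctrl):
    cmds.setAttr(hair_ctrl + ".rotateX", -15)
```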

Starting the lip sync animation, I researched animation styles and workflows, with these videos being some of the most useful links:

These links cover the shapes the mouth makes, with video 3 (Improve Your Lip Sync Animation In 3 Easy Steps) covering the idea Alec mentioned: people blend letters and sounds when talking, and this should be reflected in the animation or it will look uncanny. Sir Wade Neistadt says, "One of the things I did wrong, and I see others do with lip sync, is they try to over-enunciate everything that you say." I made sure to keep this in mind while blocking out my scene by focusing on the vowels or consonants that carry the most emphasis in each word, even if that means skipping the mouth shapes of some letters, because it will look more real – for example in the word "thought", where the "gh" is essentially silent.
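A small illustrative sketch of that idea, keying only the emphasised sounds rather than every letter (the control name, frames and values here are hypothetical, not the Michiru rig's real attributes):

```python
import maya.cmds as cmds

# Hypothetical jaw control purely for illustration; the real Michiru rig
# exposes its own lip and jaw controllers.
jaw_ctrl = "Michiru_jaw_ctrl"

# Only the emphasised sounds in the line get a key -- the in-between letters
# are left to blend, which reads more naturally than keying every phoneme.
emphasised_frames = {
    24: 0.8,   # stressed vowel, jaw opens wide
    31: 0.2,   # soft consonant, barely opens
    40: 0.6,   # next stressed vowel
}

for frame, jaw_open in emphasised_frames.items():
    cmds.setKeyframe(jaw_ctrl, attribute="translateY", time=frame, value=jaw_open)
```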

 

First Pass Rough Block out: Rough_Pass

Link:

Studio Reference: I didn't focus on a particular style of animation but wanted to make realism my main focus, so I looked at animation from EnterYes! and Sony Pictures. As it is 3D I didn't want to spend too long on the background and wanted to get the movements as clear and readable as possible first, but I did look into the types of work both studios make, because of Sony's attention to animation style in Spider-Verse and EnterYes!'s work on their game pieces. I wanted to attempt something different by portraying realism with a 2D filter over my character, to really let the facial movements and readability show through.
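A minimal sketch of that kind of 2D treatment, assuming a version of Arnold's MtoA plugin that ships the aiToon shader is loaded; the mesh name is a placeholder:

```python
import maya.cmds as cmds

# Assumes the MtoA plugin is available so the aiToon node type exists.
cmds.loadPlugin("mtoa", quiet=True)

# Build a toon shader and a shading group, then assign it to the character mesh.
toon = cmds.shadingNode("aiToon", asShader=True, name="michiru_toon")
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name="michiru_toonSG")
cmds.connectAttr(toon + ".outColor", sg + ".surfaceShader", force=True)

# Placeholder mesh name -- whatever the rig's body geometry is actually called.
character_mesh = "Michiru_body_geo"
if cmds.objExists(character_mesh):
    cmds.sets(character_mesh, edit=True, forceElement=sg)
```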

I focused on some of the subtle movements and started off with a blocked, stepped layout before moving on to smoothed tangents, so I could make sure I had the right mouth shapes before focusing on the smaller details. After making progress I moved on to the second pass, which was uploaded to SyncSketch. I reached out to Kevin Ridgeway, who has worked for Sony Pictures, and asked for feedback (he gave his feedback over Zoom, so I cannot post it), and I also asked a mutual contact at EnterYes! for feedback, who in turn asked their animator (they are a junior animator, but feedback is feedback).
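A small sketch of that stepped-to-smoothed switch in Maya (the controller names are placeholders for whichever controls actually carry the mouth keys):

```python
import maya.cmds as cmds

# Placeholder control names -- the Michiru rig has its own naming.
mouth_ctrls = ["Michiru_jaw_ctrl", "Michiru_upperLip_ctrl", "Michiru_lowerLip_ctrl"]
mouth_ctrls = [c for c in mouth_ctrls if cmds.objExists(c)]

# Blocking pass: stepped out-tangents hold each mouth shape until the next key,
# which makes it easy to judge the shapes on their own.
cmds.keyTangent(mouth_ctrls, edit=True, outTangentType="step")

# Polish pass: switch every key to spline tangents so the shapes blend smoothly.
cmds.keyTangent(mouth_ctrls, edit=True, inTangentType="spline", outTangentType="spline")
```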

Here is some of the feedback and the reference link: https://syncsketch.com/sketch/MmU0NTJjMGU1/

Link to next version:

I tried to implement some of these changes in my next iteration. Along with the individual feedback I received from Alec, I made a number of changes to the motion of the main character in the scene, such as subtle arm movement so the body wasn't completely static throughout the dialogue and felt more real. I also tweaked the head movements so there weren't as many keyframes, because the head felt very stiff, almost robotic (this issue was also mentioned in the feedback from Kevin and Kyle). Finally, I added an HDRI in the background because it looked nice and added to the scene layout.
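A minimal sketch of wiring an HDRI into an Arnold skydome light (the .hdr path is a placeholder and the MtoA plugin is assumed to be loaded):

```python
import maya.cmds as cmds

cmds.loadPlugin("mtoa", quiet=True)

# Skydome light driven by an HDRI file; the file path is a placeholder.
sky = cmds.shadingNode("aiSkyDomeLight", asLight=True, name="hdri_skydome")
sky_shape = sky if cmds.nodeType(sky) == "aiSkyDomeLight" else cmds.listRelatives(sky, shapes=True)[0]

hdri_file = cmds.shadingNode("file", asTexture=True, name="hdri_texture")
cmds.setAttr(hdri_file + ".fileTextureName", "studio_backdrop.hdr", type="string")

# The file texture feeds the dome's colour, lighting the scene and showing
# up as the background behind the character.
cmds.connectAttr(hdri_file + ".outColor", sky_shape + ".color", force=True)
```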

Link to final version:

Reflection:

I was excited to attempt this particular piece of coursework because, although I had done many parts of the animation pipeline before, I had never attempted lip sync animation, so it was definitely out of my comfort zone. Learning a new rig was also really valuable, as I know that as an animator I will most likely be asked to use other people's rigs as I move forward, so this was really helpful practice.

If I were to redo this assignment, I would spend longer animating the secondary actions for parts of the rig like the ears, tail, eyebrows and eyes. I did attempt a version with blinking, but it constantly felt like either way too much or not enough, and I knew I didn't have the time to correct it.

It can be found below:

I would also have rendered out the full sequence in Arnold, but my machine could not handle the render time. Instead I rendered out a single nice still, shown below.
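A rough sketch of rendering one frame from the command line when the full sequence is too heavy (the frame number, output directory and scene name are placeholders, and this assumes Maya's Render executable, with MtoA installed, is on the PATH):

```python
import subprocess

# Render a single frame with Arnold via Maya's batch Render command instead of
# the whole sequence; all paths and the frame number are placeholders.
subprocess.run([
    "Render", "-r", "arnold",
    "-s", "42", "-e", "42",          # start and end on the same frame
    "-rd", "renders/",               # output directory
    "scenes/lip_sync_final.ma",
], check=True)
```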
