Vertical Slice – Portfolio

Concept art and test model/anim

 

 

 

 

 

 

 

 

 

 

 

Grass

 

 

 

 

 

 

 

 

 

 

 

 

Mushroom

 

 

 

 

 

 

Landscape Materials (and textures)

 

 

 

 

 

 

 

Smart Material/Texture guide (not my assets, just showing material)

 

 

 

 

 

 

 

 

 

 

 

FaultE (everything)

 

17 unique FaultE anims (used in final)

 

3 climb anims

 

8 walks and 3 beams (blended overtop)

 

2 pushes

 

Scans

 

one by one

 

 

Carniviper and Evil Mushroom remodel and rigs

 

 

 

 

 

 

 

 

 

 

Carniviper and Evil Mushroom’s 4 animations

 

All character animbp logic and states (and montages + blendspace setup)

 

Niagara Particle FX (smoke, leaves, spores, bubbles, scan, beam and thrusters)  (beam/scan/thrusters spawn – scripting)

 

Cloth Simulation

 

Level Design for first two playtests, final lighting and atmospheric fog set up in final game (0:15 onwards)

 

Textured Posi

 

Vertex Painting Material setup

 

 

 

https://www.youtube.com/watch?v=3Awzd32S188&ab_channel=TheCondorgaming

Vertical Slice Group Project

Pre-Production: Paintings and test model/animation

We started out by using the wall in class, writing up genres of stories and types of games we all wanted to work on. This was followed by a vote, where we eliminated the least favourites. We ended up going for a Sci-Fi Fantasy Walking Simulator – it didn't sound like the most interesting game, but it definitely had some scope for cool art. Then we brainstormed ideas for what the storyline could be, since story is the main factor of walking simulators. This was all recorded on our Miro board by Sam.

(This blog is less detailed than I hoped in terms of documenting my development process because I just lost 240 screencaps </3)

 

 

 

 

Then we organised what role each member would take on, with me being given character modelling and art direction. I was happy enough with this since I was hoping to work on either character modelling or character animation. Using the brainstormed ideas from Miro I then made some concepts. In terms of art direction, I was hoping to go for hand painted and stylised assets – so I painted my concepts rather than just sketching them.

 

 

 

 

 

I really like how the coloured thumbnails came out (first two). I wasn't 100% sure what a walking simulator entailed, but I imagined we could play into loneliness – maybe an archaeological robot alone on an alien planet that he was sent to for documentation. I was thinking of games like ICO and Shadow of the Colossus, playing a lot with the scale of the forest to help convey that theme of isolation. The concept on the right looks at some potential assets: some fungi, ruined technology (perhaps adding a mystery element) as well as a strange alien plant creature. This one was a little muddy but I still like how it came out.

 

Another game I thought would be good for environment inspiration was The Last Guardian, from the same team as ICO and Shadow of the Colossus, with similar themes.

 

 

 

 

 

 

 

I loved this design I found on Sketchfab while I was looking at designs incorporating robotics with cloth, so I thought it would be a good model to learn from. We wanted a medium poly style, so I used this model as inspiration without recreating it. I approached it in a slightly unusual way, placing different cubes and using quad draw to create the faces with cleaner topology. I realised I could easily model it in pieces since the character wouldn't require organic deformations, so I modelled it similar to how it would be structured in 'reality', with the helmet as a separate piece.

 

 

 

 

 

 

 

 

I also hadn't modelled a character in a while and had never modelled a robot character, so using this model as inspiration was useful, though as you can see I started to branch away from it. The group were mentioning Ghibli and Breath of the Wild as inspirations, so I kept a Ghibli screenshot up as I modelled to see how he would fit into that world. I wanted him to look quite peaceful, so I gave him weaker "features" (chin) as well as bad posture.

 

 

 

 

 

As the ideas progressed from the first three concepts, I added a bit more colour to better match the whimsical themes the group were heading towards. I also played with the idea of a companion in game, a native butterfly or a fairy creature, as well as a hard to make out mushroom creature across the river (to tie into the fantasy creatures). This painting wasn't as strong as the previous few. I'm still trying to figure out painting, but I think the simplification in the previous environment shots was what I liked so much about them; this more detailed attempt ended up much muddier.

As mentioned I was hoping to work on character animation too, so I also mocked up some idle animations to add some life to the character. This wasn’t rigged or anything, more just having fun seeing how this guy might act if we went down this route.

 

I liked the idea of having two binocular eyes and a little antenna, I felt like it made him feel a bit more animalistic. I thought it might’ve been cool for him to have no real arms, just a long cloak and legs, thinking we could maybe play into birdlike movements for him as he trotted along and inspected his surroundings.

At this point Jordan wanted to draw up final concepts for me to model from, his concepts were going a different route from what I was doing so far so I moved away from the character for a while and decided to make some assets we could maybe use as guides for environment art style. I still found this really useful as research and practice into hard surface characters though.

 

Environmental: Grass, Landscape and Mushroom assets

 

 

 

 

Since we were now looking at things like Ghibli and BOTW for art inspiration, I thought we needed a grassy plain, so I found some grass in games I thought fit this style: BOTW, Genshin and The First Tree. They were low poly, tall and felt fluffy.

 

 

https://www.youtube.com/shorts/8z645zPawhY?&ab_channel=Financian

This 30 second video from Stylized Station was what I really based the grass on, as well as the video above from Marpetak. Marpetak went into more detail, teaching me about vertex painting to change how the wind in Unreal affects the mesh, as well as aligning the normals to point up so the shading works correctly in Unreal Engine.

 

 

 

 

 

 

 

 

 

 

 

This is the grass I created. From my research I read that Unity handles alpha transparency better, while Unreal Engine copes better with polygon count for large foliage instances, so I started by modelling this simple grass mesh in Maya and duplicating it, making a small bunch of grass. I also eventually made a much lower poly version with fewer blades as an LOD; this spawns in place of the original when you are further away. Optimisation like this is important for the game's performance.

I have a fairly surface level understanding of Unreal materials, but this setup wasn't crazy: just some textures to add variation in colour, plus SimpleGrassWind, an in-engine preset that adds "wind" to assets by affecting the world position offset. The above videos use very similar techniques – while my final grass ended up a little different, the logic behind them is very similar.
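The logic behind that wind setup can be sketched outside the engine. This is a minimal approximation of what a sine-based world position offset like SimpleGrassWind does conceptually – the function name, parameters and scaling are all illustrative, not Unreal's actual implementation:

```python
import math

def grass_wind_offset(time, vertex_height, vertex_paint,
                      wind_strength=0.3, wind_speed=2.0):
    """Rough sketch of a sine-based wind offset. Scaling by the painted
    vertex weight means unpainted roots stay planted while painted tips
    sway; phase varies with height so blades don't all move in lockstep."""
    sway = math.sin(time * wind_speed + vertex_height)
    return sway * wind_strength * vertex_height * vertex_paint
```

This is why the vertex painting from Marpetak's video matters: the paint acts as a per-vertex mask on how much the wind can move each point.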

 

 

I loved how the grass came out even on the first pass. I think I managed to get close enough to the references I looked at, and as time progressed it deviated and changed to fit our final style easily, since it was set up in a way that's easy to edit. I did notice the gaps without grass were obvious when I just applied a flat green colour to the surface, which led to me working on the landscape materials we would use to paint our level with.

 

 

 

 

 

 

I attempted to make a landscape material for this grass, with similar colours and texture variation, as well as normal information faking blades of grass. I set the grass up with runtime virtual textures (not used in the final level). This let the grass sample the colour the landscape was painted with, then apply a gradient between the two, making it look like a coherent part of the landscape. I eventually developed a much stronger landscape material with a kind of modular setup, but this was useful for learning how to approach it and worked well for the time being. I was a big fan of where the grass was going at this point and was getting good feedback from the tutors, but I'm not sure why the group didn't go the grassy plain route in the level.
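The gradient that ties the grass to the landscape is essentially a lerp: each blade samples the landscape colour at its base and fades to its own colour at the tip. A sketch of that idea (function name and colour values are illustrative):

```python
def blend_grass_to_landscape(grass_rgb, landscape_rgb, height_t):
    """height_t runs from 0 at the blade base (fully the sampled
    landscape colour) to 1 at the tip (fully the grass's own colour),
    giving a linear gradient between the two."""
    return tuple(l + (g - l) * height_t
                 for g, l in zip(grass_rgb, landscape_rgb))
```

In engine the landscape colour comes from the runtime virtual texture sample rather than a hardcoded value, but the blend itself is this simple.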

 

 

This is a video from much later with my more finalised grass. It was 100% parameterised, allowing our level designer to customise it to her liking. It was set up with LODs (automatic decimation of tris based on camera distance) as well as two meshes in foliage mode: a less dense one that culls at a further distance, and a denser version with a smaller cull distance, helping the grass blend out as it despawns. This, in addition to the landscape material I eventually developed, made it very hard to differentiate grass from landscape material at a distance, as seen in the video above the cliffs. I included some empty areas with no grass in the video to compare the two up close; if the grass gradually decreased in height at these points it would be very hard to notice.
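The two-mesh cull setup boils down to a distance check per instance. A hypothetical sketch of the idea (the names and distances are made up, not engine values):

```python
def pick_grass_mesh(distance, dense_cull=3000.0, sparse_cull=6000.0):
    """Two overlapping foliage meshes: a dense one that culls early and
    a sparse one that culls later, so the grass thins out gradually
    instead of all popping off at one distance."""
    meshes = []
    if distance < dense_cull:
        meshes.append("grass_dense")
    if distance < sparse_cull:
        meshes.append("grass_sparse")
    return meshes
```

Near the camera both meshes render; in the middle band only the sparse one remains; past both cull distances the landscape material alone sells the grass.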

 

 

 

 

 

 

 

 

Along with making grass early on, I also made some mushrooms. I modelled them from a cylinder in Maya and cleaned up triangles even though they were going into a game engine to be triangulated – I wasn't sure if I had to, but it's still good practice. For textures I eventually used my smart material, which provided a pretty solid base, then pushed the highlights and ambient occlusion using dirt and metal edge generators, only taking the colour information and applying a blur slope filter. I then used a curvature generator to add the emissive elements. One thing that caused problems in my last project was lightmap density, so I worked on this early so I wouldn't run into any issues; the one on the left in the last picture is what I managed to get.

 

 

 

 

 

The last image is a screencap of them in game. While I like how they came out, I think they could fit in a little better. I'm guessing it's just that, being made so early in production, they don't hit all the beats of the style we ended up going with, but I didn't have time to come back to them or play with their textures any more. I do like how they look but there's something missing.

 

I loved the materials and their setup shown in this video (Stylized Station again). Substance Designer is very powerful; I didn't end up using it since I struggled to figure it out, but some of the ideas from this video definitely translated. While none of my materials ended up similar to his, some material setup ideas did, and the vertex painting he shows makes an appearance later in the project when I was working with Ben's cliffs. Another great video I found with really nice landscape materials was from this YouTuber, who doesn't have any breakdowns but has some really pretty environments. https://www.youtube.com/channel/UCV2yUfvVDuJtEDMDzEgkQ0g

The Dreamscape packs in the Unreal Marketplace also had some very nice environments, but you have to buy their packs. https://www.unrealengine.com/marketplace/en-US/product/dreamscape-nature-mountains They were great inspirations though.

 

 

 

 

 

 

 

My first few attempts at creating seamless textures didn't work out well – I thought they were seamless while making them, but obviously not. It definitely took a bit of trial and error to get something that worked well, and I changed my approach quite a bit after figuring it out. I feel like I really captured the aesthetic of the first image with the two Photoshop files though, but I changed what I was going for with the textures before the final landscape material was made.

 

 

 

 

 

You basically have to offset the image so the corners end up in the middle, then manually make it seamless that way. One method I found useful was Content-Aware Fill, which takes information from around the area you highlight and fills in the gaps – a really powerful tool in Photoshop. I know (if I understood the software) I could've made a much more efficient and probably better looking material in Substance Designer, but the landscape material was made when I had a lot of work going on and Photoshop felt safe.
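The offset trick is easy to sketch in code: wrap the image by half its size so the original corners meet in the middle, where any seam becomes visible and paintable while the outer edges stay tileable. A minimal version using plain nested lists (Photoshop's Offset filter does this for you):

```python
def offset_half(image):
    """Shift a tiling image by half its width and height, wrapping
    around the edges. The four corners of the original end up joined
    at the centre, so any seam there can be painted out (e.g. with
    Content-Aware Fill) without touching the tileable borders."""
    h, w = len(image), len(image[0])
    return [[image[(y + h // 2) % h][(x + w // 2) % w] for x in range(w)]
            for y in range(h)]
```

Running it twice on an even-sized image returns the original, which is a handy sanity check that the wrap is lossless.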

 

 

 

 

 

The last three images are my final landscape material setup and the textures I needed for it: seamless textures created in black and white in Photoshop, allowing me to make them any colour in Unreal Engine. To create normals from them I used this tutorial: https://www.youtube.com/watch?v=YJqWHsllczY&ab_channel=SaeedMandegarTutorials I wanted to learn Substance Designer but barely scratched the surface with it; I am much more comfortable in Photoshop, especially since I didn't have a lot of time to learn new software at this point. To add further variation I used a texture within Unreal (macro variation) which blends over the top of what was painted.

Despite being pretty simple textures, I think they served their purpose really well; after making the grass, the sand/dirt was simple since there was much more empty space. In retrospect I do think I lost a little of the hand painted touch I had in the first few attempts, probably because there were weeks between the different landscape versions with many different attempts. I did learn a lot from this material in particular though, and I think the way it's set up makes it very intuitive to edit, allowing colour and tiling customisation for both the standard texture and the variation over the top.
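The grayscale-plus-tint approach can be sketched as a couple of multiplies: the black and white texture supplies value, the material's colour parameter supplies hue, and the macro variation darkens patches over the top. The names and blend strength here are my own, not the actual material graph:

```python
def shade_landscape(gray, tint_rgb, macro, macro_strength=0.25):
    """gray: 0-1 greyscale texture sample; tint_rgb: the colour
    parameter set in the material; macro: 0-1 macro-variation sample
    blended over the top, darkening where it is low."""
    variation = 1.0 - macro_strength * (1.0 - macro)
    return tuple(gray * c * variation for c in tint_rgb)
```

Because hue comes entirely from a parameter, the same pair of grayscale textures covered both grass and sand/dirt with different tints.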

 

I'm really happy with how this material came out, and the way I structured the variations lets you actually paint if you lower the opacity, creating really pretty and natural gradients. I think the grass and dirt fit well into the environment and aren't too simple, but also not so realistic that they no longer fit the style. The macro and micro variation I set up within the material adds a nice painted effect as seen in Ghibli environments, with the darker blues etc., while also helping to make the texture a little more interesting.

 

Art direction: Texture guide and Smart material, which ended up being used on the majority of assets

 

 

 

 

We had a few comments regarding the consistency of our assets very early on, so I thought making a texturing guide would be a good idea. I developed a smart material that creates a hand painted effect with some common stylised rendering techniques, like gradients to guide the eye upwards; this was all procedural and only required a decent bake. The first image is a file I typed up based on my research, breaking down the different aspects of the style as well as how to achieve them within Substance. Then you can see how the material looks on a rock asset, as well as the example JadeToad that comes with Substance Painter.

 

This is how I set up the smart material. As you can see it's all based on procedurals and generators, meaning it will apply to any model based on its baked information. It definitely isn't a final look, more a base to work from, since trees react differently to light than rocks, so you would push highlights further or pull them back etc. I hoped it would keep a level of consistency among all our environmental assets. It's all based on techniques I normally use when texturing, making heavy use of folders with masks, and for this hand painted look they would normally be used in conjunction with actual hand painting for further detail. This non-destructive, procedural workflow made it easy to turn into a smart material that could be easily manipulated to change colours and push aspects.

These are a few of Ben McCullough's models with my smart material applied to them. I think it worked really well with his work, since his assets were often modelled well or baked from a sculpt, so the smart material had a lot of information to work with. (Ben's assets !!!)

 

 

 

 

 

 

 

FaultE

This is Jordan's concept work as we neared a more final design for FaultE. The group wanted the model done much earlier than the design was finalised, so I had to start modelling without a concept, then merge my work with his design once I received it. I do wish I'd had a final concept to work from from the start, because now when you look at the concept art, certain aspects aren't in the final model. This is also because of discussions between me and Jordan about the cloth aspects, which led to them being much more simplified. We were worried about how we would handle the cloth and didn't want to keyframe animate it, so we kept it as a cape not really interacting with his shoulders; that meant minimal complications with cloth simulation, while also leaving us the option to hand animate if our plans to simulate didn't pan out.

 

 

 

 

I lost my documentation screenshots, but I was able to salvage some OBS footage from while I modelled. I modelled it in pieces; before working from Jordan's concept I used my knight armour set from last module to estimate proportions. There were a lot of great references on Sketchfab too when it came to modelling a robot. I wanted it to seem like a real robot, and while I wasn't 100% sure how the machinery and pivots would 'function', they helped me figure out the leg and elbow/shoulder joints. You can see in the video there was a little experimentation required for the 'bicep' and the legs before figuring them out. I was trying to keep clean topology throughout so I could hopefully use this as a portfolio piece, and I think it came out pretty nice.

 

 

 

 

 

This is my collection on sketchfab for this project, it had a lot of robotic stuff but also some environmental assets and creatures. Not 100% sure what in particular I was looking at for topology references or as inspiration but whatever I have used is definitely in there https://sketchfab.com/matthewshannon/collections/coollg

 

 

 

 

 

 

 

 

Some of my research into cloth simulation and approaches to modelling clothes and cloth assets, as well as my own (early) experimentation in Marvelous Designer creating a cloth poncho/cloak. Learning about the patterns was quite fun but I don't think I scratched the surface; I was going to approach some fashion design students regarding patterns, but eventually decided to drop Marvelous, which I detail below.

 

I was first using Marvelous Designer to try and make the clothing, but the effect was way too realistic and didn't fit what we wanted. I do think I'll come back to the software in the future because it's really cool and its cloth simulation is very pretty. We considered using its cloth simulation on our animations baked as an alembic file, but this is very performance heavy, better suited to cinematics, and wouldn't respond to player inputs like rotation, so I was researching Unreal cloth sim at this point. To model the clothes I ended up using nCloth simulation in Maya, then adjusting the result with the soft selection tool to get less realistic folds, building a stylised scarf and cape. I think it came out really well and the scarf is almost 1:1 with the concept art; we just changed the cape as discussed. I also added a few cables to the head above the right shoulder for some more asymmetry and to better fit the concept. The last clip in Unreal Engine looked great, so I was happy with the model. I then got started on the rig, since (while extremely limited and with a lot of help) I was the only person who had rigged previously.

 

 

 

 

 

There were plenty of resources given to us for rigging: skin weight painting from Alec and some general rigging theory. The most useful to me was probably the series from Mike, rigging a humanoid character from start to finish. I pretty much followed this for creating my basic controls and skeleton; while I didn't fully understand the joint orientation part, I copied what Mike had and eventually figured it out. Creating the null groups and naming everything was probably the most tedious part – the align script Mike gave us made this easier, but it stopped working for me after a while so I had to align manually. The theory behind what Mike was doing all made sense, and working through the videos, creating a character alongside him, was very helpful. Since my character had something a little more complex going on though, I had to figure things out by myself.

 

 

My main struggle came from figuring out how to make an IK/FK switch. With what I now understand this is actually pretty easy to set up, but for some reason I thought it should have automatic IK/FK matching when swapping between the two modes, which caused a lot of issues. Once I stopped trying that it all worked fine; I just had to figure out a way to keep the hand controls visible when swapping between modes. Again, this is a pretty simple fix: either have one control, or, instead of grouping the controls under the wrist, constrain them to both the IK and FK wrists, with their influences controlled via the connection editor hooked up to the IK/FK switch. Instead of doing this to every control, I added a locator that controls them all (acting as either the IK or FK wrist).

 

 

 

 

 

 

 

This details my full IK/FK setup once I got it figured out. There are three arms (IK, FK, bound). The two modes are constrained to the bound arm, with their influences (in the channel box) controlled by the custom IK/FK switch attributes I added (IK, FK) using the connection editor. I used the expression editor to add a simple line of code saying that if IK equals 1 on the switch, then FK will be 0. I then hid the FK attribute on the switch to simplify it. Then, to make it nicer to use, I set the controls' visibility to be driven by the switch, hiding the FK controls in IK mode and vice versa – this was done using set driven key in the animation tab. The same general method was used on the locator influencing the hand controls mentioned above.
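The core of the switch is just complementary weights. A sketch of the expression and the constraint blend on the bound arm (names are illustrative, and a real Maya constraint blends rotations properly rather than lerping raw channel values):

```python
def ikfk_weights(ik):
    """The one-line expression from the rig: FK is always the
    complement of IK, so the two constraint influences on the bound
    arm always sum to 1 and only one attribute needs exposing."""
    fk = 1.0 - ik
    return ik, fk

def bound_arm_pose(ik_pose, fk_pose, ik):
    """The bound joints follow a weighted mix of the IK and FK arms,
    like a pair of constraints whose weights come from the switch."""
    ik_w, fk_w = ikfk_weights(ik)
    return tuple(ik_w * a + fk_w * b for a, b in zip(ik_pose, fk_pose))
```

Hiding the FK attribute and driving it from IK means an animator can never set the weights to an invalid combination.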

I wouldn’t have figured this all out without some help from Daryl, who sent me over a robot rig he made that had an IK/FK switch. This, along with many YouTube videos, feedback from Mike and this video in particular https://www.youtube.com/watch?v=8HX7Np8eeo8&ab_channel=StanAbraham all helped me get my head around it.

 

 

The rig came out great and Jordan said he had a good experience animating with it, so I'm pretty proud of it, especially for my first real rig and for figuring out that IK/FK switch. It isn't the most complex rig – as seen in the video it's a fairly standard character – but it served its purpose well and we never ran into any issues.

We both decided we really didn't want to hand animate cloth and thought simulating would work best in game, so I looked into our options. I was planning on Unreal cloth sim, and if that didn't pan out my backup plan was to bake the deformations from a cloth simulation (Maya or Marvelous) onto a dynamic joint system within Maya, which would then drive the animation on the cloak in game.

 

 

 

 

 

Many cloth tutorials didn't involve characters, but I managed to find a few videos going into a bit more detail. The videos were very long and experimental, so it was a lot of information to navigate, but they helped me get the core setup down.

 

 

 

 

 

Cloth simulation wasn't that complicated once I figured it out; it's just very under-documented for Unreal, and it's not the prettiest simulation out there. You need a separate UV map for the areas that will be simulated. Then you basically paint the influence, similar to painting skin weights. Next you set up the fake collision, since the cloth doesn't collide with the actual mesh in game but with simplified capsules instead. You can edit a few settings to create slightly different simulations, but it isn't that powerful in this version. I found keeping the simulation influence just below 1 best for preserving the stylised shape of the cape. It gets a little more detailed in the cloth config – this video https://www.youtube.com/watch?v=NIwU2WJeco0&ab_channel=JohnConnor breaks it down pretty well – but you mostly influence the stiffness and the self collision, and the self collision isn't amazing most of the time; it's kind of hit or miss. The base cloth sim isn't great, but with a few tweaks I managed to get something the group were happy with.
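The painted influence works like a per-vertex leash: the simulated point may drift from its skinned position, but never beyond the painted radius. This is a conceptual sketch of that clamp, not Unreal's actual solver:

```python
import math

def constrain_cloth_vertex(sim_pos, skinned_pos, max_distance):
    """Clamp a simulated cloth vertex to within max_distance of its
    skinned (animated) position. A radius of 0 fully pins the vertex,
    which is why the cape's top edge follows the shoulders exactly."""
    offset = [s - k for s, k in zip(sim_pos, skinned_pos)]
    dist = math.sqrt(sum(o * o for o in offset))
    if dist <= max_distance:
        return tuple(sim_pos)
    scale = max_distance / dist  # dist > max_distance >= 0, so dist > 0
    return tuple(k + o * scale for k, o in zip(skinned_pos, offset))
```

Keeping the influence just below 1 effectively shrinks these radii a little, which is what preserved the cape's stylised silhouette.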

I feel like I could’ve maybe spent some more time going into the config and figuring out how to get it behaving even more realistic, but the group were happy with what I achieved and this was more of an accessory to the game anyways, so since I had more important stuff to work on I left it.

 

 

I think this is pretty effective for real time simulation though. It doesn't behave 100% accurately when jumping, but the rotations and collision all look great. I would've liked more time to mess about with the self collision and maybe make it a little less elastic. One downside is that at lower FPS in game (which my PC hit in our level) it tends to break or not function as well, since it's calculated in real time.

 

 

 

 

 

Texturing was done with the same methods as before, detailed in the texture direction file. I used my smart material as a base and used masks for the different coloured areas, mainly red, blue and a darker, deeper blue. There is also a lighter shade on the plated areas, but this is subtle, as well as brown for dirt. I added some details with normal information, like the stencil on his shoulder and fabric on his scarf and cape (a separate file I can no longer find). To keep consistency I created a separate smart material from his scarf and used it on the cape too. One effect that created nice subtle fabric detail was the watercolour FX filter in Substance Painter, adding normal information where the watercolour would "clot on the paper". I also added emissive to his eye and hand.

 

I'm happy enough with the textures. I was going to add more obvious differences between the shades of blue for his armour, metallic gears and plating, but kept it subtle since a lot of our assets so far were sticking to one colour. The dirt on the feet and legs, and in any cavities, adds a subtle touch which looks better in engine. The paint chipping and metal edges look pretty nice too.

 

 

This is the final version of FaultE, the model and the rig in Maya. I think it came out great and I'm really happy with the rig too, managing to figure out an IK/FK switch that was simple to use and worked well. I went in having no idea how to rig and, while I don't really love doing it, I now feel comfortable creating custom rigs for characters that are also user friendly for animators other than myself.

 

Carniviper and Evil Mushroom: Remodel/Retop and Rigs

 

 

 

 

 

 

Due to some other group members changing what they were working on, Gina (game design) ended up making the organic creatures. The models came out cool but had some topology issues – I'm guessing this wasn't covered much on the game design course. Since I was rigging the characters, I was asked to take over the model to make it better suited for animation. I wanted to keep the design as close to hers as possible so she still had her creative input, but I had to completely remodel some aspects, like the legs, with animation and rigging in mind. The first image shows what I was sent (left) and my retopology/remodel (right).

 

 

The rig was pretty simple, just rigging some IK legs, and a few joints to influence the mouth and spine/ mushroom flaps, then painting skin weights. I like how it came out and the rig was easy enough to use, facilitating all animations we needed to make.

 

 

 

 

 

 

This model was sent a little unfinished since Gina was struggling with the topology again and thought it would be better if someone from animation took over. The first picture shows what I was sent and what I ended up with after remodelling. Again, I wanted to keep it as similar as possible to what Gina was going for so she still had her contribution. This rig was a little more experimental than the mushroom above, since I didn't think hand animating a slither would turn out great.

 

 

 

 

 

This rig was set up as an IK spline. I then created clusters for each vertex control point, influenced by our controls (just parented beneath them); their null groups are also constrained to the yellow control, made to influence height easily in both automated slither and manual animation mode. To create the automated slither effect, I used a nonlinear sine deformer on these controls. Manipulating the sine's attributes lets us influence the slither, changing the wavelength. This could just be assigned to the mesh, but since we wanted to export to a game engine I needed a rig to get the animation in game. https://www.youtube.com/watch?v=58F15mLA6Uo&ab_channel=CharlieVelazquez this video outlines the technique in detail.

To make it more user friendly, I added a control using similar logic to the IK control I made for FaultE, connected (via the connection editor) to this sine and its attributes. Activating automated slither mode and then moving the slither control along the snake creates the animation effect for you.
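The automated slither boils down to a travelling sine wave across the spline controls. A sketch of the idea (parameter names are mine, not the Maya deformer's exact attributes):

```python
import math

def slither_offsets(n_controls, time, amplitude=1.0,
                    wavelength=4.0, speed=2.0):
    """Sideways offset for each IK spline control point: a sine wave
    travelling down the body over time, like the nonlinear sine
    deformer driving the clusters. Wavelength is in control-point
    units, so controls a whole wavelength apart move in phase."""
    return [amplitude * math.sin(2.0 * math.pi * i / wavelength
                                 - time * speed)
            for i in range(n_controls)]
```

Exposing amplitude, wavelength and the time/offset on one custom control is what made the "slither mode" a single slider for the animator.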

 

 

I enjoyed making this rig. It was a bit more experimental, but I think it gives a really effective technique for in-place snake animations and made animating much less time expensive. The controls are pretty intuitive, but if I had more time I would eliminate the yellow and red controls in favour of one that influences both; I was the only person using this rig though, so I didn't mind. I love the slither control and how I set that up.

 

Animations

 

 

This is the final version of FaultE's animations in game that I worked on. I had 8 walks (for when camera orientation was locked, like when using the beam), 3 animations for climbing (idle, climb, root animation at the top), 3 animations for the beam (start, loop, end), 1 scan animation and 2 for pushing (idle and moving). I worked on these in Maya then just exported the skeleton. I only really had one pass at each animation due to the workload we had to get through, so it was definitely quantity over quality. I know I can achieve a much higher standard of body mechanics in many of these animations, but I was told to just pump out animations and not be a perfectionist with them, since game design required a lot of animations.

3 climb anims

 

The final mantle animation at the top was our first root animation, done on the day of submission since we were told we needed it to progress through the level. I'm not really happy with the weight at the top or the awkward hand movement at the end, but it is what it is. The blendspace blends the idle and the moving well though; I like how they came out and the climbing has a nice loop.

 

8 walks and 3 beams (blended overtop)

 

Pretty content with how these turned out, though obviously not perfect. The backwards and forwards walks are the strongest, with the strafes being a little awkward – again, we only got one pass at these animations to stay on top of the workload. This sounds like a lot of work (8 looping animations), but once I had the forwards, backwards and strafes, I went back and manipulated those via the graph editor, plus some cleanup, to create the others. The 3 beam animations were required since we didn't know how long the player would use the beam, so one each for the start, middle (looping) and end.
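A directional blendspace like this essentially weights the two nearest walks by movement angle. A simplified 1D sketch with 8 clips spaced 45° apart (Unreal's blendspaces can also blend on a second axis like speed; names here are illustrative):

```python
def directional_blend(angle_deg, step=45.0):
    """With 8 directional walks spaced `step` degrees apart, return the
    blend weight for the two neighbouring clips, weighted by how close
    the movement angle is to each. Keys are clip angles in degrees."""
    a = angle_deg % 360.0
    lower = int(a // step) * step        # nearest clip at or below the angle
    t = (a - lower) / step               # 0 at lower clip, 1 at upper clip
    upper = (lower + step) % 360.0       # wraps 315 -> 0
    return {lower: 1.0 - t, upper: t}
```

This is also why only the forwards, backwards and strafe walks needed to be strong: the in-between directions are mostly blends of their neighbours.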

 

2 pushes

 

These came out pretty well too; the push has a nice weight to it and I like how the arm extends to the rock to push it. The collision with the rock is faked since it's an in-place animation, so any clipping is dependent on the asset, just because of how the push mechanic was set up.

 

Scans

 

The scan works well, and slowing the movement speed with it feels cool in game. I like how you can scan while walking or while standing still; starting to walk while scanning works well since I played it on top of our normal walking blendspace.

 

 

This is the final version of FaultE's animations in game that I worked on, shown one by one in Unreal. I am really happy with the quantity I managed to make, but there are definitely improvements to be made on the majority of the animations. They were 'placeholders' that I never got time to replace with finalised versions, so they all only had one pass of animation.

 

 

This is the final version of the Evil Mushroom and Carniviper animations I made in game: just an idle and a walk animation for each, so only 4 animations total. I really like how goofy the mushroom walk cycle is, and the snake slither reads well too. (I had to go into unlit mode because my PC struggles with FPS.)

 

 

I also had to set up all the logic and the states/montages responsible for the animation, which was a lot to learn and take on on top of everything else. I'm definitely interested in it now, despite hating every minute of creating it. It got pretty complicated with how I had to set certain things up, such as the timed idle states leading into idle actions, or the montages that used different slots (in the skeleton) to separate the upper and lower body and overlay animations. This video was used initially: https://www.youtube.com/watch?v=1K-Hyu4Xn3g&ab_channel=MattAspland

As it got more complicated (separating upper from lower body, timed idle states, playing montages and root animations, etc.), I relied on Prismatica Dev's Advanced Animation Theory UE4 series, linked here: https://www.youtube.com/watch?v=flHL3qJB3_I&ab_channel=PrismaticaDev, with a lot of videos going into detail on topics like root motion basics, blendspaces, montages, layered blend per bone, anim states, anim BPs and additive animations.
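The "timed idle state leading into an idle action" pattern mentioned above can be sketched as a plain state machine. This is a hedged Python illustration of the idea only; the actual project used an Unreal Animation Blueprint state machine, and the state names and timing here are invented for the example.

```python
# Sketch of a timed-idle anim state machine. State names and the 5-second
# delay are illustrative assumptions, not the project's real values.
IDLE_ACTION_DELAY = 5.0  # seconds of idle before a one-shot idle action fires

class AnimStateMachine:
    def __init__(self):
        self.state = "Idle"
        self.idle_timer = 0.0

    def tick(self, dt, speed, action_done=False):
        if self.state == "Idle":
            self.idle_timer += dt
            if speed > 0.0:
                self.state, self.idle_timer = "Walk", 0.0
            elif self.idle_timer >= IDLE_ACTION_DELAY:
                self.state, self.idle_timer = "IdleAction", 0.0
        elif self.state == "Walk":
            if speed == 0.0:
                self.state = "Idle"
        elif self.state == "IdleAction":
            if action_done:  # return to idle once the one-shot anim finishes
                self.state = "Idle"
        return self.state
```

In Blueprint terms, the `idle_timer` check corresponds to a transition rule out of the idle state, and `action_done` corresponds to the idle-action animation reaching its end.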

 

Particle effects (smoke, spores, falling leaves, swamp bubbles, scan, beam, thrusters)

 

Last module I enjoyed working on Niagara particle effects in Unreal; they aren't as scary as real particle effects and it's still a really powerful system. This video shows some of the stuff I worked on. They were a lot of fun to play around with whenever I had free time and added a lot (I think) to the game; even the subtle ones like the thrusters made the double jump animation more interesting.

For the smoke I used a tutorial, incorporating a flipbook animation from preset files within Unreal: https://www.youtube.com/watch?v=tTTL_bzQLuY&ab_channel=Sir_FansiGamedev. The other effects were done with the knowledge I gained last year when making dust and flames. The beam used when picking up objects took a little more effort, as I had to add a custom parameter for the end location, which was set to the object being picked up on an event tick.
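The beam's end-location setup boils down to pushing one parameter to the particle system every tick. This is a made-up Python sketch of that idea only; the parameter name "BeamEnd" and the classes are hypothetical stand-ins, not real Niagara API calls.

```python
# Hedged sketch of the per-tick beam update: a user parameter on the
# particle system is glued to the held object's position each frame.
class BeamParticles:
    def __init__(self):
        self.params = {"BeamEnd": (0.0, 0.0, 0.0)}

    def set_param(self, name, value):
        self.params[name] = value

def tick_beam(beam, held_object_location):
    # Equivalent of the event-tick update described above.
    beam.set_param("BeamEnd", held_object_location)
    return beam.params["BeamEnd"]
```

Because the write happens every tick, the beam endpoint tracks the carried object even while it moves, which is exactly why an event-tick binding was needed rather than a one-off set.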

 

 

 

 

This is some of the scripting I set up for the particles, spawning them on child actors (last pic) that I placed at different spots on our character. To prevent them from being spammed, I had to tie them to variables for events, such as when the jump value is 2 or when hand scan is true. This also shows some of the scripting set up for the animation montages that blended the upper and lower body, since they were tied to the same events. I normally hopped in a call with Scott to help figure out any problems I was having with scripting.
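The anti-spam gating could be sketched like this: an effect only spawns when its driving variable changes into the right state, not on every frame it stays there. The variable names mirror the description above, but the code itself is an illustrative Python stand-in for the Blueprint logic, not the actual script.

```python
# Sketch of event-gated FX spawning: thrusters fire on the double jump,
# the scan effect fires once when hand_scan flips true (no per-frame spam).
class FXSpawner:
    def __init__(self):
        self.spawned = []
        self.scan_active = False

    def update(self, jump_count, hand_scan):
        # Thrusters only on the double jump (jump value == 2), once
        if jump_count == 2 and "thrusters" not in self.spawned:
            self.spawned.append("thrusters")
        # Scan spawns on the rising edge of hand_scan, not while held
        if hand_scan and not self.scan_active:
            self.spawned.append("scan")
        self.scan_active = hand_scan
        return list(self.spawned)
```

The key design point is the edge detection on `hand_scan`: comparing this frame's value against last frame's means holding the button doesn't re-trigger the emitter.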

 

 

These are the different systems I created; the beam and scan use the same setup. The smoke uses a flipbook animation within the system. The bubbles, thrusters, smoke and falling leaves use a similar setup, with added effects like gravity, different spawn rates and locations, or different meshes. The meshes I had to make were very simple: just a leaf model similar to Jordan's leaves and a sphere for the other emitters.
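One reason those emitters could share a setup is that, underneath, they all reduce to the same position/velocity integration with different forces. A tiny hedged sketch (values and the Euler step are illustrative, not what Niagara literally runs):

```python
# One Euler integration step: a downward force gives falling leaves,
# an upward one gives rising swamp bubbles. Same code, different gravity.
def step_particle(pos, vel, gravity, dt):
    vel = tuple(v + g * dt for v, g in zip(vel, gravity))
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    return pos, vel

# A falling leaf vs a rising bubble over one second (made-up values):
leaf_pos, leaf_vel = step_particle((0, 0, 10), (0, 0, 0), (0, 0, -9.8), 1.0)
bubble_pos, bubble_vel = step_particle((0, 0, 0), (0, 0, 0), (0, 0, 2.0), 1.0)
```

Swapping the gravity vector, spawn rate and rendered mesh is then all that separates a leaf emitter from a bubble one, which matches how the systems above were duplicated and tweaked.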

 

Some Level Design (for our playtests, plus final lighting and atmospheric fog in the main level)

 

I had to take this over while Sam was away for a while, just for the first two playtests. My playtest levels looked okay (the 2nd one was in collaboration with Scott) but lacked the theory that game design students have, so the common feedback was that they looked nice but lacked playability/gameplay. Sam came back and took over from me, but I returned at the end to pretty some things up with lighting, fog and god rays in our final level. God rays are cool, and fog adds nice depth to the forest, making our final level look much nicer than it did (at the end of the video above).

 

 

Lighting/fog before and after

 

 

 

Lighting/fog before and after

 

 

 

 

 

 

 

 

 

 

I wasn't 100% sure what to do with the lighting of the environment, but it definitely needed changing from what we had previously. I think I got some really cool effects with the god rays coming through the trees and past the asteroids, and the scene felt more saturated and nicer to walk through. The group all seemed to think it was a big improvement too. I really like the 4th screenshot and the lighting coming over that small cliff.

 

 

Vertex Painting Material setup

 

I also set up Ben McC's cliff material with the grass texture we painted the landscape with. Ben had the tops painted green, but we thought it would blend better with the landscape if we could have the same material on top. I set up vertex painting, which influenced the alpha channel of lerp nodes, switching between Ben's cliff material and my grass material and allowing the level designer to paint this to better blend the cliff with the landscape. https://www.youtube.com/watch?v=Lz_tqHAS-Kk&ab_channel=3DAssetLibrary
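The lerp-node setup is simple enough to show numerically: the painted vertex alpha is the interpolation factor between the two materials, per channel. A minimal sketch of that maths (the colour values are made up; in the material this happens per pixel on textures, not on flat colours):

```python
# Vertex-paint blend: alpha 0 -> pure cliff, alpha 1 -> pure grass,
# exactly what a Lerp node does with the painted alpha as its factor.
def lerp(a, b, t):
    return a + (b - a) * t

def blend_material(cliff_rgb, grass_rgb, vertex_alpha):
    return tuple(lerp(c, g, vertex_alpha) for c, g in zip(cliff_rgb, grass_rgb))
```

So painting an area with alpha around 0.5 gives a halfway mix of rock and grass, which is what lets the cliff tops fade into the landscape instead of having a hard seam.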

 

Texturing Posi

 

We were getting ready to submit and still hadn't received Posi's final textures, so I quickly threw this together using a similar texturing workflow to FaultE's. I added normal-map details for nuts/screws on certain panels, as well as the same symbol featured on FaultE's shoulder, to add some connection between the two. The green emissive 'eye' was added to echo Scott's UI when Posi followed you in the level and to better link with FaultE; it would switch off when the actual UI appeared.

 

 

 

 

 

 

Overall I'm super happy with the work I did, especially the landscape materials, grass and character. I can now also say I am able to set up an animation blueprint with many different states, state machines within states, blendspaces and layered blending through animation montages. I'm happy with the rigs, but I would've liked to clean up all my animations; quantity was prioritised over quality. The group work was pretty good (disregarding the members who didn't contribute work or communication on both the animation and game design sides), but I wish our final game was a bit more impressive and had some more gameplay, since we put in a lot of work. This project wasn't great for my portfolio, as I'm hoping to focus on solid animation and organic modelling, but modelling a hard surface character and rigging were still useful skills to practise. I did get to learn more about Unreal materials, which I wanted to do, and got to play with particle effects, which is always fun. The (limited) scripting I did was pretty stressful, but I started getting the hang of it by the end, and the two scripters in game design were always helpful. If we hadn't had some group members not contributing, we could've ended up with a really solid final game, because we made something pretty decent with the smaller group, especially on the art side, creating some really cool assets and materials.

Personal Development

In support of my portfolio/showreel, I animated a 3D scene to continue my personal practice and development: a 3D shot in Maya focusing on dialogue and acting. For my last project I used a Miles Morales rig from the Agora Community, so I went there first to look for a rig for this project. I wasn't sure what audio I wanted to animate, but I thought choosing a rig first would help narrow it down.

 

 

 

 

 

I found a Thor rig which seemed pretty cool; it wasn't too simple, so I'd get experience with a little bit more of a complex rig. I set up MGpicker, a very intuitive and animator-friendly Maya picker tool with a powerful feature that lets you create your own customised picker without any prior coding knowledge, and hooked this up to the Thor rig using the file included by Agora. I tested some facial expressions and liked the range I could achieve with the rig.

 

 

 

 

 

I also looked at an Aang rig, which was a little simpler and was advertised by Agora as being for facial animation. I attempted to recreate the same expression as before to see how I liked it. I didn't like the blue eyes on the Thor rig; these eyes were much more expressive.

 

 

 

 

 

Aang had a lot more automated features in the rig, with sliders allowing you to pucker the lips or puff the cheeks out. It also had a Studio Library file included, with a bunch of pre-set facial expressions already set up for you. I decided to go for the Thor rig since I thought it would be more of a challenge, as it didn't have any of this automation, and also because I haven't seen it used much online in other showreels, so it's a little bit more original.

 

 

 

 

 

I also noticed the Thor rig was game-ready, so I tested it out in Unreal; I thought it might be cool to render this shot in Unreal for my showreel, since it wouldn't take long and you can make a pretty enough scene easily. I built a small scene using assets from the Vertical Slice project and brought Thor into Unreal, but I noticed the UV seams were pretty visible. I also thought that if this was going in my portfolio, having scenes from my Vertical Slice project and my lip-sync animation in the same environment might be a bit repetitive, so I decided against rendering in Unreal.

 

 

 

 

 

Sir Wade had a lot of useful videos online, going through how to find audio and what to avoid/look for, along with his 'secret' workflow for dialogue shots and lip-sync tips, like acting towards the camera, having the mouth shape land before the audio (light is faster than sound) and including asymmetry/imperfections.

 

 

 

 

I found an interview Sir Wade did with the head character animator on How to Train Your Dragon; it gave an insight into acting through animation, his workflow and general tips on animating. A video from James Baxter (who worked on a lot of Disney titles) was more useful and much shorter, mostly going over the mouth opening and closing and the timing of it, when to hold and when to close suddenly etc., in order to avoid a boring, linear up and down. He also simplified the mouth shapes and combined shapes into one, so I guess you can get away with combining sounds/mouth shapes, especially when the character is talking fast or slurring words, though limiting mouth shapes like this might be easier in 2D animation, which is what he was covering. I also looked at a live-action analysis of the over-the-shoulder shot, which I thought would be useful since I had two characters talking but not enough time to animate both of them fully.

 

 

 

 

I also collected some material on the actual mouth shapes for sounds, finding a YouTube video breaking it down a bit and some pictures I could keep open while working.

 

 

I watched a lot of interviews and clips of Chris Hemsworth since I thought it would be cool to have Thor's actual voice in the dialogue, but without just taking a scene from the MCU, since that would be recognisable and the watcher might compare my shot to the original. Sir Wade mentioned having some sort of change in emotion/tempo or interesting sounds in the audio, and I thought this clip was good to work with and had an interesting popping sound in it.

 

 

 

 

 

I found an interview with a DreamWorks animator that was more focused on lip-sync and acting than the previous interview. He went into the idea of subtext: going beyond just animating what is said and into the thought process of the character. A useful exercise he mentioned was animating the same character and audio in 6 different ways (different emotions, gestures, emphasis etc.); I obviously didn't have time for this, but it was useful just to think about. I took some notes similar to his example to help inform my animation, typing out a simple script and what I thought the character of Thor would be thinking. I also checked Twitter to see how other animators approach this, with a lot of them using reference.

 

 

 

 

 

 

 

I did some further research into the two rigs for examples of dialogue shots and managed to find some on YouTube and ArtStation. I really liked the first one, showing the capabilities of the Thor rig and how much personality can be added. I wanted to do something similar with the style of my animation; the cartoony/stylised 3D style in these clips was very engaging. Comparing the first and second rigs, I thought the eyes made a big difference to the animations, with the first one (edited from the original rig) being more appealing.

 

 

 

 

 

 

 

 

 

I recorded some reference footage (a bunch of clips on a timeline, maybe 12 takes) and reviewed what I liked and didn't like about them. I had the idea of having the pop sound come from him removing his finger from his mouth after eating, having seen something similar in Spider-Verse. I used this as reference for the style, as well as when building my little scene. I also changed the eyes, just applying different Lambert materials to parts of the eye to humanise the character a bit.

 

 

Some embarrassing reference footage. While I didn't like the acting much in this one (kind of boring), the timing and actions felt nice, and I had a bunch of other clips I could reference that had more personality in them.

 

 

 

 

 

 

 

 

I started blocking out the scene and testing camera angles; establishing the framing early would help with the rest of the animation, since I could kind of cheat by animating towards the camera. I wanted to focus on the pop first since it was the main reason for choosing this audio.

 

I went straight in on the pop, blocking out the main poses and timing for it. I was happy with how it was looking already; it felt very satisfying and had a bunch of personality with the cocky/playful eye roll.

 

 

 

 

 

Then I blocked out the poses leading up to the pop, where he bites the food. I liked pushing the anticipation for the bite; it was pretty exaggerated but still felt believable in the animation. I'd push this a bit further later on too, when adding more squash and stretch to the head. Animating with the picker was very useful since I could animate with no controls on screen.

 

 

 

 

 

 

 

 

 

Next I went in and added all the key poses I wanted to hit throughout the animation. I was worried about the flexing pose and the pose after it; it felt like a lot of movement for this scene, but I didn't want to stay with the same silhouette throughout, since he was already resting on a table and so had a restricted enough range.

 

 

This is the rough blockout at the end, once I was happy with the pop. I think it is a little awkward, but it felt nice enough to move on, and the individual poses were quite strong and fit the audio.

 

 

Taking from my reference footage, I added the head shakes/nods throughout and some follow-through/anticipation for the poses, cleaning up parts and adding keyframes throughout, trying to be mindful of secondary actions and offsetting a lot of the movement, since Sir Wade mentioned asymmetry and imperfection in his video.

 

 

 

 

 

Looking at the 11 Second Club, some online livestreams focused on lip-syncing and some lip-sync tutorials in Maya helped refresh me on animating the lip movements in my animation.

 

This was the first pass of lip-sync. I tried to simplify and combine shapes as much as possible to see what I could get away with, so the mouth wouldn't just be going up and down and he wouldn't be over-enunciating. I needed to fix the timing towards the end and add more mouth shapes; the ending definitely had some questionable timing and needed more keyframes.

 

 

 

 

I noticed my graph was looking ugly, so I used auto tangents, and spline tangents on certain peaks, adjusting where I thought necessary. I also played around with the timing using the dope sheet, but I'm still not 100% happy from when he says 'arrows' onwards, so I kept a bunch of iterations of the file, letting me experiment without losing previous progress.

 

 

 

 

 

 

 

 

 

 

 

 

I then went into the details, adding finger animation and making adjustments on the grey controls to add 'contact' between the table and the fingers and elbows. I tried to clean up the graphs, but it was getting confusing with the amount of keyframes, since I had a lot of offsetting happening and probably didn't have the cleanest workflow over these few days. I decided to bring the Aang character in and animate his 'lip-sync', which was just vague mouth movements since it was an over-the-shoulder shot. I thought he was too bright and took away from Thor, so I adjusted his materials' exposure levels.

 

 

This shows the collision I faked with the table, as well as the little 'set' I built. I had some difficulties with the Aang rig but managed to get it in by copying and pasting it from another Maya file.

 

I added some camera animation, with subtle rotations and a very slight zoom-in to give it a more natural feel, as well as adding depth of field. I thought the depth of field would help take focus away from Aang, since I wanted the main focus to be on Thor, especially because I spent only a few minutes on Aang's 'animation'.

 

 

 

I'm quite happy with how it turned out. There are some things that could definitely be improved, but I rushed through this and have spent too long looking at this animation now to progress much more. It's similar in style to the stuff I was looking at, and you can definitely see the Spider-Verse inspiration just in the framing and the sucking of ketchup from his finger, but it's not too derivative; I tried to keep the rest of the scene dissimilar, adding much more energy and movement. I'm not 100% on Aang being added; it feels a little weird and I think it might be better without him, but I wanted the over-the-shoulder shot. Hopefully people don't actually look at Aang's animation, though. There are some moments in Thor's animation that are a little awkward, but I'm happy enough; I might've pushed the fingers a little too much, and there were moments where they seemed quite stiff and lifeless. I forgot to document it, but I added a sphere to his finger with a transparent Blinn material to make the ketchup; it had a null group constrained to the finger control, and I just keyframed it to shrink and hide once it went in his mouth. It would've been useful to send it to one of the members of staff for feedback, but it was so rushed in the last few days that I didn't really have time to.