Studio 2 – Week 2

Welcome to week 2. As usual time is flying fast & we had our pitch presentations this week. The modelling is well underway & this week I got my hands on ZBrush 4R8. I have to admit I was very excited to see some of the additions in this version. The gizmo for transposing is very handy for small changes, & the fact that the original Transpose tool still remains for making bigger changes like posing is very convenient.

One new addition is the Live Boolean feature. I haven’t really used it so far, but I can already see some of the areas I plan to try it on, like the hair. Here is a gist of what I have been able to understand about this feature so far.

We have always had Boolean operations available to add, subtract or intersect meshes while modelling. But we needed to do this together with DynaMesh, & it was always very hard to envision the end result of the action until the entire process was done. With the Live Boolean feature, the end result is rendered on the fly, giving us a lot of flexibility to see exactly what we will be getting & to modify it until it is just perfect. We are also able to use the various brushes to make changes on the Boolean subtools while Live Boolean is on, so any modifications required at that time are easy to make as we go along. Once we have the required result, we can create a new mesh by clicking the “Make Boolean Mesh” button. This creates new geometry of the result that stores not only all the original mesh geometry at its original density but also all the original polypaint data. The only areas that are remeshed are where the Boolean was applied.
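The three Boolean modes are easiest to picture as operations on signed distance functions, where a point counts as inside a shape when its distance is negative. This is only an illustration of the underlying set operations, not how ZBrush implements Live Boolean; the spheres below are stand-ins I made up for the frog & rock subtools.

```python
import math

def sphere_sdf(center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    def d(p):
        return math.dist(p, center) - radius
    return d

# Two overlapping spheres standing in for the frog and a rock subtool.
frog = sphere_sdf((0.0, 0.0, 0.0), 1.0)
rock = sphere_sdf((0.8, 0.0, 0.0), 0.5)

# The three Boolean modes as distance-field combinations:
union     = lambda p: min(frog(p), rock(p))   # Add
subtract  = lambda p: max(frog(p), -rock(p))  # Subtract rock from frog
intersect = lambda p: max(frog(p), rock(p))   # Intersect

p = (0.9, 0.0, 0.0)      # a point inside both spheres
print(union(p) < 0)      # True: inside the union
print(subtract(p) < 0)   # False: carved away by the rock
print(intersect(p) < 0)  # True: inside the intersection
```

A mesh Boolean produces the surface where the combined distance crosses zero, which is why only those regions need new topology.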


E.g. let’s take the frog example from Joseph’s ZClassroom lesson.

Here is an example of a very dense DynaMesh frog model with two equally dense DynaMesh rock models.

He places the rocks into the frog model to try to achieve a chiseled look.


But with the Live Boolean feature turned on, the end result is rendered in real time, which helps with better placement & sizing.

Once the desired result is achieved & the new mesh is created using the Make Boolean Mesh button, the end result has new remeshed geometry only in the Boolean areas, with all polypaint data still stored.



  1. Boolean Process. (2017, June 13). Retrieved October 06, 2017, from
  2. ZClassroom – ZBrush Training from the Source. (n.d.). Retrieved October 06, 2017, from
  3. Y. (2017, July 30). Live Boolean in ZBrush 4R8 (With Array Mesh). Retrieved October 06, 2017, from
  4. ZClassroom – ZBrush Training from the Source. (n.d.). Retrieved October 06, 2017, from

Studio 2 – Week 1

Welcome to my journey through trimester 4. This trimester I will be working on my own, on a project that has me very excited. I’m sure working on my own will come with its fair share of challenges as well as positives. More on that as the trimester progresses.

Now onto this trimester’s project. I am aiming to create my own fictional character. This will include designing the concept, modelling it, 3D printing it & finally painting it by hand. This process involves a lot of new aspects & unknown territory for me, so the learning curve will be pretty steep.

To start with the modelling, I shall primarily be using ZBrush, & this very week there were already a few interesting concepts I came across.

One such concept was Morph Targets. Morph Targets is a handy tool for storing the geometry’s configuration as a baseline to switch back to later. E.g. I needed to select polygons on the inside of the lips to create polygroups for easier reposing. For this I stored a morph target of the original shape & then smoothed the face, distorting it until the gap between the lips grew wide enough for me to select the polygons easily & create the polygroups for the top & bottom of the face along the jaw line. Once this was done, I clicked the Switch button & the lips were back in their original position.

Another feature under Morph Target is Create Diff Mesh. This creates a mesh from the difference between the stored morph target & the newly modified mesh. I used this technique to make the basic shape of the feathers for my character’s headgear.

For this I first created an alpha for the shape of the feather in Photoshop. After that, in ZBrush, I stored a cube as a morph target.

Then I extracted the feather using the alpha.

Finally, using Create Diff Mesh, I got the shape of the feather. I reset the pivot point to the centre of the extracted mesh, mirrored it to the other side & DynaMeshed the two halves together to get a feather.
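Conceptually, the morph target is just a stored copy of the vertex positions, Create Diff Mesh keeps whatever moved away from that baseline, & mirroring negates one axis. A rough Python sketch of the feather workflow, with made-up vertex data rather than anything ZBrush actually exposes:

```python
# Store the cube's vertices as the morph target (the baseline).
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]

# After extracting the feather with the alpha, one vertex has moved.
current = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.4), (1.0, 1.0, 0.0)]

# "Create Diff Mesh": keep only the geometry that differs from the target.
diff = [c for b, c in zip(base, current) if b != c]
print(diff)  # [(1.0, 0.0, 0.4)]

# Mirror the extracted half across the X axis; the two halves would
# then be DynaMeshed together into one feather.
mirrored = [(-x, y, z) for x, y, z in diff]
print(mirrored)  # [(-1.0, 0.0, 0.4)]
```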

This technique has been very handy for creating complex shapes quickly & easily.



  1. #AskZBrush. (2017, June 05). Retrieved September 29, 2017, from
  2. Morph Targets. (2014, September 29). Retrieved September 29, 2017, from
  3. ZClassroom – ZBrush Training from the Source. (n.d.). Retrieved September 29, 2017, from
  4. ZClassroom – ZBrush Training from the Source. (n.d.). Retrieved September 29, 2017, from
  5. ZClassroom – ZBrush Training from the Source. (n.d.). Retrieved September 29, 2017, from

Studio 1 – Project Journey – Week 12

Color Corrections


Color correction is a vital part of postproduction. It refers to the process where every individual clip is altered and corrected to match the color temperature of multiple shots. Color is fundamental in design and visual storytelling, as it conveys mood and meaning beyond the story itself. This tool is vital for postproduction as it balances the colors, making the whites actually appear white and the blacks appear black.

The goal of doing so is to match the video footage to the standards set by the visualizer and to the colors as viewed by the human eye.

Also, if the shot is outdoors, the exposure to the sun and the time spent in the sun will change the temperature of the video and make the clip brighter than the rest. This is why color correction is so important: it makes your shots seamless and makes your video look like it was all shot at the same time.

Color correction can be done using primary and secondary tools. It has been used in many action movies like Transformers and Black Hawk Down, and many horror movies like The Ring and Saw, to make the movie look more natural and closer to the way the human eye views the world.



Understanding Color Correction vs. Color Grading for Post Production. (2015). Retrieved December 9, 2016, from

H. (2013). How to colour-correct your 3D renders. Retrieved December 10, 2016, from


Studio 1 – Project Journey – Week 11

Dr Strange Animation


Doctor Strange is an American superhero film based on the Marvel Comics character. He is an unusually gifted neurosurgeon who only loved money and becomes a sorcerer with many superpowers.

As an animator, what makes Doctor Strange stand out from both the Marvel universe and many other blockbuster competitors is the spectacular, trippy visual effects. It’s a journey through time, space, the astral plane and something called the ‘mirror dimension’, in which reality folds according to the will of the movie’s heroes and villains. The spaces and landscapes become increasingly distorted for the general public: the roads and skyscrapers of New York and its surrounds are constantly being folded and warped, ultimately creating a kaleidoscopic array of buildings that is ever harder to escape.

Marvel Studios got ILM to make those insane bending buildings in Doctor Strange. The final visual effects were created by Industrial Light & Magic (ILM), a visual effects studio behind many motion pictures like Star Wars, Iron Man, The Matrix, Jurassic Park and Pirates of the Caribbean, to name a few. ILM has been a constant innovator in visual effects, especially digital ones.



How ILM Made Those Insane Bending Buildings in ‘Doctor Strange’. (2016). Retrieved December 2, 2016, from

Brady, R. (2016). The Architecture of “Doctor Strange” Retrieved December 2, 2016, from

Suderman, P. (2016). The many inspirations for Doctor Strange’s trippy visuals, from Steve Ditko to The Matrix. Retrieved December 2, 2016, from

Studio 1 – Project Journey – Week 10

Skybox in Games

A skybox is a panoramic texture drawn behind all objects in a game to represent the background at great distance, e.g. the sky.

A skybox helps set the mood and atmosphere of the world you’ve built.

Understanding skyboxes

A skybox is split into six textures representing the six directions visible along the main axes (up, down, forward, backward, left and right). Together they offer a panoramic view. After the skybox is generated, the texture images fit together seamlessly at the edges to give a continuous surrounding image that can be viewed from “inside” in any direction. The panorama sits behind all other objects in the scene and rotates to match the orientation of the camera. A skybox is an easy way to add realism to a scene, and it puts minimal load on the graphics hardware.
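The “six directions” can be made concrete: for any view direction, the renderer samples the cube face whose axis has the largest absolute component. A small Python sketch of that face selection (the face names are my own labels, not any particular engine’s API):

```python
def skybox_face(x, y, z):
    """Pick which of the six skybox textures a view direction hits."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:            # X axis dominates
        return "right" if x > 0 else "left"
    if ay >= ax and ay >= az:            # Y axis dominates
        return "up" if y > 0 else "down"
    return "forward" if z > 0 else "backward"

print(skybox_face(0.0, 1.0, 0.1))   # looking almost straight up -> "up"
print(skybox_face(-2.0, 0.5, 0.5))  # mostly along -X -> "left"
```

Because the choice depends only on direction, never position, the skybox appears infinitely far away no matter where the camera moves.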


Skyboxes consist of six panels that fold together to form a seamless scene in every direction. Adding one is drag-and-drop simple. Here’s how easy it is to add one to any game:

  1. Take the models (linked below) from the catalog
  2. In Studio, click Insert > My Models
  3. Select the skybox you want to apply and click it to improve the look of your game

Skybox in Unity

The Standard Assets package in Unity comes with a number of high-quality skyboxes (menu: Assets > Import Package > Skyboxes).


The skybox is a material using one of the shaders from the RenderFX submenu. If you choose the Skybox shader, you will see an inspector like the following, with six samplers for the textures:


The Skybox Cubed shader requires the textures to be added to a cubemap asset (menu: Assets > Create > Cubemap).





Unity – Using Skyboxes. (n.d.). Retrieved November 25, 2016, from

These High-Res Skyboxes Make Games Beautiful — Fast. (n.d.). Retrieved November 25, 2016, from

Studio 1 – Project Journey – Week 9

MassFX for 3ds Max enables you to add realistic physics simulations to your project. The plug-in provides 3ds Max-specific workflows, using modifiers and helpers to annotate the simulation aspects of your scene.

MassFX uses rigid bodies: objects that do not change shape during the simulation. Rigid bodies can be one of three types:

  • Kinematic: Kinematic objects are animated using standard methods. They can also be stationary objects. A Kinematic object cannot be affected by Dynamic objects in the simulation, but can affect them. A Kinematic object can switch over to Dynamic status during the simulation.
  • Dynamic: The motion of Dynamic objects is controlled by the simulation. They are subject to gravity and to forces that result from being struck by other objects in the simulation.
  • Static: Static objects are like Kinematic objects but cannot be animated. However, they can be concave, unlike Dynamic and Kinematic objects.
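The interaction rules above (Kinematic bodies push Dynamic ones but are never pushed back; Static bodies cannot be animated at all) can be summarised in a small table-driven sketch. This is my own simplification for illustration, not anything from the MassFX API:

```python
# Which body types the simulation itself is allowed to move.
SIMULATED = {"Dynamic"}
# Which body types the artist can keyframe-animate.
ANIMATABLE = {"Dynamic", "Kinematic"}

def can_push(a, b):
    """Can a body of type `a` impart motion to a body of type `b`?"""
    return b in SIMULATED  # only Dynamic bodies receive forces

print(can_push("Kinematic", "Dynamic"))  # True: kinematic affects dynamic
print(can_push("Dynamic", "Kinematic"))  # False: kinematic is unaffected
print("Static" in ANIMATABLE)            # False: static cannot be animated
```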

MassFX additional features:

  • The MassFX Visualizer displays simulation factors such as contact points and object velocities. This feature is key for debugging simulations.
  • MassFX Explorer is a special version of Scene Explorer that works with MassFX simulations.

Use constraints (e.g. a hinged door) to let objects restrict each other’s motion.



MassFX. (2016). Retrieved November 19, 2016, from

3ds Max Help. (n.d.). Retrieved November 19, 2016, from

Studio 1 – Project Journey – Week 8

Face Morphs:

Morphing can be used to change the shape of any 3D model. It is also used for lip sync and facial expressions on a 3D character. The Morpher modifier provides many channels for morph materials and targets.

The Morpher modifier is used to change the shape of a patch, mesh, or NURBS model. You can also morph World Space FFDs and shapes (splines). The Morpher modifier supports material morphing as well.
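Under the hood, each morph channel is a per-vertex offset from the base shape scaled by the channel’s weight, so the blended result is base + Σ weight × (target − base). A hedged Python sketch of that blend-shape arithmetic, using simplified one-value-per-vertex positions rather than the Morpher modifier’s real data:

```python
def morph(base, targets, weights):
    """Blend-shape formula: result = base + sum(w * (target - base))."""
    out = list(base)
    for target, w in zip(targets, weights):
        for i, (b, t) in enumerate(zip(base, target)):
            out[i] += w * (t - b)
    return out

base     = [0.0, 0.0, 0.0]   # "at rest" head vertices (one value each)
smile    = [0.0, 1.0, 0.0]   # morph target: raised mouth corner
jaw_open = [0.0, 0.0, 2.0]   # morph target: lowered jaw

# Mixing 50% smile with 25% open jaw:
print(morph(base, [smile, jaw_open], [0.5, 0.25]))  # [0.0, 0.5, 0.5]
```

Because channels add independently, lip-sync and expression targets can be layered on the same head without interfering with each other.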

Facial Animation

To achieve lip sync and facial animation, create the character’s head in an “at rest” pose. The head can be a patch, mesh, or NURBS model. Copy and modify the head to create the facial-expression and lip-sync targets, then select the “at rest” head and apply the Morpher modifier.

Morph Targets for Speech animation

Speech animation uses nine mouth-shape targets. You might want to create extra morph targets to cover additional mouth shapes in case your character speaks an alien dialect.

Include cheek, nostril, and chin-jaw movement when creating mouth-position targets. Examine your own face in a mirror. You can also put a finger on your face while mouthing the phonemes to establish the direction and extent of cheek motion.

Expression: Morph Targets

For a character, create as many expression targets as necessary. Joy, sadness, evil and surprise can all have their own targets. Depending on the personality of the character, some targets, such as a happy target, may not be necessary.



Morpher Modifier. (2014). Retrieved November 9, 2016, from

Create funny face animations. Morph them ALL! (n.d.). Retrieved November 9, 2016, from

Studio 1 – Project Journey – Week 7


CAT Rigs

CAT (Character Animation Toolkit) is a character-animation plug-in for 3ds Max. CAT facilitates character rigging, non-linear animation, layering animation, motion capture import, muscle simulation and more.

The CATRig is the hierarchy that defines the CAT skeletal animation system. It is a flexible character rig designed to let you create the characters you want without having to write scripts. It also adds speed and sophistication to rigging.

The CATRig keeps the character structure as generic as possible, enabled by CAT’s modular composition design. This is a key feature that makes it a flexible tool: users can add and remove different rig elements to get the exact skeleton needed for a character.

Each rig has its own procedural walk-cycle system, layered animation system, and pose/clip system. Each rig element also combines geometry with special capabilities specific to its function.


Animating with CATrig

CAT’s FK/IK rig-manipulation system lets you push and pull the rig parts into your desired pose, whether in IK or FK. For walk-cycle sequences, CATMotion allows you to create a customized walk cycle and direct the character around the scene, with no need to place individual footsteps.


Animation is created in CAT’s nonlinear animation (NLA) system, made possible by the Layer Manager. One of the key advantages of CAT’s NLA system is that you work directly in an animation layer; you do not need to go back and tweak the source animation elsewhere.


3ds Max Help. (n.d.). Retrieved November 4, 2016, from

3ds Max Help. (n.d.). Retrieved November 4, 2016, from

Studio 1 – Project Journey – Week 6

Hair and Fur for Tangled, Brave and Zootopia

Pixar’s animators and their technological counterparts have looked for ways to make the animated world look like the real world. They developed an entirely new hair simulation software known as Taz, used for movies like Zootopia, Tangled and Brave, to name a few.

For Brave and Tangled, the hair required much greater hair-to-hair collision, meaning it needed to look more flowy and full. Hair was modeled using a series of masses and springs to avoid tangling or stiffness. With this new software, hairs could be dealt with as one group and the hair simulation could be multithreaded. That, in turn, solved the daunting task of making hair simulation look real.


Dealing with fur was another challenging task for the animators: creating verisimilitude for the animal-only world of the movie Zootopia. The fur technology makes the animals look realistic and believable. Disney’s team of engineers introduced iGroom, a fur-controlling tool that had never been used before, to help shape about 2.5 million hairs. This software gave animators tonnes of flexibility. They could play with the fur, give it texture and fluffiness, brush it, shape it and shade it. You can push fur around and find the form you want.



Brave New Hair. (2014). Retrieved October 30, 2016, from

Lalwani, M. (2016). Fur technology makes Zootopia’s bunnies believable. Retrieved November 4, 2016, from

Studio 1 – Project Journey – Week 5

Color Scripts

There’s a science to choosing color schemes in a movie: they make the movie visually more attractive, provide psychological assists, and guide how designers use complementary pairs in a movie’s art design. Ralph Eggleston from Pixar was the person who introduced color scripts. He suggests that a color script provides a definite color palette for a movie, defining the lighting and the color scheme for Pixar films.

A color script serves a functional purpose in animation. It provides the director with all the clues he can get, from the start to the finish of the movie on screen. A color script is an early attempt to map out the color, emotion and moods of the film.

Having a color script will not make or break your animation, but it can definitely help the studio evolve new ideas and figure out different approaches in the early stages of storytelling. The first task of any animator is to set the mood for the project. A color script is not about making a pretty piece of art; it evolves throughout the early stages of the film, hand in hand with the story development.

It is often best to start from a traditional predefined color scheme. Analogous, complementary and monochromatic color schemes are just a few of the traditional schemes available as a starting point for designers.
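Those schemes are simple rotations around the hue wheel: a complement sits 180° away, while analogous colors sit about ±30° to either side. A small Python sketch using the standard library’s colorsys module (the angles are the common convention, not a formal standard):

```python
import colorsys

def rotate_hue(rgb, degrees):
    """Rotate an RGB color (0..1 floats) around the hue wheel."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    h = (h + degrees / 360.0) % 1.0
    return colorsys.hls_to_rgb(h, l, s)

orange = (1.0, 0.5, 0.0)
complementary = rotate_hue(orange, 180)                  # one color, 180 deg away
analogous = [rotate_hue(orange, d) for d in (-30, 30)]   # neighbours on the wheel

# The complement of orange lands on an azure blue.
print(tuple(round(c, 2) for c in complementary))
```

A monochromatic scheme would instead keep the hue fixed and vary only the lightness `l` or saturation `s`.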




Colour Script. (n.d.). Retrieved October 22, 2016, from

Creating a Color Script – Mike Cushny. (2011). Retrieved October 22, 2016, from