The Iron Giant!!!

My final blog post will be on the 1999 animation The Iron Giant. It’s one of my favourite movies ever, and I remember watching it over ten times as a kid. Produced by Warner Bros. Animation, it was directed by Brad Bird, who later went on to write and direct other animated features such as The Incredibles and Ratatouille.

The movie is based on the book The Iron Man by Ted Hughes. Unfortunately, he did not get to see the final version, as he passed away while it was still in production. The movie is set in America in 1957 and deals with themes like Cold War paranoia, weaponry and innocence. The core of the movie, as Bird told Warner Bros. when pitching the idea, was that the giant was a gun with a soul.


The Iron Giant uses both traditional and computer animation, and it was animated like an assembly line. Bird did not use the then-current mode of feature production when it came to assigning animators. The practice at Disney had long been to assign a specific character to one animator, so that an animating supervisor would only be responsible for drawing one character. Bird instead decided to play to each animator’s strengths and assigned them entire scenes based on emotion or action, regardless of which characters appeared.

As for the giant, they used computer animation to create him, as CGI would give him mass and solidity and also give the impression that he comes from a different place. Bird says that the “separation between the 2D-animation and the CGI is something that helped establish the fish-out-of-water facet of the story.”


Bird did not want the giant to look so perfect that it lost the hand-drawn feel – something that rendering him in clean CGI would do – so the team spent months creating a computer program that wobbles the lines of the giant, as if he had been drawn traditionally in 2D. Existing software was also extended and modified for other tasks, like aiding the shading of the giant, varying the lightening and darkening of some frames, and altering grain patterns so that the giant sits believably in the 2D animated world.
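The line-wobble idea is easy to sketch: jitter every point of the giant’s outline by a small random amount each frame, so no two frames trace exactly the same line. This toy version is my own illustration (the amplitude and structure are invented, not the actual tool’s):

```python
import random

def wobble(points, amplitude=0.8, seed=0):
    """Return a jittered copy of a polyline -- a rough stand-in for the
    line-wobble pass described above (parameter values are made up)."""
    rng = random.Random(seed)
    return [(x + rng.uniform(-amplitude, amplitude),
             y + rng.uniform(-amplitude, amplitude)) for x, y in points]

# Re-seeding per frame makes the outline shimmer slightly, like
# hand-drawn lines that never repeat exactly.
outline = [(0, 0), (10, 0), (10, 20)]
frame1 = wobble(outline, seed=1)
frame2 = wobble(outline, seed=2)
```

Because each frame uses a different seed, the outline never lands in exactly the same place twice, which is what sells the hand-drawn look.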

Gonna watch it again later yay!!!!!


The Matrix – The magic behind Bullet Time


The Matrix (1999) is an American science fiction action film directed by Larry and Andy Wachowski. It is set in a future where reality as perceived by humans is actually the Matrix, a simulated reality created by sentient machines to pacify and subdue the human population, while their bodies’ heat and electrical activity are used as an energy source. When computer programmer Neo learns of this, he is drawn into a rebellion against the machines, alongside other people who have been freed from the ‘dream world’ into reality.


The film is best known for popularizing a visual effect called ‘bullet time’: a shot that progresses in slow motion while the camera appears to move through the scene at normal speed. The directors’ approach to the action scenes drew upon their admiration for Japanese animation and martial arts films, and the fight choreography and wire fu techniques from Hong Kong action cinema were influential on subsequent Hollywood action film production.

Each camera is a still-picture camera, not a motion picture camera, and it contributes just one frame to the video sequence. When the sequence of shots is viewed as a movie, the viewer sees what are in effect two-dimensional ‘slices’ of a three-dimensional moment. Watching such a ‘time slice’ movie is like the real-life experience of walking around ‘in the scene’ and seeing it from different angles. The still cameras can be positioned along any desired smooth curve to produce smooth-looking camera motion in the finished clip, and the timing of each camera’s firing may be delayed slightly so that the action can progress across the sequence.


For The Matrix, the cameras’ positions and exposures were previsualized using a 3D simulation. Instead of firing the cameras simultaneously, the visual effects team fired them fractions of a second after one another, so that each camera captured the action as it progressed, creating a super slow-motion effect. When the frames are put together, the resulting slow motion approaches the equivalent of 12,000 frames per second, as opposed to the normal film speed of 24 fps. The cameras at each end of the row were standard movie cameras, to pick up the normal-speed action before and after. Because the rig is arranged in a circle, cameras on the far side appear in the background of each shot, so computer technology was used to edit them out.
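The staggered-firing idea boils down to simple arithmetic: if neighbouring cameras fire 1/12,000 s apart, the array samples the action at an effective 12,000 fps, and playing those frames back at 24 fps stretches time by a factor of 500. A rough sketch (the camera count here is a made-up number, not the actual rig’s):

```python
def trigger_schedule(n_cameras, effective_fps):
    """Offset (seconds) at which each still camera fires, so the array
    as a whole samples the action at `effective_fps`. A sketch of the
    staggered-firing idea, not the production rig's actual numbers."""
    dt = 1.0 / effective_fps
    return [i * dt for i in range(n_cameras)]

times = trigger_schedule(n_cameras=120, effective_fps=12000)
# 120 frames cover 120/12000 = 0.01 s of real time; played back at
# 24 fps they fill 120/24 = 5 s of screen time.
slowdown = 12000 / 24   # 500x slower than real time
```

Setting the delay to zero for every camera gives the other classic variant, a frozen moment the camera orbits around.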

The bullet time effect is used to illustrate the characters’ exertion of control over time and space in the movie.

The Avengers – Hulk


Marvel’s The Avengers was a highly anticipated blockbuster, and no doubt it was a big hit in cinemas when it was released last year. A main character in the movie is the Hulk, played by Mark Ruffalo. In the movie, he turns from a normal human into a buff, green, shirtless killing machine. Industrial Light and Magic (ILM) was responsible for the CGI, and they did such a great job that they were just recently nominated for an Academy Award for Best Visual Effects.


Here’s kinda how they did the digital double. ILM used motion capture to catch the emotions Mark Ruffalo portrayed on screen. Every bit of Hulk stems directly from Mark, from the pores on his skin, to the grey hair at his temples, right down to using a dental mould of Mark’s teeth as a basis for Hulk’s teeth. Their strategy was to work out rendering and texture issues on the Banner digital double (Bruce Banner being the human form before he turns into the Hulk) until it looked indistinguishable from Mark Ruffalo.

The realism of this digital double is fucking awesome!




As Banner and Hulk share the same topology, ILM was able to transfer textures, material settings and the facial animation library between them. This gave them a decent base to start from, but with the two characters’ significantly different proportions, there was a lot of retargeting work to be done. They tried to be economical with their poly counts, but with Hulk they made a conscious decision that he was going to be extremely dense in resolution, for a better mesh. Working like this, they never came up short on resolution for the close-ups and the detailed shape work required to represent the anatomy under Hulk’s skin. They then invested in a robust multi-resolution pipeline so that the model remained manageable for the artists to work with.
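The shared-topology trick is what makes the transfer possible: because each vertex on Banner’s head corresponds to the same-numbered vertex on Hulk’s, an expression can be carried across as per-vertex offsets. A toy sketch of that idea (the two-vertex ‘meshes’ and the scale factor are placeholders; the real retargeting is far more involved):

```python
def transfer_expression(base_src, expr_src, base_dst, scale=1.0):
    """Move each destination vertex by the source's per-vertex delta.
    Works only because both meshes share the same vertex ordering
    (topology); `scale` crudely compensates for proportion differences."""
    deltas = [(ex - bx, ey - by, ez - bz)
              for (ex, ey, ez), (bx, by, bz) in zip(expr_src, base_src)]
    return [(x + scale * dx, y + scale * dy, z + scale * dz)
            for (x, y, z), (dx, dy, dz) in zip(base_dst, deltas)]

# A Banner expression applied to the (bigger) Hulk head -- placeholder data.
banner_neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
banner_smile   = [(0.0, 0.1, 0.0), (1.0, 0.2, 0.0)]
hulk_neutral   = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
hulk_smile = transfer_expression(banner_neutral, banner_smile,
                                 hulk_neutral, scale=3.0)
```

The same correspondence is why the whole facial library came along for free once the Hulk mesh existed.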



Here’s an interesting behind the scenes video!

Terminator 2: Judgment Day

cgi4 (1)

Canadian director James Cameron is well known for his use of cutting-edge visuals and effects technology, and The Terminator (1984) was his first groundbreaking sci-fi blockbuster in the visual effects arena. He made it during a period when Hollywood was experimenting with new kinds of visual effects through films that fused the genres of science fiction and horror.

Seven years later, Cameron came back to direct Terminator 2: Judgment Day, which was even bigger than before in terms of CG. It was the first film to feature a computer-generated main character. The VFX in the film were top notch for the period: not only was there a CGI Terminator, it morphed and regenerated body parts, and on top of that it could turn into a mercury-like liquid metal that seeped through little cracks. The movie paved the way for all the VFX-laden movies that followed.

Most of the effects were provided by ILM, and the creation of the visual effects took 35 people altogether, including animators, computer scientists, technicians and artists. It took ten months to produce, for a total of 25 man-years. Despite the large amount of time spent, the CGI sequences total only about five minutes on screen. But all this work was worth it: the visual effects team won the 1992 Academy Award for Best Visual Effects.


For the scene featuring Sarah Connor’s nuclear nightmare, the team at 4-Ward Productions constructed a cityscape of Los Angeles using large-scale miniature buildings and realistic roads and vehicles. After studying actual footage of nuclear tests, they simulated the nuclear blast by using air mortars to knock over the cityscape, including the intricately built buildings. 4-Ward also created a large layered painting of the city, augmented with a radiating blast dome and disintegrating buildings created with an Apple Macintosh program called Electric Image. They contributed a number of shots showing molten steel spilling out of a trough onto the floor, and used real mercury, directed with blow dryers, to create the eerie shots of the shattered T-1000 pieces melting into droplets and running back together.

Pirates of the Caribbean: Dead Man’s Chest


Davy Jones stars as the antagonist in the second installment of the Pirates series. He is completely CGI, and everything about him is so believable it’s crazy! Of course, the team responsible had to be none other than Industrial Light and Magic.

The production shot real actors on set and digitally replaced them. To do this, each actor was scanned and modelled, and they wore motion capture suits that enabled them to be replaced in post-production. ILM could not rely on traditional MoCap or hand animation, for several reasons. Traditional MoCap has to be done in special studios with multiple cameras, and the cameras and tracking markers are expensive, specialised equipment used only in a calibrated environment. The data also needs tremendous clean-up, as the data stream contains both noise and errors. The whole process is complex to set up, expensive and highly specialised, so it wasn’t used. Instead, ILM created an innovative new system called Imocap that allowed on-set and on-location motion capture, to elicit the most believable look and performance possible out of actor Bill Nighy.

He wore a pair of grey ‘pajamas’ with reference dots placed around the suit and his face, and his performance was captured entirely on set as he interacted with other actors. This improved the performances of the other actors, as they had someone ‘real’ to interact with, and it also gave the animators a highly detailed reference.


Being ILM, they made a breakthrough with Imocap: they only had to film with a single on-set film camera, instead of the multiple cameras traditional MoCap needs. A single camera removes many of the restrictions the motion capture process imposes, so with Imocap, motion capture could be done on set. The approach is to model the actor’s range of motion, then use an elaborate system to fit the range of possible motions the actor could make to the data from the single camera source.

Besides Imocap, the other big challenge ILM faced with Davy Jones was his 46 flopping tentacles. ILM wanted the tentacles’ curling and movement to reflect Davy Jones’ mood, not just bob around lifelessly, but they didn’t want an animator to have to manipulate each one manually, frame by frame. To solve this, their programmers added a sort of inter-tentacle motor system to move them automatically. Mathematical expressions and/or keyframe motion fed to motors in the joints between the cylinders making up the 46 tentacles caused them to bend, curl, writhe and perform in lifelike ways, while “stiction” kept the tentacles from sliding.


Since the computer knows what the actor’s limbs could do from any one frame to the next, it can ignore a lot of mathematical possibilities and narrow down the solution. Once the solution is constrained by this virtual range of possible motion, a single camera can produce a very powerful motion capture data stream. While the motion capture system worked extremely well, the lip sync was not done this way; it was hand-animated instead.
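A toy illustration of that constraint idea: for a single hinge joint, only consider angles the limb could physically reach since the last frame, then pick the candidate whose projection best matches the single camera’s image. Everything here (the one-joint ‘skeleton’, the ranges, the camera model) is invented for illustration and is not Imocap’s actual maths:

```python
import math

def fit_elbow_angle(observed_x, prev_angle, angle_range=(0.0, 2.6),
                    max_change=0.3):
    """Among elbow angles reachable this frame (joint limits plus a
    per-frame speed limit), pick the one whose projected x position
    best matches the single camera's observation."""
    lo = max(angle_range[0], prev_angle - max_change)
    hi = min(angle_range[1], prev_angle + max_change)
    candidates = [lo + (hi - lo) * i / 100 for i in range(101)]
    # Projected x of a unit-length forearm, camera looking down the z axis.
    return min(candidates, key=lambda a: abs(math.cos(a) - observed_x))

angle = fit_elbow_angle(observed_x=0.5, prev_angle=1.2)
```

Restricting the search to `[prev_angle - max_change, prev_angle + max_change]` is the “ignore a lot of mathematical possibilities” step: most poses that would also explain the 2D data are simply unreachable from the previous frame.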

For the tentacles, an articulated rigid body dynamics engine was used to achieve the desired look. Each tentacle was built as a chain of rigid bodies, with articulated point joints serving as the connections between them. This simulation was performed independently of all other simulations, and the results were placed back on an animation rig that would eventually drive a separate flesh simulation.
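The joint-motor idea above can be sketched as forward kinematics over a chain of segments, with each joint angle driven by a sine ‘expression’ whose amplitude scales with mood. The wave parameters below are invented, not the production rig’s:

```python
import math

def tentacle_points(n_segments, seg_len, t, mood=1.0):
    """Forward kinematics for one tentacle built as a chain of rigid
    segments. Each joint's motor is a travelling sine wave along the
    chain; `mood` scales how agitated the curl is."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for i in range(n_segments):
        # Motor expression for joint i at time t (values are invented).
        heading += mood * 0.4 * math.sin(2.0 * t + 0.7 * i)
        x += seg_len * math.cos(heading)
        y += seg_len * math.sin(heading)
        points.append((x, y))
    return points

calm  = tentacle_points(8, 1.0, t=0.0, mood=0.2)
angry = tentacle_points(8, 1.0, t=0.0, mood=1.5)
```

Driving every joint from one expression is what lets a single mood parameter animate all 46 tentacles at once, with no per-frame hand work.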

Forrest Gump


Forrest Gump is undoubtedly one of America’s most loved movies. Tom Hanks is such a brilliant actor! The film made heavy use of chroma key technology: archival footage of many famous moments in American history was used, with Tom Hanks’s character composited into it. Special effects artist Larry Butler developed the chroma key process for the film The Thief of Bagdad, for which he won an Academy Award. Back then, chroma keying was a chemical process performed on film negatives; today it is all done digitally.

Tom Hanks was first shot against a blue screen, along with reference markers so that he could be lined up with the archival footage. The voices of the historical figures weren’t from the original footage; they were hired voice doubles. To make the voices match as the people spoke, special effects were used to alter their mouth movements.

Above is an example of a blue screen. Besides using chroma keying for the historical scenes, there was a character, Lt. Dan (Gary Sinise), who had amputated legs in the movie. The actor wore a pair of long blue socks, and they simply keyed out the blue – that’s how they made the legs vanish. Industrial Light and Magic was hired to create this sequence. Aren’t they amazing!!!
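A minimal digital chroma key works the same way as the blue socks: wherever a pixel is dominated by blue, substitute the corresponding background pixel. This sketch (the threshold is chosen arbitrarily; real keyers also soften edges and suppress blue spill) shows the core idea:

```python
def chroma_key(frame, background, threshold=1.4):
    """Replace strongly blue foreground pixels with background pixels.
    Pixels are (r, g, b) tuples; `threshold` is an arbitrary choice."""
    out = []
    for fg_row, bg_row in zip(frame, background):
        row = []
        for (r, g, b), bg_px in zip(fg_row, bg_row):
            is_blue = b > threshold * max(r, g, 1)
            row.append(bg_px if is_blue else (r, g, b))
        out.append(row)
    return out

blue = (10, 10, 250)      # blue-screen (or blue-sock) pixel
skin = (210, 160, 130)    # foreground pixel to keep
frame    = [[skin, blue], [blue, blue]]
archival = [[(1, 1, 1), (1, 1, 1)], [(2, 2, 2), (2, 2, 2)]]
keyed = chroma_key(frame, archival)
# → [[(210, 160, 130), (1, 1, 1)], [(2, 2, 2), (2, 2, 2)]]
```

For Lt. Dan, the ‘background’ behind the blue socks is simply the set itself, so the keyed-out legs disappear into it.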



ParaNorman

ParaNorman is a stop-motion animation about an 11-year-old boy who can speak to zombies and ghosts. More than 300 people at LAIKA animation studio are responsible for bringing the film’s hero, Norman Babcock, to life. To animate the faces, they’ve taken advantage of rapid prototyping (i.e. 3D printers) to produce thousands of tiny faces, from the most subtle changes of expression to the most extreme. LAIKA haven’t shied away from using some CG effects, but the digital effects team works extremely closely with the other designers to ensure that everything has the same look and feel.

The animators at LAIKA pioneered the use of rapid prototyping with colour 3D printers. It allowed them to unite the versatility in design and mechanics of CG with the richness and solidity of a physical object. Effects like the translucency of human skin become possible with a process like this.

Traditionally in stop-motion animation, the individual facial expressions would be sculpted by hand out of clay. For ParaNorman, however, they built up a library of 8,800 3D-printed faces for the main character, which gave him about 1.5 million possible expressions. About 3.77 tonnes of printer powder were used by the four printers working on ParaNorman. They ran for a total of 572 days, churning out faces from the lower eyelids down to the chins of the main characters.
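The reason a finite face library yields so many more expressions is combinatorial: if the printed faces are split into interchangeable upper (brow) and lower (mouth) halves, the counts multiply rather than add. The piece counts below are invented for illustration and don’t reproduce the article’s exact figures:

```python
# Hypothetical piece counts -- chosen for illustration only.
brow_pieces = 100
mouth_pieces = 150

printed_pieces = brow_pieces + mouth_pieces   # what the printers must make
expressions = brow_pieces * mouth_pieces      # what the puppet can show

# 250 printed pieces already give 15,000 distinct combined expressions.
```

This is how a library of thousands of printed faces can translate into over a million on-screen expressions.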

I think this 3D printing thing is pretty cool. LAIKA experimented with it on Coraline, but with ParaNorman they are taking rapid prototyping to whole new heights.

The Hobbit: An Unexpected Journey


The big deal about The Hobbit: An Unexpected Journey, the first film of Peter Jackson’s trilogy, is that it was shot at 48 frames per second on the Red Epic camera in full 5K resolution. It was shot digitally, not on film, onto memory cards of about 128 gigabytes each. So why shoot at 48 frames per second, you may ask? The usual cinema film is shot and projected at 24 fps, while The Hobbit runs at twice that. When projected at 48 fps, the result still looks like normal speed, but the image has hugely enhanced clarity and smoothness.

According to Peter Jackson, “Looking at 24 frames every second may seem okay – and we’ve all seen thousands of films like this over the last 90 years – but there is often quite a lot of blur in each frame, during fast movements, and if the camera is moving around quickly, the image can judder or ‘strobe.'” A higher frame rate gets “rid of these issues” and makes the image “much more lifelike.” He also notes that filming at 48 fps makes the 3D images less taxing to watch.
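The blur Jackson mentions follows directly from the exposure time per frame: for a fixed shutter angle, doubling the frame rate halves how long each frame integrates motion. A quick back-of-the-envelope (a standard 180° shutter is assumed here, not necessarily The Hobbit’s actual setting):

```python
def exposure_time(fps, shutter_angle=180.0):
    """Exposure per frame for a given frame rate and shutter angle.
    Longer exposure means more motion blur per frame."""
    return (shutter_angle / 360.0) / fps

blur_24 = exposure_time(24)   # ~20.8 ms of motion smeared into each frame
blur_48 = exposure_time(48)   # ~10.4 ms -- hence the cleaner, smoother image
```

Halving the per-frame exposure is exactly why fast pans that strobe at 24 fps look smooth at 48.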


The Red Epic is an epic camera, and for the making of The Hobbit they needed two of them for each camera rig, as they were shooting in 3D. One problem they faced was that the lenses were so large that two cameras placed side by side could not achieve an interocular distance similar to that of human eyes. So they shot through a mirror on a rig: the left camera shoots through the mirror, while the right camera shoots the reflection off it, allowing the two viewpoints to be brought as close together as needed.


They hired specialist firm 3ality to build rigs that enabled them to change the interocular and the convergence point as they were shooting. There were various rigs for the different types of shooting, e.g. a crane rig and a handheld rig. The handheld one, also known as the TS5, was small and light, and it allowed Peter Jackson to shoot in tight, cramped corridors or caves. Altogether, they had 48 Red Epic cameras on 17 3D rigs.
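How the interocular and convergence point shape the 3D image can be sketched with a simplified parallax model: objects at the convergence distance sit on the screen plane, nearer objects pop out (negative parallax), and farther ones recede (positive parallax). This ignores focal length and screen geometry entirely, so treat it purely as a back-of-the-envelope sketch with made-up numbers:

```python
def screen_parallax(interaxial_mm, convergence_m, object_m):
    """Simplified parallax for a converged stereo rig: zero at the
    convergence distance, approaching the interaxial for far objects."""
    return interaxial_mm * (object_m - convergence_m) / object_m

near = screen_parallax(64, convergence_m=4.0, object_m=2.0)    # negative: pops out
far  = screen_parallax(64, convergence_m=4.0, object_m=40.0)   # positive: recedes
at_c = screen_parallax(64, convergence_m=4.0, object_m=4.0)    # zero: on the screen
```

Being able to adjust both knobs mid-shot is what the 3ality rigs provided: the interocular scales the overall depth, while the convergence point slides the whole scene forward or back relative to the screen.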


Though the Red Epic is epic, it naturally desaturates the footage, so on set they had to exaggerate the colours to counter the desaturation that would happen on screen. Above is an example of the forest scene on set. They also did colour tests before filming and realised that if there wasn’t enough red, skin would turn really yellow and react differently from normal skin with blood running through it. To counter the problem, they had to add a lot of red tones to the actors’ make-up. Though it looks reddish to the naked eye, on camera it reads as normal flesh tone.

Here’s an interesting behind the scenes video on the 3D rigs and cameras they used!