Film Industry: Revolution after Revolution! Across the Screen


The process of making a modern film can be divided into three stages: pre-production, production, and post-production with its subsequent processing.

The preparatory stage includes development of the script, casting (selection of actors), choice of the main shooting locations and other organizational work. Until recently the shooting process was considered the main stage of film production, but today it differs markedly from how it was approached in the last century. Traditional sets have given way to large pavilions with monochrome surroundings, usually blue or green. The actors interact with "conventional" stand-in objects: roads, vehicles and furniture. Even their clothes may differ from those the characters wear on screen. Of course, this is not the case with all scenes.

For example, dinner at a restaurant is easier to shoot in natural surroundings. But complex scenes that endanger the health of the actors and crew, and that previously required specialized equipment and stuntmen, have now moved almost entirely into these monochrome pavilions. The technique is called chroma key, and its roots go back to the 1930s, when the engineer Linwood Dunn proposed its use. There were several attempts to use chroma key in cinematography, but the first truly successful one can be considered the British film The Thief of Bagdad, directed by Ludwig Berger, Michael Powell and Tim Whelan and shot in 1940, in which the scenes with the flying magic carpet were filmed against a monochrome background. The picture won three Academy Awards, including Special Effects.

The essence of chroma key, which is also sometimes called keying (from "color key"), is to place the required background scene (static or dynamic) behind the foreground objects, displacing the monochrome color. It is essential that the foreground objects contain none of the key color; in exceptional cases the traditional blue or green is replaced by another color. This relatively inexpensive method makes it possible to combine scenes of practically any complexity. However, just drawing the right background is not enough: the main action takes place in the foreground. And while even a beginner 3D designer can add a three-dimensional model of a car or a plane to the frame, digitizing living, moving characters requires accurate capture technology.
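The keying step itself can be sketched in a few lines. This is a deliberately simplified illustration, not a production keyer (real systems work in other color spaces and produce soft mattes); the function name, key color and tolerance value here are all invented for the example:

```python
import numpy as np

def chroma_key(foreground, background, key=(0, 255, 0), tolerance=80):
    """Replace pixels close to the key colour with the background image.

    foreground, background: HxWx3 uint8 arrays of the same shape.
    """
    fg = foreground.astype(np.int16)
    # Distance of every pixel from the key colour in RGB space
    dist = np.linalg.norm(fg - np.array(key, dtype=np.int16), axis=-1)
    mask = dist < tolerance            # True where the green screen shows through
    out = foreground.copy()
    out[mask] = background[mask]       # displace the monochrome colour
    return out

# Tiny synthetic frame: top row is pure green screen, bottom row is "actor"
fg = np.array([[[0, 255, 0], [0, 255, 0]],
               [[200, 50, 50], [10, 10, 10]]], dtype=np.uint8)
bg = np.full((2, 2, 3), 128, dtype=np.uint8)   # flat grey background

composite = chroma_key(fg, bg)
print(composite[0, 0])   # green pixel replaced by background: [128 128 128]
print(composite[1, 0])   # foreground pixel kept: [200  50  50]
```

The hard binary mask is what makes this a sketch: professional keyers compute a fractional matte per pixel so that hair and motion blur blend smoothly with the new background.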

You’ve probably wondered how Gollum was created in “The Lord of the Rings” and what resources, tools and effort it cost James Cameron to bring his legendary “Avatar” to life.

Motion capture technology came to the rescue in these cases: a system of dozens of light sensors attached to the tracked object (usually a person in a special suit) and connected to a computer through a special interface. It allows a character previously drawn in a graphics program to be animated in real time: the computer receives information from the sensors and reproduces the person's movements exactly on the simulated character. Digital District is considered the developer of motion-capture technology; its methods were used to film the animated movies "Final Fantasy: The Spirits Within" (2001), "The Polar Express" (2004) and others.
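In simplified form, turning captured marker positions into character animation means recovering joint parameters from the tracked points, which are then applied to the digital character's rig. A minimal sketch of one such step (the marker layout and coordinates are invented for illustration):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by markers a-b-c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_ang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against tiny floating-point overshoot outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

# One captured frame: shoulder, elbow and wrist marker positions (metres)
shoulder, elbow, wrist = [0.0, 1.4, 0.0], [0.3, 1.1, 0.0], [0.6, 1.4, 0.0]

angle = joint_angle(shoulder, elbow, wrist)
print(round(angle, 1))   # 90.0 — the elbow is bent at a right angle this frame
```

In a real pipeline this runs for every joint on every frame, and the resulting rotations drive the rigged 3D character rather than being printed.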

The technology improved every year, and in 2009 it allowed Cameron to shoot “Avatar”, which made another revolutionary breakthrough in movie-making technology. Here, for the first time, a specially developed motion capture system (performance capture) was used: while previous motion capture systems were unified, for this picture Cameron split the system in two. A camera mounted on the actor's head was responsible for capturing facial expressions, while body movements were captured by a separate remote camera.

For the 20th Century Fox film studio, Sony developed special cameras aimed at the actors’ faces (facial cameras) and attached to special helmets. The head-mounted camera no longer tracked sensors but special markers placed at key points on the actor's face and driven by the facial muscles. This made the characters remarkably realistic: the suspended apparatus recorded facial expressions, muscle movements and even pupil movements with unprecedented accuracy (approximately 95% of the actors’ performance was transferred to their digital copies), making the avatars and the alien Na'vi so lifelike. The remote camera, in turn, gave the film crew a much wider space for motion capture (up to six times the previous capability).

Unlike previous motion-capture systems, where the digital environment was added after the actors’ movements had been captured, the new virtual camera allowed Cameron to watch on a monitor how the virtual copies of the actors interacted with the digital world of the movie in real time. The director could adjust and control scenes just as he would in a conventional shoot, seeing not the actors against the backdrop of a film set, but the film’s characters in a rainforest. The new approach allowed the crew to partially forgo the makeup artists and technicians responsible for creating alien looks. Any fairy-tale or fantastic character of any age can now be recreated in any environment, without wasting time on complicated makeup, which also causes discomfort for the actors.

The “Avatar” movie consists of 40% live-action footage and 60% computer graphics, a new level of interaction between live-action and computer cinema. But this is by no means all the technological innovation to be found in “Avatar”.

The film is made in stereoscopic three-dimensional format (we will return to this in more detail below), for which the director used his own Reality Camera System technology with two high-resolution cameras, increasing the depth of perception. Creating the world of Avatar required more than a petabyte (over 1,000 TB) of digital disk space to store all of the film’s graphic files (plants, animals, insects, rocks, mountains, clouds).

By comparison, “Titanic”, also directed by Cameron, needed only 2 TB to create and then sink the ship and its thousands of passengers, although its filming required far more physical scenery, including an almost full-size replica of a ship over 200 meters long.

Today, a fully realistic performance by a virtual character cannot be created without motion capture. At the same time, no mistakes can be allowed in its implementation, because the slightest defect or wrong move looks unnatural and destroys the sense of immersion in the film. That is why animators try to make their characters' movements slightly more pronounced, which improves how they are perceived. A similar problem, although to a lesser extent, arises when animating animals and fictional creatures: the producer has to find real-life analogues of the animated creatures, film them, and only then, on the basis of the collected material, try to embody the writers' fantasy.

The computer industry has many animators who have succeeded in creating three-dimensional computer graphics (CGI, Computer-Generated Imagery). One of the most popular companies developing CGI effects in Hollywood is Industrial Light & Magic (ILM), created by the famous director George Lucas in 1975, during the filming of “Star Wars”. Over the last 35 years the company has created special effects for several hundred of Hollywood’s highest-grossing blockbusters. ILM alumni went on to found studios such as Pixar and Kerner Technologies, the latter specializing in physics simulations.

In 1993, Steven Spielberg’s “Jurassic Park” was released, a film that can also be considered revolutionary. Initially the filmmakers were not planning to use computer graphics in so many scenes and intended to recreate the dinosaurs using traditional, previously tried methods: puppet animation and controlled animatronic robots. This, however, turned out to be not so simple, and without computer graphics the director would not have achieved such a high level of realism. Of course, computer graphics had been used in cinematography before “Jurassic Park”, but in a slightly different way. For example, a virtual computer world was drawn for the film Tron (1982), and special effects were created to depict things that have no analogues in reality: think of the T-1000 liquid-metal robot in “Terminator 2: Judgment Day” (1991). Before “Jurassic Park”, no one had used computer graphics in big cinema to depict realistic objects, much less to animate animals, let alone extinct ones. No one knew how feasible it was or what it might entail, so Steven Spielberg was not originally going to spend the budget on risky experiments. But after ILM drew a test sample of a dinosaur, animated it and combined it with location footage, Spielberg opted for computer graphics, and in the end all the scenes that were going to be shot with frame-by-frame puppet animation were rendered on the computer.

As computing capacity increased, it became possible to computerize modern cinema on an ever larger scale. But designing a scene on the computer is not the whole process: the picture still needs to be given an artistic and realistic look. To do this, teams of graphics specialists get to work after the movie is shot. Post-production also involves several stages of work, covering not only the manipulation of the footage but also the creation of various objects and environments that the viewer should perceive as natural and integral parts of the film. In the final version of a film, the actors' appearance is often heavily retouched, and some quite ordinary-looking scenes are almost entirely created by 3D artists. First, a rough cut of the footage is assembled in a video-editing software package. Unlike amateur home editing, which consists mainly of splicing clips and adding a few effects, professional filmmakers carefully evaluate and process each frame. Editing continues throughout post-production, from the initial processing of the footage to the mastering of the final media (cinema film or DVD/Blu-ray discs), and relies on professional, expensive software.

The recognized industry standard in film-editing software comes from the American corporation Avid Technology. Many ordinary users of camcorders and video-capture cards know it through its subsidiary Pinnacle, which produces consumer-level software. Most popular with filmmakers is Avid Media Composer, which won the ACE Technical Excellence Award from the American Cinema Editors after the release of “Avatar”. Apple’s Final Cut Studio package has also been gaining popularity in recent years. In the first stage, the footage undergoes a rough cut to determine the order and duration of scenes. The result of this work is then passed on to the sound-recording, computer-graphics and effects studios.

One of the most interesting and most computer-intensive parts of post-production is the creation of computer graphics. It is so complex that it too is divided into stages, covering the development of the 3D setting, the animation of all objects and the creation of the 3D environment. The most popular software package for creating three-dimensional scenes in today’s cinema is Autodesk Maya, but Softimage and Cinema 4D are also widely used. The artists' sketches are turned into a full three-dimensional environment and characters, and for the viewer to believe what is happening on the screen, every detail of the scenes must be as plausible as possible. Hundreds of gigabytes of textures are therefore created for the virtual environment, either captured from real life or generated by synthesizing various materials. In addition, artists use shaders to set the light-reflecting properties of the materials onto which the corresponding textures are superimposed. Even this is not enough, because, for example, pebbles on a beach immediately give away their flat nature once dynamic lighting is applied, while modeling each stone individually is an absolutely impossible task. 3D artists therefore create shadow maps for all surfaces. If everything is done correctly, the result is a virtual environment that is practically indistinguishable from the real thing. However, this kind of realism only looks good in static pictures. To make them come alive in the movie, not only the main characters and the foreground but the entire environment must be animated. This stage is no less difficult than the previous one, and its implementation is very often entrusted to separate specialized studios.
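Why a flat texture "gives itself away" under dynamic lighting can be shown with the basic Lambert diffuse term used by shaders. In this sketch (all normals and the light direction are made-up example values) a surface with a single normal shades identically everywhere, while per-pixel perturbed normals produce the varied shading that reads as relief:

```python
import numpy as np

def diffuse(normal, light_dir):
    """Lambertian diffuse term: brightness = max(0, N . L), unit vectors."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return max(0.0, float(np.dot(n, l)))

light = np.array([0.3, 0.3, 0.9])          # example directional light

flat = np.array([0.0, 0.0, 1.0])           # flat texture: one normal everywhere
bumped = [np.array([0.4, 0.0, 0.9]),       # per-pixel normals of a "pebbly" patch
          np.array([-0.4, 0.0, 0.9]),
          np.array([0.0, 0.3, 0.95])]

flat_shade = diffuse(flat, light)
bumped_shades = [diffuse(n, light) for n in bumped]
print(round(flat_shade, 3))                          # identical for every pixel
print([round(v, 3) for v in bumped_shades])          # varies pixel to pixel
```

The same principle underlies normal and bump mapping in film renderers: shading is varied per pixel without modeling every stone as geometry.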

It should be noted that animation techniques vary widely. Animating the grass on a lawn and a person walking across it are not the same task. When a person is animated, specialists resort to motion capture. The animation of the environment is quite different, because it is impossible to set an individual rule of motion for every blade of grass or leaf; the same applies to the animation of water, fire, smoke and so on. Here resource-intensive simulation comes to the rescue. To correctly simulate the mobility of these elements, animators define the dynamics of the environment's movement: by setting the wind parameters and assigning the appropriate material properties to the grass, for example, an incredibly believable picture of movement can be achieved on screen at once. To realistically fit an actively moving character into this environment, the animators have to simulate collisions between objects. Very often these tasks are solved with dedicated software packages such as Houdini from Side Effects Software, but sometimes even the many features of existing programs are not enough to create a realistic picture for a particular movie with its fantasy world. In that case animation studios write their own applications and filters and create new effects and physical properties for the environment. They even turn to scientists for help: for example, to realistically show water flowing around a virtual ship, the Russian studio Mainroad Post, which made the special effects for the film “Admiral” (2008), had to consult the Institute of Oceanology. Finally, special effects are not limited to drawing and animating events and surroundings. In today’s cinema, for example, it is no longer necessary to apply a thick layer of makeup to age a hero: computer programs cope with that too.
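The "wind parameters plus material properties" idea can be illustrated with a toy procedural model: each blade of grass sways sinusoidally, with an amplitude set by wind strength and blade stiffness, and a random phase so the lawn does not move in lockstep. Everything here (the function, the parameter values, the 0.5 Hz sway frequency) is an invented sketch, far simpler than a real simulation:

```python
import math
import random

def blade_sway(t, wind_speed, stiffness, phase):
    """Tip displacement of one grass blade at time t (seconds).

    Stronger wind and a softer (less stiff) blade give a larger sway.
    """
    amplitude = wind_speed / stiffness
    return amplitude * math.sin(2 * math.pi * 0.5 * t + phase)

random.seed(42)
# Assign each blade its own phase offset so movement is desynchronized
phases = [random.uniform(0, 2 * math.pi) for _ in range(5)]

t = 1.0   # evaluate one animation frame
offsets = [blade_sway(t, wind_speed=2.0, stiffness=10.0, phase=p) for p in phases]
print([round(o, 3) for o in offsets])   # five different displacements at once
```

Production systems replace the sine wave with turbulent noise fields and couple in collisions with characters, but the division of labor is the same: the animator sets global dynamics, and the simulation moves every individual blade.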
